December 21, 2015

Windows 10 and AMD Radeon HD4850

One day I installed Windows 10 on my work PC. Well, I did it only after I had installed Win10 on my home PC and lived with it for a while. As I hadn't encountered anything terribly wrong, I decided to migrate my work PC as well. But not everything went great in this process.

The first issue I encountered was with the video adapter. I have a Radeon HD4850 in that machine. After Windows rebooted to complete the installation, I got "Microsoft Basic Display Adapter" in Device Manager and lost multiple monitor support. The Basic Display Adapter doesn't support multiple monitors and just shows the same picture on all of them. It's funny to have a stereo picture on two monitors indeed, but actually it's a disaster. Updating the driver via Device Manager didn't work, so I went to the AMD site to download a new driver. But it turned out that AMD had decided not to support the Radeon 4000 series in Windows 10 at all. Just no drivers. "Nobody is forcing you to upgrade to Windows 10", they say. Nice shot, AMD.
Here's an example of a thread the AMD forums are full of: https://community.amd.com/message/2660992.
With a nice answer: "The 4xxx series cards are not supported on Windows 10 as they do not meet the minimum requirements". Ridiculous!
Here are the Windows 10 requirements:
Graphics card:
DirectX 9 or later with a WDDM 1.0 driver

What the hell AMD?!

So the first conclusion for this day: do not buy AMD cards anymore. Ever. Just don't.

When I was close to despair, I decided to try the integrated video card on my motherboard and rebooted to go into the BIOS. In the BIOS there is a switch for "active video" with the options Auto (it was selected), PCI, PCI-E and Internal.
I chose "Internal" (IGD) and rebooted. But I forgot to switch the VGA/DVI cables and decided to wait and see what would happen. To my surprise, nothing changed: both monitors, still attached to my HD4850 (which is on PCI-E, obviously), worked as before. I logged into Windows, went to Device Manager and tried to update the driver for the "Microsoft Basic Display Adapter". And it was a miracle! Windows started downloading a new driver and installed "ATI Radeon HD 4800 Series".

After the new driver was installed, multiple monitor support worked as it should.



August 12, 2015

Organize your image files

These days we have many sources of photos: smartphones, cameras, old cameras, old phones and so on. It is common to use some sort of cloud service for keeping photos, and it is very handy to have them automatically uploaded to the cloud from a device. But currently this works well mostly for smartphones. So despite using cloud services like Google Photos/Drive, OneDrive, Dropbox, Amazon Cloud Drive and the like, you probably keep all your photos on your hard drives and organize them in some kind of folder structure.
There is a lot of software to help organize files, and photos in particular. For me there was always a problem here: I don't want a magical piece of software to hide my files from me. This is why I like Google Picasa: it provides a nice UI over the folder structure and synchronizes with it. But this time I want to talk not about organizing files into a folder structure but about managing the files themselves.

For image files we have the following important properties:
  • file name
  • modified date - file attribute
  • date taken - EXIF metadata
All these properties can help in managing files. It's nice when a file's name contains its timestamp, i.e. the date and time the photo was taken. It's also nice when a file's modified date (a file attribute supported by any OS/file system) is the same as its timestamp.

Unfortunately, some of these properties are often incorrect or mixed up. Let's review all of them.

Name

For files from cameras it's common to have names like IMG_1234.jpg. For files from smartphones it's common to have 20150812_174054.jpg. That's much better, but different OSes/devices use different patterns, so it could also be IMG_20150812_174054.jpg or something else.

It can be helpful to have all names follow the same pattern. Obviously the name '20150812_174054.jpg' is more informative than 'IMG_1234.jpg': it tells us when the photo was taken without looking into its metadata.

Modification date

It's important for files to have a correct modification date. It allows us to sort all files in a collection by their "modified date" attribute values. Not all software supports reading EXIF metadata, and that metadata can be missing altogether. This is especially important for cloud services, as they usually let you look through all your photos in one timeline.

Unfortunately, modification dates are often updated by software during some operations with files. For example, when we rotate a photo in Windows Photo Viewer, it updates the modification date. Technically that's correct, as the file did change. But usually I don't care about such modifications and want to keep the date the photo was taken instead of the date it was changed by some software.

EXIF

EXIF is metadata put inside image files by cameras/smartphones to keep additional info like camera model, exposure, f-stop and so on. It's the source of truth for timestamps; we need the 'date taken' attribute.

Organize them all

Here are some common steps that help organize photo files (see the sketch after the list):
  • Extract EXIF metadata if it exists and put the 'date taken' value into the 'modification date' file attribute.
  • If there's no EXIF metadata, then try to extract the timestamp from the file name (20150812_174054.jpg).
  • Rename all files so that a file's name contains its timestamp (20150812_174054.jpg).
  • Remove unneeded prefixes like "IMG_" (IMG_20150812_174054.jpg), but keep the ones that provide some info (like "PANO_" for panoramas).
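
To illustrate the idea behind the second step, here's a minimal sketch in Node (used purely for illustration; the helper name and the pattern are my own):

// Parse a '20150812_174054'-style timestamp from the file name
// and copy it into the file's modification date.
var fs = require('fs');
var path = require('path');

function fixTimestampFromName(filePath) {
    var m = /(\d{4})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})/.exec(path.basename(filePath));
    if (!m) return false; // no timestamp in the name
    var ts = new Date(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]);
    fs.utimesSync(filePath, ts, ts); // sets access and modification times
    return true;
}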
I was looking for software that would do these tasks for me, but finally gave up and created a simple script. Meet Fix-TS.ps1.

Let's get to know it through examples.

powershell ./fix-ts.ps1 /path/to/ -source exif
Update the timestamps of all files in a folder using values from EXIF metadata (where present).

powershell ./fix-ts.ps1 /path/to/ -filter *.jpg,*.png
Update the timestamps of *.jpg and *.png files in a folder using values parsed from file names,
e.g. for '20151231_235959.jpg' the timestamp will be December 31, 2015, 23:59:59.

powershell ./fix-ts.ps1 /path/to/ -rename remove-prefix
Remove all prefixes before the year part, e.g. 'IMG_20151207_235959.jpg' will be renamed to '20151207_235959.jpg'.

powershell ./fix-ts.ps1 /path/to/ -rename remove-prefix:!PANO
Remove all prefixes except 'PANO', i.e. 'PANO_20151207_235959.jpg' will not change but 'IMG_20151207_235959.jpg' will become '20151207_235959.jpg'.

powershell ./fix-ts.ps1 /path/to/ -rename add-prefix:jpg=IMG_|mp4=VID_|avi=VID_
Add the prefix 'IMG_' for all *.jpg files, and the prefix 'VID_' for all *.mp4 and *.avi files.

powershell ./fix-ts.ps1 /path/to/ -rename rebuild -source exif
Rename all files using the pattern `yyyyMMdd_HHmmss` with timestamps from their EXIF metadata.

Please note that if you run the script without the `-fix` switch it won't change anything; it only reports the issues found and the fixes proposed. Only running it with the `-fix` switch makes it apply the fixes.

You can find the script on GitHub.

Hope it helps someone keep things more organized.

June 1, 2015

Publish files to Artifactory with artifactory-publisher


In the previous post I shared some experience of setting up Artifactory. Here I'll continue playing with it; this time let's talk about publishing artifacts.

I needed to publish a lot of NuGet packages we already had into our new Artifactory. There are three ways to publish an artifact to Artifactory:
  • use the CLI tool of a package manager (NuGet, npm and so on)
  • use the web UI (the "Deploy" tab) on the Artifactory server
  • use the REST API
Using the web UI is obviously tedious, as we need to publish a lot of packages.
When publishing via NuGet.exe we cannot control the folder/file layout in Artifactory: the layout is determined by the Repository Layout, which is essentially a regular expression and so is pretty limited.
So the most powerful method is the REST API. We just send a file in a PUT request to the desired URL.

I came up with a tiny tool `artifactory-publisher` to publish files to Artifactory via its REST API.
You can find the source code on GitHub and install it from npmjs.com.

The detailed documentation can be found in the README. Here's a usage example.

Suppose we have a local folder with a lot of NuGet packages (*.nupkg) which we want to publish into a repository, structuring them into different folders.

For example we have the following packages locally:
XFW3.Core.1.14.1.nupkg 
XFW3.Core.1.14.2.nupkg
XFW3.Core.1.15.0.nupkg
XFW3.SmartClient.1.14.1.nupkg
XFW3.SmartClient.1.14.2.nupkg
XFW3.SmartClient.1.15.0.nupkg
XFW3.WebClient.0.12.1.nupkg
XFW3.WebClient.0.14.2.nupkg
XFW3.WebClient.0.15.0.nupkg

and we want to have them in our Artifactory in the following structure:
/my-repo
    /XFW3.Core
        /1.14
            XFW3.Core.1.14.1.nupkg
            XFW3.Core.1.14.2.nupkg
        /1.15
            XFW3.Core.1.15.0.nupkg
    /XFW3.SmartClient
        /1.14
            XFW3.SmartClient.1.14.1.nupkg
            XFW3.SmartClient.1.14.2.nupkg
        /1.15
            XFW3.SmartClient.1.15.0.nupkg
    /XFW3.WebClient
        /0.12
            XFW3.WebClient.0.12.1.nupkg
        /0.14
            XFW3.WebClient.0.14.2.nupkg
        /0.15
            XFW3.WebClient.0.15.0.nupkg

Here's some sample code showing how to do this with the help of the `artifactory-publisher` tool.
To run it we'll need to install the dependencies:
"dependencies": {
    "artifactory-publisher": "~1.0.0",
    "async": "^1.1.0",
    "q": "^1.4.1"
  }
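
For example, something like this (just a sketch: I'm assuming the promise-based `publish(filePath, url, options)` function described in the README; the server URL, local folder and credentials are made up):

var fs = require('fs');
var path = require('path');
var async = require('async');
var artifactory = require('artifactory-publisher');

var repoUrl = 'http://artifacts.mydomain.org/my-repo/';
var localDir = './packages';
var options = { credentials: { username: 'user1', password: '...' } };

// 'XFW3.Core.1.14.1.nupkg' -> { name: 'XFW3.Core', version: '1.14.1' }
function parseFileName(fileName) {
    var m = /^(.+)\.(\d+\.\d+\.\d+)\.nupkg$/.exec(fileName);
    return m && { name: m[1], version: m[2] };
}

async.eachSeries(fs.readdirSync(localDir), function (fileName, cb) {
    var pkg = parseFileName(fileName);
    if (!pkg) { return cb(); } // skip non-package files
    // keep only 'major.minor' for the version folder: '1.14.1' -> '1.14'
    var verFolder = pkg.version.split('.').slice(0, 2).join('.');
    var url = repoUrl + pkg.name + '/' + verFolder + '/' + fileName;
    artifactory.publish(path.join(localDir, fileName), url, options)
        .then(function () { cb(); }, cb);
}, function (err) {
    console.log(err ? 'Failed: ' + err : 'All packages published');
});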



It's just an example of a case when some processing is needed to determine the final URL of the files being published.

May 29, 2015

Setting up Artifactory as an npm repository behind Apache

Recently I was struggling with Artifactory to make it work as our npm repository. Here's some of that experience.

Scoped packages and encoded slash

npm since version 2.0.0 supports scoped packages. It's a great feature for managing in-house components which should not be published publicly to npmjs.org. Technically it's a prefix in the package name, `@myorg/`, which can be easily associated with a registry.

For example, we need to create an in-house Yeoman generator and publish it for the devs of our company. We can create a package with the scoped name "@myorg/generator-webapp" (package.json):
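
{
  "name": "@myorg/generator-webapp",
  "version": "0.1.0"
}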


Next we can associate the scope "@myorg" with a registry. Suppose we created an npm repository in Artifactory named "myorg-npm". By accessing the URL "http://artifacts.mydomain.org/artifactory/api/npm/myorg-npm/auth/myorg" as an authenticated user, we'll get from Artifactory the settings for the npm configuration in .npmrc (~/.npmrc):

@myorg:registry=http://artifacts.mydomain.org/artifactory/api/npm/myorg-npm/
//artifacts.mydomain.org/artifactory/api/npm/myorg-npm/:_password=Q...z
//artifacts.mydomain.org/artifactory/api/npm/myorg-npm/:username=user1
//artifacts.mydomain.org/artifactory/api/npm/myorg-npm/:email=user1@myorg.org
//artifacts.mydomain.org/artifactory/api/npm/myorg-npm/:always-auth=true 

That's all in terms of NPM. But Artifactory needs more configuration.

Now the npm CLI will use the package name with an encoded slash: "@myorg%2fgenerator-webapp". By default Tomcat and Apache restrict encoded slashes in URLs.
Configuring Artifactory for this is described in the documentation. Actually, we need to put the parameter org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true into the $ARTIFACTORY_HOME/tomcat/conf/catalina.properties file (for Artifactory 4.x)
or into %ARTIFACTORY_HOME%\etc\artifactory.system.properties (for Artifactory 3.x).
But that's not enough if we have Apache in front of Artifactory; it should be configured as well.
Two things should be done:
  • set AllowEncodedSlashes to NoDecode (by default it's Off)
  • add the `nocanon` keyword to ProxyPass; it tells the mod_proxy module not to canonicalize URLs
BTW, without the `nocanon` keyword Artifactory will get URLs with the % symbol encoded once more, so %2F in "@myorg%2fgenerator-webapp" becomes %252F ("@myorg%252Fgenerator-webapp").

The VirtualHost config should look like this:
<VirtualHost *:80>
    ServerName artifacts.mydomain.org
    AllowEncodedSlashes NoDecode
    ProxyPass / ajp://localhost:8022/ nocanon
</VirtualHost>

Now we can publish our package without any additional parameters:
npm publish
To install the package:
npm install @myorg/generator-webapp -g
Run our generator (it's a nice thing that Yeoman fully supports scoped packages):
yo @myorg/webapp

Removing "/artifactory" path

I wanted my repositories to be accessible on a custom domain (http://artifacts.mydomain.org). But by default Artifactory always expects to be accessed via the /artifactory path. For example, initially it listens on http://localhost:8081/artifactory. Even after we move Artifactory behind Apache, it still expects the path. The documentation describes how this path can be customized but says nothing about how to remove it completely.
To me, using the path seems completely unnecessary.

So here's the Apache configuration for proxying Artifactory without the path:
<VirtualHost *:80>
    ServerName artifacts.mydomain.org
    AllowEncodedSlashes NoDecode
    
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    

    ProxyPreserveHost On
    ProxyPassReverseCookiePath /artifactory/ /
    ProxyPass / ajp://localhost:8022/artifactory/ nocanon
    ProxyPassReverse / http://artifacts.mydomain.org/artifactory/

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^artifacts\.mydomain\.org$ [NC]
    RewriteRule ^/artifactory/(.*)$ /$1 [L,R=301]
</VirtualHost>

To use RewriteEngine, the mod_rewrite module needs to be loaded.

Do not forget to change the registry URL in .npmrc ("//artifacts.mydomain.org/artifactory/api/" -> "//artifacts.mydomain.org/api/").

May 25, 2015

JavaScript: performance loss on incorrect use of 'arguments'


One day I was staring at a CPU profile I had collected from my application in Chrome DevTools and noticed a warning on a non-optimized function with the description: "Not optimized: Bad value context for arguments value".
I googled a bit and found this blog post.

The author writes that the "Bad value context for arguments value" warning is caused by incorrect handling of the 'arguments' variable.

I created a test case to compare performance: http://jsperf.com/optimizing-arguments/10

The test case consists of several implementations of an 'append' function (like the one from underscore).
Each implementation needs to copy `arguments` into an array. The first one ("Array#slice") uses the `Array.prototype.slice` method; the next ("for/Array allocated") copies `arguments` in a for-loop, explicitly accessing it by index. The test "for/unallocated array" is a variant of the previous one but uses an array literal ([], i.e. an unallocated array). The test "helper fn" copies `arguments` with a helper function, passing `arguments` into it. The test "helper fn called with apply" does a similar thing but uses the `Function.apply` method.

More specifically:
  1. "Array#slice":
    var args = Array.prototype.slice.call(arguments, 1);
  2. "for/Array allocated":
    var i, args = new Array(arguments.length-1);
    for (i = 1; i < arguments.length; ++i) { args[i - 1] = arguments[i]; }
    
  3. "for/unallocated array":
    var i, args = [];
    for (i = 1; i < arguments.length; ++i) { args[i - 1] = arguments[i]; }
    
  4. "helper fn":
    var args = slice_arguments(arguments, 1);
  5. "helper fn called with apply":
    var args = args_to_array.apply(null, arguments);

Here are the results:

We can see a 3x performance loss from using `Array.prototype.slice`. But not only that: the test with the helper function shows the same results. It seems that passing the `arguments` object anywhere by reference has a big performance impact, so `Array.prototype.slice` is just a special case of the general problem.

But let's run these tests in Chrome:
"Other" here is Internet Explorer 11. It seems that IE11 just doesn't optimize anything :)
As for Chrome, its results honestly puzzled me. The test "helper fn", which passes `arguments` by reference into a function, shows the same results as copying in a for-loop. That shouldn't happen, but it does.

At the moment we can state only one thing for sure: using `Array.prototype.slice` with `arguments` is a bad thing and should be avoided.
But as we can't write code for Chrome only, passing `arguments` around should also be avoided, except via apply. So this is OK:
var args = args_to_array.apply(null, arguments);
but this is not:
var args = args_to_array(arguments);
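
For reference, such a helper can be as simple as this (a sketch; the point is that with `apply` the helper receives plain values, so the caller's `arguments` object never escapes):

function args_to_array() {
    // `arguments` here belongs to the helper itself; copying it
    // locally in a for-loop keeps the function optimizable
    var args = new Array(arguments.length);
    for (var i = 0; i < arguments.length; ++i) {
        args[i] = arguments[i];
    }
    return args;
}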


There's good news, though: using TypeScript helps avoid this problem thanks to its rest parameters feature.
The TS code:
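// e.g. an 'append' function rewritten with rest parameters
function append(arr: any[], ...args: any[]): any[] {
    return arr.concat(args);
}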

will generate:
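// (tsc output when targeting ES5; exact helper names may vary by compiler version)
function append(arr) {
    var args = [];
    for (var _i = 1; _i < arguments.length; _i++) {
        args[_i - 1] = arguments[_i];
    }
    return arr.concat(args);
}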
Nice. So it's one more (small) reason to move your code to TypeScript.

P.S. Here's a useful collection of the reasons for disabled optimizations in Chrome: https://github.com/GoogleChrome/devtools-docs/issues/53

August 25, 2014

ResxToJson

We often develop web projects with .NET on the server and RequireJS on the client. To build a multi-language application we have to move all messages presented to users into resources.
For the server project there's the classical way of building localized applications on .NET: create resx files and compile them into satellite assemblies.
At runtime we just need to set the CultureInfo.CurrentCulture and CultureInfo.CurrentUICulture static properties for each request based on HTTP headers.
For the client side we use the i18n plugin for RequireJS. See the documentation on how to use it. In general it's similar to the server side: just set a static config (requirejs.config({locale: "en"});) and RequireJS will load the appropriate module with localized messages.
But what if we need to use the same messages on the server and on the client?

Here the ResxToJson tool comes to the rescue. It generates client resources from resx files so they can be loaded via RequireJS.

You can install it as a NuGet package from nuget.org: https://www.nuget.org/packages/ResxToJson.

Detailed documentation can be found on GitHub: https://github.com/CrocInc/ResxToJson.

Here's just a small example of usage.

We can specify a folder or particular resx files as inputs (the "-i" option); there can be many inputs at once.
As output we can specify a folder (the "-dir" option) or a file (the "-file" option).
If we specify a folder, then a JS file will be generated for each resx file in it. If we specify an output file, then all resx files for one culture will be merged into a single JS file (but again, there will be one JS file for every culture).

For example, the 'MyServerProject' folder contains three resx files (Resources.resx, Resources.ru.resx, Resources.nl.resx) with a 'fileNotFound' message in three languages:
  • en - File cannot be found
  • ru - Файл не найден
  • nl - Bestand kan niet worden gevonden
Then we generate the client resources:
ResxToJson.exe -i .\MyServerProject -dir .\MyClientProject -c camel
The 'MyClientProject' folder will contain:
  • resources.js - default resources from Resources.resx (for English):

    define({
      "root": {
        "fileNotFound": "File cannot be found",
      },
      "ru": true,
      "nl": true
    });
    
  • ru/resources.js - resources from Resources.ru.resx:

    define({
      "fileNotFound": "Файл не найден"
    });
    
  • nl/resources.js - resources from Resources.nl.resx:

    define({
      "fileNotFound": "Bestand kan niet worden gevonden"
    });
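
On the client these bundles can then be loaded via the i18n plugin. A minimal sketch, assuming the generated files are placed under an `nls` folder (the layout the plugin expects):

require(["i18n!nls/resources"], function (resources) {
    // with requirejs.config({locale: "ru"}) this alerts "Файл не найден"
    alert(resources.fileNotFound);
});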
    

April 10, 2014

How to debug Yeoman generator

1. install node-inspector:
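npm install -g node-inspector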


2. start Node Inspector server:
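node-inspector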


3. run node.js with Yeoman-cli in debug mode:
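node --debug-brk "path\to\global\npm\node_modules\yo\cli.js" MyGenerator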

where 'MyGenerator' is the name of the Yeoman generator you want to debug (the exact location of yo's entry script may differ between yo versions).
On Windows "path\to\global\npm" is something like "C:\Users\{UserName}\AppData\Roaming\npm".

4. open Chrome or Opera (any Blink-based browser) and go to http://localhost:8080/debug?port=5858