Symphonious

Living in a state of accord.

Using WebPack with Buck

I’ve been gradually tidying up the build process for UI stuff at LMAX. We had been using a mix of requirejs and browserify – both pretty sub-optimally configured. Obviously when you have too many ways of doing things the answer is to introduce another new way, so I’ve converted everything over to webpack.

Situation: There are 14 competing standards. We need to develop one universal standard that covers everyone's use cases. Situation: There are 15 competing standards...

Webpack is most often used as part of a node or gulp/grunt build process but our overall build process is controlled by Buck so I’ve had to work it into that setup. I was also keen to minimise the amount of code that had to be changed to support the new build process.

The final key requirement, which had almost entirely been missed by our previous UI build attempts, was the ability to easily create reusable UI modules that are shared by some, but not all, projects. Buck shuns the use of artefact repositories in favour of a single source tree with everything in it, so an internal npm repo wasn’t going to fly.

While the exact details are probably pretty specific to our setup, the overall shape of the build is likely useful more broadly. We have separate buck targets (using genrule) for a few different key stages:

Build node_modules

We use npm to install third party dependencies to build the node_modules directory we’ll need. We do this in an offline way by checking in the npm cache, as CI doesn’t have internet access, but it’s pretty unsatisfactory. Checking in the node_modules directory was tried previously but both svn and git have massive problems with the huge number of files it winds up containing.
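
As a very rough illustration, the install step boils down to something like this (a sketch of the idea, not our exact rule – the flag names vary between npm versions):

# Point npm at the checked-in cache so the install needs no network access on CI.
# Older npm: a huge --cache-min makes it prefer the cache; npm 5+ has a real --offline flag.
npm install --cache ./npm-cache --cache-min 9999999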

yarn has much better offline support and other benefits as well, but its offline support requires a cache and the cache has every package already expanded, so it winds up with hundreds of thousands of files to check in and deal with. Further investigation is required here…

For our projects that use Angular 2, this is actually the slowest part of our entire build (UI and server side). rxjs seems to be the main source of issues as it takes forever to install. Fortunately we don’t change our third party modules often and we can utilise the shared cache of artefacts so developers don’t wind up building this step locally too often.

Setup a Workspace and Generate webpack.config.js

We don’t want to have to repeat the same configuration for webpack, typescript, karma etc for every UI thing we build, so our build process generates them for us, tweaking things as needed to account for the small differences between projects. It also grabs the node_modules from the previous step and installs any of our own shared components (with npm install <local/path/to/component>).
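
In shell terms that setup step amounts to something roughly like the following (the generator script and paths are hypothetical names, just to show the shape):

# Assemble a throw-away workspace that looks like a normal stand-alone JS project.
mkdir -p "$WORKSPACE"
cp -r src "$WORKSPACE/src"
cp -r "$NODE_MODULES" "$WORKSPACE/node_modules"                 # output of the previous step
./generate-ui-config --project my-project --out "$WORKSPACE"    # hypothetical generator: emits webpack.config.js, tsconfig.json, karma.conf.js
(cd "$WORKSPACE" && npm install ../components/shared-widget)    # install one of our shared components from a local path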

Build UI Stuff

Now we’ve got something that looks just like most stand-alone JavaScript projects would have – config files at the root, source ready to combine/minify etc. At this point we can just run things out of node_modules, so we have targets to build with ./node_modules/.bin/webpack, run tests with ./node_modules/.bin/karma or start the webpack dev server.
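
For reference, those targets end up invoking roughly these commands from the generated workspace (the config file names are whatever the previous step generated):

./node_modules/.bin/webpack --config webpack.config.js               # build/minify the bundles
./node_modules/.bin/karma start karma.conf.js --single-run           # run the tests once
./node_modules/.bin/webpack-dev-server --config webpack.config.js    # local dev server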

Buck can then pick up those results and integrate them where they’re needed in the final outputs ready for deployment.

Finding What Buck Actually Built

Buck is a weird but very fast build tool that happens to be rather opaque about where it actually puts the things you build with it. They wind up somewhere under the buck-out folder but there’s no guarantee where and everything under there is considered buck’s private little scratch pad.

So how do you get build results out so they can be used? For things with built-in support like Java libraries you can use ‘buck publish’ to push them out to a repo, but that doesn’t work for things you’ve built with a custom genrule. In those cases you could use an additional genrule build target to actually publish, but it would only run when one of its dependencies has changed. Sometimes that’s an excellent feature but it’s not always what you want.

Similarly, you might want to actually run something you’ve built. You can almost always use the ‘buck run’ command to do that but it will tie up the buck daemon while it’s running so you can’t run two things at once.

For ultimate flexibility you really want to just find out where the built file is, which thankfully is possible using ‘buck targets --show-full-output’. However it outputs both the target and its output:

$ buck targets --show-full-output //bigfeedback:bigfeedback
//bigfeedback:bigfeedback /code/buck-out/gen/bigfeedback/bigfeedback.jar

To get just the output file path we need to pipe it through:

cut -d ' ' -f 2-

Or as a handy reusable bash function:

function findOutput() {
    $BUCK targets --show-full-output ${1} | cut -d ' ' -f 2-
}
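
Which makes it easy to use a built artefact anywhere a plain file path is needed, for example (target and destination here are just illustrative):

cp "$(findOutput //bigfeedback:bigfeedback)" /tmp/deploy/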

 

Replacing Symlinks with Hardlinks

Symlinks have been causing me grief lately. Our build tool, buck, loves creating symlinks but publishes corrupt cache artefacts for any build rule that includes a symlink amongst its output.

We also wind up calling out to npm to manage JavaScript dependencies and it has an annoying (for us) habit of resolving symlinks when processing files and then failing to find required libraries because the node_modules folder was back where the symlink was, not with the original file. Mostly this problem is caused by buck creating so many symlinks.

So it’s useful to be able to get rid of symlinks, which can be done with the handy -L or --dereference option to cp. Then instead of copying the symlink you copy the file it points to. That avoids all the problems with buck and npm but wastes lots of disk space and means that changes to the original file are no longer reflected in the new copy (so watching files doesn’t work).

Assuming our checkout is on a single file system (which seems reasonable) we can get the best of both worlds by using hard links. cp has a handy option for that too: -l or --link. But since buck gave us a symlink to start with, it just gives us a hard link to the symlink that points to the original file.

So combining the two options, cp -Ll, should be exactly what we want. And if you’re using coreutils 8.25 or above it is: cp will dereference the symlink and create a hard link to the original file. If you’re using coreutils prior to 8.25, cp will just copy the symlink. Hitting a bug in coreutils is pretty much the definition of the world being out to get you.
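
On a new enough coreutils that looks something like this (directory names are just examples):

# Dereference symlinks and hard link the real files into place (coreutils 8.25+).
cp -RLl buck-out/gen/myproject/workspace ./workspace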

Fortunately, we can work around the issue with a bit of find magic:

find ${DIR} -type l -exec bash -c 'ln -f "$(readlink -m "$0")" "$0"' {} \;

‘find -type l’ will find all symlinks. For each of those we execute some bash, reading from the inside out, to dereference the symlink with readlink -m, then use ln with the -f option to create a hard link, forcing it to overwrite the existing symlink.

Good times…

Benq GW2765 Monitor Display Port “No Signal Detected”

I have three Benq GW2765 monitors which periodically report “No Signal Detected” for DisplayPort, even when the computer it’s attached to recognises the monitor is present (displaying it in the monitors/displays list etc). Changing the DisplayPort cable or plugging it into a different computer doesn’t help (I tried with both Mac OS X and Linux/Fedora machines), but HDMI and D-Sub connections work perfectly (though they can’t support the full screen resolution). I can even disconnect a cable from a working monitor, plug it into a non-working monitor and it will continue to complain about no signal, then plug the cable back into the working monitor and it carries on working fine.

The solution turns out to be quite simple – unplug its power, wait for the power indicator light to turn off, then plug it back in and turn it on. It will then detect the DisplayPort signal correctly. Unplugging the DisplayPort cable and plugging it back in will not help, nor will turning the monitor off with its power button. Briefly disconnecting the power cable and reconnecting it isn’t enough either; you have to wait 5-10 seconds for the power indicator light to turn off.

Naturally that means that you can do all kinds of due diligence testing at home before deciding it’s a hardware problem and returning it to the shop. When you get to the shop it will work perfectly because it’s been unplugged on the car trip.

So that was fun…

Fun with Nvidia Drivers and Fedora Upgrades

After any major Fedora upgrade my system loses the proprietary nvidia drivers that make X actually work (I’ve never successfully gotten the nouveau drivers to handle my card and multi-monitor setup) so the system reboots and simply presents an “oops, something went wrong” screen.

The issue seems to be that the nvidia driver doesn’t recompile for the new kernel, despite the fact that I’m using akmod packages which should in theory automatically recompile for new kernels.

The tell-tale sign is:

[   161.484] (II) LoadModule: "nv"
[   161.484] (WW) Warning, couldn't open module nv
[   161.484] (II) UnloadModule: "nv"
[   161.484] (II) Unloading nv
[   161.484] (EE) Failed to load module "nv" (module does not exist, 0)

in the Xorg logs.

Some digging reveals that the akmod recompilation process should be triggered by /etc/kernel/postinst.d/akmodsposttrans but for whatever reason that didn’t run.

The key piece of that script was running akmods similar to:

/usr/sbin/akmods --from-kernel-posttrans --kernels 4.8.11-300.fc25.x86_64

The last argument is the current kernel version, which should match a directory name in /lib/modules/ – there will likely be a few options; either run the command for each of them or pick the latest, which is likely to be the one missing the nvidia drivers.
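
If in doubt it’s harmless to run it for every installed kernel, something along these lines (sketch only):

# Rebuild akmods (including the nvidia module) for each installed kernel.
for kernel in /lib/modules/*; do
    sudo /usr/sbin/akmods --from-kernel-posttrans --kernels "$(basename "$kernel")"
done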

Run that, reboot and everything comes back just fine, though there is likely a better way to do it…