Symphonious

Living in a state of accord.

Safely Encoding Any String Into JavaScript Code Using JavaScript

When generating a JavaScript file dynamically, it’s not uncommon to have to embed an arbitrary string into the resulting code so it can be operated on. For example:

function createCode(inputValue) {
  return "function getValue() { return '" + inputValue + "'; }";
}

This simplistic version works great for simple strings:

createCode("Hello world!");
// Gives: function getValue() { return 'Hello world!'; }

But it breaks as soon as inputValue contains a special character, e.g.

createCode("Hello 'quotes'!");
// Gives: function getValue() { return 'Hello 'quotes'!'; }

You can escape the single quotes, but it still breaks if the input contains a \ character. The easiest way to fully escape the string is to use JSON.stringify:

function createCode(inputValue) {
  return "function getValue() { return " +
    JSON.stringify(String(inputValue)) +
    "; }";
}

Note that JSON.stringify even adds the quotes for us. This works because a JSON string is a JavaScript string: pass a string to JSON.stringify and it will return a perfectly valid JavaScript string literal, complete with quotes, that is guaranteed to evaluate back to the original string.

The one catch is that JSON.stringify will happily encode JavaScript objects and numbers, not just strings, so we need to force the value to be a string first – hence String(inputValue).
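As a quick sanity check (runnable in any JavaScript console), the new version round-trips input containing both quotes and backslashes – the input string here is just an arbitrary example:

createCode("Hello 'quotes' and \\backslashes\\!");
// Gives: function getValue() { return "Hello 'quotes' and \\backslashes\\!"; }
// ...and calling getValue() returns the original: Hello 'quotes' and \backslashes\!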


Software is sometimes done

In Software is sometimes done Rian van der Merwe makes the argument that we need more software that is “done”:

I do wonder what would happen if we felt the weight of responsibility a little more when we’re designing software. What if we go into a project as if the design we come up with might not only be done at some point, but might be around for 100 years or more? Would we make it fit into the web environment better, give it a timeless aesthetic, and spend more time considering the consequences of our design decisions?

It’s an interesting question – if we had to get things right the first time, would we do a better job? If our design decisions were set in stone for all time would things be better?

The problem is, we’ve already asked this question and decided that designing things up front and setting them in stone doesn’t work as well as releasing early and often with short feedback cycles, so that we can adjust as we go. It’s waterfall vs agile, and it turns out agile wins.

That said, there’s a difference between rushing a sloppy job out the door and doing things well with an iterative cycle to adjust to learning. A short feedback loop is there to let you learn and improve, not to let you release any old thing and get away with it. We need more software developed by doing the best job possible with the information available, combined with a short feedback cycle to gather more information and continually raise the bar for what’s possible.

It can be romantic to look back and think that we used to do a better job because things were more permanent, that software used to be done, but it’s just not true:

When Windows 95 came out, it was done. Yes, there were some patches to it, but they were few and far between, and in general quite difficult to come by. But of course, then the Internet and App Stores happened in full force, and suddenly we decided that “Software is never done”. In some sense this is certainly true. There are always bugs to fix, things to improve, more features to add, unused features to remove — and of course, the SaaS model makes it all so easy. But I wonder if we’ve taken this a bit too far.

Windows 95 may have been done, but Windows was not. Otherwise we’d still be running Windows 95. We’re kidding ourselves if we think that anyone at Microsoft ever thought Windows 95 would be the last thing they ever released, or that the OS would never change in the future.

Even if we consider Windows 95 as a standalone thing that is “done”, would you run it today? Of course not – by modern standards it’s horrible. The same is true of every other piece of unmaintained software I can think of: it may have been good enough, or even the best available, for a long time after it became unmaintained, but eventually it falls behind. Eventually it stops being “done” in the sense that it doesn’t need any further work and becomes “done” in the sense that no one uses it anymore.


CI Isn’t a To-do List

When you have automated testing set up with a big display board to provide clear feedback, it’s tempting to try and use it for things it wasn’t intended for. One example is as a kind of reminder system – you identify a problem somewhere that can’t be addressed immediately but needs to be handled at some point in the future. It’s tempting to add a test that begins failing after a certain date (or the inverse, whitelisting errors until a certain date). Now the build is green and there’s no risk of you forgetting to address the problem because it will fail again in the future. Perfect, right?
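To make the anti-pattern concrete, a time-bombed check dropped into a test suite might look something like this hypothetical sketch (the deadline and message are invented for illustration):

var assert = require('assert');

// Green until the arbitrary deadline passes, then red – regardless
// of whether the code has actually changed.
var deadline = new Date('2015-09-01');
assert.ok(Date.now() < deadline.getTime(),
    'Deadline passed – deal with the problem we postponed!');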

The problem is that it sacrifices the ability to easily isolate the cause of a failure. You can no longer revert changes to get back to a working baseline because the current time is a factor in whether your tests pass or not. You can no longer run historical builds through CI which you may need to do as part of supporting clients on older versions.

It also inevitably leads to false failures, as the time points are almost always arbitrary and estimated timelines for completing tasks often slip. So the board goes red and the response is to just suppress it again.

Continuous integration isn’t there to remind you to do something – it’s there to tell you whether your build is ok to ship to production or not. It doesn’t make sense to say that a build is ok to ship to production now but not ok to ship in a week’s time. If production is going to blow up in the future because of a bug in the code, you want the board to be red now, not when production actually blows up. And if the client is prepared to accept the risk and leave the problem unfixed, then it’s not yet a requirement and doesn’t need tests asserting it – just put a card in the backlog for the client to prioritise and play when required.

If you need to remember to do something, put a task in the backlog or on the current story board, or put a reminder in your calendar. Keep CI focussed on telling you about the state of your code right now – if it’s broken it should be red, if it’s not broken it should be green. “Not broken until next week” should never be an option.


Don’t Make Your Design Responsive

Every web-based product is adding “responsive design” to its feature list. Unfortunately, in many cases that responsive design actually makes the product much harder to use on a variety of screen sizes instead of easier.

The problem is that common CSS libraries and grid systems, including the extremely popular Bootstrap, imply that a design can be made responsive using fixed cut-off points, with the design just automatically adjusting. In reality, making a design responsive requires tailoring the cut-off points so that the design adjusts at the points where it stops working well.

For example, let’s look at the Bootstrap documentation, and in particular the top menu it uses. The Bootstrap documentation gets it right: we can shrink the window down right to the point where the menus only just fit, and they stick to the full-size design:

Menu remains full-size for as long as it can fit.

If we shrink the window further the menus wouldn’t fit anymore so they correctly switch to the hamburger style:

Menu collapses once it would no longer fit.

That’s the right way to do it: the cut-off points have been specifically tailored to the content. There are other stylistic changes as well as these structural ones – the designer believes that centred text works better for headings on smaller screens, for example. That’s fine; they’re fairly arbitrary design decisions based on what the designer believes looks best. I’m focussed on the structural issues here.

To see what happens when this tailoring is ignored, let’s pretend we add a new menu item:

Our “New Item” shown correctly when the window is wide enough.

But now when we shrink the window down, the breakpoint is in the wrong place:

The “New Item” menu no longer fits but causes incorrect wrapping because the break point is wrong.

Now the design breaks as we shrink the window, because the break point hasn’t been specifically tailored to the content. This is the type of error that regularly happens when people assume a responsive grid system can automatically make their site responsive. The repercussions in this case aren’t particularly bad, but they can be significantly worse.

Recently Jenkins released a rewrite of its UI, moving it to Bootstrap, which has unfortunately gotten responsive design completely wrong (and sadly I’ve yet to see anything it has actually improved). Browser widths that used to work perfectly well with the desktop-only site are now treated as mobile browsers, and content wraps into a long column. What’s worse, the most useless content – the sidebar – is what’s shown at the top, with the main content pushed way down the page. At other widths the design doesn’t fit but doesn’t wrap either, leaving some links completely inaccessible.

It would be much better if people stopped jumping on the responsive design bandwagon and just designed their site to work for desktop browsers, unless they are prepared to fully invest and do responsive design right. Mobile browsers are designed to work well with sites built for desktop and have lots of tools and techniques for coping with them. As you add responsive design and other adjustments aimed at mobile, you shift responsibility for making the design work well everywhere from the browser onto yourself. Using predefined breakpoints from a CSS library is unlikely to give the result you intend. It would be nice if CSS libraries stopped claiming that it will.

From Java to Swift

Ever since the public beta of OS X I’ve been meaning to get around to learning Objective-C but for one reason or another never found a real reason to. I’ve picked up bits and pieces of it and even written a couple of working utilities but those were pretty much entirely copy/paste from various sources. Essentially they were small enough and short-lived enough that I only needed the barest grasp of Objective-C syntax and no understanding of the core philosophies and idioms that really make a language what it is. This is probably best exemplified by the approach to memory management those utilities took: it won’t run for long, so just let it leak.

I do however have a ton of experience in Java and JavaScript plus knowledge and experience in a bunch of other languages to a wide range of extents. In other words, I’m not a complete moron, I’m just a complete moron with Objective-C.

Anyway, obviously when Swift was announced it was complete justification for my ignoring Objective-C all these years and interesting enough for me to actually get around to learning it.

So I’ve been building a very small utility in Swift so I can pull information out of OS X’s system calendar from the command line and push it around to the various places I happen to want it and can’t otherwise get it. The code is up on GitHub if you’re interested – code reviews and patches most welcome. It’s been a great little project for getting used to Swift the language without spending too much time trying to learn all the OS X APIs.

Language Features

Swift has some really nice language features that make dealing with common scenarios simple and clear. Unlike many languages it doesn’t seem to go too far with that though – it doesn’t seem likely that people will abuse its features and create overly complex or overly succinct code.

My favourite feature is the built-in optional support. A variable of type String is guaranteed not to be null; a variable of type String? might be. You can’t call any of String’s methods on a String? variable directly – you have to unwrap it first, confirming it isn’t null. That would be painful if it weren’t for the ‘if let’ construct:

let events: String? = ""
if let events = events {
    // Inside this block, events is the unwrapped, non-optional String.
    events.utf16count()
}

I’ve dropped into a habit here which might be a bit overly clever – naming the unwrapped variable the same as the wrapped one. The main reason is that I can never think of a better name. I figure it’s much like having an if events != nil check.

APIs

Calling Swift a new language is correct but it would almost be more accurate to call it a new syntax instead. Swift does have its own core API which is unique to it, but that’s very limited. For the most part you’re actually dealing with the OS X (or iOS) APIs which are shared with Objective-C. Thus, people with experience developing in Objective-C quite obviously have a huge head start with Swift.

The other impact of sharing so many APIs with Objective-C is that some of the benefits of Swift get lost – especially around strict type checking and null reference safety. For example, retrieving a list of calendar events is done via the EventKit method:

func eventsMatchingPredicate(predicate: NSPredicate!) -> [AnyObject]!

which is helpfully displayed inside Xcode using Swift syntax despite it being a pre-existing Objective-C API and almost certainly still implemented in Objective-C. However, if you look at the return type you see the downside of inheriting the Objective-C APIs: the method documentation says it returns [EKEvent] but the actual declaration is [AnyObject]! So we’ve lost both type safety and null reference safety, because Objective-C doesn’t have non-nullable references or generic arrays. It’s not a massive loss – those APIs are well tested and quite stable, so we’re extremely unlikely to be surprised by a null reference or an unexpected type in the array – but it does require some casting in our code, and it requires humans to read the documentation and check whether something can be null rather than having the compiler do it for us.
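In practice that means a downcast at the call site. Here’s a minimal sketch of what that looks like – the date range is arbitrary, real code would first request calendar access via requestAccessToEntityType, and the forced downcast is spelled as in the current betas (later Swift would use as!):

import EventKit

let store = EKEventStore()
// Query events over the next week – the dates are arbitrary.
let predicate = store.predicateForEventsWithStartDate(NSDate(),
    endDate: NSDate(timeIntervalSinceNow: 7 * 24 * 60 * 60),
    calendars: nil)
// The documentation says [EKEvent] but the compiler sees [AnyObject]!,
// so we cast, trusting the docs rather than the type checker.
let events = store.eventsMatchingPredicate(predicate) as [EKEvent]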

If Swift were intended to be a language that competes with Java, Python or Ruby, the legacy of the Objective-C APIs would be a real problem. However, Swift is designed specifically to work with those APIs, to be a relatively small but powerful step forward for OS X and iOS developers. In that context the legacy APIs are really just a small bump in the road that will smooth out over time.

Xcode

The other really big thing a Java developer notices switching to Swift is what a Java developer notices when switching to any other language – the tools suck. In particular, the Java IDEs are superb these days and make writing, navigating and refactoring code so much easier. Xcode doesn’t even come close. The available refactorings are quite primitive even for C and Objective-C and they aren’t supported at all for Swift.

The various project settings and preferences in Xcode are also a complete mystery – and judging from the various questions and explanations on the internet, it doesn’t seem to be all that much clearer even to people with lots of experience. In reality I doubt it’s really much different to Java, which also has a ridiculous amount of complexity in IDE settings. The big difference is that in the Java world you (hopefully) start out by learning the basics using the standard command line tools directly. Doing so gives you a good understanding of the build and runtime setup, and makes it much clearer what controls how your software is built and what is just an IDE preference. Xcode does provide a full suite of command line developer tools, so hopefully I can learn more about them and get that basic understanding.

Finally, Xcode 6 beta 3 is horribly buggy. It’s a beta so I can forgive that but I’m surprised at just how bad it is even for a beta.

CocoaPods

This was a delight to stumble across. Adding a dependency to a project was a daunting prospect in Xcode (jar files are surprisingly brilliant by comparison). I really don’t know what CocoaPods did, but it worked and I’m grateful. Libraries that aren’t available as pods are pretty much dead to me now. There does seem to be a pretty impressive array of libraries available for a wide range of tasks. Currently all of them are Objective-C libraries, so you have to be able to understand Objective-C headers and examples and convert them to Swift, but it’s not terribly difficult (and trivial for anyone with an Objective-C background).

Overall

Swift has a good feel about it – lots of neat features that keep code succinct. It’s also very hard not to like strict type checking with good type inference. With Apple pushing Swift as a replacement for Objective-C, the libraries and APIs will become more and more “Swift-like” over time. Xcode should improve to at least offer the basic refactorings it has for other languages, and stabilise, which will make it a workable IDE – exceeding the capabilities of what’s available for a lot of languages.


Most importantly, the vast majority of existing Objective-C developers seem to quite like it – plenty of issues raised as well, but overall generally positive.

I think the future for Swift looks bright.