Make decisions so your users don't have to.

My iOS 7 Wishlist

Actually, it’s not a list at all. There’s just one thing I want from iOS 7.

I want it to expose sufficiently powerful hooks that Google could implement Google Now for iOS.

A few months ago, I switched from my iPhone 4S to a Nexus 4. This was quite an aberration for me, as I have been a dyed-in-the-wool Apple fan since the age of 7. The first computer I ever used at home was a Color Classic II (33MHz 68030, 80MB hard drive, 4MB of RAM), I read every issue of MacAddict magazine starting with issue 1, and landing jobs at Apple (first at an Apple Store in college, and then on the MobileMe team afterwards) was one of the most rewarding experiences of my life.

As proof, here’s a photo of me, age 15, right after Macworld Expo, wearing my Mac OS X t-shirt (it had just been announced):


The first few versions of Android were awful, awkward, ungainly things, not too unlike the chubby teenager in the photo above. But everyone grows up and matures. Jellybean has been a dream to use. There are some rough edges, but the moments where I wish I still had my iPhone are few and far between.

I’d rather be an iPhone user, though. The build quality of the hardware is still far superior, and I prefer the smaller size. I don’t have small hands, but they’re not overly large, either. Trying to tap elements near the top of the Nexus 4’s screen with one hand feels a bit too much like yoga for my tastes.

When I was driving home from the holidays this December, I hit a pothole and blew out two tires on a remote stretch of highway about 100 miles south of San Francisco. It was that moment that made me realize just how important battery life is. I can mitigate the Nexus 4’s poor battery life in my day-to-day by just leaving it plugged in at the office. But outlier events like traveling and emergencies can be a wake-up call that sometimes you will be away from a power source for extended amounts of time, and I for one depend immensely on my phone in those situations. I was glad my travel partner had an iPhone, or I’m not sure what I would have done.

Yet, my entire digital life runs on non-Apple digital services. Through a combination of technical and business restrictions, Apple has made using those services on iOS terrible. Two examples:

I love reading books on Kindle. Having constant access to my entire library has dramatically increased the amount I read. But Apple prevents Amazon from integrating a storefront into the Kindle app for iOS, because they want a 30% cut. I’ll let others argue over whether that makes sense from a business perspective, but I want to offer this data point: I’d buy another Android phone instead of an iPhone, because developers can offer me the experience they think is best. I don’t want to think about how many man-hours startups have burned trying to dance as close to the edge of the rules as they can, figuring out ways to avoid the Apple tax. Thirty percent is significant.

Second, Google Now is an amazing feature that I think Apple is going to have a hard time competing with. If you’re unfamiliar with it, the introduction video does a good job of explaining it:

Let me emphasize why this feature is amazing. Let’s say I’m traveling to Prague for a conference. Let’s also say that I’m an AT&T customer, so data rates abroad will be usurious. More than likely, I’ll keep data off, unless there’s an emergency.

The conference organizer books me my airline ticket and hotel, and forwards the confirmation e-mail on to me. Assuming I’m using Gmail, this single event can trigger the following:

  • On the day of travel, my flight status is displayed prominently.
  • If there is a change to the flight, I get a push notification.
  • I get one-tap navigation from my current location to the airport.
  • I get a push notification when it is time to leave for the airport to arrive on time, taking traffic conditions and flight status into account.
  • If I’ve checked in online, my boarding pass is cued up automatically.
  • When I land, directions from the airport to my destination hotel have already been pulled up.
  • Vector map data for the destination city has already been downloaded, so I can still navigate despite my lack of data.

This is groundbreaking. It will change the way people travel. And this is just one small facet of Google Now, which I view as the vector by which Google has figured out how to weaponize the stack of PhDs it has been accumulating for the past decade.

And it’s getting better all of the time. The culture inside Apple is one of a giant metronome, which ticks once or twice per year. The whole company is oriented around secrecy, followed by a big bang release. That works tremendously well for hardware, and for big software launches like an operating system. But it’s just terrible for web services, especially heavily data-driven ones.

The companies that are best at web services are less like a synchronized metronome and more like a group of crickets, each team releasing incremental improvements that over time amount to something quite significant indeed.

I’m not optimistic that Apple’s culture can change, and I’m not sure I want it to. But I do want iCloud (and Siri, and Apple Maps) to have to compete on an even playing field. Mobile devices aren’t the grand experiment they were in 2007. At the time, and in the years afterwards, I was supportive of the restrictions Apple put in place to guard the user experience. It’s a different world, though, and people are chafing against them. It’s hampering innovation. Android is effectively the escape valve for mobile developers that want to do cool new stuff that doesn’t fit inside the box that Apple gives you.

And that’s a bummer. There will be more products like Google Now in the future, not fewer. I want to be an iPhone user, but I also want access to all of the cool new stuff.

So, that’s my hope for iOS 7: make public the OS hooks that things like Siri and Maps use. Let me run different browsers. Let me replace the built-in e-mail app. We’ve appreciated the guidance, but we’ve all got the hang of this smartphone thing now. Let us do what we want.

And for the love of God, figure out a way to get Google Now on my iPhone.

Tell me why I’m an idiot for having this opinion by tweeting @tomdale.

Our Approach to Routing in Ember.js

The URL is one of the web’s great strengths over native apps. In web apps, the URL goes beyond just being a static reference to the location of a document on a server. The best way to think of it is as the serialization of the application’s current state. As the user interacts with the application, the URL should update to reflect what the user is seeing on their screen.

In most applications, state is tracked in an ad hoc way. To answer the question “What is the current state of the application?” you must synthesize several different properties (usually stashed away on different controllers) to arrive at an answer. If there are bugs in your application, it is possible to get into a conceptually broken state. For example, imagine you have an isAuthenticated property that is true but there is no currentUser property. How did you get into this state? Diagnosing these kinds of bugs is difficult. And more importantly, serializing that state in a sane way is basically impossible, because it’s scattered across the application.

Ember.js’ implementation of state charts solves these issues nicely. Using Ember.StateManager, you describe your application as a hierarchical tree of objects—one object per conceptual state. Because you explicitly enumerate the possible states, it is impossible to be in an unknown state. And if you try to perform an action in a state that doesn’t support it, an exception is raised immediately, making it easy to diagnose and debug the problem.

It also means that we can easily serialize the state of your application on demand. Assuming you model your application’s behavior using a state manager, we can map the state hierarchy to a URL, and update it as you navigate through the hierarchy.
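
To make this concrete, here is a minimal plain-JavaScript sketch of the idea (illustrative only; this is not Ember’s actual StateManager API): each state is an object, events are dispatched to the current state, and unsupported events raise immediately.

```javascript
function StateManager(states, initialState) {
  this.states = states;
  this.currentState = initialState;
}

StateManager.prototype.send = function(event) {
  var handlers = this.states[this.currentState];
  if (!handlers || typeof handlers[event] !== 'function') {
    // Unsupported events fail loudly, right where the bug is, instead of
    // letting the app drift into a conceptually broken state.
    throw new Error("State '" + this.currentState + "' does not handle '" + event + "'");
  }
  // In this sketch, a handler returns the name of the next state.
  var args = Array.prototype.slice.call(arguments, 1);
  this.currentState = handlers[event].apply(this, args);
};

var manager = new StateManager({
  loggedOut: {
    logIn: function(user) { this.currentUser = user; return 'loggedIn'; }
  },
  loggedIn: {
    logOut: function() { this.currentUser = null; return 'loggedOut'; }
  }
}, 'loggedOut');

manager.send('logIn', 'tom'); // manager.currentState is now 'loggedIn'
```

Note that the isAuthenticated-without-currentUser bug described above simply cannot happen here: you are either in loggedIn with a currentUser, or you are not, and sending logIn twice throws instead of silently corrupting state.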

This is an important departure from how most other web frameworks handle routing. They invert the model: the URL is the source of truth for the current state of the application. To change states (for example, to show a different view), you hardcode the URL into your application. If I want to display a photo with an ID of 1, I must synthesize the URL '/photo/1' and send it to my router.

This approach is not ideal for several reasons.

First, it couples your URLs to your application code. Changing segment names means going through your entire app and updating the hardcoded strings. It also means that, whenever you want to enter a new state, you must consult the router to be reminded what the particular URL is. Breaking encapsulation this way quickly leads to out-of-control code that is hard to maintain.

You’re in for an especially painful experience if you later need to change the hierarchy of your URLs. For example, imagine you have a blog app that displays different kinds of media. You have URLs like this:

  • /posts/my-favorite-dog-breeds
  • /photos/1234
  • /audio/sweet-child-o-mine.mp3

Your app gets so popular that you decide to add multiple blogs:

  • /blogs/tom/posts/my-favorite-dog-breeds
  • /blogs/wycats/audio/a-whole-new-world.mp3

You now need to audit every URL in your application! With Ember’s approach, it’s as simple as adding a route property to the parent state to have it start appearing in the URL. Because state changes are triggered by events that your views send to the state manager (instead of views routing to a specific URL), nothing in your view code changes.
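
A hypothetical sketch of how such a hierarchy might serialize to a URL (the names and structure here are illustrative, not Ember’s actual API): each state contributes an optional route segment, and the URL is just the concatenation of the segments along the path to the current state.

```javascript
function urlFor(statePath, states) {
  var segments = [];
  var current = states;
  statePath.forEach(function(name) {
    var state = current[name];
    // A state only appears in the URL if it declares a route segment.
    if (state.route) { segments.push(state.route); }
    current = state.states || {};
  });
  return '/' + segments.join('/');
}

var states = {
  blogs: {
    route: 'blogs/tom',
    states: {
      posts: { route: 'posts/my-favorite-dog-breeds' }
    }
  }
};

urlFor(['blogs', 'posts'], states);
// → '/blogs/tom/posts/my-favorite-dog-breeds'
```

Removing or changing the parent’s route property reshapes every URL beneath it, and no view code ever has to know.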

The other problem with describing state in terms of URLs is that there is a cumbersome and pointless serialization step. In my JavaScript, I am dealing with controllers and model objects. Having to turn them into a string form just to get them to interact is brittle and unnecessary.

To use the photo example from above, which one of these is nicer?

stateManager.send('showPhoto', user, photo);

var url = '#/user/' + user.get('id') + '/photo/' + photo.get('id');

Having multiple sources of truth in any application quickly leads to chaos. One of the primary benefits of Ember.js is that it ruthlessly moves truth about your application state out of the DOM and into JavaScript. The DOM is always just a reflection of the current truth in JavaScript.

The work we’re doing on routing in Ember.js will have a similar effect on keeping truth out of the URL. The URL becomes just a reflection of the current truth in JavaScript. Because state managers allow you to define your application state in an encapsulated and semantically rich way, changing how your URLs are built is as easy as changing a few properties on the state objects.

For a more concrete description of how routing ties into Ember’s state managers, see Yehuda’s gist, which includes code samples. This work is currently happening on Ember’s routing branch, but we hope to merge it into master soon.

I’m really excited about this feature. We’ve been thinking about this problem for a while now and it’s satisfying to finally be able to start working on it. I think that explicitly modeling application state using objects is a very exciting and powerful technique, and getting routing “for free” by doing it makes it a no-brainer.

Best Practices Exist for a Reason

If you’ve ever used node.js, you’ve probably also used Isaac Schlueter’s npm, the node package manager. By maintaining a central repository to which authors can quickly publish their JavaScript modules, npm has made it easy to get started building node apps and its popularity has exploded over the past year. Unfortunately, two months ago, the hashed passwords of all npm accounts were leaked. npm uses CouchDB for its backend, and the default security model is to grant administrator access to all databases, but only when connections originate from the same machine. It appears that in this case, the CouchDB server was made accessible to the world over HTTP with the default access settings left in place.

In retrospect, it’s obviously a dumb mistake, but the same kind of dumb mistake you or I or anyone ten times smarter than us could make, whether because we’re sick or tired or under pressure, or simply because we’re using a system we’ve never used before.

The community’s reaction to the security leak was relatively forgiving, and rightly so. After all, npm is a community project that was created and is maintained for free, and we all benefit from the enormous amount of time and energy that Isaac and the other npm contributors have generously donated.

And yet, npm is no longer just a hobby project. Thousands of people rely on it to provide services to themselves, their friends, and to paying customers. Additionally, when it comes to security, there can be repercussions far beyond the original breach. Anyone who created an npm account and used the same password for another web service is potentially at risk.

So this is the eternal balancing act that open source maintainers (and, indeed, many startups) must face: limited time and resources for building, securing and maintaining systems that will be used by hundreds, thousands, or even millions of people.

Which is why I have been so saddened and, yes, angry, about the recent trend in the JavaScript community: throwing away the best practices we have spent a long time honing in what, to my eyes, is an act of machismo, a revolt against good engineering practices for the sake of revolting rather than to make the world a better place.

As a community, we’ve advocated for these practices precisely because most newcomers to JavaScript are not us. They are not as invested in the language and the ecosystem as we are; rather, they have been thrown at some problem because all of a sudden it’s 2012 and not writing JavaScript is no longer an option. They are not JavaScript developers per se; they are Java developers or Ruby developers or .NET developers or any of the thousands of possible kinds of developers who are now thrust into this unfamiliar world where JavaScript is the substrate that holds the web together.

Here’s the thing about best practices: once you become sufficiently experienced, you understand why they are good, and so you can choose not to use them as the situation allows. Your understanding of the language, the ecosystem, or the particular problem at hand lets you view the problem from the same vantage point as the people who came up with those best practices, and so you are free to discard them when the situation merits.

But until your understanding has crystallized, not following those best practices can cost you hours tracking down bugs that never had to exist. Writing code before you have an expert-level understanding is okay, though. It’s okay because it is reality, and there is nothing you can do to change the fact that people with a very tenuous grasp of these technologies are being thrown at hard problems that they will solve one way or another.

Which is why I have to admit that my blood boiled when I read Isaac’s post about automatic semicolon insertion. I don’t want to re-litigate the semicolon debate because I think we were all tired of it before it even began. What I want to highlight is a general attitude that I find very disappointing:

I am sorry that, instead of educating you, the leaders in this language community have given you lies and fear.

By couching it in these terms, it implies that anyone who follows best practices has given in to “lies and fear”! Who wants to be swayed by that?

A more generous interpretation is that the leaders Isaac accuses of spreading lies and fear know the reality: you have a month to ship a JavaScript app, and asking you to digest the ECMAScript spec first is not reasonable. (Because it’s not just automatic semicolon insertion, right? It’s this binding and anonymous functions and a million other features of JavaScript that confuse newcomers.)
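
To pick the canonical example: automatic semicolon insertion quietly inserts a semicolon after a bare return, so a newcomer who puts a returned object literal on the next line gets undefined with no error at all.

```javascript
function brokenConfig() {
  return            // ASI inserts a semicolon here: `return;`
  {
    debug: true     // parsed as a label and an expression, not an object
  };
}

function workingConfig() {
  return {          // opening the brace on the same line avoids the trap
    debug: true
  };
}

brokenConfig();  // undefined
workingConfig(); // { debug: true }
```

This is exactly the kind of silent failure that “always terminate your statements” is meant to protect beginners from, long before they can recite the grammar productions that explain it.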

We can’t front-load complexity and let people sort it out. That way lies madness. We must distill the rules down so that people can be effective, and help them along their journey towards JavaScript mastery. It’s a learning curve, not a learning cliff.

So I have to vehemently disagree with this statement in Isaac’s post:

If you don’t understand how statements in JavaScript are terminated, then you just don’t know JavaScript very well, and shouldn’t write JavaScript programs professionally without supervision…

I find this staggeringly condescending, though maybe that’s just my reading of it. But even if my reading is wrong, it rejects reality. Isaac knows this: the npm team deployed a misconfigured CouchDB instance to production, and it wasn’t really their fault. To suggest they needed “supervision” is absurd. They were not expert-level CouchDB users, but they had a job to do, so they picked the tool and went to town. This is a CouchDB failing: either the best practices were not made clear enough, or the default settings were not strict enough.

So let’s learn from our mistakes and recognize that the bulk of JavaScript developers are not experts. Let’s stop tearing down the hard work our predecessors have done to shepherd JavaScript from a toy language into the language that is revolutionizing how web applications are built. Let’s embrace good engineering practices, and have the sense to know when to ignore them, rather than abandoning them altogether to prove how smart we are. We’re all in this together.

Ember.js Resources

I come across a lot of really interesting links related to Ember.js, but often don’t have anywhere useful to put them, or don’t really know how to describe the thread that holds them all together. So here is my linkdump post, which I will update as I remember things, that contains useful stuff for Ember developers.

Ember.js Todos
Sample todo application. Particularly useful for its heavily-commented Assetfile. Great starting point if you’d like to know how to use Rake::Pipeline together with Ember.

SproutCore MVC vs. Rails MVC
Written back in the SproutCore 2.0 days, Greg Moeck’s seminal post describes how, despite the same name, Rails’ concept of MVC differs radically from MVC systems like SproutCore, Ember.js, and Cocoa.

Ember Skeleton
Boilerplate for starting a new Ember.js project, using Rake::Pipeline to assemble and serve your files.

Beginning Ember.js on Rails
Dan Gebhart’s three-part tutorial, which eases you gently into using Ember.js with a Rails backend. Resources are loaded with Ember REST, and assets are managed with Rails’ asset pipeline.

Using SproutCore 2.0 with jQuery UI
Yehuda’s article on using jQuery UI with what was then SproutCore 2.0. Note that, if following along with the code samples, there are a few changes that you will need to make that are pointed out in the comments at the bottom. While useful specifically for people wanting to use jQuery UI, the article is more broadly useful as it serves as a template for anyone who wants to write a bridge between Ember.js and non-bindings-aware JavaScript libraries.

More soon…

Dizzying But Invisible Depth

Jean-Baptiste Queru’s remarkable piece on the sheer amount of abstraction mankind has built to be able to load the Google homepage:

For non-technologists, this is all a black box. That is a great success of technology: all those layers of complexity are entirely hidden and people can use them without even knowing that they exist at all. That is the reason why many people can find computers so frustrating to use: there are so many things that can possibly go wrong that some of them inevitably will, but the complexity goes so deep that it’s impossible for most users to be able to do anything about any error.

Dizzying but invisible depth

AMD is Not the Answer

Every so often, we get requests to make Ember.js support AMD (asynchronous module definition). Until today, I had yet to hear anyone articulate why the advantages outweighed the (in my opinion) many disadvantages. Then, James Burke wrote an article called Simplicity and JavaScript modules that has so far done the best job outlining why AMD is good. However, I disagree with many of the assumptions and find many of the arguments outright contradictory. So, while James is both smart and I’m sure good-looking (and I agree with his comments on CommonJS), here are the reasons I think he is wrong about AMD.

Build Tools Are Okay

However, for those of us who came from Dojo, requiring a server tool or compile step to just develop in JS was a complication. I’m going to mangle Alex Russell’s quote on this, but “the web already has a compile step. It’s called Reload”.

I have a lot of respect for Alex, but if this is his current opinion, he’s wrong. The app development world of the Dojo era is different from the world of today, and it’s important that we are driven by current realities rather than stale institutional knowledge. Every serious application development environment in the world has a build step. If you are making a small application, then fine, I agree you don’t need build tools. But you probably don’t need AMD or a script loader, either. If your app is truly simple, you will be fine with adding a few <script> tags to your page. I’m involved in several large web application projects right now, and universally they use a build procedure of some kind. CoffeeScript compilation and minification are two examples of legitimate reasons to have a build step. Packaging your modules is fine too.

Many HTTP Requests

AMD expects every module to be contained in a separate file. As web app development gets more rigorous, developers want to be able to organize their files in the same way they might in Ruby or Cocoa. For example, all of the projects I’m working on easily have hundreds of files. Each view is a separate file, each template is a separate file, each controller is a separate file, and so on. With AMD, each additional file means additional HTTP overhead. But more importantly, many users are now on high-latency but high-bandwidth wireless connections. Minimizing the number of trips to the server is important. James addresses this in his blog post:

Loading individual modules piecemeal is a terrifically inefficient way to build a website. Because of this, there’s the great RequireJS optimizer, which will turn your modules into ordinary packages.

But wait, I thought we were just arguing that build tools are bad? Here’s the thing: if your app is simple, you don’t need build tools or module loading. If your app is more sophisticated, you’ll need both; so why not let the build tools handle the packaging for you? AMD proponents also argue that serving files individually makes it easier to debug. We did this in the SproutCore 1.0 days and, though it was extremely slow in development (perhaps because our HTTP server was single-threaded), it was effective. However, enough browsers support the sourceURL convention that in our current projects we just include a function like this:

  function appendSourceURL(data, path) {
    return data + "\n//@ sourceURL=" + path;
  }

Too Much Ceremony

AMD requires you to wrap all of your code inside an anonymous function that is passed to the define method:

define(['dep1', 'dep2'], function (dep1, dep2) {
    //Define the module value by returning a value.
    return function () {};
});

In my opinion, having to write this out for every file sucks. I prefer to call require for each dependency. Perhaps it’s a trivial difference, but it means removing a dependency is as easy as deleting a line, and adding a new dependency is as easy as adding one. It’s less error-prone than editing an array of strings.

You can achieve a similar syntax with AMD, like this:

define(function() {
  window.MyApp = Ember.Application.create();
});

But at that point, why bother with loading a large AMD script loader? We use an implementation of a script loader that is under 50 lines of code.

The Alternative

What we’ve been using for our projects is a system that takes the source code for each file and wraps it as a string, as described by Tobie Langel in his post Lazy evaluation of CommonJS modules. All of the source code is downloaded in one HTTP request (great for high-latency 3G connections) and is saved in memory as a string so parsing is fast. Those strings are saved in a hash keyed on each individual file’s name. If you were to look at our application.js file, you would see something similar to this:

Files = {};
Files['main.js'] = "require(\"controllers/app_controller.js\");";
Files['controllers/app_controller.js'] = "alert(\"Hello world!\");";

The main JavaScript file is executed, and dependencies are resolved at runtime. This allows you to conditionally evaluate code based on the execution environment. For example, imagine you had a file that defined keyboard shortcuts. You could decide not to pay the parsing cost for that file if you detected that you were running on a touch platform.

We also have a system for packaging up many files into a single module, which can be loaded lazily. This helps us keep the initial payload within mobile devices’ cache limits.

The best part of this system is that, as an application developer, there is very little ceremony involved. If I need some functionality, I just place a require statement. If it has already been loaded, the require is a no-op. I make this argument a lot, but going from “a little friction” to “no friction” often makes the difference between good habits and bad habits. In this case, I can open a new file and start typing code, instead of worrying about setting it up as a module.

AMD has many features, but I think the extra markup and the complex loaders needed to support it outweigh its benefits. Out of the box, it is not ready for production; it needs build tools to make it truly useful. If build tools are required anyway, a much simpler solution should fit most developers’ needs.

I am looking forward to your blog post pointing out the flaws in my argument and excoriating me for making a fool of myself in public.

A very sincere thank you to James Burke, Tim Branyen and Yehuda Katz for reviewing this post and helping me better understand AMD. Please consider this a strong opinion, weakly held.

Tilde’s Pairing Setup

Yehuda Katz and I spend the majority of our time pair programming on client projects and on our open source projects like Ember.js. As in any profession, it’s important to be “foaming at the mouth crazy” about your tools.

Since Tilde moved into our new offices, we’ve finally put together what I think is our dream setup, which I wanted to share.


The crown jewel of the setup is a brand new 3.4GHz 27″ iMac with a 256GB SSD and 1TB hard drive. This thing absolutely screams, and is a joy to use. On our last pairing setup (a Mac mini with a 5400rpm hard drive), we would cringe when we had to boot up VMware to test Internet Explorer. On this thing, you can’t even tell that you’re running Windows virtualized. It’s insanely responsive.

We prefer being able to face each other, and modeled our setup after Josh Susser’s post about his pairing setup at Pivotal. Sitting side-by-side has always felt awkward to me, and the ability to easily have a face-to-face conversation is critical. This also allows us to more easily deliver high-fives after slaying a particularly hard bug.

We orient our desks at a right angle, and plug an external 27″ Thunderbolt Display into the iMac with mirroring turned on. This allows us both to see the same content at the same resolution. We both get a Bluetooth keyboard, and Yehuda uses a Magic Trackpad while I prefer the Magic Mouse. The ability to have your own space to keep drinks and other personal belongings is a big win. I can also customize the keyboard to my liking, since I prefer mapping Caps Lock to the Control key and Yehuda hates it. When we’re not pairing, we can use either the Thunderbolt Display or the iMac as external monitors for our laptops. The display also serves as a nice USB hub for charging iPhones or other devices.

Perhaps most critically, we hung the biggest whiteboard we could find on the wall next to our station. There are few problems a good whiteboard diagram can’t fix.


Yehuda and I both use MacVim with Janus as our editor. One major point of contention was file navigation: I prefer NERDTree and Yehuda prefers alloy’s fork of MacVim with a native file browser. (The native file browser drives me absolutely fucking crazy because I can’t navigate using just the keyboard; I have to pick up the mouse.)

We almost came to blows over this, but now that we have a 27″ display, we agreed to just use both.

Marital Bliss


We use zsh with oh-my-zsh. Our shell displays both the current git branch (if applicable), as well as the current Ruby version. Both of these have saved us countless hours of hair-pulling. We manage Ruby versions with Wayne Seguin’s indispensable rvm, the Ruby version manager.

For communication, we use Propane to hang out in various Campfire rooms and iChat to log on to AIM, where we have separate pair accounts so that there is no feeling of invaded privacy. We use Dropbox and Lion’s AirDrop feature to shuttle files back and forth, and use the surprisingly powerful speakers in the Thunderbolt Display to do our standups via Skype.

I couldn’t be happier with our setup. If you haven’t tried pairing, I recommend it; while it sometimes feels slower, you avoid many of the obvious mistakes that one can make when spending hours in isolation. I also think that the software you write tends to be better if it’s constantly being sanity-checked by someone else.

If you have your own pairing setup, what improvements would you make to ours?

Edit: I forgot the most important component of a successful pairing setup:

Edit 2: Updated to clarify that the second display is mirrored.

An Uphill Battle

Where is the web’s Loren Brichter?1 Why does it take big teams, big budgets, and long timelines to deliver web apps whose functionality and UI approach those of an iPhone app put together by one or two people?

If you’re a developer who is obsessive about building a quality user experience, you are most likely to create an iOS application. One reason is that Cocoa operates at a higher level of abstraction than raw web primitives. It means that iOS developers can think more about how their UI or features should work, instead of how they should be implemented. The human brain is like a buffer: at a certain point, every new concept pushes something else off the other end. For web developers, buffer overflow is common as they grapple with cross-browser bugs or APIs that were clearly designed by committee.

The web needs better abstractions. There are, however, only a limited number of abstractions you can implement in under 5k of code. I’m wary of cultivating an attitude in the community that the only problems worth solving are those that can be solved in under 5k (or 10k, or 20k, or…). I will take my share of the blame, and admit that many frameworks that tried to take on large problems ended up out-of-touch and crufty.

The web won’t have a singular answer, and that’s fine. But we have to avoid fostering a culture where people are afraid to tackle hard problems because they’ll be seen as architecture astronauts. We shouldn’t let the mistakes of the past make us timid about learning and moving forward.


Yesterday on Twitter, I posted:

Just wanted to point out that Backbone is still listed on, even though it’s >5k if you count dependencies.
Which is fine; let’s just be upfront about the fact that is more akin to the cool kids table than a useful directory.

In retrospect, this came off crankier than I intended. Several people contacted me privately (and publicly!) and weren’t sure why I was making a stink over something that, to them, seemed trivial. And while in the scheme of things it is trivial, I think this arbitrary line in the sand we as a community have drawn is harmful to the future of the web.

I want the web to win. Everywhere.

So I want people to be aware of the abstractions we’re giving up to hit our 5k target. If some things get included that are over 5k, and other things get rejected that are under the 5k limit, it sends a very mixed message to new developers about what is possible. If we’re going to emphasize small codebases, let’s be honest about the limitations that entails.

If we don’t get our act together soon, proprietary platforms are going to become entrenched. Every new device that comes out is an opportunity that is ours to lose. The community has to figure out how we educate and recruit developers into the web fold. People will disagree with me about the best way to accomplish the task, and that’s great. But I’d like to compete on things like developer productivity and finished products, not insinuation that anything over a certain size is inherently unsuitable for the web.

It’s human nature that once we start assigning labels to things, we think about which side of the label we’re on. I think that positioning microlibraries as a separate “thing” is a bad idea. There are just tools that fall on different sides of the size spectrum.

Writing efficient code is important. But when we decide that certain web abstractions are a bridge too far, we’re actively discouraging precisely the developers that we need the most. When the next Loren Brichter comes along, I want to be able to say he or she is a web developer.

Thanks to Majd Taby and Yehuda Katz for reviewing this post. I tweet as @tomdale if you’d like more timely updates.

  1. Loren Brichter’s company, atebits, created both Tweetie for Mac and Tweetie for iOS, which were acquired by Twitter.

Imagine a Beowulf Cluster of JavaScript Frameworks

Thomas Fuchs recently wrote about the advantages of using JavaScript micro-frameworks:

I for one welcome our new micro-framework overlords—small frameworks and libraries that do one thing only, not trying to solve each and every problem, just using pure JavaScript to do it. Easy to mix and match, and to customize to your needs.

He also pans “full-stack” frameworks:

These libraries are HUGE, being 100+ kilobytes in code size, and even minified and gzipped they are pretty big. … A whopping 100% of sites or apps using these libraries don’t use all the features they provide.

His argument sounds good, in theory. The idea that I can pick and choose among a number of well-written and focused JavaScript libraries to assemble the ultimate bespoke JavaScript environment seems very enticing indeed.

The golden rule of software, however, is that unless it is designed to work well together, it usually won’t. Mr. Fuchs has apparently never heard of dependency hell. Dustin Diaz has done a great job of putting together many of these micro-frameworks with Ender.js, but as a curator, he has to rely on the original author if he wants to make a change.

This attitude also continues the trend of giving developers overwhelming choice. Convention-over-configuration and emphasis on developer ergonomics are key to getting real work done. Of course Mr. Fuchs is able to tell you which JavaScript library will precisely match your requirements—he writes JavaScript libraries in his spare time! For the rest of us, we don’t really want choice; we just want what’s best.

If choice were more important than well-tested integration, the majority of attendees at last week’s CodeConf would have been carrying ThinkPads instead of MacBook Pros. Instead, developers want a happy path that they can be reasonably assured will work well. If it doesn’t meet their needs, they want the ability to augment or replace the components that make up their integrated system.

Here’s the reality: As web applications get more complex, developers end up needing all of the miscellaneous nuts and bolts that they thought they would never use. Take Flow for example. Flow is a killer task manager web application built on Backbone.js. Backbone.js often advertises that it is only 3.9kb minified and compressed. How much of Flow’s nearly 900k of (minified!) JavaScript do you think is the application developers filling in the deficiencies in Backbone?

SproutCore is often dismissed because of its size. Of course minimizing size is important. But if it takes a megabyte of JavaScript to ship the features you want, better the bulk of that code live in a framework instead of your application. Living in a framework means that it has been reviewed and optimized by an entire community of developers, and improves without any effort on your part. New Twitter currently clocks in at over a megabyte of JavaScript:

To the best of my knowledge, Twitter is not using any third-party libraries except for Mustache and jQuery. That means that a huge amount of JavaScript was written by Twitter engineers just to provide the framework in which the application runs. This isn’t stuff specific to Twitter. This is, for example, code that determines how to propagate changes to the DOM when the data model changes via XHR—a problem that all JavaScript developers face.
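That glue is easy to underestimate. Here’s a deliberately toy sketch (none of this is Twitter’s actual code; every name is invented for illustration) of the kind of model-to-DOM plumbing each application ends up writing for itself when the framework doesn’t provide it:

```javascript
// A hand-rolled observable model: the part every app reinvents.
function Model(attrs) {
  this.attrs = attrs;
  this.listeners = [];
}
Model.prototype.set = function(key, value) {
  this.attrs[key] = value;
  // Notify every listener so views can react to the change.
  this.listeners.forEach(function(fn) { fn(key, value); });
};
Model.prototype.on = function(fn) { this.listeners.push(fn); };

// The "view": rebuilds its markup whenever any property changes.
function renderTweet(model) {
  return '<div class="tweet">' + model.attrs.text +
         ' (' + model.attrs.retweets + ' RTs)</div>';
}

var tweet = new Model({ text: 'Hello', retweets: 0 });
var html = renderTweet(tweet);

// Manual wiring: the developer, not the framework, keeps DOM and data in sync.
tweet.on(function() { html = renderTweet(tweet); });

tweet.set('retweets', 42);
// html is now '<div class="tweet">Hello (42 RTs)</div>'
```

Multiply that wiring by every model, every view, and every edge case (batching, ordering, teardown), and a megabyte of in-house framework code stops being surprising.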

SproutCore may be big, but it’s because web applications are getting more complex every day, and we all have a common set of complex problems to solve. Oversimplifying the problem is disingenuous and forces each application to fend for itself. Mr. Fuchs says:

Small code is good code: the fewer lines in your source, the fewer bugs you’ll have, plus it will download and execute faster.

If creating a modern web app requires a megabyte of JavaScript, would you rather write, debug and optimize all of that code yourself?

It’s time to lay to rest the argument that full-stack libraries are unable or unfit to solve real problems. On the contrary, they are best suited to tackle the problems faced by modern web app developers. The criticisms full-stack frameworks receive today are eerily similar to those levied at jQuery several years ago. Unless you are planning to discontinue your application in six months, it’s time to start developing for the future.

Thanks to Charles Jolley for reviewing this post. I tweet at @tomdale if you want more timely updates.

The Future of the View Layer

The views expressed below are my own and not those of the SproutCore core team.

Recently, Yehuda Katz and I have been making some changes to the SproutCore view layer. We’ve pulled out basic functionality into a new class called SC.CoreView, and broken the old SC.View (a behemoth) into several files based on the features they add. We also introduced SC.TemplateView, a subclass of SC.CoreView, that allows you to use Handlebars templates.

Changes like these can be scary because they introduce uncertainty, but I want to assure you that the project is still headed in the same direction; we just have two additional goals:

  • Reduce the learning curve for new developers
  • Improve load time on low-power, low-bandwidth devices (e.g., iPad and Android tablets)

It turns out that these view changes put us on the path towards meeting both goals.

The desktop framework, which can best be described as SproutCore’s widget library, is its largest by far. The major reason for its size is that we have re-implemented almost all desktop controls in JavaScript and HTML. For example, SproutCore menu items flash when selected, like they do in Mac OS X, and we have popovers that can anchor to another on-screen element, like in iOS.

The downside to this sophistication is both an increase in code size and an increase in cognitive overhead for developers new to SproutCore. In particular, traditional web developers have a hard time learning how views work; they are better off reading the Cocoa guide to views than a book on JavaScript and the DOM.

Today, when we ask developers to write SproutCore applications, we ask them to throw away their existing expertise and commit entirely to the framework. If in six months they decide it won’t meet their needs, they have to abandon almost all of their work. And it’s almost impossible to test the waters with an existing codebase, since developers must throw away their entire view layer.

Many developers don’t want or need native-style controls. They want to throw together a quick application that still feels like a website. We can bring value to these developers, too. While SproutCore’s view layer complements the controller and model layers, it doesn’t require them—indeed, the point of MVC is to encapsulate these concerns. If we can bring the sophistication of the data store and the robustness of statechart-driven apps to everyone, we should.

Here’s the thing about templates, though: they’re a better way of doing what we’ve been doing all along.
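To make that concrete, here is a toy sketch of the core idea: a view declares its markup and lets a template engine fill in the values. The `{{name}}` mini-syntax below is invented for illustration; the real SC.TemplateView uses Handlebars:

```javascript
// A minimal "template view": render() fills a template with the view's
// properties instead of issuing DOM calls or concatenating strings by hand.
function TemplateView(template, props) {
  this.template = template;
  this.props = props;
}
TemplateView.prototype.render = function() {
  var props = this.props;
  // Replace each {{name}} placeholder with the corresponding property.
  return this.template.replace(/\{\{(\w+)\}\}/g, function(_, name) {
    return props[name];
  });
};

var view = new TemplateView('<span class="label">{{title}}</span>',
                            { title: 'Item1' });
view.render(); // '<span class="label">Item1</span>'
```

The view author states *what* the markup is; *how* it gets built (and rebuilt when properties change) becomes the template layer’s problem.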

At its core, a SproutCore view is a DOM representation plus a managed layout. You need to build that DOM representation somehow, which means either multiple DOM API calls, or lots of string concatenation operations.

For example, here’s the render method for SC.RadioView:

render: function(dataSource, context) {
  var theme = dataSource.get('theme');
  var isSelected = dataSource.get('isSelected'),
      width = dataSource.get('width'),
      title = dataSource.get('title'),
      value = dataSource.get('value'),
      ariaLabeledBy = dataSource.get('ariaLabeledBy'),
      ariaLabel     = dataSource.get('ariaLabel');

  context.setClass({
    active: dataSource.get('isActive'),
    mixed: dataSource.get('isMixed'),
    sel: dataSource.get('isSelected'),
    disabled: !dataSource.get('isEnabled')
  });

  // accessibility
  context.attr('role', 'radio');
  context.attr('aria-checked', isSelected);
  if (ariaLabel && ariaLabel !== "") {
    context.attr('aria-label', ariaLabel);
  }
  if (ariaLabeledBy && ariaLabeledBy !== "") {
    context.attr('aria-labelledby', ariaLabeledBy);
  }
  if (width) context.css('width', width);

  context.push('<span class="button"></span>');

  context = context.begin('span').addClass('sc-button-label');
  theme.labelRenderDelegate.render(dataSource, context);
  context = context.end();
}
For those keeping track at home, that’s 34 lines of code to generate this:

<div class="sc-radio-button mixed" aria-checked="true" role="radio" index="0">
  <span class="button"></span>
  <span class="sc-button-label">Item1</span>
</div>

And that’s just to create the DOM representation. Updating it takes another 27 lines of code, and it’s not even efficient; if a single property changes, it re-renders the entire label portion of the control.

Imagine if we could instead tell the radio view to use a template:

<div {{bindClass "classNames isActive isMixed isSelected isEnabled"}}
     {{bindAttr role="ariaRole"}}>
  <span class="button"></span>
  <span class="sc-button-label">
    {{renderDelegate "label"}}
  </span>
</div>
This is both shorter and easier to understand. But perhaps most importantly, good abstractions allow us to make optimizations once. Instead of each view trying (and failing) to optimize DOM access, and in the process introducing a rat’s nest of complicated logic, we can move the complexity to the templating layer and have every view ever written benefit from it.

We could, for example, use different optimizations for Internet Explorer and Chrome. Because views are now stating intent instead of atomic DOM operations, we can choose the fastest path based on the performance characteristics of the current environment. Decoupling allows implementers to focus on behavior and appearance, not the intricacies of DOM.
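As a sketch of what that decoupling buys (the strategy names and the detection flag here are invented, not SproutCore’s actual code), a framework that knows the view’s intent can swap rendering strategies wholesale:

```javascript
// Two interchangeable ways to turn the same declared intent into markup.
var strategies = {
  // Fast path on engines where building one big string and setting
  // innerHTML once is cheapest.
  innerHTML: function(nodes) {
    return nodes.map(function(n) { return '<li>' + n + '</li>'; }).join('');
  },
  // Alternate path, standing in for per-node DOM operations
  // (document.createElement and friends) on engines that favor them.
  perNode: function(nodes) {
    var out = '';
    nodes.forEach(function(n) { out += '<li>' + n + '</li>'; });
    return out;
  }
};

// The framework, not the view author, picks the path for the
// current environment. Views never know which one ran.
function chooseStrategy(env) {
  return env.fastInnerHTML ? strategies.innerHTML : strategies.perNode;
}

var render = chooseStrategy({ fastInnerHTML: true });
render(['a', 'b']); // both strategies produce identical markup
```

Because the output is identical either way, the optimization can be made once, in the framework, and every template-backed view inherits it for free.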

My point is this: template views are not meant as a replacement for SC.View and our rich library of controls. In fact, template views can serve to make desktop-style controls even better. If controls are easier to implement, more people will make them, and then we all benefit.

Template views are designed to integrate with existing applications, or to stand on their own if you need them to. As always, it’s important to pick the right tool for the job.


Yehuda and I have recently made fixes to SC.TemplateView that allow it to be used inside traditional view hierarchies. An updated prerelease gem with these changes is in the works, or you can find them in the master branch on GitHub.

We think there will be a lot of opportunity for existing SproutCore apps to take advantage of templates. For example, think of the many modern Mac OS X applications that use a WebView to integrate HTML content.

If you’re starting a new SproutCore application, you can decide which works best for you. If you want a web-style application or a desktop-style application (or some combination of the two), we want you to feel like a first-class citizen. And if you opt for desktop-style, we will work hard to ensure SC.TemplateView works with Greenhouse when it ships.

I will be discussing this and more at the SF SproutCore meetup, Tuesday, March 15.

Thanks to Chris Swasey, Tyler Keating, Sudarshan Bhat and Peter Wagenet for reviewing this post. I tweet at @tomdale if you want more timely updates.