by Tom Dale
Worst of all, you now have state and behavior for the same task—UI rendering—implemented in two languages and running on two computers separated by a high-latency network. Congratulations, you’ve just signed up to solve one of the hardest problems in distributed computing.
Imagine if you were using an app on your mobile phone and after every tap or swipe, nothing happened until the app sent a request over the network and received a response. You’d be pissed! Yet this is exactly what people have historically been willing to put up with on the web.
Client-side rendered web apps are different. These apps load the entire payload upfront, so once the app boots, it has all of the templates, business logic, etc. necessary to respond instantly to a user's interactions. The only time it needs to confer with a server is to fetch data it hasn't seen before, and frameworks like Ember make it trivially easy to show a loading spinner while that data loads, keeping the UI responsive so the user knows that their tap or click was received. If the data doesn't need to be loaded, clicks feel insanely fast.
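To make that concrete, here's a minimal sketch of the "only fetch data you haven't seen before" idea. This is not Ember's actual store API — the names (`findRecord`, `fetchJSON`) are illustrative — but it shows why repeat interactions feel instant:

```javascript
// Data the app has already seen is returned instantly from an
// in-memory cache; only unseen data triggers a network round trip.
const cache = new Map();

// fetchJSON is a stand-in for a real network call to your API server.
async function findRecord(type, id, fetchJSON) {
  const key = `${type}:${id}`;
  if (cache.has(key)) {
    return cache.get(key); // instant -- no network, no spinner
  }
  // Cache miss: this is the only case where the UI shows a spinner.
  const record = await fetchJSON(`/api/${type}s/${id}`);
  cache.set(key, record);
  return record;
}
```

The first `findRecord('post', 1, …)` pays the network cost; every subsequent call for the same record resolves from memory.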
Say what you will about server-rendered apps: the performance of your server is much more predictable, and more easily upgraded, than the many, many different device configurations of your users. Server-rendering is important to ensure that users who are not on the latest-and-greatest can see your content immediately when they click a link.
In a traditional client-side rendered application, your browser first requests a bunch of static assets from an asset server. Those assets make up the entirety of your app. While all of those assets load, your user usually sees a blank page.
Once they load, the application boots and renders its UI. Unfortunately, it doesn’t yet have the model data from your API server, so it renders a loading spinner while it goes and fetches it.
Finally, once the data comes in, your application’s UI is ready to use. And, of course, future interactions will be very, very snappy.
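The three steps above can be sketched as a single boot sequence. The function names here (`loadAssets`, `bootApp`, and so on) are hypothetical stand-ins for what a framework does for you, not a real API:

```javascript
// The traditional client-side boot sequence: blank page, then spinner,
// then a fully usable UI once the model data arrives.
async function startClientRenderedApp({ loadAssets, bootApp, fetchData, render }) {
  await loadAssets();              // 1. user stares at a blank page
  const app = bootApp();           // 2. app boots...
  render(app, { loading: true });  //    ...and renders a loading spinner
  const model = await fetchData(); // 3. model data comes in from the API server
  render(app, { loading: false, model }); // UI is now ready to use
}
```

Every later interaction skips steps 1 and 2 entirely, which is why subsequent clicks are so snappy — but that first paint is hostage to the whole sequence.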
We do this by running an instance of your Ember app in Node on the server. Again, this doesn’t replace your API server—it replaces the CDN or whatever else you were using to serve static assets.
When a request comes in, we already have the app warmed up in memory. We tell it the URL you’re trying to reach, and let it render on the server. If it needs to fetch data from your API server to render, so long as both servers are in the same data center, latency should be very low (and is under your control, unlike the public internet).
Better yet, because Ember's router uses real <a> tags and avoids the onclick= jiggery pokery, links in the server-rendered HTML work even before the JavaScript has finished loading. More advanced things like D3 graphs won't work, obviously, but hey, better than nothing.
These are early days, and making this work relies heavily on having a single, conventional architecture for your web applications, which Ember offers.
Ultimately, this isn’t about you replacing your API server with Ember. I don’t think I would ever want that.
Instead, recognize that client-side rendering and server-side rendering have always involved performance tradeoffs. Ember's FastBoot is about trying to bend the curve of those tradeoffs, giving you the best of both worlds.
Big thanks to Bustle for sponsoring our initial work on this project. Companies that sponsor indie open source work are the best.