tomdale.net

Make decisions so your users don't have to.

Portland City Hall vs. Ride-sharing: The Lady Doth Protest Too Much

Last Friday, Uber started operating in Portland despite the city government’s stance that ride-sharing services remain illegal.

One of the loudest voices of fear, uncertainty and doubt coming out of City Hall about Uber has been Steve Novick. Mr. Novick is a City Commissioner, and, unfortunately, in charge of the transportation board here.

Let me start by acknowledging that Uber does not make a compelling victim. Their take-no-prisoners attitude serves them well when dealing with local bureaucrats in the pockets of special interests like Mr. Novick, but it can also manifest in much more sinister ways.

But this isn’t about Uber. It’s just as much about other services like Lyft, Sidecar, etc. More importantly, it’s about the right of Portland residents to have a choice in who provides their transportation. Uber happens to be fighting the fight, so it’s inevitable that the unsavoriness of that company becomes part of the conversation, but from my perspective this battle is about unshackling Portland from its antiquated and corrupt taxi system.

Mr. Novick’s feigned outrage that Uber has started operating without government say-so is made all the more hilarious by his assertion that, had they just been a little more patient, the regulations would have been updated to accommodate them.

“We have told Uber and Lyft that they are welcome to offer ideas for regulatory changes. Uber has chosen instead to break the law,” Mr. Novick is quoted as saying.

Mr. Novick, and Mayor Hales, the onus is on you to fix the regulations, not the people you are regulating. And you’ve had plenty of time. I’ve been using Uber in San Francisco since 2010; these services did not suddenly take you by surprise. I travel frequently and Portland is the only city where I can’t rely on Lyft or Uber to get a reliable ride.

“It’s difficult to understand why Portland is now the largest city in the country where ride sharing companies are not able to operate,” said over 40 local restaurant, bar, and hotel owners in a letter to City Hall. “At Feast Portland this year, the number one complaint among the 12,000 attendees from 50 different states and provinces was, ‘lack of reliable taxi service.’”

Cities that value the needs of their citizens over preserving taxi monopolies have already found ways to adjust regulations to protect riders while still offering them the convenience, availability, and reliability of phone-hailed car services.

In New York City, for example, drivers must have a special driver’s license, licensed vehicle, and insurance. If your concern is actually about the welfare of passengers and not just protecting your taxi company cronies, it turns out there are solutions that don’t involve wholesale bans.

While Mr. Novick claims this is about safety, I have never felt unsafe in an Uber or a Lyft. I have had some very sketchy taxi rides, including one where the cabbie nearly got out and engaged in a fist fight with another driver.

The taxi regulation in Portland is a disaster, with demand far exceeding supply on weekends. The number of cabs is capped at 460, or approximately one taxi for every 1,269 people. (Seattle’s ratio is about one taxi per 890 people; San Francisco, one per 533.)

There is a reason every other major city has welcomed these services. Being able to reliably call for a car, even on busy nights, reduces drunk driving, means fewer cars on the road, and improves access to underserved communities.

Mr. Novick says he’s going to throw the book at drivers, impounding the cars of hard-working Americans who are looking to make a little extra cash doing what Americans do best: responding to market demand.

Let him. Once Portland residents get a taste for how transformative services like Uber and Lyft can be, they’ll find it hard to stomach the image of authoritarian thugs putting their boot down on middle-class Americans trying to make a buck, all because unions happen to be Mr. Novick’s biggest donors.

Oh, did I mention that Mr. Novick is in the pocket of the Oregon AFL-CIO? Forget all the blustering about doing what’s best for Portland residents; it’s just theater to please his union masters and protect his political base.

While I wish City Hall had had the foresight to deal with this before it became a legal battle, I’m grateful that, in the story of Portland vs. ride-sharing, we have someone coming out looking like a bigger villain than Uber.

Mozilla’s Add-on Policy is Hostile to Open Source

Mozilla prides itself on being a champion of the open web, and largely it is. But one policy grates more and more: their badly managed add-on review program.

For background, Ember.js apps rely heavily on conventional structure. We wrote the Ember Inspector as an extension of the Chrome developer tools. The inspector makes it easy to understand the structure and behavior of Ember apps, and in addition to being a great tool for debugging, is used to help people learn the framework.

Screenshot of Vine with the Ember Inspector open

Because we tried to keep the architecture browser-agnostic, Luca Greco was able to add support for the Firefox API.

We were excited to support this work because we believe an open web relies on multiple competitive browsers.

However, actually distributing this add-on to users highlights a stark difference between Mozilla and Google.

Though there was an initial review process for adding our extension to the Chrome Web Store, updates after that are approved immediately and roll out automatically to all of our users. This is wonderful, as it lets us get updates out to Ember developers even on our aggressive six-week release cycle. Basically, once we had established credibility, Google gave us the benefit of the doubt when it came to incremental updates.

Mozilla, on the other hand, mandates that each and every update, no matter how minor, be hand vetted. Unfortunately, no one at Mozilla is paid to be a reviewer; they depend on an overstretched team of volunteers.

Because of that, our last review took 35 days. After over a month, our update was sent back to us: rejected.

Why?

Your version was rejected because of the following problems:

  1. Extending the prototype of native objects like Object, Array and String is not allowed because it can cause compatibility problems with other add-ons or the browser itself. (data/toolbar-button-panel.html line 40 et al.)
  2. You are using an outdated version of the Add-ons SDK, which we no longer accept. Please use the latest stable version of the SDK: https://developer.mozilla.org/Add-ons/SDK
  3. Use of console.log() in submitted version.

Please fix them and submit again. Thank you.

This is ridiculous for a number of reasons. For one, not being able to use console.log is a silly reason for rejection and, as Sam Breed points out, is even included in their add-on documentation. Extending prototypes should not matter at all since the add-on runs in its own realm, and we’ve been doing this since day one. Getting rejected now feels extremely Apple-esque (or Kafka-esque, but I repeat myself).
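For readers unfamiliar with the practice, here’s the sort of native-prototype extension the reviewer flagged. This is an illustrative sketch, not the Ember Inspector’s actual source; the point is that inside an add-on’s own sandboxed scope, code like this never touches the page’s (or other add-ons’) built-ins:

```javascript
// Illustrative sketch only — not the Ember Inspector's actual code.
// This is the general pattern the reviewer flagged: adding a helper
// method to a built-in's prototype. Guarded so it won't clobber an
// existing method of the same name.
if (!Array.prototype.compact) {
  Array.prototype.compact = function () {
    // Drop null and undefined entries, keeping everything else.
    return this.filter(function (item) {
      return item !== null && item !== undefined;
    });
  };
}

var cleaned = [1, null, 2, undefined, 3].compact(); // [1, 2, 3]
```

In a shared global scope this is risky (two scripts defining conflicting `compact` methods), which is presumably the rationale for the rule; the objection above is that an add-on’s sandboxed realm doesn’t share those globals in the first place.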

But the bottom line is that waiting a month at a time for approval is just not tenable, especially since Google has proven that the auto-approval model works just fine. Worst of all, many Ember.js developers who prefer Firefox have accused us of treating it like a second-class citizen, since they assume the month+ delays are our doing. Losing goodwill over something you have no control over is incredibly frustrating.

At this point, I cannot in good conscience recommend that Ember developers use the Ember Inspector add-on from the Mozilla add-ons page. Either compile it from source yourself and set a reminder every few weeks to update it, or switch to Chrome until this issue is resolved.

There are many good people at Mozilla, many of whom I consider friends, and I know this policy is as upsetting to them as it is to me. Hopefully, you can use this post to start a conversation with the decision-makers who are keeping this antiquated, open-source-hostile policy in place.

Update

Some people have accused me of writing a linkbaity headline, and that this issue has nothing to do with open source. Perhaps, and apologies if the title offended. The reason it feels particularly onerous to me is that, as an open source project, we’re already stretched thin—it’s a colossal waste of resources to deal with the issues caused by this policy. If we were a for-profit company, we could just make it someone’s job. But no one in an open source community wants to deal with stuff like this on their nights and weekends.

Maybe Progressive Enhancement is the Wrong Term

Many of the responses to my last article, Progressive Enhancement: Zed’s Dead, Baby, were that I “didn’t get” progressive enhancement.

“‘Progressive enhancement’ isn’t ‘working with JavaScript disabled,'” proponents argued.

First, many, many people conflate the two. Sigh, JavaScript is an exercise in lobotomizing browsers and then expecting websites to still work with them. This is a little bit like arguing that we should still be laying out our websites using tables and inline styles in case users disable CSS. No. JavaScript is part of the web platform; you don’t get to take it away and expect the web to work.

So, maybe it is time to choose a new term that doesn’t carry with it the baggage of the past.

Second, many people argued that web applications should always be initially rendered on the server. Only once the browser has finished rendering that initial HTML payload should JavaScript then take over.

These folks agree that 100% JavaScript-driven apps (like Bustle) are faster than traditional, server-rendered apps after the initial load, but argue that any increase in time-to-initial-content is an unacceptable tradeoff.

While I think that for many apps, it is an acceptable tradeoff, it clearly isn’t for many others, like Twitter.

Unfortunately, telling people to “just render the initial HTML on the server” is a bit like saying “it’s easy to end world hunger; just give everyone some food.” You’ve described the solution while leaving out many of the important details in the middle. Doing this in a sane way that scales up to a large application requires architecting your app in such a way that you can separate out certain parts, like data fetching and HTML rendering, and do those just on the server.

Once you’ve done that, you need to somehow transfer the state that the server used to do its initial render over to the client, so it can pick up where the server left off.
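One common way to do that hand-off — a hedged sketch with hypothetical helper names, not any framework’s real API — is to serialize the bootstrap data into the rendered HTML itself, so the client can read it instead of refetching:

```javascript
// Hypothetical helpers sketching the state hand-off — not Ember's or
// Rendr's actual API.

// On the server: render the page, then embed the data used for the
// initial render as a JSON payload inside the HTML.
function renderPage(html, bootstrapData) {
  // Escape "<" so user-supplied data can't close the script tag early.
  var json = JSON.stringify(bootstrapData).replace(/</g, "\\u003c");
  return html.replace(
    "</body>",
    '<script id="bootstrap" type="application/json">' + json +
      "</script></body>"
  );
}

// On the client: read the embedded payload so the app can pick up
// exactly where the server left off, without refetching.
function readBootstrapData(doc) {
  var el = doc.getElementById("bootstrap");
  return el ? JSON.parse(el.textContent) : null;
}
```

The `<` escaping matters: without it, data containing `</script>` could break out of the embedded JSON and inject markup into the page.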

You can do this by hand (see Airbnb’s Rendr), but it requires so much manual labor that any application of real complexity would quickly fall apart.

That being said, I agree that this is where JavaScript applications are heading, and I think Ember is strongly positioned to be one of the first to deliver a comprehensive solution to both web spidering and initial server renders that improve the “time to first content.” We’ve been thinking about this for a long time and have designed Ember’s architecture around it. I laid out our plan last June in an interview on the Herding Code podcast. (I start talking about this around the 19:30 mark.)

I think that once we deliver server-side renders that can hand off seamlessly to JavaScript on the client, we will be able to combine the speed of initial load times of traditional web applications with the snappy performance and superior maintainability of 100% JavaScript apps like Bustle and its peers.

Of course, there are many UI issues that we still need to figure out as a community. For one thing, server-rendered apps can leave you with a non-functional Potemkin village of UI elements until the JavaScript finishes loading, leading to frustrating phantom clicks that go nowhere.

Just don’t dismiss 100% JavaScript apps because of where they are today. The future is coming fast.

Progressive Enhancement: Zed’s Dead, Baby

Image: “Zed’s dead, baby”

A few days ago, Daniel Mall launched a snarky tumblr called Sigh, JavaScript. I was reminded of law enforcement agencies that release a “wall of shame” of men who solicit prostitutes.1 The goal here is to publicly embarrass those who fall outside your social norms; in this case, it’s websites that don’t work with JavaScript disabled.

I’ve got bad news, though: Progressive enhancement is dead, baby. It’s dead. At least for the majority of web developers.

The religious devotion to it was useful in a time when web development was new and browsers were still more like bumbling toddlers than the confident, lively young adults they’ve grown to become.

Something happened a few years ago in web browser land. Did you notice it? I didn’t. At least not right away.

At some point recently, the browser transformed from being an awesome interactive document viewer into being the world’s most advanced, widely-distributed application runtime.

Developer communities have a habit of crafting mantras that they can repeat over and over again. These distill the many nuances of decision-making into a single rule that the majority of people can follow and do approximately the right thing. This is good.

However, the downside of a mantra is that the original context around why it was created gets lost. They tend to take on a sort of religious feel. I’ve seen in-depth technical discussions get derailed because people would invoke the mantra as an axiom rather than as having been derived from first principles. (“Just use bcrypt” is another one.)

Mantras are useful for aligning a developer community around a set of norms, but they don’t take into account that the underlying assumptions behind them change. They tend to live on a little bit beyond their welcome.

Many proponents of progressive enhancement like to frame the issue in a way that feels, to me, a little condescending. Here’s Daniel Mall again in his follow-up post:

Lots of people don’t know how to build sites that work for as many people as possible. That’s more than ok, but don’t pretend that it was your plan all along.

Actually, Daniel, I do know how to build sites that work for as many people as possible. However, I’m betting my business on the fact that, by building JavaScript apps from the ground up, I can build a better product than my competitors who chain themselves to progressive enhancement.

Take Skylight, the Rails performance monitoring tool I build as my day job. From the beginning, we architected it as though we were building a native desktop application that just happens to run in the web browser. (The one difference is that JavaScript web apps need to have good URLs to not feel broken, which is why we used Ember.js.)

To fetch data, it opens a socket to a Java backend that streams in data transmitted as protobufs. It then analyzes and recombines that data in response to the user interacting with the UI, which is powered by Ember.js and D3.

A short movie probably illustrates what I’m talking about best:

What we’re doing wasn’t even possible in the browser a few years ago. It’s time to revisit our received wisdom.

We live in a time where you can assume JavaScript is part of the web platform. Worrying about browsers without JavaScript is like worrying about whether you’re backwards compatible with HTML 3.2 or CSS2. At some point, you have to accept that some things are just part of the platform. Drawing the line at JavaScript is an arbitrary delineation that doesn’t match the state of browsers in 2013.

In fact, Firefox recently entirely removed the ability to disable JavaScript, a move I applaud them for. (They also removed the <blink> tag at the same time—talk about joining the future.)

Embracing JavaScript from the beginning will let you build faster apps that provide UIs that just weren’t possible before. For example, think back to the first time you used Google Maps after assuming MapQuest was the best we could do. Remember that feeling of, “Holy crap, I didn’t know this was possible in the browser”? That’s what you should be aiming for.

Of course, there will always be cases where server-rendered HTML will be more appropriate. But that’s for you to decide by analyzing what percentage of your users have JavaScript disabled and what kind of user experience you want to deliver.

Don’t limit your UI by shackling yourself to outmoded mantras, because your competitors aren’t.

From Daniel’s post:

And sometimes, we don’t realize that “only for people who have JavaScript enabled” also means “not for anyone with a Blackberry” or “not for anyone who works at [old-school organization]” or “not for people in a developing country” or “not for people on the Edge network.”

If those are important parts of your demographic, fine. Run the numbers. But I do take issue with Daniel’s last claim here, about Edge networks.

What I’ve found, counter-intuitively, is that apps that embrace JavaScript actually end up having less JavaScript. Yeah, I know, it’s some Zen koan shit. But the numbers speak for themselves.

For example, here’s the Boston Globe’s home page, with 563kb of JavaScript:

Screenshot of the Boston Globe website

And here’s Bustle, a recently-launched Ember.js app. Surprisingly, this 100% JavaScript rendered app clocks in at a relatively petite 141kb of JavaScript.

Screenshot of bustle.com

If you’re a proponent of progressive enhancement, I encourage you to really think about how much the browser environment has changed since the notion of progressive enhancement was created. If we had then what we have now, would we still have made the same choice? I doubt it.

And most importantly: Don’t be ashamed to build 100% JavaScript applications. You may get some incensed priests vituperating you in their blogs. But there will be an army of users (like me) who will fall in love with using your app.

Thanks to Yehuda Katz for reviewing this draft. Tell me how mad I just made you: @tomdale

  1. It is a coincidence that the police commissioner is also named Thomas Dale.

San Francisco, I Love You But You’re Bringing Me Down

It’s impossible to argue that San Francisco hasn’t changed my life dramatically. When I moved to the Bay Area four years ago, I was an inexperienced kid who was working at a Genius Bar in an Apple Store. I didn’t know JavaScript at all—hell, I could barely program and my useless liberal arts degree was doing nothing for me.

Since then, I’ve helped found a company that, I’m proud to say, is both profitable and invests serious capital back into open source software. On the way to get coffee, I regularly bump into people that are changing the way I think about technology. I’ve had the opportunity to travel the world and share my half-baked ideas with other developers. The sense of excitement in the air is palpable—the sense that we’re always on the cusp of something big.

That excitement has attracted plenty of investment dollars, and it has a dark side. Enough ink has been spilled whining about how wealthy tech people are ruining the city. It’s bothered me, too; not because I think there is anything wrong with wealthy tech people, per se, but because it’s become like the classic Star Trek episode, The Trouble with Tribbles.

The Brobdingnagian salaries we’re getting paid haven’t just skewed the market; they’ve taken it in two hands, turned it upside down, and shaken it like a British nanny. My friends who are not in technology keep getting pushed further and further away, or into smaller, dingier accommodations.

The recent BART strikes are just a single data point in a larger trend: we’re alienating everyone who isn’t in technology. It’s not sustainable. The stomach-turning coverage of the BART strikes should throw into stark contrast just how bad things have gotten. Even I, with a decent salary, have seen the great American dream of home ownership recede into the distance.

It took a long time for me to realize I was part of the problem. Yeah, I might be in tech, but I’m not one of these social media douchebags, I thought. Doesn’t matter. The fact that I get embarrassed when a girl at a bar asks me what I do should have been my first clue.

But as I said, there has been enough hand-wringing and navel-gazing. Whiny blog posts do nothing. What can I do?

Exactly what Adam Smith would want: I’m moving to Portland.

In Portland, my mortgage payment will be the same as the rent I pay in San Francisco. The only difference is that, instead of sharing a small house with two other dudes, I can have a larger house to myself. Portland offers all of the great restaurants, coffee shops and bars that I love about SF, without having to overhear conversations about Series A rounds or monetization strategies.

And I’m looking forward to whatever small part I can play in helping Portland’s burgeoning tech scene. I’m excited to be neighbors with the likes of Panic, Sprint.ly and Simple.

But.

I am going to miss the hell out of San Francisco. I grew up in a small town, and went to school in Orange County. Both were heavily conservative and well-to-do. The tolerance of San Francisco has been eye opening.

I remember the October I moved into my place in Noe Valley. It was Halloween, and I was driving back from the Marina. I didn’t yet know enough to avoid the Castro on days the city dresses up in costume.

I was stopped at the light at 18th and Castro when a man strode in front of me, wearing nothing but a glow ring around his… undercarriage. I was flabbergasted. Where I came from, you would have been arrested immediately. Here, no one cared, as long as you weren’t harming anyone else.

San Francisco is where I learned not just to be a programmer, but to be an engineer. It’s where I learned about design, and tolerance, and business, and how to let your hair down.

Last night I was flipping through 7×7 magazine and started reading Robin Rinaldi’s What It’s Like to Leave the City of Your Dreams. I came close to having a full-blown anxiety attack, thinking about leaving the city that has shaped me and delivered me from the life-long depression that I thought was just intrinsic.

After I calmed down, I realized that it was just like when my last serious girlfriend and I broke up. We had been fighting all day, and at some point I turned to her and said, Do you still want to do this? She said no. I think we were both relieved.

But then, as I drove her home, we started reminiscing about all of the personal struggles we had helped each other through. We had both been new to the city when we met, and both had plenty of personal issues to work through. It was a tearful affair as we finally parted ways. It was hard to come to grips with the fact that even though we had been good for each other, we weren’t right for each other.

And that’s how I feel about living here now. I owe an inordinate debt to San Francisco and its people. But I think, now, the relationship is doing more harm than good.

But, I can’t help but think: maybe, someday, when we’ve both changed, we can try to make things work again. I’ll miss you. And I’m sure I’ll still see you around.

If you want to follow along with my move, I’ll be tweeting about it from @tomdale.

Evergreen Browsers

…Exponential growth is seductive, starting out slowly and virtually unnoticeably, but beyond the knee of the curve it turns explosive and profoundly transformative. The future is widely misunderstood…

Today, we anticipate continuous technological progress and the social repercussions that follow. But the future will be far more surprising than most people realize, because few observers have truly internalized the implications of the fact that the rate of change itself is accelerating.

—Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology

I am privileged in two ways that are very rewarding for me:

  • I have some insight into the web standards process, due to my friendship with people driving it, and have a notion of the exciting features that are soon to come.
  • I get to travel around the world and talk to boots-on-the-ground developers who are building amazing stuff on the web platform, today.

Often, I’ll express excitement about some new feature coming to the web platform—whether it’s ES6 features like object proxies or modules, or W3C-specified features like Web Components. But it’s easy for in-the-trenches developers to dismiss these features as cool but far-off, unhelpful in their current plight. Worse, they are shell-shocked: they have lived through the dark times of Internet Explorer 6 and an out-of-touch W3C. That pain is marked indelibly on their souls.

Eso es todo. A lo lejos alguien canta. A lo lejos.
Mi alma no se contenta con haberla perdido.

That’s all. In the distance, someone sings. In the distance.
My soul is not at peace with having lost her.

—Pablo Neruda, Veinte poemas de amor: 20, The Essential Neruda: Selected Poems

I’m here with a message of hope. I don’t know who coined the term, but I heard it first from Paul Irish: evergreen browsers. What’s an evergreen browser? One that updates itself without prompting the user.

If you’re like me, when you’re developing a new web application, you put features into mental buckets. There’s the “works in IE7” bucket, the “works in IE8” bucket, “(I think) works in IE9,” and of course, “works in MobileSafari.”

The one bucket I don’t have is the “works in Chrome” bucket. That’s too much mental overhead. Instead, if I want to test whether something works in Chrome, I just pop open a new JS Bin and try it out. I don’t worry about which version they’re on—I assume that by the time my code makes it to production, my users will be on more-or-less the same version as me.

What would the web platform look like if every browser with significant market share updated itself at the same pace—and lack of user intervention—as Chrome?

The good news is that both Internet Explorer and Firefox have adopted this strategy, and now we just have to wait for the last generation of non-evergreen browsers to die out. But even that is happening more rapidly than you might think.

There are, of course, some sticking points. On mobile devices, old versions of Android’s pitiful browser continue to linger. But now that Chrome for Android is ascendant, this too should soon be no more than a painful memory. The only large entity still casting a shadow, from where I sit, is Apple. But given the adoption rates of new iOS versions, we’ll just have to hope that the competitive pressure from Chrome for Android will force them into once again being good citizens of the web.

I am excited to be a web developer. Not only is the pace of innovation increasing, so too is the pace of delivering new features to the end user. My advice: start preparing for the future. It will be here sooner than you think.

Open Source, Thick Skin

Yesterday, Heather Arthur posted a well-written and sad account of how she felt after the open ridicule of one of the projects she had made available on GitHub.

This caused battle lines to be drawn between the Ruby and node.js communities. Friends of mine opened fire at one another. That made me sad.

Thanks to the internet, some of my closest friends are Ruby developers. Some of my closest friends are also node developers. Seeing friends get pilloried by other friends for a lapse of judgment that doesn’t represent them at all made me sad.

A few things.

First of all, let’s be clear that the kind of thing Steve and Corey did absolutely happens in the node community. There is no room for self-righteousness here. I know because it happened to me:

(My reaction to this was disappointment, because I had had a great time previously hanging out with Paolo at the AustinJS party at SXSW. Thanks Joe McCann! It stings to know someone you like personally thinks your work is sufficiently bad as to induce nausea.)

Sometimes I think the internet is unhealthy for us. When someone violates community norms, the reaction is vociferous and its intent seems to be to punish rather than to help. The Steve Klabnik I know is one of the most empathetic sweethearts I’ve ever met, and I can guarantee that he has already beat himself up over this; far worse than anything strangers on Twitter could do.

Similarly, despite Paolo’s ridicule of my work, I don’t bear any ill will toward him. I suspect he was just in a cranky mood and took it out on my work as a way to help him feel better. It’s lazy, easy, and unproductive, but it works. I know because I do it all the time (cf. any of my tweets about Turbolinks or MongoDB).

So, takeaways:

  • If someone pulls an asshole move, give them the benefit of the doubt. The internet lynchmob thing is so 2012. Remember that typically what you’re doing is reinforcing your own internal narrative.
  • The Ruby and node communities are different. They value different things. One is not a threat to the other, but we sure act like it. I wrote a post on Google+ about the responsibilities I think are associated with releasing open source software. Mikeal Rogers wrote a great reply about how those rules don’t apply in the node community. I think these two posts explain a lot of the friction between the two communities.
  • I think it is valid for people to get upset if they see someone else trying to convince someone to use code that, in their opinion, is bad. It is right to try to protect your friends from perceived danger. Perhaps this would be mitigated by having a way to differentiate between:
    • Here is some braindump/learning code, I make no guarantees about the quality or fitness of this code.
    • I wrote this thing; I believe it is better than alternative solutions and you should use it. I am signing up to maintain it to the best of my abilities. Criticism welcome.

I think eliminating misunderstandings is the key to getting the two communities to work together. I think we need each other more than we might realize.

But seriously, you node people really do want to reinvent everything. It’s exhausting trying to keep up.

Just kidding.

Image of Bill and Ted: “Be excellent to each other”

My iOS 7 Wishlist

Actually, it’s not a list at all. There’s just one thing I want from iOS 7.

I want it to expose sufficiently powerful hooks that Google could implement Google Now for iOS.

A few months ago, I switched from my iPhone 4S to a Nexus 4. This was quite an aberration for me, as I have been a dyed-in-the-wool Apple fan since the age of 7. The first computer I ever used at home was a Color Classic II (33MHz 68030, 80MB hard drive, 4MB of RAM), I read every issue of MacAddict magazine since issue 1, and landing jobs at Apple (first at an Apple Store in college, and then on the MobileMe team afterwards) were some of the most rewarding moments I’ve ever experienced.

As proof, here’s a photo of me, age 15, right after Macworld Expo, wearing my Mac OS X t-shirt (it had just been announced):

Photo of me at Macworld Expo

The first few versions of Android were awful, awkward, ungainly things, not too unlike the chubby teenager in the photo above. But everyone grows up and matures. Jelly Bean has been a dream to use. There are some rough edges, but the moments where I wish I still had my iPhone are few and far between.

I’d rather be an iPhone user, though. The build quality of the hardware is still far superior, and I prefer the smaller size. I don’t have small hands, but they’re not overly large, either. Trying to tap elements near the top of the screen single-handedly on the Nexus 4 feels a bit too much like yoga for my tastes.

When I was driving home from the holidays this December, I hit a pothole and blew out two tires on a remote stretch of highway about 100 miles south of San Francisco. It was that moment that made me realize just how important battery life is. I can mitigate the Nexus 4’s poor battery life in my day-to-day by just leaving it plugged in at the office. But outlier events like traveling and emergencies can be a wake-up call that sometimes you will be away from a power source for extended amounts of time, and I for one depend immensely on my phone in those situations. I was glad my travel partner had an iPhone, or I’m not sure what I would have done.

Yet my entire digital life runs on non-Apple services. Through a combination of technical and business restrictions, Apple has made using those services on iOS terrible. Two examples:

I love reading books on Kindle. Having constant access to my entire library has dramatically increased the amount I read. But Apple prevents Amazon from integrating a storefront into the Kindle app for iOS, because they want a 30% cut. I’ll let others argue over whether that makes sense from a business perspective, but I want to offer this data point: I’d buy another Android phone instead of an iPhone, because developers can offer me the experience they think is best. I don’t want to think about how many man-hours startups have burned trying to dance as close to the edge of the rules as they can, figuring out ways to avoid the Apple tax. Thirty percent is significant.

Second, Google Now is an amazing feature that I think Apple is going to have a hard time competing with. If you’re unfamiliar with it, the introduction video does a good job of explaining it:

Let me emphasize why this feature is amazing. Let’s say I’m traveling to Prague for a conference. Let’s also say that I’m an AT&T customer, so data rates abroad will be usurious. More than likely, I’ll keep data off, unless there’s an emergency.

The conference organizer books me my airline ticket and hotel, and forwards the confirmation e-mail on to me. Assuming I’m using Gmail, this single event can trigger the following:

  • On the day of travel, my flight status will be displayed prominently.
  • If there is a change to the flight, it will send a push notification.
  • It will offer one-tap navigation from my current location to the airport.
  • It will send a push notification when it is time to leave for the airport to arrive on time, taking traffic conditions and flight status into account.
  • If I’ve checked in online, my boarding pass will be queued up automatically.
  • When I land, it will have already pulled up directions from the airport to my destination hotel.
  • It will have already downloaded vector map data for the destination city, so I can still navigate despite my lack of data.

This is groundbreaking. It will change the way people travel. And this is just one small facet of Google Now, which I view as the vector by which Google has figured out how to weaponize the stack of PhDs it has been accumulating for the past decade.

And it’s getting better all the time. The culture inside Apple is that of a giant metronome, which ticks once or twice per year. The whole company is oriented around secrecy, followed by a big-bang release. That works tremendously well for hardware, and for big software launches like an operating system. But it’s just terrible for web services, especially heavily data-driven ones.

The companies that are best at web services are less like a synchronized metronome and more like a group of crickets, each team releasing incremental improvements that over time amount to something quite significant indeed.

I’m not optimistic that Apple’s culture can change, and I’m not sure I want it to. But I do want iCloud (and Siri, and Apple Maps) to have to compete on an even playing field. Mobile devices aren’t the grand experiment they were in 2007. At the time, and in the years afterwards, I was supportive of the restrictions Apple put in place to guard the user experience. It’s a different world, though, and people are chafing against them. It’s hampering innovation. Android is effectively the escape valve for mobile developers that want to do cool new stuff that doesn’t fit inside the box that Apple gives you.

And that’s a bummer. There will be more products like Google Now in the future, not fewer. I want to be an iPhone user, but I also want access to all of the cool new stuff.

So, that’s my hope for iOS 7: make public the OS hooks that things like Siri and Maps use. Let me run different browsers. Let me replace the built-in e-mail app. We’ve appreciated the guidance, but we’ve all got the hang of this smartphone thing now. Let us do what we want.

And for the love of God, figure out a way to get Google Now on my iPhone.

Tell me why I’m an idiot for having this opinion by tweeting @tomdale.

Our Approach to Routing in Ember.js

The URL is an important strength that the web has over native apps. In web apps, the URL goes beyond just being a static reference to the location of a document on a server. The best way to think of it is as the serialization of the application’s current state. As the user interacts with the application, the URL should update to reflect what the user is seeing on their screen.

In most applications, state is tracked in an ad hoc way. To answer the question “What is the current state of the application?” you must synthesize several different properties (usually stashed away on different controllers) to arrive at an answer. If there are bugs in your application, it is possible to get into a conceptually broken state. For example, imagine you have an isAuthenticated property that is true but there is no currentUser property. How did you get into this state? Diagnosing these kinds of bugs is difficult. And more importantly, serializing that state in a sane way is basically impossible, because it’s scattered across the application.
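To make that failure mode concrete, here is a minimal sketch (the controller names and helpers are hypothetical, not from any framework) of how ad hoc state drifts into exactly the contradiction described above:

```javascript
// Hypothetical controllers, each holding its own slice of app state.
var sessionController = { isAuthenticated: false };
var userController = { currentUser: null };

function logIn(user) {
  sessionController.isAuthenticated = true;
  userController.currentUser = user;
}

function logOut() {
  userController.currentUser = null;
  // Bug: we forgot to reset sessionController.isAuthenticated.
}

logIn({ name: 'tom' });
logOut();

// The app now claims to be authenticated with no current user -- a
// conceptually broken state that nothing in the code flags.
sessionController.isAuthenticated;  // true
userController.currentUser;         // null
```

Nothing raises an error here; the contradiction just sits there until some distant piece of code trips over it.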

Ember.js’ implementation of state charts solves these issues nicely. Using Ember.StateManager, you describe your application as a hierarchical tree of objects—one object per conceptual state. Because you explicitly enumerate the possible states, it is impossible to be in an unknown state. And if you try to perform an action in a state that doesn’t support it, an exception is raised immediately, making it easy to diagnose and debug the problem.
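The core idea can be sketched in a few lines of plain JavaScript. This is an illustration of the state-chart concept only, not Ember.StateManager’s actual API: each state enumerates the actions it supports, and sending an unsupported action throws immediately.

```javascript
// Illustrative sketch only -- not Ember's actual StateManager API.
function StateManager(states) {
  this.states = states;
  this.currentState = null;
}

StateManager.prototype.transitionTo = function (name) {
  if (!this.states[name]) {
    throw new Error("Unknown state: " + name);
  }
  this.currentState = name;
};

StateManager.prototype.send = function (action) {
  var state = this.states[this.currentState];
  var handler = state && state[action];
  if (!handler) {
    throw new Error("Action '" + action + "' not supported in state '" +
                    this.currentState + "'");
  }
  return handler.apply(this, Array.prototype.slice.call(arguments, 1));
};

var manager = new StateManager({
  loggedOut: {
    logIn: function () { this.transitionTo('loggedIn'); return 'welcome'; }
  },
  loggedIn: {
    logOut: function () { this.transitionTo('loggedOut'); return 'goodbye'; }
  }
});

manager.transitionTo('loggedOut');
manager.send('logIn');     // transitions to 'loggedIn'
// manager.send('logIn');  // would throw: not supported in 'loggedIn'
```

Because the set of states is closed and every action must be declared, the “authenticated with no user” contradiction from the previous section simply has no representation.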

It also means that we can easily serialize the state of your application on demand. Assuming you model your application’s behavior using a state manager, we can map the state hierarchy to a URL, and update it as you navigate through the hierarchy.

This is an important departure from how most other web frameworks handle routing. They invert the model; the URL is the source of truth for the current state of the application. To change states (for example, to show a different view), you hardcode the URL into your application. If I want to display a photo with an ID of 1, I must synthesize the URL '/photo/1' and send it to my router.

This approach is not ideal for several reasons.

First, it couples your URLs to your application code. Changing segment names means you have to go through your entire app and update the hardcoded strings. It also requires that, if you want to enter a new state, you must go consult the router to be reminded what the particular URL is. Breaking encapsulation this way quickly leads to out-of-control code that is hard to maintain.

You’re in for an especially painful experience if you later need to change the hierarchy of your URLs. For example, imagine you have a blog app that displays different kinds of media. You have URLs like this:

  • /posts/my-favorite-dog-breeds
  • /photos/1234
  • /audio/sweet-child-o-mine.mp3

Your app gets so popular that you decide to add multiple blogs:

  • /blogs/tom/posts/my-favorite-dog-breeds
  • /blogs/wycats/audio/a-whole-new-world.mp3

You now need to audit every URL in your application! With Ember’s approach, it’s as simple as adding a route property to the parent state to have it start appearing in the URL. Because state changes are triggered by events being sent to your state manager by your views (instead of them routing to a specific URL), nothing in your view code changes.
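Here is a hedged sketch of why that works (plain JavaScript, not Ember’s routing API; the names are invented for illustration): if each state contributes a route segment, a URL is just the concatenation of segments down the hierarchy, so wrapping the old states in a new parent reshapes every URL at once.

```javascript
// Illustrative sketch only -- not Ember's actual routing API.
// A state's URL is the concatenation of 'route' segments from the
// root of the hierarchy down to that state.
function urlFor(state) {
  var segments = [];
  for (var s = state; s; s = s.parent) {
    if (s.route) segments.unshift(s.route);
  }
  return '/' + segments.join('/');
}

var root = { route: null, parent: null };
var posts = { route: 'posts', parent: root };
var post = { route: 'my-favorite-dog-breeds', parent: posts };

urlFor(post);  // '/posts/my-favorite-dog-breeds'

// Adding multiple blogs: wrap the old hierarchy in a new parent state.
var blogs = { route: 'blogs/tom', parent: root };
posts.parent = blogs;

urlFor(post);  // '/blogs/tom/posts/my-favorite-dog-breeds'
```

No view code mentions these URLs, so nothing outside the state definitions needed to change.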

The other problem with describing state in terms of URLs is that there is a cumbersome and pointless serialization step. In my JavaScript, I am dealing with controllers and model objects. Having to turn them into a string form just to get them to interact is brittle and unnecessary.

To use the photo example from above, which one of these is nicer?

stateManager.send('showPhoto', user, photo);

var url = '#/user/' + user.get('id') + '/photo/' + photo.get('id');
router.route(url);

Having multiple sources of truth in any application quickly leads to chaos. One of the primary benefits of Ember.js is that it ruthlessly moves truth about your application state out of the DOM and into JavaScript. The DOM is always just a reflection of the current truth in JavaScript.

The work we’re doing on routing in Ember.js will have a similar effect, keeping truth out of the URL. The URL becomes just a reflection of the current truth in JavaScript. Because state managers allow you to define your application state in an encapsulated and semantically rich way, changing how your URLs are built is as easy as changing a few properties on the state objects.

For a more concrete description of how routing ties into Ember’s state managers, see Yehuda’s gist, which includes code samples. This work is currently happening on Ember’s routing branch, but we hope to merge it into master soon.

I’m really excited about this feature. We’ve been thinking about this problem for a while now and it’s satisfying to finally be able to start working on it. I think that explicitly modeling application state using objects is a very exciting and powerful technique, and getting routing “for free” by doing it makes it a no-brainer.

Best Practices Exist for a Reason

If you’ve ever used Node.js, you’ve probably also used Isaac Schlueter’s npm, the Node package manager. By maintaining a central repository to which authors can quickly publish their JavaScript modules, npm has made it easy to get started building Node apps, and its popularity has exploded over the past year. Unfortunately, two months ago, the hashed passwords of all npm accounts were leaked. npm uses CouchDB for its backend; CouchDB’s default security model grants administrator access to all databases, but only to connections originating from the same machine. It appears that in this case, the CouchDB server was made accessible to the world over HTTP with the default access settings left in place.

In retrospect, it’s obviously a dumb mistake, but the same kind of dumb mistake you or I or anyone ten times smarter than us could make; whether because we’re sick or tired or under pressure or simply because we’re using a new system that we’ve never used before.

The community’s reaction to the security leak was relatively forgiving, and rightly so. After all, npm is a community project that was created and is maintained for free, and we all benefit from the enormous amount of time and energy that Isaac and the other npm contributors have generously donated.

And yet, npm is no longer just a hobby project. Thousands of people rely on it to provide services to themselves, their friends, and to paying customers. Additionally, when it comes to security, there can be repercussions far beyond the original breach. Anyone who created an npm account and used the same password for another web service is potentially at risk.

So this is the eternal balancing act that open source maintainers (and, indeed, many startups) must face: limited time and resources for building, securing and maintaining systems that will be used by hundreds, thousands, or even millions of people.

Which is why I have been so saddened and, yes, angry about the recent trend in the JavaScript community to throw away the best practices we have spent a long time honing in what, to my eyes, is an act of machismo: a revolt against good engineering practices for the sake of revolting, rather than to make the world a better place.

As a community, we’ve advocated for these things precisely because most newcomers to JavaScript are not us, not as invested in the language and the ecosystem as we are, but rather have been thrown at some problem because all of a sudden it’s 2012 and not writing JavaScript is no longer an option. They are not JavaScript developers per se; they are Java developers or Ruby developers or .NET developers or any of the thousands of possible kinds of developers who are now thrust into this unfamiliar world where JavaScript is the substrate that holds the web together.

Here’s the thing about best practices: once you become sufficiently experienced, you understand why they are good, and so you can choose not to use them as the situation allows. Your understanding of the language, the ecosystem, or the particular problem at hand lets you view the problem from the same vantage point as the people who came up with those best practices, and so you are free to discard them when the situation merits it.

But until your understanding has crystallized, not following those best practices can cost you hours of wasted time tracking down bugs that need not have existed. And writing code before you have an expert-level understanding is okay. It’s okay because it is reality, and there is nothing you can do to change the fact that people with a very tenuous grasp of these technologies are being thrown at hard problems that they will solve one way or another.

Which is why I have to admit that my blood boiled when I read Isaac’s post about automatic semicolon insertion. I don’t want to re-litigate the semicolon debate because I think we were all tired of it before it even began. What I want to highlight is a general attitude that I find very disappointing:

I am sorry that, instead of educating you, the leaders in this language community have given you lies and fear.

By couching it in these terms, he implies that anyone who follows best practices has given in to “lies and fear.” Who wants to be swayed by that?

A more generous interpretation is that the leaders Isaac accuses of spreading lies and fear know that the reality is that you have a month to ship a JavaScript app, and asking you to understand the ECMAScript spec is not reasonable. (Because it’s not just automatic semicolon insertion, right? It’s this binding and anonymous functions and a million other features of JavaScript that confuse newcomers.)
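For anyone who hasn’t been bitten by it, here is the classic automatic semicolon insertion trap (an illustrative example of my own, not from Isaac’s post). A newline after return terminates the statement, so the object literal below is never returned:

```javascript
// ASI inserts a semicolon after 'return' at the newline, so this
// function returns undefined; the braces below parse as a block.
function brokenConfig() {
  return
  {
    strict: true
  };
}

// Moving the brace onto the 'return' line returns the object.
function workingConfig() {
  return {
    strict: true
  };
}

brokenConfig();   // undefined
workingConfig();  // { strict: true }
```

No syntax error, no warning: just silently wrong behavior, which is exactly the kind of trap the “always use semicolons, brace on the same line” guidance exists to prevent.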

We can’t front-load complexity and let people sort it out. That way lies madness. We must distill the rules down so that people can be effective, and help them along their journey towards JavaScript mastery. It’s a learning curve, not a learning cliff.

So I have to vehemently disagree with this statement in Isaac’s post:

If you don’t understand how statements in JavaScript are terminated, then you just don’t know JavaScript very well, and shouldn’t write JavaScript programs professionally without supervision…

I find this staggeringly condescending, but maybe that’s just my reading of it. Even if my reading is wrong, though, it rejects reality. Isaac knows this firsthand: the npm team deployed a misconfigured CouchDB instance to production, and it’s hard to say that was their fault. To suggest they needed “supervision” is absurd. They were not expert-level CouchDB users, but they had a job to do, so they picked the tool and went to town. If anything, this is a CouchDB failing: the best practices were not made clear enough, or the default settings were not strict enough.

So let’s learn from our mistakes and recognize that the bulk of JavaScript developers are not experts. Let’s stop tearing down the hard work our predecessors have done to shepherd JavaScript from a toy language into the language that is revolutionizing how web applications are built. Let’s embrace good engineering practices, and have the sense to know when to ignore them, rather than abandoning them altogether to prove how smart we are. We’re all in this together.