The Browser Shuffle

It's interesting how things come full circle, and the disruptor eventually becomes the disrupted. Case in point: the web browser, where Mozilla just announced that Firefox 12 would gain some important features that were previously exclusive to Chrome. This brings me back to the moment when Firefox eclipsed IE, forcing Microsoft to begin developing its browser anew.

Monopoly?
So let's go back to 2002, when IE6 had effectively 100% of the market share. Sure, there were a few other browsers out there (a perpetually-beta Mozilla, which had been in the midst of a rewrite for at least four years, and Opera, the perennial underdog), but no one had more than a percent or so. Macs at that time shipped with an old version of IE that was essentially useless (5.2 I believe). IE was so dominant that Microsoft actually dismantled its browser team around the summer of 2001 (the plan was to integrate the web into the next version of the OS). I was an intern at Microsoft at the time, and heard through the grapevine that all members of the IE team were to be moved onto other teams or dismissed. This didn't surface publicly until a few years later, but it totally made sense at the time. Why innovate when you are the undisputed king and own the entire market?

In 2002, I ran across Firefox for the first time. Or rather "Phoenix," as it was called back then (since it rose from the ashes of the Mozilla rewrite). Firefox made a number of improvements over IE, but the most important was tabbed browsing. To clarify, Firefox didn't invent tabbed browsing, but it provided the first good tabbed browsing experience. Before long, Firefox had replaced IE on my computer - I could just fire up one browser, and load my standard set of tabs. It wasn't perfect - it crashed a lot. But it had tabs, was faster, and the UI was much better.

By the time I got to Google in 2004, Firefox had gained a lot of popularity. Considering that IE didn't run on Macs and Linux boxes (which most Google engineers used exclusively), it was pretty much the only option. Most products were developed for Firefox and IE first, and Safari support was only added later. And, at the time, developing for a new browser often meant rebuilding the frontend from scratch, since all of the Javascript was different. I actually remember my manager David Jeske suggesting we build a high-level driver that implemented each operation on every browser. This was around 2005, about a year before the advent of jQuery, so we had to do everything manually (our implementation was much less elegant than jQuery's). We built Google Page Creator for Firefox and IE. IE was a lot easier because it implemented contenteditable, allowing you to instantly make any element on the page editable. Firefox was harder because you had to do this manually using a bunch of trickery, and it also had a lot of bugs in its Javascript implementation (it was still a young browser, not a mature 6.0 like IE). Regardless, we launched on both platforms - I can't remember if we ever launched for Safari.
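To make that gap concrete, here is a minimal sketch (my own illustration, not actual Page Creator code) of the feature-detection pattern from that era: on IE you could flip a single property to make an element editable, while old Gecko builds pushed you toward flipping the whole document into designMode instead.

```javascript
// Illustrative sketch (not actual Google Page Creator code): make an
// element editable, preferring contentEditable where supported (the
// IE path, later Safari), and falling back to designMode on the owning
// document for old Gecko builds that lacked contentEditable.
function makeEditable(el) {
  if ('contentEditable' in el) {
    el.contentEditable = 'true'; // IE path: one property flip
    return 'contentEditable';
  }
  // Old-Firefox-style fallback: turn the whole document into an editor.
  var doc = el.ownerDocument || document;
  doc.designMode = 'on';
  return 'designMode';
}
```

The real Firefox path involved considerably more trickery (selection handling, key events, and so on); the point is just how asymmetric the two platforms were.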

Microsoft Backpedals
At some point, Microsoft realized that Firefox was actually a threat. Firefox had significant resources, and Microsoft's archenemy Google was even promoting it. It started to chip away at IE6's market share, and after a while, something had to be done. By 2005, Microsoft had announced that there would be a new version of Internet Explorer. This new version of IE had tabs (wow) and a new UI. It also had enhanced support for web standards and better security.

So, let's get back to Safari. At some point, buzz within Google started to turn towards Safari. This was interesting, because Google employed several of the core members of the Firefox team, but the word around town was that the underlying engine in Safari (WebKit) was much better than the one used in Firefox. Safari implemented contenteditable back in version 3, before Firefox did, and it generally seemed to adopt web standards much sooner than Firefox. So, even though Google wanted to push Firefox as an alternative to IE, it also wanted to push forward the state of the web, and Firefox moved relatively slowly. So rumors circulated for a while about a Google-based browser, although no one really knew when this was going to happen.

I don't think that any real steam picked up until after I left in 2007, although work was already underway on a Webkit-based browser that would run within Android. This is speculation on my part, but at some point someone decided that if Google was already working on a webkit-based browser for the mobile phone, why not also build a version for the desktop? There was already a team working on enhancements for Firefox (including Google Gears), so why not have them just build a browser? Google could continue to promote Firefox while building its own standards-compliant reference for the way a browser should work.

Chrome Dome
So Chrome came out, and it did a few things differently. First of all, it ditched monolithic releases for a rapid development cycle. Rather than releasing a ton of new features every year or two (a la Firefox), there could be a new release as often as once a month. So the first version was absolute garbage, but if you just waited a month, everything got much better. Second of all, it isolated Flash into a separate process, which prevented Flash's inevitable crashes from bringing down your entire browser, and each tab also ran in its own process. It used to be that Firefox for Linux would just crash every once in a while, interrupting your browsing and bringing up the error reporting dialog. I actually remember one conversation that I had with a Google engineer in the early days of Chrome. He said something like "Chrome is already much better than Firefox for Linux because it crashes a lot less, so most of the developers at Google have already switched over."

Chrome also offered Javascript performance that absolutely spanked everyone else. Its performance was so good that everyone else had to fight to catch up, and this has spawned an arms race, where each subsequent browser offers better javascript performance than the best from everyone else. This was actually a good thing, because Javascript performance used to be pitiful (even on fast computers), and the performance gains have trickled down to mobile platforms, which are much more performance-constrained. You can now run Javascript-rich web applications on the mobile platform, where a few years ago this was impossible. I remember running a Javascript-heavy web app that I wrote on a prototype Nexus One, and being amazed that it actually worked as written.

Chrome also had a few features that other browsers lacked. It included a home screen, which showed your most-visited sites, and later the Chrome Web Store. It also offered a New Tab button, which allowed you to open a new tab without using the keyboard or a menu (my muscle memory causes me to instinctively hit Command-T, but an older generation needs this). Also, there was an X on each tab that allowed it to be easily closed (without right-clicking). Overall, the UI was a bit more polished and easier to use than the browsers that came before it.

Firefox Plays Catch-Up
Firefox slowly implemented the performance changes, making its Javascript engine faster, and it even did some of the process isolation work, moving plugins like Flash into a separate process and improving stability. The developers also focused on supporting web standards, and eventually started to catch up to Chrome and Safari. Over time, Firefox became better and better, partially because Mozilla did switch to a rapid development cycle. However, every time I fire up Firefox, the browser feels somewhat clunky. Everything is less rounded and slightly less smooth than Chrome, and I find myself switching back.

So, coming full circle, Mozilla just announced that Firefox would get the home screen and the New Tab button in an upcoming release. This is, as when Microsoft implemented tabbed browsing, an admission that they have been eclipsed by the next new thing, and that they are scrambling to catch up. I wonder what the next new thing to come along will be. While I'm sure that the desktop browser will continue to evolve, I'm fairly certain that the biggest improvements will involve mobile browsers. Even though their performance is tolerable (in most cases), mobile browsers still provide a second-rate experience. Switching between windows is clunky - I rarely switch tabs on my Android phone, and when I do, I need to go to a separate screen. iOS 5 addresses this on the iPad by offering a PC-like tabbed experience, as does the browser on MIUI. I'm pretty sure that there will be some major improvements in the future that are better suited to a smaller screen. With the pixel densities that we are now seeing in mobile phones, it is possible to display more information than ever before. This seems like a ripe opportunity for some disruptive innovation in UI design.


The Intel-ARM War Heats Up

So the war between Intel and ARM has just heated up a notch, and things are really starting to get interesting. Intel just showed off the first Android smartphone powered by an Intel Chip, which was clearly a warning shot in ARM's direction. Performance is allegedly better than the current dual-core ARM designs, although no one has actually been able to play with the device yet. Power consumption is supposedly comparable to existing dual-core ARM devices, although we won't know any of this for sure until we see production designs.

More importantly, Intel announced high-profile partnerships with Motorola and Lenovo, who have claimed enthusiastic support. Within the next six months or so, Intel devices will be available at your local smartphone store. And Intel is committing to significant improvements to their technology over the next few years.

What does this really mean? Is Intel aiming to beat back ARM's advance into its territory (tablets and notebooks), or is it trying to capture the smartphone space from the current leader? ExtremeTech's Sebastian Anthony just published a column alleging that Intel will dismantle ARM. He claims that Motorola's "extensive" partnership is proof that Intel has something special up its sleeve, and that Intel's process shrinks over the next several years will allow Intel's chips to gradually outpace ARM's designs. By 2014, in his telling, Intel's mobile chips will be the clear leaders, and ARM might as well pack up and go home.

What Actually Happened
Well, I'm not so sure I agree, so here's what I think. Intel was scared when it heard that Windows 8 would be running on ARM. Sure, Intel had lost its foothold in the mobile space, but it had pretty solid dominance over the desktop and laptop market. Windows 8 running on ARM is a slippery slope. At first, it may just be for tablets, but tablets are already starting to include HD displays and clip-on keyboards, which makes them functionally equivalent to laptops. Intel knew that if it didn't make a move, it would be disrupted: gradually pushed upmarket and eventually obsoleted completely.

So Intel basically approached two companies who are down on their luck and offered them sweet deals to use their chips. I really can't say much about Lenovo because their devices aren't available stateside, but Motorola has made many errors in executing its smartphone and tablet strategies. Motorola's tablets were a bust, and based on the recent financials, their smartphone sales are stalling again. They were looking for some special sauce to differentiate themselves, and along came Intel.

Enter The Same Old New Thing
In coming up with a product, Intel did what it always does in this sort of situation. It pulled out old technology and spun it as new. Here's a secret - the processor in Medfield is functionally identical to what Intel released in 2009. It's a single-core, dual-threaded Atom processor running at 1.6GHz, exactly the same as you would find in a first-generation netbook. The only major difference is that it is significantly smaller, and that a bunch of other chips are integrated onto the CPU (reducing the number of support chips required). And Atom was essentially five-year-old technology when it came out - at heart a Pentium M (circa 2003) optimized to use less power.

Atom hasn't increased in performance in the past three years. All Intel has done is shrink the die, reducing both physical size and power consumption. At some point, the size and power consumption both dropped to the point where you could fit an Atom processor into a mobile phone. And heck, that Atom processor's performance compares slightly favorably with the current generation of mobile phone chips. So Intel might as well use it.

Intel is a One-Trick Pony
But here's the problem - Intel doesn't have any more tricks up its sleeve. It just happens that the performance of Medfield is comparable to the competition's, because the competition has steadily improved while Intel has stagnated. Medfield is Atom from 2009, but Intel hasn't significantly improved Atom since then. It just focuses on its higher-end (and higher-margin) chips, and trickles the process improvements down to Atom. Intel was so afraid of cannibalizing sales of its higher-margin products that it intentionally crippled the lower-margin ones. Sure, Intel can go to a dual-core design (as Atom eventually did), but there are no magical architecture improvements on the horizon.

The only real improvements that Intel can promise over the next two years are die shrinks. And die shrinks are a great thing, but they don't solve all of the world's problems. First of all, die shrinks are somewhat unpredictable. To do a die shrink, you usually have to build a new factory with all-new equipment. Factories often take longer to build than expected, and frequently the initial chips don't work nearly as well as hoped. Typically, you only do a die shrink on an existing design - changing both manufacturing process and design at the same time is a recipe for disaster. If Intel hopes to die shrink twice, first to 22nm and then to 14nm, they are going to have to stay with essentially the same Atom we currently know and love. Given that a 14nm process should pack roughly four to five times as many transistors as a 32nm one (area scales with the square of the feature size), you will probably see 4-core Atom designs where we currently have one core. In fact, Intel's next generation of Atom is nicknamed Clover Trail, which indicates that it will likely be a four-core solution (designed to take on the ARM-based Windows 8 tablets). With a few die shrinks, it will fit in your phone.
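The die-shrink arithmetic is just geometry: transistor density scales roughly with the inverse square of the feature size. Node names are as much marketing as physics, so treat this as a rough illustration rather than a datasheet.

```javascript
// Back-of-the-envelope density gain from a die shrink: a design's area
// scales with the square of the linear feature size, so the achievable
// transistor density improves by roughly (fromNm / toNm)^2.
function densityGain(fromNm, toNm) {
  return Math.pow(fromNm / toNm, 2);
}

// 32nm -> 22nm gives about 2.1x density, and 32nm -> 14nm about 5.2x,
// which is roughly the budget for turning one Atom core into four or more.
```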

Why ARM Will Win
So here's the thing about ARM that has caused the platform to advance so quickly. All that ARM does is design chips - it doesn't make the finished product. That falls to TI, Nvidia, Qualcomm, Samsung, and Apple, who all make chips based on the ARM designs. These companies don't design the chips from scratch, but they do tweak the designs significantly. Right now there's a massive war between those manufacturers to produce the fastest and lowest-powered chips. On average, each company releases a new chip generation at least once a year, and these lead to huge improvements year-over-year. By the end of last year, Tegra2 (which came out in January) had been eclipsed by TI's OMAP4, to the point where no new smartphones include Tegra2. The quad-core Tegra3 is both lower-power and higher-performance than the Tegra2, and should blow the pants off of the first-generation Intel smartphones. By the time Medfield is released, Tegra3 should be included in most high-end devices. By next year, when Intel releases the first die shrink (probably with a dual-core Atom), we may see the first 6- or 8-core ARM designs.

Furthermore, ARM-based designs are truly created for mobile use, unlike Intel's chips, which are simply repurposed desktop designs. Desktops and laptops are essentially either on or off at any point in time, and waking from standby takes a few seconds, even on my MacBook Pro with an SSD. On the other hand, smartphones and tablets need to operate in a low-power standby state, with the ability to instantly come to life. In response to this, ARM designs include innovative features, such as Tegra3's low-power companion core, which handles standby processing at a fraction of the power. Since ARM licensees are developing mobile chips full-time, they will continue to create new designs that include these sorts of features.

Sorry to say it, but if Intel had really wanted to compete with ARM, it would have already done so, and this move wouldn't be a knee-jerk reaction. What we are seeing now looks just like disruptive innovation, with Intel making what amounts to a last-gasp effort to compete. If Intel had been serious earlier, Atom would already be well on the way to disrupting Intel's own midrange, and there would even be server-level Atom chips.

So here's what I predict is actually going to happen. Motorola and Lenovo are going to release Intel-based smartphones. They are going to be sold at a premium price, and will be slower than contemporaneous Tegra3 designs. Overall, they will be a flop, and the manufacturers will eat crow. Motorola Mobility, which will be owned by Google, will return to producing primarily ARM-based designs, keeping one Intel-based design around for show. The second generation Intel processors will be more competitive, but no one will really care by that point (they will probably work their way into some low-power tablets and notebooks). ARM will continue to improve - I think Nvidia's road map indicates that the 2015 model will offer 100X the performance of Tegra2. In 2014, or 2015, we will see the first high-performance ARM-based laptops. By that point, Intel will be relegated to server duties.

Regardless of what happens, I think that the consumer will win. More competition leads to greater innovation, and we are currently seeing unprecedented levels of advancement (the last time we saw anything of the sort was the late 90s/early 2000s, when ATI and Nvidia were releasing new graphics cards every six months). The only reason Intel has had to improve its chips so much is that formidable competition arose from an unexpected place. If not for that, Intel would have reserved its next few die shrinks for its higher-end processors, and Atom would have gotten them a few years later (if ever). This is definitely going to be an exciting time for mobile devices.

The Most Important Things I Learned Last Year

As 2011 comes to a close, I can say that it has been a pretty exciting year overall. I switched jobs, going from a cushy job at a large company to the best job in the world (as founder of my own startup). I found a great cofounder, who eventually convinced me to move from San Francisco to New York. We built several cool products, and learned a lot about iterating towards product market fit.

So, I was ready to write an essay about all the things I learned this year, but I realized that I wrote that essay on my birthday. And I could have come up with a top 10 list, but that seemed kind of lame. In the interest of being brief, I distilled what I learned this year down to one sentence.

Follow your gut and move forward

So that's the TL;DR, and I'm pretty much finished. But I've never been known for being terse, and if you've read this far, I guess I will ramble on for a bit longer.

Follow Your Gut
You may have heard that you have more neurons in your digestive system (aka the Enteric Nervous System) than in your spinal cord. What does this have to do with entrepreneurship? Well, it's easy. An entrepreneur is only as good as his gut intuitions. Sure, there is a lot of logic and problem solving involved in building a successful company, but if your intuitions can't guide you in the right direction, you will never figure out how to succeed. The most successful entrepreneurs I know have a mysterious ability to know which direction is going to be the right one (even before they have made a move). Of course they take missteps, but more often than not they can skip a whole bunch of things that aren't going to work. That's why Paul Graham is so good at what he does - he has a great intuitive sense of what will and won't work (and he freely shares it with the green entrepreneurs he invests in). Sure, he's wrong sometimes, but so are we all. But I'm sure you know what I'm talking about - when you get the "right" idea, you get a good feeling inside.

Conversely, when you know deep down that something is wrong, you have to be able to quickly turn your back on it. As an entrepreneur, you are constantly being presented with opportunities of various types, and if you want to succeed, you need to take only the best ones. Weeding out the ones that aren't right can waste a ton of time, so it's useful to have a spider sense that can do it automatically (hint: you probably already have it). If an opportunity appears to be too good to be true, it probably is. If you have to vacillate too much over something, whether it's a candidate you are interviewing or a business partnership, it's probably going to be more trouble than it's worth. 

This past year, I quit my job to cofound a startup. As I was investigating startup opportunities, a pretty large number of options worked their way onto the table. Some of them felt right, while others didn't. There was at least one that I may regret not taking, but something in my gut told me to join Sam (even though other offers were more attractive on the surface). From pretty much the first moment, we got along remarkably well, and every time there seemed to be an insurmountable roadblock, things somehow worked themselves out.

There was also one time this past year where I ignored a bad vibe I got about someone, and it cost me a significant amount of time and money. Your gut won't always lead you in exactly the right direction, but it is particularly good at steering you away from trouble. When you get that bad feeling that makes you a tiny bit sick at the bottom of your stomach, it is time to start running.

Move Forward
I recently started reading "Not Everyone Gets a Trophy," which is a book on how to deal with members of Gen Y. One of the things the author says about Gen Yers is (and I quote):

They want to hit the ground running on day 1. They want to identify problems no one else has identified, solve problems no one else has solved, make existing things better, invent new things. They want to make an impact.

This sounds to me like pretty much every tech startup founder I know - I guess it's no coincidence that there are so many Gen Yers among the current batch of startup founders. If you want to make an impact, it is important to make sure that you are always moving forward, both personally and in your career. Set goals and milestones, both long- and short-term, and then make sure that you hit them. Keep yourself honest, even (especially?) when it's painful. In addition to having good intuition, all successful entrepreneurs are also operators. They figure out how to make stuff happen, even when it isn't supposed to work out. That's one of the things that constantly amazes me about working with Sam - he is so darned persistent.

If the first thing you try doesn't work, throw ten more things at the wall until something sticks. Sometimes (usually) your first approach is the wrong one, so you are going to have to move fast if you want to find the right thing before the music ends (your money runs out or you get demotivated). We built at least three different products in the past year - we started out building a marketplace for speakers, and ended the year on a completely different note (which we will soon announce). In the interest of moving forward - we decided to finish out 2011 by shutting down our old service. This was kind of emotional since it was what we started out with, but startup founders don't have the luxury of being nostalgic. In the end, we realized that it was taking up time that we could be using to move our new product forward, so we canned it.

If you are a creative person, you will have about ten projects that you want to work on. Unfortunately, you will only be able to succeed at ONE. This is probably the biggest mistake that wannabe entrepreneurs make - they try to do multiple things at once and fail at all of them. Actually, you are even going to have to ignore aspects of your primary project because you will be so short-staffed, but at the end of the day, all that matters is that you focused on the right areas and succeeded at those things. From what I understand, the books were a mess when Zecter was acquired, and they had to pay accountants a lot of money to get everything in order. However, the founders focused on building great technology, which was the only thing that really mattered at that point - the rest they could hire someone else to fix. The only thing you (usually) can't fix is forgetting to file your 83(b) election, so be sure to do that.

A Pitch For Your Personal Life
So here's something else to remember. While you are moving forward with your company, be sure to spend some time moving forward with other aspects of your life. It is easy to get wrapped up in work and not pay enough attention to yourself. Spend time with friends and family, and build your social circle. If you aren't an extrovert with a naturally strong social circle, those connections will wither unless you pay special attention to keeping them strong. Otherwise, you will be alone and miserable when you aren't working, and no one wants to be like that. I know some people say that startup founders should spend all of their time working, but I think it's crucial to find a balance that allows you to be happy. If you are miserable, then your productivity will suffer, and your startup will have pretty much no chance of succeeding. I only say this because I personally have spent years working on startups while neglecting my personal life. At some point over the past year, I had a big realization that I wasn't happy, and that I cared as much about leading a fulfilling existence as I did about building a startup. I needed to figure out how to move forward with multiple aspects of my life - business was only one of them. I won't beat this to death because I have talked about it recently in other blog posts, but it definitely bears repeating.

My Take on "Lean Startups"

It seems like the "Lean Startup" movement has been sweeping through the valley over the past few years. I've spent quite a bit of time working with Lean methodologies, both in building software startups and because it was a big part of what I studied in grad school, so I'm going to weigh in. Basically, Lean is pretty cool, but it's not some kind of magical panacea for all the problems startups face. Like any tool, it works well when used properly, but it's also easy to misuse.

What is All This Lean Stuff Anyways?
For the uninitiated, Lean basically came out of Toyota following World War II. Japan was pretty resource-drained from fighting and losing the war, so its industries had to think creatively in order to rebuild. Toyota wanted to make cars, but it didn't have nearly as many machines as Ford or GM. So, rather than having a different machine to make each part, it needed one machine to make a whole bunch of different parts. And in order to make that work, it had to be able to quickly switch that machine from making (for example) one body panel to another. Through the application of creative thinking, Toyota managed to drastically reduce the switchover time. Not only did this make the operation much more efficient, it made it much more flexible. This sort of thinking revolutionized Toyota's operations, to the point where it was kicking the US auto manufacturers' butts by the late 80s.

I'm not going to go too much into the specifics of the "Lean Tools," but Lean basically involves opening your eyes, figuring out what is broken, and figuring out how to fix it. We see broken things hundreds of times a day, ruminate on them for a second, and then immediately forget about them (because that's what we're programmed to do). When you "become Lean," you actually try to fix these problems as they happen.

Q: But if we tried to fix every broken thing, wouldn't we never get anything done?
A: Well, thanks for asking. Actually, every worker at Toyota has a cord he can pull to stop the assembly line, and he is encouraged to pull it whenever needed. At first, this leads to a lot of line stops, but eventually all of the simple problems are ironed out, and everything works a lot more smoothly than it used to.

And that's basically it. There are a lot of "tools" that have been developed to help us apply Lean in the real world, but those are just gravy, and aren't strictly required. Basically, if you are methodically observing and fixing broken things, you are trending towards Lean.

The first to introduce this stuff to a wider audience was Eli Goldratt with The Goal, but I guess that Steve Blank's "Four Steps to the Epiphany" was how most people in Silicon Valley were introduced to it. Honestly, none of us actually read past the first couple of chapters, but every self-respecting entrepreneur has a copy on his bookshelf (and has recommended it to at least two friends). But I guess that wasn't enough, so Eric Ries recently released "The Lean Startup." I personally own at least 7 copies (mostly purchased to get the freebies). If you are interested in actually learning the principles in a hands-on way, there is "The Lean Startup Machine," a weekend-long practical bootcamp.

So how exactly does "Lean" apply to product development? Basically, it comes down to simplifying and testing your assumptions before you spend months or years building the product. Figure out whether anyone actually wants the freaking thing before you spend your life savings paying engineers to code it for you. As Paul Graham says, "build something people want." The trick is that you need to figure out what that something is before you can build it. And that can be easier said than done.

Why Easier Said Than Done?
So, the biggest wrinkle is that none of this is quite as straightforward as it seems. When you actually go out and ask your potential customer what he wants, he will probably tell you that he doesn't know. He's too busy worrying about how to do his job to figure out how to do yours. If you're lucky, he will be able to enumerate his biggest problem, but a lot of people can't even do that. Most likely, when you ask, your customer will mention some peripheral problem that isn't even close to the biggest one out there. Steve Spear (who is an expert on implementing Lean in healthcare systems) takes the following approach to identifying problems - he follows doctors around for a day, and writes down all of the things that are broken as they happen. Then he figures out which problems are the most disruptive, and solves those.

So here's how I would start - first you figure out the problem you want to solve, and then you figure out the minimum possible product that could solve that problem (aka the Minimum Viable Product, or MVP). So you build the MVP (and by "build", you could just mock it up), and you show it to people as soon as possible. You could even tell people about it before you have built anything, and see how excited they are. Basically, if people (and by "people", I mean the people who write the checks) are excited by whatever you show them, you keep going, and move on to the next phase. If not, you take a step back, or maybe even start over. There are probably going to be a lot of these restarts - a lot of startups don't get to the good stuff until they hit their fourth or fifth idea.

You Can't Always Bootstrap
So once you build your MVP, and people like it, now what? In some cases, you can get people to actually use that product and potentially even pay for it. This can make for a great business model - a bunch of successful companies have been built this way (GitHub, 37signals, Bingo Card Creator). The problem is that none of these businesses were completely new - hosted Git was just Git in an easier-to-use package, while Bingo Card Creator replaced desktop software. It's likely that if you're doing something truly new, there are going to be leaps of faith. This is where Lean starts to break down - it isn't always viable to bootstrap every business. At some point, you have to take a gamble and throw in a bunch of resources to get from the MVP to the product that people will actually pay for. For example, if you are going to create a new automobile company, there is going to be a lot of upfront investment before you can build your first car.

When I attended Lean Startup Machine in San Francisco at the beginning of the year, Dave McClure was one of the kickoff speakers, and he summed up all this stuff pretty well. He said something along the lines of "We don't know if this Lean stuff works. But it definitely does seem to do a good job of showing us which ideas won't work."

Setting Up an Experimental Framework
Mark Pincus of Zynga puts it well when he talks about building an experimental framework. Instead of worrying about making your company instantly profitable, or about lining up customers before you have a product to sell (both of these are nice IF you can do them), set up the product development cycle as a series of tests. At any point, you should be working systematically towards putting your product through the next predefined test. The actual set of tests will vary based on what the product is, but there should always be a plan on the table.

For example, the first test could be "Do people seem excited when I talk about this?" Then you could build a paper prototype (or simple usable prototype), and see whether people still seem interested. Then you add a few more features, and see whether you can get them to agree to use the product. Then you add the features they need to use the product, and see whether you can actually get them to pay for it. Then you try to figure out a reasonable model for customer acquisition. And so on and so forth. 

If a test passes, you keep going. Whenever a test fails (and tests will fail a lot), you will realize that you have made some incorrect assumptions. At that point, you figure out whether to pivot to a different set of assumptions or to throw the product out. The early tests will hopefully be a lot cheaper and quicker than the later ones. That way, you fail quickly with the ideas that aren't going to make it. The goal is to minimize your downside, and spend as much of your resources as possible on the most promising ideas. And that's how Lean applies to product development at startups.
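If you like to think in code, the test-gate loop above can be sketched in a few lines of Python. This is just a toy illustration - the gate names and pass/fail checks here are hypothetical stand-ins for real-world tests like customer interviews and prototype demos:

```python
# A minimal sketch of the "series of tests" framework: each idea runs
# through gates ordered from cheapest to most expensive, stopping at
# the first failure (which is where you pivot or throw the idea out).

def run_gates(idea, gates):
    """Run an idea through ordered (name, check) gates.

    Returns (number_of_gates_passed, name_of_failed_gate_or_None).
    """
    for i, (name, check) in enumerate(gates):
        if not check(idea):
            return i, name  # failed cheaply: pivot or discard
    return len(gates), None  # survived every gate defined so far

# Hypothetical gates; the lambdas are placeholders for real validation.
gates = [
    ("people sound excited when I describe it", lambda idea: len(idea) < 60),
    ("paper prototype still holds interest",    lambda idea: "for" in idea),
    ("someone agrees to use it",                lambda idea: True),
]

passed, failed_at = run_gates("file sharing for pets", gates)
```

Ordering the gates by cost is the whole point: a failed idea should burn as little time and money as possible before you find out.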

Remembering What is Really Important

It's funny how easy it is to forget why you're doing this whole entrepreneurship thing. A couple of things have happened recently that put things into perspective for me.

Money or Free Choice?
A few days ago, I was at a holiday party thrown by one of my friends from MIT, and a bunch of guys from my fraternity were there. All of us were about 10 years out of college, so it was interesting to reflect on what we had done with our lives thus far. It is actually kind of humbling to compare myself to my peers - most of them have done astoundingly well by various measures. I guess you could say that pretty much everyone had settled down, except for me. I was the only one who didn't work in finance, and I was also pretty much the only person who was single or who didn't own a condo in Manhattan.

So I was talking to one of my college classmates, and was telling him about what my startup is working on. He said something along the lines of "that's cool. It's nice that you get to do something where you call the shots." He told me that his job is interesting, not terribly difficult, and pays well (I'm pretty sure that this guy makes between five and ten times as much as I do). However, he had spent two years building some quantitative trading software, and then the hedge fund he worked for decided not to use it.

Another classmate kept saying he found it amazing that our investors pay me money to work on whatever I want. I reminded him that it is actually an investment, and that I also probably work longer hours (and have more stress) than he does. The tradeoff of working for someone else is that you have to work on their project, doing what they want you to do. And the more you get paid, the more expectations your bosses put on you. These guys have great jobs - I'm not going to tell you that they suck or that I would never want to have them (given other circumstances, I might). I will just say that I am grateful for having had the opportunity to found multiple ventures where I could dream crazy things up and then make them reality.

What Are You Looking To Get Out Of All This?
I had another realization when an old friend emailed me to check in. A while back, this guy quit a corporate job to found a startup that he was passionate about, and I think that's pretty incredible. In his email, he mentioned that he hadn't gotten into Y Combinator. I started thinking about how to respond. The first thing I wrote was "the only thing that matters is success," but then I realized that this just isn't true. The truth is that you probably aren't going to get rich running a startup. A lot of people think that a startup is the road to riches, and I hope for their sake that they make piles of cash. But, realistically, the expected value of founding a startup is much lower than the EV of working a job. Most people who found a successful startup could have made plenty of money working for someone else. A startup is a choice that you make when you think that you can make better use of your time than others can.

I've been doing this stuff for coming up on three years, and I have sort of let go of the expectation that I'm going to get rich. Mostly because doing so forces me to think about what the fuck I'm doing. Stop daydreaming about dollars, and ask yourself the important questions. If you never made any real money from your startup, would you feel like you had still done something rewarding? Would you have any regrets? Would you give up and go back to working for someone else? If so, is there something you can do to change that?

Another question this guy asked me in his email was "are you still meditating?" I can say fairly objectively that I am significantly happier when I meditate regularly, but my answer was a weak "no." One of the nice things about being an entrepreneur is that you don't need to account for your time on an hourly basis. But you do need to be productive. And to do that, you need to be sure to do the things that will make you the most productive, even if it seems like you don't have time for them right now. If you don't, you will be the one who suffers, and the startup you've sacrificed so much for will fail. So be sure to eat right, get enough sleep, and pay attention to your mental health. And spend time with the people who matter to you - you never know what is going to happen.

Here's a story - shortly after I started my first company, I headed to Baltimore because my sister was getting married. For various reasons, it made sense to spend the week following the wedding at my parents' house rather than heading right back to California. It felt like a waste of time, because I was anxious to get to work on the startup. But I stayed around for the extra week, and spent plenty of time with my family. My father died suddenly a few months after the wedding, and that trip was the last time I ever saw him alive.

In the end, I wrote the following response to my friend:

Remember that the only thing that really matters is having control of your destiny and enjoying yourself more days than not. A lot of people judge their success by external factors (like whether they get into YC or whether they get onto some list of "people who matter"), but I try to remind myself regularly that the only important thing is whether I get to work on something interesting and personally meaningful.

The only one you really need to answer to is yourself.

The Choice is Ours
The truth is that we can control the direction of our lives (if we want to). Despite societal pressure to the contrary, we can actually do pretty much whatever we want on a day-to-day basis, although we may of course have to live with the consequences of our decisions. Sometimes we make decisions that limit our choices (like getting married or having kids), but strangely, many people manage to work around those limitations. I know a number of successful entrepreneurs who have done it both with kids and without huge cash reserves.

It is important to periodically remind myself that I have the best job in the world, and it is my privilege and responsibility to make the most I possibly can from it.

Find a discussion of this post on Hacker News

The Low-Power Revolution

An interesting trend has been emerging over the past few years, as computer chips scale out rather than up. By scaling out, I mean that chip designers are adding more cores rather than increasing the clock frequency. Where we used to have single-core processors clocked at two to four gigahertz, now we have two- and four-core processors running at essentially the same speeds. The new processors, however, are a lot faster than the older chips, even at the same clock speed. Technologies such as hyperthreading make it possible to do up to 60% more work at the same clock speed, and chip designers improve the efficiency of their designs with each revision. Furthermore, there have been significant optimizations in power usage. It is possible to shut down unused cores, drastically reducing the power required.

Why The Desktop Is On Its Way Out
The result is that even today's budget laptops ($400-500) have enough computing power for pretty much anyone. In fact, the only real reason to buy a desktop these days is to get a standalone graphics card (for maximal video game performance) or to support more than two displays at once. Apple is allegedly considering eliminating the Mac Pro, which is no surprise, because pretty much no one ever buys them (at $2,500 for the base Mac Pro, you might as well just buy a souped-up 17" MacBook Pro or 27" iMac). Within a few generations, I'm willing to bet that the state of computing will have advanced to the point where there is no reason to buy a non-portable computer.

This is a classic case of disruptive technology. You can buy a cheaper and faster computer if you get a desktop, but a laptop is good enough for most people's needs. I can now get a laptop that supports nearly any use case I could imagine. Eventually, no one will need desktops any more, and the market for them will drop off (this is already happening, as more laptops have been sold than desktops since around 2008). You will always be able to find a desktop for specialized purposes, just like you can still buy a mainframe if you really need one.

PCs Are Next To Go
So the desktop is on its way out - what's next? Well, I'm going to guess that the PC platform will follow. Microsoft recently dropped a bombshell - Windows 8 will run on ARM processors. Yes, those same chips that power your smartphone will soon be running Windows (and the same Windows that you run on your desktop, not some cut-down version designed for the phone). This started out as an attempt to create cheap and power-efficient Windows tablets, but I predict that it is a harbinger of things to come. Within a few years, your smartphone processing platform will be powerful enough to do pretty much anything you could want to do with your current PC. But, most importantly, it will do it much more efficiently (sorry, Intel).

As furious as the innovation has been in the PC realm, things have progressed even faster in the mobile space. Mobile platforms are notoriously power-constrained - where you can get by with a PC processor that dissipates 95W, the Tegra 2 dissipates only 2W (there are complete Tegra-based systems that consume only 3W). While Intel and AMD have been fighting for the speed crown, mobile manufacturers have been going for something even more important - efficiency. Hitting decent performance with a low-power device requires some pretty amazing wizardry from chip designers. Despite this, we are seeing dual-core chips in most new smartphones and tablets, and quad-core will be the norm by the middle of 2012. NVIDIA recently released its first quad-core ARM chip (Tegra 3), and the other manufacturers (Samsung, Qualcomm, and TI) have already announced their new designs. For the first time, we have a chip that nears desktop performance but, in full-on power-savings mode, uses a tiny fraction of the power.

What Will This Look Like?
So what are we going to do with these chips? You have a powerful chip, but mated to a four-inch display, you aren't going to get much real work done. I see two potential use cases, both of which are currently on the market in early forms. The first is the convertible tablet, a tablet that converts into a laptop. Asus introduced this form factor with the Transformer. You use it like any 10" Android tablet, but when you want a full-fledged computer, you plug in the dock and get a laptop form factor (complete with keyboard and trackpad). The Transformer Prime, announced last week, marries this design to a quad-core Tegra 3 processor. Initial reviews are pretty good, and while it isn't cheap ($650 for the tablet-and-dock package), it seems like the wave of the future, especially running Windows 8 or a future version of Android that truly supports the laptop use case (prices will decline as the technology matures and volumes increase).

The second use case is what Motorola calls the "Webtop." Basically, you plug your smartphone into a laptop dock to turn it into a laptop. The initial implementation, released earlier this year, was pretty half-baked. While the hardware was top-notch, the software support just wasn't there - it pretty much only let you run Android applications in a window, plus a desktop version of Firefox (compiled for ARM processors). I'm going to postulate that future versions of Android (and possibly Windows Phone) will switch seamlessly from phone to desktop form factor. Just as I can now plug a 30" display into my 13" MacBook, in the future I will do the same with my phone or tablet.

Overall, I'm sure that there will be a lot of improvement to both the software and hardware, but these hybrid devices are only the beginning. Low-power chips will allow us to use computing power in drastically different ways than we currently do. At first, we will enable multiple current use cases with a single device; eventually, the lines will blur, and you will be able to seamlessly use computing power in pretty much every aspect of your life. At some point, we will sever the physical connection entirely - imagine being able to use your smartphone to point at and control pretty much any electronic device in your house.

The future is looking pretty bright, but we need to move off legacy power-hungry and expensive devices, cut the cord, and embrace the low-power revolution.

Birthday Wisdom (2^5 Edition)

I figure that, since it's my birthday, I'm allowed to disseminate a little bit of wisdom (not that I ever refrain from doing that). Particularly because I'm 32 today, which makes me about twice as old as most of the other people floating around the Valley. Ok, maybe not twice as old, but most likely I'm 25%-50% older than most of the people reading this. I remember when I graduated from college and was working with a 25-year-old. He was ancient. Now I work with 25-year-olds and think about how little they are.

On Work-Life Balance
This has been on my mind a lot recently. Basically, you read a ton about the glory days of engineers sleeping under their desks and having no life outside of work. It seems awesome and all, but one day you wake up and kind of wonder what's going to happen if you keep going down that path. Honestly, even though we know the names of the entrepreneurs who succeeded, we probably couldn't name 90% of the ones who failed. And some of the ones who failed worked just as hard as the ones who succeeded. I'm not going to say that the ones who succeeded just got lucky, because that isn't true (although there is some luck involved in every success). Just that it might take a lot of tries to get it right, and you don't want to be lonely and miserable while you are trying to find something that might never work for you personally. Again, I'm not saying that it won't work, just that after a while, you may want to figure out how to hedge the bet that you will sell your company for $10 million in 18 months. I meet plenty of people who tell me "yeah. I used to have a startup when I was younger. but then I got older, and wanted a job that actually paid me something and let me have a normal life." And I understand exactly where they are coming from, and why they made that choice.

So, if you want to be in startups long-term, you have to make it work for you in a long-term way. Figure out how to do the things you want, and keep the relationships that are important to you. There is nothing more important than family. If you aren't on great terms with all of your family, do whatever you can to mend things. It's hard when you are living across the country from them, like when I was living in Silicon Valley, and everyone else in my immediate family was in New England and Baltimore. But phone calls, emails, and Skype are an easy way to stay in touch (although the 3-hour time difference means that you are getting off work as they are getting into bed).

Figure out how to do the things you want. If you want to go to salsa, force yourself to get out of the office at 6:30 every Monday so that you can do it. Don't work 7 days a week if you can help it - your overall productivity is actually lower than if you worked 5, because you just work slower to make up for the extra hours. The only way to get people to work 7 days a week is with a whip, and eventually they die of exhaustion. This is not to say that you shouldn't work all the time if that's what you want to be doing, just that there are many more things to be doing, and you WILL regret the things you didn't do (just like you would regret not doing a startup if all you did were corporate jobs).

Building A Life Around Your Startup
I actually know some entrepreneurs who managed to build a life around their startup that worked for them. Dave Zhao, founder and CEO of the last startup I worked for (Zecter), seemed to do reasonably well. He worked a lot, so he hired his girlfriend (now fiancee) as UI designer and office admin (she did a pretty good job at both, and definitely made the office a happier and more fun place to be). And at least she got to see him a lot. They got a (hypo-allergenic) puppy, and made sure that the office building was dog-friendly so they could bring him in every day. When people were tired of working, they could play with the dog (so it turned into a plus for everyone who worked there).

I'm sure that there were lots of hard times and downturns, but he seemed to stick it out ok through pretty much everything. After almost four years (and at least two major pivots), the company had a nice-sized exit to Motorola, and the founders made out quite nicely. My point is that success wasn't guaranteed at any point, and they needed to be able to stick it out for long enough to get to the end. If you are miserable, how do you expect to get there? Honestly, Dave's cofounder didn't hedge his bets nearly as well as Dave did, and I think that the startup took a lot more out of him.

On Being Happy
Being smart is a blessing and a curse. It's a blessing because you realize that you can do pretty much anything you dedicate yourself to, and there are fewer exceptions to that rule than you would expect (so long as you follow certain basic guidelines). It's a curse because you beat yourself up for all of the things you didn't do, and because you actually want to change the world. I saw Malcolm Gladwell speak quite a while ago (at the Googleplex), right after he had published Blink. Like any intellectual, he didn't talk much about his last book (which was old and boring to him by that point), but about the interesting stuff he was working on at the time. He was looking into child prodigies, and whether they did well as adults. Interestingly, he didn't feel like the smartest people were the most successful, but he did comment that a lot of them ended up being happy later in life. This work later turned into his book "Outliers," although the eventual focus was a bit different.

My theory is that the people who are super-successful tend to be the ones with fatal flaws. You don't work 90 hours a week for 20 years if you are already satisfied - you must be looking for something down that rabbit hole. I think that smart people can choose to either dig one rabbit hole really deep, or they can dig a bunch of shallower burrows (work, family life, hobbies, community, etc...). I'm not going to comment on which one is better (because we clearly need both types), but realize that the choice is yours to make. And if you don't like the choice at any point, you are free to change it.

On Working At Home (How Did I Get This Far Off Topic?)
It doesn't really work for me. That's all I can say. I find that it's a colossal waste of time 90% of the time. I might as well just take the day off and have fun, rather than getting nothing done and feeling guilty about it. There are just too many distractions at home, including one that I won't mention here but is probably at the back of every guy's mind.

I know that some people are different, but if a lot of people were honest with themselves, they would realize that productivity sucks when they work from home. I had a desk at home for years (complete with a second monitor and everything else I had at work), and it didn't really help much. After a while, I reclaimed my kitchen table for eating. Plus, home should be for "home" things, not for working. I think it's important (necessary) to be able to leave work and relax for an hour or two before bed. I'm even toying with the idea of not having a computer at home (other than my smartphone and iPad).

This is not to say that working from a coffee shop is a bad idea - I find that works fine, so long as it is quiet (bring noise cancelling headphones) and I only do it once in a while. But, for me, I need a dedicated desk in a quiet space to get anything productive done longer-term.

On Rebranding
This is more of a cautionary tale than "wisdom" per se. Even if you think that your blog or web site's name sucks, you probably shouldn't change it. You spent a lot of time and effort building that brand, and by changing your name, you compromise it. I have been blogging for several years under "Third Year MBA," since I started my blog right after finishing business school. Over time, I decided that the name was no longer appropriate, since I had been out of business school for longer than I was in it, and because I never really felt like an MBA (despite having a degree from one of the best programs in the world). So I changed it. Twice. Page views plummeted, and I wasn't even happy with the new name. A bunch of people who I didn't even know read my blog told me that they liked the old name better. So I changed it back. It's Third Year MBA for good now.

On Birthday Cakes (Thank Goodness for Mono-Spaced Fonts)

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| ----------------------------------------------------------- |
|                                                             |
|      Happy Birthday to Dana and Alissa (my twin sister)     |
|                                                             |
| ----------------------------------------------------------- |

One Plus One is Often Less Than Two

Simplicity is king in the world of startups - If you can't explain your startup idea in one sentence, then it probably isn't worth doing. Actually, that's probably a bit too much space - let's stick to six words (just for a challenge). I'm going to try out this little exercise with a few ideas that I've been throwing around:

  • File Sharing for Pets
  • Wireless Printing From Any Bologna Sandwich
  • Hamster Taxidermy Done Right/Made Simple
  • Virtual Private Networking For Snails

Seriously, if you are talking to an investor or potential customer, and you can't get across your idea in 15 seconds or less, you have probably lost them. I once had a business partner to whom I would say things like "you have two sentences to explain the idea that you just tried to enumerate." More often than not, he wouldn't be able to do it. We also weren't business partners for all that long.

And not only does the listener need to understand the idea, he also needs to "get it." At which point he will say "I get it," or something of the sort. If the listener can't understand why Fido or Fluffy would want to share files, your pitch is just as bad as it would be had he not spoken the language you were using to pitch the idea.

So you give your pitch, and the investor doesn't like it. A lot of first-time entrepreneurs make another mistake here. They take two ideas that individually aren't compelling, and combine them in an attempt to divert attention from either stinker. The problem is that they end up building an even bigger stinker. Let's use my examples from above to illustrate:

  • File Sharing and Virtual Private Networking for Pets and Snails
  • Wireless Printing/Hamster Taxidermy Done Right/Made Simple/From Any Bologna Sandwich

I think that both of those break my six-word rule, but the products provide so much functionality that it's probably ok. But, wait a second - those pitches don't really make sense, and our hypothetical investor isn't really interested in putting any money in. No matter - let's combine them into one idea, since that will probably fix everything. A lot of first-time startup founders continue down this path, layering feature upon feature (with the rationale that anyone could find something he likes in the mess).

  • File Sharing/Virtual Private Networking/Wireless Printing/Hamster Taxidermy Done Right/Made Simple/From Any Bologna Sandwich/For Pets and Snails

Ok, that sounds about right for most of the ideas coming from your average MBA student/startup weekend. So what's the problem here?

The problem is that none of the ideas are compelling in and of themselves, so they aren't really any better if you put them all together. So, start again, and try to pitch your idea in six words. If you can't, you either have problems being concise, or maybe your idea just isn't compelling enough.

Yahrzeit

In Jewish tradition, it is customary to commemorate the death of a relative with an annual ritual. You go to services, and get up at the appropriate time to say a prayer. My father died two years ago today, and I feel like this is my own way of commemorating his life and what he meant to me. This will be the first time I've written of it publicly - I figure enough time has passed that I can say some things that I have wanted to for a long time.

I guess I'll start by saying that I never felt like I knew my father as well as I would have liked. He was sort of a difficult person to get to know. He didn't like to express his emotions publicly, even to family. Also, he very rarely praised people to their faces, although he would say nice things about people behind their backs. Maybe he didn't want it to go to their heads, or possibly it just wasn't his nature. Criticism was much more open - if you screwed up, he would be the first to let you know. This was maddening for me as I was growing up - no matter how well I did, I never felt like it was good enough for him. To some degree, I credit my successes in life to this attitude, but this realization kind of came in hindsight. He expected people to execute to the best of their abilities, and anything less was simply unacceptable.

Despite his overt hardness, my father was a good person who cared deeply about people. His life's work was as an orthopaedic surgeon, and I believe that he chose that job primarily because he wanted to help people and make a difference. His professional specialties were in two areas - fixing scoliosis and removing malignant bone tumors (if you asked him what he did, he would call himself an orthopaedic oncologist). He would take children whose backs were so crooked that they couldn't walk, and give them essentially normal lives. He also specialized in removing bone tumors from people's spines, in procedures that could take up to 12 hours. He was good at what he did, and helped many, many people. When we were cleaning out his office at work, we found an incredible number of name tags of discharged patients, which he kept as souvenirs. I think that he wanted to remind himself that you can make a huge difference, one step at a time.

For the last ten years of his life, he focused his professional energies on building a state-of-the-art cancer treatment facility at Sinai Hospital in Baltimore. He managed to attract a host of top-notch cancer specialists to work there, simply because they wanted to work with him. The result was the creation of a true center of excellence at an otherwise ordinary hospital, and I hope that it will serve as part of his legacy.

It is interesting how people have hobbies that seem to reflect their day jobs. For my father, that was making teddy bears. Kind of funny that a surgeon would spend his spare time sewing together teddy bears, but he found it cathartic. He would buy old fur coats, and painstakingly convert them into beautiful teddy bears. A number of these bears went to charity auctions or as baby presents, but the most innovative one was Scoli-Bear. He had a medical device manufacturer create scoliosis instrumentation that would fit a teddy bear, and he placed it inside of one of his creations. He then took an x-ray of the teddy bear, with the instrumentation visible. He could then explain scoliosis surgery in a non-threatening way to even the youngest patients.

My father also enjoyed biking, a love that I inherited from him. One of the ways we could connect when I was growing up was by going riding together. When I was younger, we would do rides of up to 50 miles on a tandem. By the time I got to high school, he bought me my first road bike. Many of our happiest times were spent riding together on Maryland's Eastern Shore or in semi-rural areas of Baltimore County.

Over time, I began to understand that my father and I were a lot more alike than I had ever suspected. In addition to having a number of interests in common, I realized that our personalities shared many aspects. Just as he was a difficult person to get to know, I can also be somewhat opaque to other people. The ways that he was cryptic to me were very similar to how I was cryptic to others. However, I think I also inherited many of his positive aspects - in particular, his ability to diagnose a problem down to its root cause. One of his long-time colleagues said that my father was one of the best medical diagnosticians he had ever met. Although I have made my living as a software engineer and not a doctor, I feel like my strongest skills are also in diagnosis. I pride myself on being able to understand just about any system I am thrown into, and when I encounter a software bug, I won't stop until I understand and can resolve its root cause. I guess it makes sense, since I am my father's son - it is funny how you don't notice some of these things until it is too late.

I still remember our last conversation fairly vividly. I called my parents for our normal Sunday evening chat, and after I talked to my mother, she put him on the line. He had just read a book by Clayton Christensen about disruptive technology in the healthcare industry. I have long been a fan of Christensen and The Innovator's Dilemma, so we talked about disruptive technologies on a wider scale. I told him that The Kindle was going to disrupt paper books. When it was time to hang up, I'm pretty sure that I told him I loved him, but I can't quite remember.

About a week later, my mother called me with the news that my father had died suddenly at the age of 61. I still relive the experience every once in a while, and it haunts me to the core. I'm not sure if you can ever get over that - you just sort of move on. Two years later, there is still a pretty big hole inside of me, but I try to fill it by surrounding myself with things and people that I love. But things will never exactly be the same.

I miss you, Dad.

Sometimes It's What's Inside That Counts

An interesting thing happened immediately after The Big Show where the iPhone 4S was announced. People started to grumble, saying things like "it isn't all-new" and "it doesn't have a 5-inch screen with Retina Display HD," and even "it doesn't have 4G" (only the fake 4G that AT&T is trying to pass off as the real thing). And clearly these people have a point. But, remember that Apple only needs to sell you a new iPhone every two years, so they didn't need to make this iPhone all-new compared to the iPhone 4. What they really needed to do was to make it all-new compared to the 3GS, whose owners were the ones due to upgrade this time around. And I think that they succeeded pretty well at doing that. However, I promise that the next iPhone will be all-new and improved, just in time for everyone with an iPhone 4 to plunk down $649 (for most of us, that comes out to $200 now and $449 over the course of a 2-year contract).

So the outside of the iPhone 4S is basically the same, and that's all most people really notice. The inside, however, is all-new. And that is what really truly counts, and allows Apple to do its magic, like Siri and the HD version of Infinity Blade. So, even though most people couldn't give a damn what is inside of their phone (as they shouldn't), it is still pretty important.

The last time that Apple gave the iPhone this big of an internal upgrade was actually the 3GS, which most people perceived as another ho-hum upgrade. When the 3GS came out, Apple used pretty much the same marketing, "Twice as fast with a much better camera." The problem is that people don't respond well to clock speed comparisons - you can't see the clock speed like you can a new case design. At least you can see the results of a better camera - the pictures look better. But internal upgrades are difficult to sell to consumers, even though at the end of the day they are what actually push the platform ahead.

So let's look at what you actually get with the "not a real upgrade" 4S. With the 3GS upgrade, we went from a CPU that could just barely run iOS to one that was capable of running multiple applications at once. Notice that the 3G could barely run iOS 4, while the 3GS keeps up just fine when upgraded to iOS 5. The internals of the 3GS and the 4 are actually pretty similar - when you consider that the 4 has to drive the "retina display," the slightly upgraded CPU just levels the playing field. With the 4S, you get another CPU doubling - instead of one ARM Cortex-A8 core, you get two Cortex-A9s. This gives you over twice the processing power, since an A9 is actually faster than a comparably clocked A8. As for the graphics chip, it contains two cores, each of which is about twice as powerful as the graphics in the iPhone 4 (for a total of at least 4x the processing power). Overall, this pretty much equates to a generational shift, the first in over two years on the iOS platform. The 4S will be pumping out pixels long after the 4 has been retired (iOS 7 anyone?). More importantly, the enhanced processor enables a whole host of new applications, including Siri (some people will argue that Apple intentionally crippled Siri by withholding it from older devices, but I don't buy this - Apple has never intentionally crippled its devices). It will take a while for games to take advantage of this, but I'm sure that we will see a shift towards more demanding applications (with use cases that we can't currently imagine).

I would argue that Apple's "small" upgrades are actually the most significant - they tend to keep the outside the same while radically altering the architecture. The original MacBook Pro looked very similar to the PowerBook it replaced, even though it was all-new inside. Ditto for the Intel iMac vs the G5 models - from the outside they looked basically identical. While the old and new machines seemed similar at first, the new ones were vastly more powerful, and eventually supported many more software capabilities. Likewise, the recent MacBook Air upgrade took the platform from being somewhat anemic to a replacement for Apple's previous low-end machines, adding both radically faster processors (Intel Core i5 and i7) and faster connectivity. I predict that the Airs will eventually disrupt the Pro models, rendering them unnecessary (although it will take a few years).

So, in summary, the upgrades that are the most visible aren't always the most significant, and the biggest upgrades sometimes don't appear so at first. Over time, the capabilities that matter the most are the ones that allow devices to do new things, rather than just allowing them to keep doing the same old things.