Why Intel Should Be Scared of the Future

Intel recently announced that its new Core M Processor is 2-3X as fast as Qualcomm’s high-end Snapdragon 801 and 805 chips. That’s great, but Intel has a real problem on its hands - pricing. Core M’s volume pricing is about $281 in quantities of 1000 - Snapdragon 801 has a volume price of about $41. The Snapdragon 805 sells at a bit of a price premium, but even at $100, it’s still just over 1/3 of the price of the Core M. In the next few years, ARM-based chips are going to bridge the performance gap (or at least come close) - the 801 and 805 are actually just incremental updates put in place to stall until 64-bit chips are available early next year. And the next generation of ARM chips will be manufactured on a 20nm process, allowing them to run faster and use significantly less power than current designs.

So here is Intel's problem - Intel is used to selling expensive processors for PCs. Qualcomm is used to selling cheap processors for smartphones and tablets. To acknowledge that the market has shifted toward the bottom, Intel would have to admit that its high-end market for expensive processors is slowly going away. For a long time, Intel didn't even get involved in the low-cost mobile space - this is the typical attitude of high-end players who are being disrupted from below. After all, the first "smartphones" were toys, and tablets hadn't even been invented.

A History of Small Computing

I fondly remember my circa 2001 Palm Pilot with a 33 MHz Motorola Dragonball processor - it was relatively limited in terms of capabilities (I used it for keeping track of my schedule and playing Bejeweled). At the time my PC had a 733 MHz processor, and ran Windows XP. My phone at that point was a Motorola Startac, which pretty much just made calls (no web browsing or camera, and definitely no apps). If I wanted to do anything "real", I booted up my desktop computer (a big tower with a 19" Trinitron CRT).

But mobile chips slowly got faster, and people began to do more with them. For a long time, it was all sort of a joke, but some time around the ARM11 (the chip powering the original iPhone and early Android devices), Intel started to see that these entrants were getting close to posing a threat. About five years ago, it begrudgingly released the low-power Atom, which debuted in the ill-fated netbook (again, the typical response of a high-end player is to launch a low-end, crippled version of its technology). Atom proved a reasonably capable chipset for Windows-based tablets, so it basically ended up as a platform for high-end tablets and low-end laptops (not the most exciting part of the market, but it made sense to Intel). Atom then evolved very little as ARM-based processors slowly caught up, moving from low-res smartphones to much higher-res tablets. At this point, the Snapdragon 801 and 805 (not to mention Apple's A7/A8) are comparable in performance to the highest-performance Atom. In fact, Qualcomm's chips are quickly becoming good enough to run demanding applications at high resolutions - many current smartphones and tablets run at either 1080p or 2560x1440, equivalent to the highest resolutions supported by today's desktop displays.

What Intel did in response was to strip down its new high-end mobile processor (Broadwell) to use extremely little power while still providing pretty good performance. Core M's low power usage allows it to run without a fan, allowing for use in super-thin systems (like tablets). Intel says that Core M will work well in high-end tablets priced at $700 and higher (which sounds similar to their strategy with Atom). Great, but from what I understand, the market for $700+ tablets is fairly limited, with the possible exception of loaded-up iPads. The real market is for cheap tablets and smartphones, and there is no way to put an almost-$300 chip into a $400-500 device. So what Intel is heralding as a "victory" over ARM-based chips is actually an acknowledgement that its competitors are getting far too close for comfort in a key market.

The Road Forward

Right now, Intel has two options. One is to focus on making better low-cost processors, rather than rehashing the same designs year after year. The problem is that they don't really want to do this - if they make cheaper chips perform better, then they won't sell as many of their expensive chips. So their best bet here is to advance the "state of the art" of their low-cost processors just quickly enough to keep competitors at bay. However, since their primary focus is on building expensive processors, this will always be a secondary goal, and they will eventually lose to companies whose "state of the art" processors are low-end chips.

Intel's second option is to bow out of the low end and focus entirely on the high end. My guess is that they will eventually do this, whether by choice or by force (when ARM-based designs make it impossible for them to compete here). The problem is that this will accentuate the disruption from below. "Low-end" chips will continue to improve in performance and drop in price, until they are good enough to meet the needs of pretty much all consumers. It's actually only a matter of time before Apple releases a laptop based on their A-series of processors (possibly as soon as next year, but definitely by 2017), and from that point, the rest of the PC market will slowly follow. In the end, Intel will be driven out of the consumer market and into high-end servers (where they will continue to dominate for quite some time).

As a footnote, I think that AMD is doing the right thing by transitioning to ARM-based architectures (if you can't beat 'em, join 'em). The only question is whether they will be able to shift their thinking quickly enough to build products that serve new markets (tablets, phones, and smart devices), rather than focusing all of their efforts on ARM-based servers.

Dealing With The Loneliness of Working Alone

For the past six months or so, I’ve been primarily doing freelance development work. And I have to be honest - it’s pretty lonely a lot of the time. Sure, I have a decent number of meetings with the people I’m working with or with friends/contacts, but a lot of this interaction is via the computer, and I end up spending a lot of the day on my own, staring at my computer screen.

Now, I'm not the most extroverted person out there (if we're being honest, I'm a mildly to moderately shy introvert), but I do like being around other people at least some of the time. For the past several years, I've been noticing that I get lonely when I'm sitting in front of my computer coding. I like interacting with machines - they're fairly simple and reasonably predictable - but after a while, it gets sort of old. Even though socializing hasn't come naturally, I've found that I enjoy getting to interact with other people on a daily basis. The tasks I enjoy most are building things and seeing how real humans interact with them. Unlike computers, humans will always push the button that says "don't click this," or try to use your product for a different use case than the one you intended. They constantly surprise you. I've realized that I find it a lot more enjoyable to build things that real people can use than to build things that are just technically interesting. So, over the years I've been moving more towards "product-focused engineer" than "backend engineer," even though I've had a lot of experience building backend systems. A lot of the work I'm currently doing involves building out front-end user experiences for early-stage products. Even though the actual work can be lonely, at least there is a part of it where I get to see the results of my work.

When I’m working for a company, typically I have coworkers who are in the same situation, so there are lunch times and periodic breaks to play chess/get coffee/etc… The problem with working on my own is that I don’t have those. And, to boot, I need to worry about hours on the clock, so every time I take a break, I end up feeling guilty that I’m not doing billable work. I’m slowly getting over that one, mostly because I realized that the best way to get in more billable hours is to take care of myself. When I’m feeling lonely, I tend to not be very productive, and I end up spending most of my time surfing the web and wasting time.

Loneliness is the main thing that has killed working on my own for me in the past. The last few times I ended up in a flat spot (typically after the death of a startup), I ended up finding a job pretty quickly. Typically, it was the next reasonable option rather than the thing that felt right to me. Because, to be honest, anything felt better than sitting alone at home and trying to get myself motivated to be productive (when I didn't even know what I was going to be productive with). But this time I resolved to stick it out for a bit longer. And things have gone reasonably well so far. I've found some great projects to work on, and I've been learning that it's possible to make a living without having a job that I go to every day.

How to (Begin to) Deal With This
So what are the solutions that I've tried or seen for combating loneliness? I can't say that I've figured it out (I actually started writing this post because I was feeling lonely), but I've had the opportunity to try a number of different things. And in talking to other people, I've found that a lot of programmers and solo practitioners deal with similar issues, even though it isn't something that comes up all that frequently.

The most important thing is to take care of myself (kind of strange how this ends up being a blanket solution to most of the problems one will encounter). If I'm not eating right, or sleeping enough, or exercising, or even meditating enough, I'm going to find myself feeling depressed and lonely (I haven't been doing too well lately with a few of these). I also need to make sure that enough time is spent working on things "for me," like writing blog posts, or building projects that I want to build. Finally, I've found that not working weekends is one of the nicest things I can do for myself. When I work weekends, I usually end up feeling resentful, which reduces my productivity during the week. I've recently instituted a "working hours" policy during the week. I set particular hours when I can do work, and the rest of the time I'm "off." Even if I haven't "done enough work," it's time to put away my stuff and go home when my "work day" ends. Of course there are exceptions, but I find that the "extra time" I work during off hours is generally pretty unproductive, and is better spent reading or watching cat videos on YouTube.

The second piece is making sure that I have a reasonably active social life. This can be tricky if I worry about “the clock,” but in the end, I’ve decided that the clock will figure itself out if everything else is right. So I generally plan to go to events two to three nights per week. My philosophy on events is that it’s fine to leave if I’m not having fun, but at the very least I go and give it a try. Meeting up with friends/acquaintances/people who seemed nice at Meetups is also a good thing - I schedule at least a few coffees/lunches/meetings each week. I don’t accept everything that comes my way, but I find myself going to a lot of speculative meetings. Sometimes these even turn into paid projects, but even when they don’t, it’s fun to meet up with people and hear about what they are working on.

Another solution (which is actually turning out to be a necessity) is working out of a shared space with other people around. I tried working from home for a while, but it was way too isolating. I sometimes spent the entire day sitting at my kitchen table, and when it was time to go to bed, I realized that I hadn't left home. After a while, I found a co-working space, which was fine at first, but eventually I realized that it wasn't the right situation (it was too far from home, so I ended up just working from home most of the time). So I visited a bunch of spaces until I found something that seemed right. The space that I chose is a five-minute bike ride from home, gets enough sunlight, is fairly quiet, and still has the benefit of having people around. Plus, they have events a lot of nights, so if I end up working late, I might be convinced to go to the event instead.

A final solution I've seen other people try is to find a business partner to work with. It's nice to have someone else around, especially if his or her skills complement yours, and whenever you're feeling down, there is a good chance they can pick you up. I've done this with startups, but I've held off with my consulting business, although I have friends who have used this to reasonably good effect. With a consulting business, it's a bit trickier, because one person typically ends up having to find projects for both people. That's more efficient in some ways, but considering that I haven't had a full plate of work until fairly recently, I've held off.

On a related note, it can be good to have some people to talk to (even via text or GChat). It's easy to think that since I'm all alone at work (or wherever I am), no one wants to be around me. But that's clearly not the truth. I have a decent-sized list of people who I can ping whenever I'm feeling lonely or down. I used to suffer it out by myself, but I've learned that this is almost always a mistake. Within a few minutes, I can usually get someone on the phone/computer to talk to. Talking to another person for even a few minutes can make a huge difference. A lot of people are in a similar situation, and can sympathize with where you are right now. Heck, if you are ever feeling down, you can even ping me if you want.

So these are just a few things that I’ve tried - clearly it isn't an exhaustive list, but everything on this list has helped me at some point (and often many, many times). If the people reading this have any suggestions, I would be happy to hear them.

Apple’s “Handoff” Technology Shows Us the Future of Run-Anywhere Applications

I thought that the highlight of today’s WWDC Keynote was “Handoff," technology that allows iOS and Mac OS apps to share data in real time. In one demo, Craig Federighi began composing an email on an iPhone, and then he completed/sent it with a desktop Mac. Another demo involved opening a web page on a Mac, and then showing it to a friend using an iPad. Apple didn’t announce whether this technology will be available to third parties, or only in first-party apps (Apple often releases new OS features to internal apps before it makes them available externally). I’m guessing that even if this technology isn’t available to everyone this year, it will be by iOS 9.

The thing that excites me about this announcement is the potential of interoperable, run-anywhere applications. We are in the process of transitioning from having our applications and data tied to a single device to being able to access them anywhere. While this has actually been underway for quite a while in the data realm (thanks to DropBox, iCloud, etc...), this move begins the transition towards run-anywhere applications. One could argue that the advent of rich web applications was the beginning, but the longer-term goal is to allow even native apps to share data and run anywhere.

I can see a future where developers build a single version of their app that runs on iPhone, iPad, and Mac OS X (and hopefully Linux, Android, and Windows OSes, although let’s leave that discussion for another time). We already (sort of) have device-independence for the web in terms of responsive web applications - developers can build a single version that scales to a variety of different screen sizes and platforms. Since pretty much any device can run a web browser, we have a low level of device independence. However, web applications are still much slower than native apps, and lack direct low-level access to the hardware. In the long-run, I’m guessing that web apps will merge with native apps as web browsers improve in performance and gain greater hardware access (Google is doing some interesting things in allowing developers to build Chrome apps that look and work like desktop apps). But we aren’t quite at this vision of a unified platform, and won’t be there for quite some time (if ever - that would require too much cooperation between companies that hate each other).

iOS already has a mechanism to build apps for both the iPhone and iPad, although it isn’t as elegant as responsive design (developers use discrete layouts rather than designing a single layout that scales to the appropriate screen size). So, for now, that will have to do. Now the big question is whether it’s possible to extend similar support to the Mac. Obviously there would need to be an elegant way to deal with differences between iOS and the desktop, such as the touch/click difference, window chrome/resizing, and multitasking. But, even if there were an API that glossed over these differences by default (specify one fixed window size and convert touches to clicks), it would be a good start. That way, you could get instant cross-platform compatibility, and could extend the apps to elegantly support each platform.

Now we have to consider data sharing, since it's important to have not just run-anywhere applications but also access-anywhere data. The Mac OS App Store moves the desktop more towards a sandboxed version of the world, so it's feasible that all of the data would be stored in Apple's iCloud rather than on your hard drive (most of what's on my SSD is just program files - all of my actual data is in the cloud). Once we have everything stored in the cloud, and appropriate APIs to support access, we can easily have our data anywhere (and we do even today, so long as we have an appropriate application that can read/use that data). There's still the need to synchronize data that hasn't yet been written to a storage device (such as an email in progress), but technologies like Handoff have tricks to handle just such a problem. When developing applications for the unified platform, developers will be able to annotate the data that needs to be available anywhere, and then the applications can take care of state synchronization between devices. If it's truly a single code base that's running everywhere, the synchronization problem is actually easier to solve, because you only have to write the logic once.
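
To make that idea concrete, here is a purely conceptual sketch (in Python, and emphatically not Apple's actual API - the class and field names are made up) of what "annotating the data that needs to be available anywhere" might look like: the application marks which fields travel with the user, and a sync layer ships exactly those fields to whichever device picks up the task.

```python
# Purely conceptual sketch - NOT Apple's actual API. The idea: mark which
# fields of an application's state should follow the user, and let a sync
# layer ship exactly those fields to whichever device picks up the task.

import json

# Fields annotated as "available anywhere"; everything else stays local.
SYNCED_FIELDS = {"to", "subject", "body", "cursor_position"}

class EmailDraft:
    def __init__(self):
        self.to = ""
        self.subject = ""
        self.body = ""
        self.cursor_position = 0
        self.local_render_cache = None  # device-specific, never synced

    def export_state(self):
        """Package only the annotated fields for hand-off to another device."""
        return json.dumps({field: getattr(self, field) for field in SYNCED_FIELDS})

    def import_state(self, payload):
        """Apply the state received from the device the user just put down."""
        for field, value in json.loads(payload).items():
            if field in SYNCED_FIELDS:
                setattr(self, field, value)

# The user types half an email on the phone, then picks up the laptop.
phone_draft = EmailDraft()
phone_draft.to, phone_draft.body = "friend@example.com", "Hey, I was thinking..."

laptop_draft = EmailDraft()
laptop_draft.import_state(phone_draft.export_state())
print(laptop_draft.body)  # picks up right where the phone left off
```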

The result is that you can put down one device, pick up another, and continue working with the same applications. Sure we’ve sort of had this for years, but only in bits and pieces, and not as a single, coherent solution. In its latest attempt to tie us even more closely to the Mac platform, Apple actually shows us the high-definition version of this vision.

If You’re Feeling Totally Lost, You’re Probably on the Right Track

From time to time, I find myself feeling lost. I don't mean a little bit lost - I'm talking about the existential questions. 

"What the fuck am I doing with my life? Am I wasting it all? Is everything going to be for naught? What will everyone else think of me?" 

And, when that happens, I usually assume that I've made a wrong turn somewhere, and start to freak out. But, in truth, I'm usually onto something big when I have that sort of feeling - which only makes me freak out more, question myself, and inevitably self-sabotage by talking myself into abandoning what I'm doing and turning back. Here's a quick parable for what's happening.

The Receding Shoreline
When you feel utterly and completely lost, you have managed to swim far enough that you can’t see the shore any more. You know, your old life, that comfortable albeit somewhat boring existence where you went to work every day and worked on someone else’s projects. If you were halfway good at your job, you probably made a decent amount of money and were in general mostly content. Except for that subtle underlying malaise that led you to leave in the first place. Now, when you left that life, and started swimming away from the shore, you looked back every few strokes to make sure that it was still there. Because, you know, it could have moved or something. And maybe you even shouted to someone on land every once in a while. “Hey! How’s it going? What’s the weather like on shore?”

But, over time the shore moved farther away and became smaller, until finally it was just a dot on the horizon. And when you called back to your friends on shore, they couldn’t hear you over the noise of the surf. And finally, one day, you couldn’t see the shore any more. But the strange thing was that you could see another dot in the direction you were swimming. Actually, you could probably see many, many dots, in every direction other than the one you came from. And it was probably pretty scary and confusing, because you didn’t know what dot to swim towards. So you panicked and froze, and realized that you had three options.

Your Three Choices
The first (and most obvious) option is to turn back. That is probably the easiest option, because you knew what would happen if you chose this one. After all, the shore wasn’t that far away, and if you began swimming back, you could probably get there pretty soon. Sure, your whole trip may have been a waste, and you weren’t excited about that place you left, but hey, it’s sort of comfortable in a warm and fuzzy sort of way. So there was a pretty good chance you turned back. And, if you did, you probably ended up with a pretty good job at some company that only left you mildly dissatisfied. 

In all honesty, this is probably the best option in most cases, and if you haven’t taken it before, you might want to. Because, by the second or third time you have to make the choice, you will probably be ready to make a bolder choice. But hey, experience doesn’t come cheap.

The second option is to drown. That isn't really an active option, because no one willingly drowns, but if you keep treading water for long enough, or a shark comes along, it's definitely a possibility. By continuing to not act, you continually increase the probability that you won't make it to any shore, so it becomes the default option. You could take drowning to mean suicide, or the inability to find a job, or whatever you want to make of it. But, in general, drowning isn't really a pleasant option. So I would recommend doing anything possible to avoid drowning, i.e. choosing one of the other two options.

So, the last option is to forget about where you came from and to swim like mad for a point on the approaching bank. It's important to pick one point, because if you pick too many, you will bounce between them and never get anywhere. And eventually you will either drown or turn back. But assuming that you can focus hard enough and kick hard enough, hopefully you will make it. At least until the next time you decide to look back and notice that the shore still isn't there any more.

Why You Shouldn’t Quit
So, what's the lesson here? The point when you are feeling lost is precisely the one where you shouldn't quit. Because you've already made a lot more progress than you can imagine. It's only when you can begin to see the goal that you start to freak out for real (our own self-sabotage is actually our biggest enemy). So you can turn back and give up everything you've fought for so far, or you can keep moving towards the unknown. It won't be anything like what you are imagining, but I'm sure that in the end it will be a lot more satisfying than where you came from. But the choice is yours.

Why Your Life Will Unfold Differently From Anyone Else's

A while back, I wrote a post about how it doesn't make sense to compare yourself to others. My justification was that this inevitably leads to depression, self-flagellation, and all sorts of other bad things. But there is a completely different reason why you shouldn't compare yourself to others - you are completely different from everyone else, and no matter how hard you try, you aren't going to be like them. I know that you probably watched Fight Club, and accepted that you aren't a special snowflake. And I hate to break it to you, but that's all a lie. It's just an excuse that we tell ourselves to allow ourselves to fall into mediocrity.

The truth is that your life is going to unfold completely differently from anyone else’s life (even that of your twin brother or sister). And for good reason; you have a unique set of things that drive you. Those drivers are going to cause you to do things differently from anyone who has ever lived before (or who will live in the future). When you combine those different choices with a different set of luck and circumstances, your life will look radically different from any other life (if only on a microscopic scale). Sure, there are probably common themes that run through most peoples’ lives. All of us will be born, and will die. We will probably have jobs at some point, and be in some sort of relationship (more likely plural than singular). Maybe we will get married and have kids. We will likely experience loss at some point in our lives, and feel alone and isolated. And at other points we will feel completely connected and plugged in to the world. But, at the end of the day, each of us will have his or her own experience. And each moment is uniquely ours, to be experienced as we see fit.

So what does this mean? Well, when you compare yourself to others, you rob yourself of the richness that is your life. In wanting someone else's life, you actually forego living your own. In truth, you probably aren't going to have what someone else has by the time you reach whatever age he or she had it at. That person was driven by his (or her) own motivations, and he got what he sought through some combination of doing and happening (let's not begin to discuss the ratios of the two). And if he didn't get what he thought he "wanted," then maybe the conscious motives weren't aligned with the true motives. So, if you think your motives are exactly the same as someone else's, they aren't (caveat emptor to anyone who is trying to find a business partner - the best you are going to do is to find alignment on a small but crucial set of points). It is impossible to know someone else's motives - you can only guess based on your personal interpretations of what they say and do.

For example, there was a long-standing rumor in the Valley that Mark Zuckerberg wouldn't sell Facebook until he could have a billion dollars after taxes. Clearly, that wasn't the case, or he would have sold Facebook a long time ago. It's also possible that he had this intention at some point, and then his agenda shifted later. Truthfully, no one knows why he built Facebook, not even Mark Zuckerberg. He can say and believe whatever he wants about his starting motivations, but no one (even him) will ever truly know why he opened up his computer one day and began writing code. All that we can say with any certainty is that it happened. And no one knows why he quit Harvard to work on his little startup, or why he stuck with it until he owned the largest stake in one of the biggest tech companies in the world. He just did a series of things, and had a series of interactions with other people, and had various occurrences of luck, and at the end of the day, he somehow ended up as the chief executive officer of Facebook. This is not to say anything about whether he deserved it or how much luck was involved vs. skill - Facebook as we see it today is just an emergent property of the system that includes Mark Zuckerberg and some combination of other things.

So, if you want to be a billionaire, your best bet is probably not to try to have the same motives as Mark Zuckerberg, or even to do the things that you think that he did to earn a billion dollars (actually, it’s more like $30 billion in his case, but numbers stop mattering once they cross a certain threshold). Because you can’t ever truly know what was going on when he did those things, and without the same motives (and luck), there is no way for you to replicate what he did. If you do want that billion dollars, your best bet is probably just to make having a billion dollars your number one priority. Focus all of your attention on that desire, and let that guide you towards actions that will bring about the goal. And, if you fall short, the simplest conclusion is that you got sidetracked on something else, and you focused on that instead. 

Just don't ask me what to do, because I've never been good at making large amounts of money (my general priority is to have just enough money that I can effectively ignore money in most cases). I will give you some advice that will probably save you a step and lots of heartbreak. No one ever truly wants money just for the sake of having money - money always represents something else. Just want all of the things that you will be able to have when you have a billion dollars, and go for those instead. Because I'm betting that you don't really need a billion dollars to have any of them, and even if you want to own a professional sports team, you might actually have to forego a bunch of the more important items on your list to achieve that (most billionaires spend the vast majority of their time working, even if they are sitting in the owner's box watching the game).

How To Protect Your Startup Against The Things You Can’t Predict

Many years ago, I was an engineer at a large software company, and helped to launch a new product. I spent months going to preparation meetings, and filling out various readiness checklists. People looked over our code for security issues, and we had to do “load testing” (although, in my experience, the only real load testing you can do is with live users). And, after what seemed like an eternity, we were finally ready to launch.

We went live, and were instantly overwhelmed by the traffic we received. A security issue popped up due to the load - basically, if the auth server fell over, we would let the user in (this was my fault - I didn’t know any better). We ran out of disk space within a few days, and had to switch over to larger servers (which required manual intervention). Overall, it took about a month to get our servers stabilized, and while nothing catastrophic happened, we couldn’t exactly say that we were prepared for launch. So what was the purpose of all these checklists?

The Problem With Checklists
Well, in my experience, most safeguards involve making a list of all of the things that went wrong in the past, and preventing these from happening again. And that’s great - if you make your list long enough, you will prevent most of the common mistakes. As a result, large companies tend to have lots of red tape, and to launch anything, you need to clear all of that red tape. This has a way of stifling innovation, or at the very least, reducing the outliers to the mean. And, you have the added problem that, no matter how many safeguards you put in place, you won’t ever prevent bad things from happening. This is the whole thesis of the book "The Black Swan," which asserts that while the chance of a specific outlier happening is pretty small, the total sum of outliers (i.e. the long tail) is actually pretty significant (if you’ve read the book, you won’t be surprised by anything I’m going to say from here on).

So how can you guard against things that you can’t even predict? I mean, if you are a new startup these days, you probably finish coding your MVP, push it to Heroku, and then post it to Hacker News/Facebook/Reddit as quickly as possible. There are a million startups doing exactly what you are doing, so time is of the essence. You probably don’t have time for launch checklists - heck, a lot of early stage startups don’t even bother with testing (we can debate the merits of this later). Anything that reduces your velocity is probably impeding your ability to succeed, right? So how can you predict which one of a million bad things could possibly happen to your company? After all, the most important thing is getting people to use your product.

The Solution?
So, here’s what I would recommend - think of the most common classes of things that could go wrong, and then figure out ways to mitigate the damage of these outliers. In general, these would be:

  • My server goes down, either because I get too much load, or because a component fails. The load situation probably isn’t going to happen at first (you might want to focus more on the situation where no one comes), although there might be pieces of infrastructure that won’t even handle a minimal amount of load. You should know what your weakest points are, and how you are going to handle these either going down or not performing adequately. Also consider what happens if a service provider isn’t up to spec. On this line of thought, I've had a lot less stress when I've used known service providers (e.g. Heroku, WPEngine, Posthaven) than when I've tried to host things by myself. Yes, Heroku is down/slow sometimes, but less frequently than your server will be if you don't have a full-time site reliability engineer.
  • I get hacked. Probably not going to happen at first, but by the time that it happens, it might be a big deal (see Snapchat or Target). Try to make sure that, even if someone compromises your production database, they can’t get any payment information or cleartext passwords (and salt your hashes).
  • Some "idiot" on my team accidentally deletes the production database (and I put it in quotes, because intelligent people screw up every once in a while, and it's fine, so long as it's once in a while). I actually had this happen recently - good thing we had a fairly recent backup. And you can bet that I put in place a much more aggressive backup scheme once we lost six hours of user data and had to apologize. There is a subtler version of this: I push a totally broken build, and an irreversible database migration means I can't go back to the old build. This is like a server-failure scenario, except that you can't just spin up a new database server unless you have a backup of the data (you should probably have a mitigation plan for anything that's irreplaceable). In general, the solution is a fairly frequent backup that's stored offsite - a minimal sketch of what that might look like follows this list.
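
Here is a minimal sketch of that offsite-backup idea. It assumes a Postgres database dumped with pg_dump and an S3-style bucket reachable through the AWS CLI; the database name, bucket, and schedule are placeholders rather than recommendations.

```python
# Minimal sketch of "frequent backups, stored offsite". Assumes a Postgres
# database and the AWS CLI; names and schedule below are placeholders.

import datetime
import subprocess

DB_NAME = "myapp_production"          # hypothetical database name
BUCKET = "s3://myapp-db-backups"      # hypothetical offsite bucket

def backup_once():
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/tmp/{DB_NAME}-{stamp}.sql.gz"

    # Dump the database and compress it into a timestamped file.
    with open(dump_file, "wb") as out:
        dump = subprocess.Popen(["pg_dump", DB_NAME], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        if dump.wait() != 0:
            raise RuntimeError("pg_dump failed")

    # Copy it somewhere that is NOT the server someone can fat-finger.
    subprocess.run(
        ["aws", "s3", "cp", dump_file, f"{BUCKET}/{DB_NAME}-{stamp}.sql.gz"],
        check=True,
    )

if __name__ == "__main__":
    backup_once()  # run from cron as often as the data you can afford to lose allows
```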

There are probably a bunch of other things you should account for. If you have dependencies on external servers or services, try to understand what happens when they go down. Their outage may cause you to go down, and that might be ok (just put up a big fail whale), but try to make sure that there isn't any unsafe fallback behavior (e.g. your auth server goes down and people can suddenly view the contents of every account). While you aren't going to make your server hacker-proof, you might want to think about the consequences of a security error and plan accordingly (e.g. you forget to auth-protect a new page that you put on the server) - in general, it's better to blanket-deny access than to blanket-grant it.
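
One way to make "blanket deny" concrete is to structure routing so that every handler is private unless it is explicitly marked public, so a forgotten auth check fails closed instead of open. This is a framework-agnostic sketch with made-up names, not a recipe for any particular web framework:

```python
# Sketch of "deny by default": handlers are private unless explicitly
# marked public. Names are made up; adapt to whatever framework you use.

PUBLIC_HANDLERS = set()

def public(handler):
    """Opt a handler in to anonymous access; everything else requires auth."""
    PUBLIC_HANDLERS.add(handler)
    return handler

def dispatch(handler, user):
    if handler not in PUBLIC_HANDLERS and user is None:
        return "403 Forbidden"          # fail closed
    return handler(user)

@public
def landing_page(user):
    return "welcome!"

def account_page(user):                  # author forgot to think about auth here...
    return f"secret data for {user}"

print(dispatch(landing_page, None))      # allowed: explicitly public
print(dispatch(account_page, None))      # ...but it still fails closed: 403 Forbidden
```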

And one of the most important things I learned at that big company was to put in place a service that emails you whenever your app throws a 500 error (there are a number of good services for this). The same goes for server monitoring - it's better to get an email saying that your database server is down than to hear about it from an angry user who can't access his data.
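
A hosted monitoring service will do this far better, but the underlying idea is simple enough to sketch: poll a health-check URL and email yourself when it fails. The URL and mail settings below are placeholders (and it assumes a local SMTP relay).

```python
# Toy version of uptime monitoring: poll a health-check URL and email
# yourself when it fails. The URL and mail settings are placeholders.

import smtplib
import time
import urllib.request
from email.message import EmailMessage

HEALTH_URL = "https://example.com/healthz"   # hypothetical endpoint
ALERT_ADDRESS = "you@example.com"            # where to send the bad news

def send_alert(reason):
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {HEALTH_URL} is unhappy"
    msg["From"] = ALERT_ADDRESS
    msg["To"] = ALERT_ADDRESS
    msg.set_content(reason)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def check_once():
    try:
        # urlopen raises on 4xx/5xx responses as well as network failures.
        urllib.request.urlopen(HEALTH_URL, timeout=10)
    except Exception as exc:
        send_alert(f"Health check failed: {exc}")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(300)  # check every five minutes
```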

A lot of good companies go further than just having a plan - they actually stage failures, and try to recover from them. I’ve heard that the Heroku team used to play a game where someone would take down pieces of their service in a creative way, and then other people would have to figure out how to recover. Apparently, the Obama campaign’s IT infrastructure stayed up through Hurricane Sandy because they had already staged the contingency where the eastern seaboard went down.

Planning For Your Business
Finally, there are the disasters that are specific to your company’s business. For example, what happens if your competitors send you a cease and desist? What if Google drops you from their index? What happens if your sole data supplier revokes your access or pulls your contract (hint - never single-source anything if you can help it)? What happens if a founder walks out one day (this happens all the time)? While you aren’t going to predict all of these, you can probably figure out the major classes of disasters, and take some steps to prepare for their possible (likely?) occurrence. For example, if the Internet goes down, I’m going to walk up and down Valencia Street passing out paper copies of my blog post to all the hipsters sitting in coffee shops.

In general, while you aren't going to be prepared for everything, intelligent preparation goes a long way. Emphasize velocity above everything else, but have a mitigation plan for your most common disasters - it can save you a lot of stress later on. Because we've all been in the situation where we're running around trying to fix an issue, and it's always worse when users are screaming at us over email/Facebook/Twitter/phone and we have no idea how we're going to get through this one.

Learning How To Avoid Mistakes

Over the years, I've frequently heard it said that success is about not making the same mistake twice. I originally thought that just being aware of my mistakes would allow me to act in a healthier (and more sane) manner, but sadly, this has not always been the case. Over the past several years I've noticed myself making similar mistakes more than once, and even once I've become aware of a mistake, that doesn't mean that I'm free of it. By studying this process, I've learned that achieving success is actually a lot more complicated than noticing mistakes - you must discover and correct the major areas that block you from success.

After much trial and error (and self-reflection), I've discovered that the process of learning to not make a particular type of mistake is actually akin to attaining mastery at a skill. There are four different phases in the process of mastering something new: unconscious incompetence, conscious incompetence, conscious competence, and unconscious competence. When I decide that I'm going to try something new (let's say playing tennis), I start out lacking competence and also lacking awareness of that lack of skill. Once I start training, I gain awareness of my lack of skill, and begin to correct that. Over time I gradually build skill, and finally that skill becomes part of who I am. At that point, I am said to have mastery of that skill (although even that is an oversimplification). In a similar vein, I present the stages of learning to avoid a particular class of mistake:

1) Being Unaware of My Mistake
I'm sure that you have heard the old adage "ignorance is bliss," but sadly this one isn't always so true. We make many mistakes unconsciously, and don't even realize that we have made them until we see the eventual consequences. For example, I have this habit of finding fault with things and then pointing out those faults in a way that isn't terribly diplomatic. Typically, I don't even notice that I've made a mistake until either someone calls me out on it or the consequences show up down the road. There are times, for instance, when I unknowingly piss someone off and it eventually comes back to haunt me. You would think that I would become aware of this after the first time, but I realized recently that I had been doing it for most of my life and wasn't even aware that I was making a mistake, because the consequences weren't always immediately apparent.

2) Having Consciousness of My Mistake
After a while, I start to realize that I'm making a mistake. It isn't enough for other people to call me out on the mistake - I have to actually notice that I am making the mistake, and accept that this mistake is having an impact on my life. For example, I have a "friend" who sometimes tells me about things other people observe in him, but it always comes from the perspective of a detached third-person observer. Until he is able to see the impact that these mistakes are having on his life, he is unable to accept that they don't help him and start to work on a solution. This stage actually has two phases. In the first, I see the impact of my mistake on my life, but only after the fact or in a cumulative manner. 

By the time that I reach the second phase of this stage, I am able to see the mistake as it happens. For example, I realize that I'm getting resentful at someone or something, and I want to say something to them. And then I notice myself blowing up. One would think that as soon as I'm able to see the consequences of my actions, I would be able to instantly avoid those actions (hence the saying about not making the same mistake twice), but I often find myself making similar mistakes numerous times before I correct them. At some point, the pain of the mistake builds up enough (for example, it causes me to leave a string of jobs), and I finally resolve that I'm going to do whatever it takes to correct it.

3) Consciously Avoiding My Mistake
This is when I actually start to notice that I'm on the cusp of making the mistake, and I take steps to behave differently than I would have in the default situation. In many cases, I will have to learn new skills for handling situations. For example, when I find myself feeling resentment towards something or someone, I may write about it until I am clear on the underlying feelings and fears that are leading towards that resentment. At that point, I may decide to change my behavior to address those fears, or alternatively I may share my underlying feelings with the original object of my resentment. 

One example of this would be that I find fault in my coworkers, but the underlying fear is that I'm jealous they have skills I lack. After doing some reflection, I will either realize that I have the skills (and just lack confidence), or I will just tell the coworkers that I admire their abilities (and am slightly envious of them for those skills). Then I can look at the issues I found, and find a reasonable way to address these. Regardless, I will probably have to make this into a conscious process for quite some time, and I may find that for quite some time, I'm getting it wrong as frequently as I get it right. Over time, the ratio will slowly start to tip, and eventually, I will find that the new behavior is (nearly) automatic. 

4) Autocorrecting
By the time that I reach the final stage, I rarely make the mistake because I have learned to autocorrect. The new behaviors I have taught myself become so well ingrained into my consciousness that I don't need to fall back on the behaviors that lead to the mistake. In a situation such as the example above, I may find myself doing a daily practice to keep myself clear of resentments, so I address any issues before they threaten to become blowups at other people. Or, I may find myself gravitating towards situations where I don't have as much propensity to become resentful at others (potentially because I realize that I offer plenty of value and don't need to hold jealousy). And, once I have dealt with all of the underlying issues, I am easily able to effect the things that I want (although probably not 100% of the time). This stage is the equivalent of "Mastery," and in fact it is a mastery of sorts. Mastery is probably more of an asymptote than something I can actually accomplish. There is probably always the opportunity to get to a higher level of "mastery," although at some point, you will find that the mistake is no longer holding you back from success, and it's time to shift attention to something else.

On Feeling Sorry For Myself

I find myself frequently falling into a pattern where I compare myself with other people, and this never leads to a positive outcome (I often find myself becoming depressed). The reality is that almost anyone I can think of has something that I don’t have (no matter how trivial), and thinking of it is an inevitable downer. Maybe they have a great marriage and a family, or maybe they have made a lot of money/have a job that seems great. Or maybe other people just like them more for some reason. But, at the end of the day, this way of thinking is always a trap. And here’s why...

First of all, no one’s life is perfect, and there is always something to find fault with. Which is to say that even if I had the same things that they had, I’m sure there would still be something to complain about. Complaining is a coping mechanism that we use to prevent ourselves from moving forward. When we complain, we are actually saying, “This won’t do. I’m going to use it as an excuse to avoid moving forward.” In truth, there is always something to complain about, and the people who succeed focus not on what’s wrong, but on what’s right. People who have found success managed to silence most of the complaints that arose, and moved forward despite them. I haven’t mastered my objections to the point where I can achieve at the level they do, so even if I were magically advanced past my self-imposed barriers, I would instantly be stuck with a new set of challenges.

Second, even if some people have more than I do, the vast majority of people in the world have less than I do. Like probably 1/10 of what I have, or even less. And I know for a fact that many of those people are quite happy with what they have. I’ve been given an obscenely large amount of both talent and luck (not to mention financial resources), and although there are few things in my life to complain about, I still find plenty of opportunities to complain. Whenever something bad happens, I get stuck in those moments, and then when it’s time to be grateful for the good things, I quickly put them out of my mind forever. On the other hand, there are many people in this world who don’t have a lot of good things happen in their lives, but when something positive does happen, they manage to show copious gratitude. Whenever something good happens, I find myself coming up with some way to marginalize that good event. For example, I might obsess about the next deadline. All of this shows that the key to being happy with oneself is not having more, but learning to be happy with what you have.

And a bit more on idealizing other people's lives. I was recently hanging out with a friend who has a lot of things that I don't (he's married, has a family, makes a lot of money at work, just bought a nice house, etc…). But strangely, when we were talking, he actually seemed pretty unhappy - possibly the least content he's been in the whole time I've known him. He was happiest when I met him five years ago, back when he was a poor entrepreneur. And then it all came together.

The truth is that the most valuable thing you can have is flexibility to do whatever you want with your life. And, if you believe that you have absolutely nothing going for you, then at the very least, you probably have a lot of flexibility (good things take a lot of time and effort to sustain). So you could go out today and do pretty much anything you want (and if not today, then maybe tonight or this weekend). Write a book (or a blog post). Take a trip (or even just walk all the way out to the beach). Start building a new product that you've always thought the world needed. Learn a new skill. Take a course online. The possibilities are actually endless. It's your choice to actually go and seize the day, so start taking advantage of all those possibilities. Because you won't always have as much flexibility as you have now, and when you don't, you will romanticize where you are right now. So what are you waiting for? Go and do it!!!

Why 4K On The Desktop is a Big Deal

This week, a bunch of hardware manufacturers announced 4K monitors (http://www.engadget.com/2014/01/06/asus-28-inch-4k-display/) for about $800. These monitors, which actually aren't quite 4K (more like 3.8K), cram 3840x2160 pixels into a 28-inch screen. For those of you doing the math, that's exactly 4x as many pixels as in a traditional HD display, or 2.25x as many as the current state-of-the-art display, the 27-inch (2560x1440) monitor.

Computer monitors have had a pretty interesting evolution (or lack thereof) over the years. For a long time, monitors were pretty low-resolution - you were lucky to get 1024x768 in the mid-90s (and for that, you had to spring for a state-of-the-art 17-incher). Then 20-inch CRTs came out in the late 90s, and got us up to 1600x1200. But since then, things have remained relatively stagnant - as monitors transitioned from CRT to LCD, maximum resolutions stayed put, and the only evolution they've had is to match the widescreen (16:9) aspect ratio of HDTVs (great if you use your computer for watching movies). Your 2014 23" HD display has a resolution of 1920x1080, which actually gives you fewer vertical pixels than your 20" monitor had 10 years ago. Many professionals use 27" or 30" monitors with more pixels, but the information density is the same. Most desktop displays currently max out at about 100 ppi. 4K monitors will finally change this.

So why would you want more pixels on your display? The answer is more complicated than just “you can fit more information onto the screen,” because the human eye can only perceive detail down to a certain level. My 13.3-inch Retina MacBook display contains as many pixels as most 30-inch monitors, but I can’t display it at full-resolution, because the text would be too small (leading to eye strain and eventually blindness). So, the trick is that you can display an image at less than the full resolution, but use the additional pixels to improve the image quality. Apple calls this “HiDPI” - they render the image at twice the indicated resolution, and then display it on a high-resolution display.

Apple’s “retina displays” depend on the principle that your eye can only perceive a certain amount of detail. This is why 1080p screens don’t make much sense on 5” smartphones, except for marketing purposes (a 720p display should be sufficient). When the pixels drop below a certain size, your eye ceases to see them any more, and all that you can perceive is the image they represent. When Apple’s Retina MacBooks first came out, I was skeptical, but then I saw one up close, and decided that they were pretty amazing (I bought the second generation rMBP shortly after it was announced). I don’t get significantly more information on my screen than I did before, but everything definitely looks a lot better.

The threshold for a retina image depends on how far away the device will be (http://isthisretina.com/). For a smartphone, which we hold about a foot from our face, we need about 300 ppi to reach true retina density. For a laptop or tablet, which is perhaps 18 inches away, the required density drops to maybe 220 ppi. A desktop monitor is a bit farther yet, at about 24 inches, and at that distance, we need maybe 150 ppi to hit retina density. The upcoming 28-inch 4K displays squeak in under the bar, and will give a significant increase in display sharpness.
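
For the curious, here is the back-of-the-envelope math, using the common rule of thumb that a "retina" pixel should subtend no more than about one arcminute at the stated viewing distance. The numbers come out close to the thresholds above; they are approximations, not exact vision science.

```python
# Back-of-the-envelope: pixel density of the 28-inch 4K panel, and the
# rough "retina" threshold at various viewing distances, using the
# one-arcminute-per-pixel rule of thumb for 20/20 vision.

import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

def retina_threshold(distance_in, arcminutes=1.0):
    """PPI needed so one pixel subtends <= the given angle at this distance."""
    pixel_size = 2 * distance_in * math.tan(math.radians(arcminutes / 60) / 2)
    return 1 / pixel_size

print(round(ppi(3840, 2160, 28)))       # ~157 ppi for the 28-inch 4K panel
print(round(retina_threshold(12)))      # ~287 ppi needed at phone distance
print(round(retina_threshold(18)))      # ~191 ppi at laptop/tablet distance
print(round(retina_threshold(24)))      # ~143 ppi at desktop distance
```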

While $800 seems like a lot to spend on a monitor, existing 4K monitors cost about $3,000, so that's a big price drop. 27-inch displays currently cost about $500-700 (excepting the cheap Taiwanese monitors off of eBay), and an Apple Thunderbolt Display costs about $1,000. The result is that the new crop of displays will give a pretty big visual upgrade at only a modest price increase, and you can expect prices to drop further over time. In fact, I wouldn't be surprised if Apple releases a Thunderbolt display based on this panel, styled the "Retina Thunderbolt Display."

We only have one remaining issue - many existing computers have trouble displaying a 4K image at full refresh rate (60Hz). HDMI and DisplayPort, the two most common connectors on modern PCs and monitors, max out their bandwidth at about 2560x1600 at 60Hz. If you want to push twice the number of pixels, you have to halve the refresh rate. This is why the one affordable 4K display to date (the 39-inch Seiki Digital SE39UY04, which sells for about $500) is limited to 30Hz when you're using it as a monitor. You have two options for getting around this. One is to put two HDMI/DisplayPort connections on the monitor, and use each for half the image. This doesn't work too well on most laptops, which only have one video-out port.
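
The arithmetic behind "halve the refresh rate" is straightforward if you ignore blanking intervals and link overhead: 4K at 60Hz needs roughly twice the raw pixel rate of 2560x1600 at 60Hz, while 4K at 30Hz lands back in the same ballpark.

```python
# Rough pixel-rate arithmetic (ignoring blanking intervals and link overhead):
# a link that comfortably drives 2560x1600 at 60Hz can only manage roughly
# half that refresh rate at 4K.

def pixel_rate(width, height, hz):
    return width * height * hz  # pixels per second

qhd_60 = pixel_rate(2560, 1600, 60)   # ~246 million pixels/s
uhd_60 = pixel_rate(3840, 2160, 60)   # ~498 million pixels/s, roughly double
uhd_30 = pixel_rate(3840, 2160, 30)   # ~249 million pixels/s, back within budget

print(f"{qhd_60/1e6:.0f}M, {uhd_60/1e6:.0f}M, {uhd_30/1e6:.0f}M")
```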

The second option is to use a feature of DisplayPort called MST (Multi-Stream Transport) mode. Originally designed to allow DisplayPort monitors to be daisy-chained, it can also drive a single 4K monitor at 60Hz. The problem is that not all video cards support MST quite yet. The new (trashcan) Mac Pro supposedly includes 4K support via MST mode, and rumor has it that the Retina MacBook Pro supports MST on Windows but not in Mac OS (http://9to5mac.com/2013/12/23/new-retina-macbook-pros-can-drive-4k-displays-at-60hz-when-running-windows-mac-os-needs-new-drivers/). Expect wider support to come in the near future.

So, 4K support is coming soon, and despite all of the nerdy details, I’m guessing that it will actually yield the first major upgrade to desktop displays that we’ve seen in many years. The future is starting to look a lot sharper.

How I Usually Write a Blog Post

The most challenging thing about blogging regularly is that it requires me to suspend my self-judgment for long enough to let the magic happen. I have more than enough potential ideas for blog posts - during the average day, I'm flooded with thoughts, and pretty much any of them could turn into a reasonable post. Usually these ideas flit out of my head as quickly as they enter, but when I take enough time to actually write those ideas down (walking around with a smartphone or paper notepad definitely helps), I quickly come up with an imposing-looking list of topics. So, I should never have to worry what my next blog post will be "about," because it's already taken care of. But a strange thing happens when I think about sitting down to write - all sorts of fears and judgments do their best to dissuade me. For example:

“What if it isn’t as good as my last post?” 

“What if no one wants to read it?”

“What if it isn’t on-topic for my blog? (one of my favorite things is when a writer I like publishes something completely off topic - it gives me a chance to see how he or she can apply that familiar style to a completely new and different arena)"

“What if people read my post, and that causes them think that I’m stupid or not a good writer?” 

“What if I can’t think of enough interesting content to fill 1,000 words (if you read all of my recent posts, you will find that most of them are approximately 1,000 words, including this one)?"

There are literally hundreds of reasons why I could avoid sitting down and writing a post, and they probably all come up at some point during the creative process. I’ve found that setting aside an hour works pretty well - if I write for longer, all the better, but an hour is enough to get somewhere, and if nothing comes out, well, I’ve wasted more than an hour on many occasions. So I sit down, and deal with the inevitable distractions that come up whenever I sit down at my computer (such as checking my email and looking through every single browser tab that’s already open). And then, when my mind is mostly clear, I can finally start to write.

I begin writing on the intended topic (or just typing things out in a stream-of-consciousness sort of manner, which typically works pretty well), and all the while, I’m coming up with reasons why no one would ever want to read my writing. About 40 percent of the time, the demons win, and I drop the post by the end of the second or third paragraph.

“See, there wasn’t enough material here to fill an entire blog post,” I tell myself, and then I promptly delete everything I’ve written. Or, even worse, it sits in limbo on my computer forever, in a Gmail draft or in a sticky note (I clear these out once every year or two).

Usually, though, I keep on typing, because sometimes when I hit the end of the second or third paragraph, a strange thing happens, and I start to drop into a flow state. For just a split second, I can see the big picture, and my job becomes simple – just find the words required to paint that picture so that everyone else can see it.

I've learned that the important thing is being able to silence the demons long enough to see the big vision, bang out a draft, and hit the post button. There are a number of blog posts that I wasn't going to publish because "they weren't good enough." And then I begrudgingly showed them to one other person, who managed to convince me otherwise (all it took was a little outside validation).

But the coolest thing about a blogging practice is that I’m actually writing for myself, and the goal is to write, not to win love and adoration. If anyone else finds my writing interesting, then that’s awesome, but if I’m the only one who ever reads this post, then I probably got something out of the process of writing it (and publishing it publicly on the Internet for everyone to potentially read).

So, I keep writing, and as I go, I add, delete, and rearrange a bunch of sentences and paragraphs. I take some of the passive-voice sentences and rewrite them in the active voice (thanks to Mr. Bruner, my high school English teacher). I usually find that a bunch of things don't really fit with my original argument, so I get rid of them. And I add in a few points to flesh out the places where the argument isn't completely clear and coherent. And then, finally, there is this point where the first draft of the blog post seems more or less complete.

I typically read through it, thinking up a bunch of reasons why I should just hit the delete button and put the post out of its misery. But, after a few more read-throughs, with some minor changes each time, I figure that it’s pretty much now or never, and I might as well just publish my writing for the hell of it. I copy and paste the document from my notepad into Microsoft Word, look for typos, add some witty headings, and then check the word count (hopefully I’m close to the magic number 1000).

Finally, I open up my blogging software, and paste in the final article. After taking one last read-through, considering scrapping the whole thing once more, and hovering over the button for about 5 minutes, I hit submit, and the article (*finally*) goes live. At that point, I read through it once more just to check for typos, and I probably find one or two nitpicks to adjust. Then I post to Twitter and a few aggregator sites, and check my blog stats page every five minutes to see whether anyone new has read my post.

And, that’s it. My work is done, at least until the next regularly scheduled session.