• Lyft vs Uber: A Tale of Two S-1s

You can learn a great deal from reading and comparing the financial filings of two close competitors. Tech-finance nerd that I am, you can imagine how excited I was to see Lyft's and Uber's respective S-1s become public within mere weeks of each other.

While the general financial press has covered many of the top-level figures on profitability (or lack thereof) and revenue growth, I was more interested in understanding the unit economics — what is the individual "unit" (e.g., a user, a sale, a machine) of the business, and what does the history of associated costs and revenues say about how the business will (or will not) create durable value over time?

For two-sided regional marketplaces like Lyft and Uber, an investor should understand the full economic picture for (1) the users/riders, (2) the drivers, and (3) the regional markets. Sadly, their S-1s don't make it easy to get much on (2) or (3) — probably because the companies consider the pertinent data to be highly sensitive. They did, however, provide a fair amount of information on users/riders and rides, and, after doing some simple calculations, a couple of interesting things emerged.

    Uber’s Users Spend More, Despite Cheaper Rides

As someone who first knew of Uber as the UberCab "black-car" service, and who first heard of Lyft as the Zimride ridesharing platform, I was surprised to discover that Lyft's average ride is significantly more expensive than Uber's, and the gap is growing! In Q1 2017, Lyft's average bookings per ride was $11.74 and Uber's was $8.41, a difference of $3.33. But, in Q4 2018, Lyft's average bookings per ride had gone up to $13.09 while Uber's had declined to $7.69, increasing the gap to $5.40.

Sources: Lyft S-1, Uber S-1

This is especially striking considering the different definitions that Lyft and Uber have for "bookings" — Lyft excludes "pass-through amounts paid to drivers and regulatory agencies, including sales tax and other fees such as airport and city fees, as well as tips, tolls, cancellation, and additional fees," whereas Uber's definition includes "applicable taxes, tolls, and fees." The gap is likely also due to Uber's heavier international presence (where it now generates 52% of its bookings). It would be interesting to see this data on a country-by-country (or, better yet, market-by-market) basis.

Interestingly, the average Uber rider also appears to take ~2.3 more rides per month than the average Lyft rider, a gap which has persisted fairly consistently over the past 3 years even as both platforms have boosted the number of rides an average rider takes. While it's hard to say for sure, this suggests that Uber is having more luck in markets that favor frequent use (like dense cities), that its lower-priced Pool product (where multiple users share a ride) is outperforming Lyft's Line equivalent, or that its generally lower prices are encouraging greater use.

Sources: Lyft S-1, Uber S-1

Note: the "~monthly" you'll see used throughout the charts in this post is because the aggregate data — rides, bookings, revenue, etc. — given in the regulatory filings is quarterly, but the rider/user counts provided are monthly. As a result, the figures here are approximations based on available data, i.e., derived by dividing quarterly data by three.
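To make the arithmetic concrete, here is a minimal sketch of how the per-ride and "~monthly" per-user figures behind these charts are derived. The inputs are hypothetical placeholders, not actual S-1 data:

```python
# Sketch of the derived metrics used in this post's charts.
# All input figures are hypothetical placeholders, not actual S-1 data.

quarterly_bookings = 2_000_000_000   # total bookings in the quarter ($)
quarterly_rides = 170_000_000        # total rides in the quarter
monthly_active_riders = 18_000_000   # rider/user count, reported monthly

# Bookings per ride needs no approximation: both inputs are quarterly.
bookings_per_ride = quarterly_bookings / quarterly_rides

# "~monthly" figures spread the quarterly aggregate evenly across 3 months.
monthly_bookings_per_rider = (quarterly_bookings / 3) / monthly_active_riders
monthly_rides_per_rider = (quarterly_rides / 3) / monthly_active_riders

print(f"bookings per ride:           ${bookings_per_ride:.2f}")
print(f"~monthly bookings per rider: ${monthly_bookings_per_rider:.2f}")
print(f"~monthly rides per rider:    {monthly_rides_per_rider:.1f}")
```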

What does that translate to in terms of how much an average rider is spending on each platform? Perhaps not surprisingly, Lyft's average rider spend has been growing and has almost caught up to Uber's, which is slightly down.

Sources: Lyft S-1, Uber S-1

However, Uber's new businesses like UberEats are meaningfully growing its share of wallet with users (and, nearly dollar for dollar, re-opening the gap in spend per user that Lyft had narrowed over the past few years). In Q4 2018, the gap between the yellow line (total bookings per user, including new businesses) and the red line (total bookings per user for rides alone) is almost $10 / user / month! It's no wonder that, in its filings, Lyft calls its users "riders" while Uber calls them "Active Platform Consumers."

    Despite Pocketing More per Ride, Lyft Loses More per User

Long-term unit profitability is about more than how much an average user spends; it's also about how much of that spend hits a company's bottom line. Perhaps not surprisingly, because its rides are more expensive, a larger percentage of Lyft's bookings ends up as gross profit (revenue less direct costs to serve it, like insurance) — ~13% in Q4 2018 compared with ~9% for Uber. While Uber's figure has bounced up and down, Lyft's has steadily increased (up nearly 2x from Q1 2017). I would hazard a guess that Uber's has also increased in its more established markets, but that its expansion efforts into new markets (here and abroad) and new service categories (UberEats, etc.) have kept the overall level lower.

Sources: Lyft S-1, Uber S-1

Note: the gross margin I'm using for Uber adds back a depreciation and amortization line that Uber breaks out separately, to keep the Lyft and Uber numbers more directly comparable. There may be other definitional differences at work here, including the fact that Uber includes taxes, tolls, and fees in bookings while Lyft does not. In its filings, Lyft also calls out an analogous "Contribution Margin," which is useful, but I chose this gross margin definition to make the numbers more directly comparable.

The main driver of this seems to be a higher take rate (the % of bookings that a company keeps as revenue) — nearly 30% for Lyft in Q4 2018 but only 20% for Uber (and under 10% for UberEats).

Sources: Lyft S-1, Uber S-1

Note: Uber uses a different definition of take rate in its filings, based on a separate cut of "Core Platform Revenue" which excludes certain items around referral fees and driver incentives. I've chosen to use the full revenue figure to be more directly comparable.
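To illustrate the two ratios being compared in this section, here is a small sketch; the placeholder inputs are chosen to roughly reproduce the Lyft Q4 2018 figures cited above (~30% take rate, ~13% of bookings as gross profit), not taken from the filings directly:

```python
# Sketch of take rate and gross-profit share of bookings (placeholder inputs).
quarterly_bookings = 2_000_000_000       # gross bookings ($)
quarterly_revenue = 600_000_000          # the slice the company keeps ($)
quarterly_cost_of_revenue = 340_000_000  # direct costs to serve (insurance, etc.)

take_rate = quarterly_revenue / quarterly_bookings
gross_profit = quarterly_revenue - quarterly_cost_of_revenue
gross_profit_share = gross_profit / quarterly_bookings

print(f"take rate:                     {take_rate:.0%}")           # ~30%
print(f"gross profit as % of bookings: {gross_profit_share:.0%}")  # ~13%
```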

The higher take rate and higher bookings per user have translated into an impressive increase in gross profit per user. Whereas Lyft lagged Uber by almost 50% on gross profit per user at the beginning of 2017, it has now surpassed Uber, even after adding UberEats and other new business revenue to Uber's mix.

Sources: Lyft S-1, Uber S-1

All of this data raises the question: given Lyft's growth and lead on gross profit per user, can it grow its way into greater profitability than Uber? Or, to put it more precisely, are Lyft's other costs per user declining as it grows? Sadly, the data does not seem to pan out that way.

Sources: Lyft S-1, Uber S-1

While Uber had significantly higher OPEX (expenditures on sales & marketing, engineering, overhead, and operations) per user at the start of 2017, the two companies have since reversed positions, with Uber making significant changes in 2018 that lowered its OPEX per user to under $9, whereas Lyft's has been above $10 for the past two quarters. The result is that Uber has lost less money per user than Lyft since the end of 2017.

Sources: Lyft S-1, Uber S-1

The story is similar for profit per ride. Uber has been consistently more profitable on this basis since 2017, and it has only widened that lead — despite the fact that I've included the costs of Uber's other businesses in its cost per ride.

Sources: Lyft S-1, Uber S-1

    Does Lyft’s Growth Justify Its Higher Spend?

One possible interpretation of Lyft's higher OPEX per user is that Lyft is simply investing in operations, sales, and engineering to open up new markets and create new products for growth. To see if this strategy has paid off, I took a look at Lyft's and Uber's respective user growth over this period.

Sources: Lyft S-1, Uber S-1

The data shows that Lyft's compounded quarterly growth rate (CQGR) from Q1 2016 to Q4 2018, at 16.4%, is only barely higher than Uber's 15.3%, which makes it hard to justify spending nearly $2 more per user on OPEX in the last two quarters.
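For anyone who wants to check the math, CQGR is just the geometric average of quarter-over-quarter growth. A minimal sketch; the only inputs are the two rates above and the 11 quarter-over-quarter steps between Q1 2016 and Q4 2018:

```python
# Compounded quarterly growth rate (CQGR): the constant rate r such that
# start * (1 + r) ** num_quarters == end.
def cqgr(start: float, end: float, num_quarters: int) -> float:
    return (end / start) ** (1 / num_quarters) - 1

# Q1 2016 -> Q4 2018 spans 11 quarter-over-quarter steps. Compounding the
# two rates shows the cumulative difference: roughly 5.3x vs 4.8x growth.
print(f"Lyft: {1.164 ** 11:.1f}x cumulative, Uber: {1.153 ** 11:.1f}x cumulative")

# Sanity check: recovering the rate from hypothetical endpoint user counts.
assert abs(cqgr(100, 100 * 1.164 ** 11, 11) - 0.164) < 1e-9
```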

Interestingly, despite all the press and commentary about #deleteUber, it doesn't seem to have made a real difference in Uber's overall user growth (it's actually pretty hard to tell from the chart above that the whole thing happened around mid-Q1 2017).

    How are Drivers Doing?

While the filings contain much less data on driver economics, this is a vital piece of the unit economics story for a two-sided marketplace. Luckily, Uber and Lyft both provide some information in their S-1s on the number of drivers on each platform in Q4 2018, and it is illuminating.

Sources: Lyft S-1, Uber S-1

The average Uber driver on the platform in Q4 2018 took home nearly double what the average Lyft driver did! Uber drivers were also better "utilized": they handled 136% more rides than the average Lyft driver and, despite Uber's lower price per ride, generated more total bookings.

It should be said that this is only a point-in-time comparison (and it's hard to know whether Q4 2018 was an odd quarter or whether there is odd seasonality here), and it papers over many other important factors (which taxes/fees/tolls are reflected; the fact that none of these numbers include tips; whether some drivers work shorter shifts; what this looks like in the US/Canada vs. elsewhere; whether all Uber drivers benefit from doing both UberEats and Uber rideshare; etc.). But the comparison is striking and should be alarming for Lyft.
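For what it's worth, the per-driver comparison is simple arithmetic on the disclosed aggregates. A sketch with placeholder inputs (clearly not the real filing numbers):

```python
# Per-driver economics sketch for a single quarter (placeholder inputs only).
drivers_on_platform = 1_000_000     # drivers active in the quarter
quarterly_rides = 150_000_000       # rides in the quarter
quarterly_bookings = 1_900_000_000  # gross bookings ($)
quarterly_revenue = 550_000_000     # company's cut of bookings ($)

rides_per_driver = quarterly_rides / drivers_on_platform
bookings_per_driver = quarterly_bookings / drivers_on_platform
# Rough proxy for driver take-home: bookings that don't become company revenue.
payout_per_driver = (quarterly_bookings - quarterly_revenue) / drivers_on_platform

print(f"rides per driver:    {rides_per_driver:,.0f}")
print(f"bookings per driver: ${bookings_per_driver:,.0f}")
print(f"payout per driver:   ${payout_per_driver:,.0f}")
```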

    Closing Thoughts

I'd encourage investors thinking about investing in either company to do their own deeper research (especially as the competitive dynamic is not over one large market but over many regional ones, each with its own attributes). That being said, there are some interesting takeaways from this initial analysis:

• Lyft has made impressive progress at increasing the value of rides on its platform and increasing the share of each transaction it keeps. One would guess that Uber has made similar progress within its established US markets.
    • Despite the fact that Uber is rapidly expanding overseas into markets that face more price constraints than in the US, it continues to generate significantly better user economics and driver economics (if Q4 2018 is any indication) than Lyft.
• Something happened at Uber at the end of 2017/start of 2018 (which coincides nicely with Dara Khosrowshahi taking over as CEO) that led to better spending discipline and, as a result, better unit economics despite falling gross profit per user.
    • Uber’s new businesses (in particular UberEats) have had a significant impact on Uber’s share of wallet.
• Lyft will need to find more cost-effective ways of growing its business and serving its existing users & drivers if it wishes to achieve long-term sustainability, as its current spend is hard to justify relative to its user growth.

    Special thanks to Eric Suh for reading and editing an earlier version!

    Thought this was interesting or helpful? Check out some of my other pieces on investing / finance.

  • How to Regulate Big Tech

    There’s been a fair amount of talk lately about proactively regulating — and maybe even breaking up — the “Big Tech” companies.

Full disclosure: this post discusses regulating large tech companies. I own shares in several of these, both directly (in the case of Facebook and Microsoft) and indirectly (through ETFs that own stakes in large companies).

    Source: MIT Sloan

    Like many, I have become increasingly uneasy over the fact that a small handful of companies, with few credible competitors, have amassed so much power over our personal data and what information we see. As a startup investor and former product executive at a social media startup, I can especially sympathize with concerns that these large tech companies have created an unfair playing field for smaller companies.

At the same time, though, I'm mindful of all the benefits that the tech industry — including the "tech giants" — has brought: amazing products and services, broader and cheaper access to markets and information, and a tremendous wave of job and wealth creation vital to many local economies. For that reason, despite my concerns about "Big Tech"'s growing power, I am wary of reaching for "quick fixes" that might put those benefits at risk.

As a result, I've been disappointed that much of the discussion has centered on knee-jerk proposals like imposing blanket, stringent privacy regulations and forcefully breaking up large tech companies — policies which I fear are not only self-defeating but also put in jeopardy the benefits of having a flourishing tech industry.

    The Challenges with Regulating Tech

Technology is hard to regulate. Because software developers can collaborate and build on each other's innovations, the tech industry moves far faster than standard regulatory and legislative cycles. As a result, many of the key laws on the books today that apply to tech date back decades, to before Facebook or the iPhone even existed. Even well-intentioned laws and regulations governing tech can cement in place rules that fail to keep up as the companies and the social and technological forces involved change.

Another factor complicating tech policy is that the traditional "big is bad" mentality ignores the benefits of large platforms. While Amazon's growth has hurt many brick & mortar retailers and eCommerce competitors, its extensive reach and infrastructure enabled businesses like Anker and Instant Pot to get to market in a way that would have been virtually impossible before. While the dominance of Google's Android platform in smartphones raised concerns from European regulators, it's hard to deny that the companies which built millions of mobile apps and tens of thousands of different types of devices on Android would have found it much more difficult to build their businesses without such a unified software platform. Policy aimed at "Big Tech" should be wary of dismantling the platforms that so many current and future businesses rely on.

It's also important to remember that poorly crafted regulation in tech can be self-defeating. The most effective way to deal with the excesses of "Big Tech," historically, has been creating opportunities for new market entrants. After all, many tech companies previously thought to be dominant (like Nokia, IBM, and Microsoft) lost their positions not because of regulation or antitrust, but because new technology paradigms (i.e., smartphones, cloud), business models (i.e., subscription software, ad-sponsored), and market entrants (i.e., Google, Amazon) had the opportunity to flourish. Because rules aimed at big tech companies (e.g., Article 13/GDPR) generally fall hardest on small companies (which are least able to afford the infrastructure and people to manage them), it's important to keep in mind how solutions for "Big Tech" problems affect smaller companies and new concepts as well.

    Framework for Regulating “Big Tech”

    If only it were so easy… Source: XKCD

    To be 100% clear, I’m not saying that the tech industry and big platforms should be given a pass on rules and regulation. If anything, I believe that laws and regulation play a vital role in creating flourishing markets.

    But, instead of treating “Big Tech” as just a problem to kill, I think we’d be better served by laws / regulations that recognize the limits of regulation on tech and, instead, focus on making sure emerging companies / technologies can compete with the tech giants on a level playing field. To that end, I hope to see more ideas that embrace the following four pillars:

    I. Tiering regulation based on size of the company

Regulations on tech companies should be tiered based on size, with the most stringent rules falling on the largest companies. Size should include traditional metrics like revenue but also, in this age of marketplace platforms and freemium/ad-sponsored business models, account for the number of users (e.g., monthly active users) and third-party partners.

    In this way, the companies with the greatest potential for harm and the greatest ability to bear the costs face the brunt of regulation, leaving smaller companies & startups with greater flexibility to innovate and iterate.

    II. Championing data portability

    One of the reasons it’s so difficult for competitors to challenge the tech giants is the user lock-in that comes from their massive data advantage. After all, how does a rival social network compete when a user’s photos and contacts are locked away inside Facebook?

While Facebook (and, to their credit, some of the other tech giants) does offer ways to export and delete user data, these tend to be unwieldy, manual processes that make it difficult for a user to bring their data to a competing service. Requiring the largest tech platforms to make this functionality easier to use (e.g., letting users import their contact lists and photos into another service with the same ease with which they can log in to many apps today using Facebook) would give users the ability to hold tech companies accountable for bad behavior or failure to innovate (by being able to walk away) and would foster competition by letting new companies compete not on data lock-in but on features and business model.

    III. Preventing platforms from playing unfairly

3rd-party platform participants (e.g., websites listed on Google, Android/iOS apps like Spotify, sellers on Amazon) are understandably nervous when platform owners compete with their own offerings (e.g., Google Places, Apple Music, Amazon first-party sales). As a result, some have even called for banning platform owners from offering their own products and services.

    I believe that is an overreaction. Platform owners offering attractive products and services (i.e., Google offering turn-by-turn navigation on Android phones) can be a great thing for users (after all, most prominent platforms started by providing compelling first-party offerings) and for 3rd party participants if these offerings improve the attractiveness of the platform overall.

What is hard to justify is when platform owners stack the deck in their favor using anti-competitive moves such as banning or reducing the visibility of competitors, crippling third-party offerings, making excessive demands on 3rd parties, etc. It's these sorts of actions by the largest tech platforms that pose a risk to consumer choice and competition and that should face regulatory scrutiny, not the mere fact that a large platform exists or that the platform owner chooses to participate in it.

    IV. Modernizing how anti-trust thinks about defensive acquisitions

The rise of the tech giants has led to many calls to unwind some of the pivotal mergers and acquisitions in the space. As much as I believe that anti-trust regulators made the wrong calls on some of these transactions, I am not convinced, beyond just wanting to punish "Big Tech" for being big, that the Pandora's box of legal and financial issues that unwinding them would open (for the participants, employees, users, and the tech industry more broadly) would be worthwhile relative to pursuing other paths to regulate bad behavior directly.

That being said, it's become clear that anti-trust needs to move beyond narrow revenue-share and pricing-based definitions of anti-competitiveness (which do not always apply to freemium/ad-sponsored business models). Anti-trust prosecutors and regulators need to become much more thoughtful and assertive about acquisitions done simply to avoid competition (e.g., Google's acquisition of Waze and Facebook's acquisition of WhatsApp are two landmark acquisitions which probably should have been evaluated more closely).

    Wrap-Up

    Source: OECD Forum Network

This is hardly a complete set of rules and policies needed to approach growing concerns about "Big Tech." Even within this framework, there are many details (e.g., who the specific regulators are, what specific auditing powers they have, the details of their mandate, the specific thresholds and number of tiers to be set, whether pre-installing an app counts as unfair) that need to be defined and that could make or break the effort. But I believe this is a good set of principles, one that balances the need to foster a tech industry that will continue to grow and drive innovation with the need to respond to growing concerns about "Big Tech."

    Special thanks to Derek Yang and Anthony Phan for reading earlier versions and giving me helpful feedback!

  • Why Tech Success Doesn’t Translate to Deeptech

    Source: Eric Hamilton

Having been lucky enough to invest in both tech (cloud, mobile, software) and "deeptech" (materials, cleantech, energy, life science) startups (and having also run product at a mobile app startup), I've been struck by how fundamentally different the paradigms that drive success in each are.

    Whether knowingly or not, most successful tech startups over the last decade have followed a basic playbook:

    1. Take advantage of rising smartphone penetration and improvements in cloud technology to build digital products that solve challenges in big markets pertaining to access (e.g., to suppliers, to customers, to friends, to content, to information, etc.)
    2. Build a solid team of engineers, designers, growth, sales, marketing, and product people to execute on lean software development and growth methodologies
    3. Hire the right executives to carry out the right mix of tried-and-true as well as “out of the box” channel and business development strategies to scale bigger and faster

This playbook appears deceptively simple but is very difficult to execute well. It works because, in markets where "software is eating the world":

    Source: Techcrunch
• There is relatively little technology risk: With the exception of some of the hardest AI, infrastructure, and security problems, most tech startups are primarily dealing with engineering and product execution challenges — what is the right thing to build, and how do I build it on time and under budget? — rather than fundamental technology discovery and feasibility challenges.
• Skills & knowledge are broadly transferable: Modern software development and growth methodologies work across a wide range of tech products and markets. This means that effective engineers, salespeople, marketers, product people, designers, etc. at one company will generally be effective at another. As a result, it's a lot easier for investors/executives to both gauge the caliber of a team (by looking at their experience) and augment a team when problems arise (by recruiting the right people with the right backgrounds).
    • Distribution is cheap and fast: Cloud/mobile technology means that a new product/update is a server upgrade/browser refresh/app store download away. This has three important effects:
    1. The first is that startups can launch with incomplete or buggy solutions because they can readily provide hotfixes and upgrades.
    2. The second is that startups can quickly release new product features and designs to respond to new information and changing market conditions.
    3. The third is that adoption is relatively straightforward. While there may be some integration and qualification challenges, in general, the product is accessible via a quick download/browser refresh, and the core challenge is in getting enough people to use a product in the right way.

In contrast, if you look at deeptech companies, a very different set of rules applies:

    Source: XKCD
    • Technology risk/uncertainty is inherent: One of the defining hallmarks of a deeptech company is dealing with uncertainty from constraints imposed by reality (i.e. the laws of physics, the underlying biology, the limits of current technology, etc.). As a result, deeptech startups regularly face feasibility challenges — what is even possible to build? — and uncertainty around the R&D cycles to get to a good outcome — how long will it take / how much will it cost to figure this all out?
    • Skills & knowledge are not easily transferable: Because the technical and business talent needed in deeptech is usually specific to the field, talent and skills are not necessarily transferable from sector to sector or even company to company. The result is that it is much harder for investors/executives to evaluate team caliber (whether on technical merits or judging past experience) or to simply put the right people into place if there are problems that come up.
    • Product iteration is slow and costly: The tech startup ethos of “move fast and break things” is just harder to do with deeptech.
1. At the most basic level, it costs a lot more and takes a lot more time to iterate on a physical product than a software one. It's not just that physical products require physical materials and processing; low-cost technology platforms like Amazon Web Services and open-source software also dramatically lower the time and cash needed to make something testable in tech, with no real equivalent in deeptech.
2. Furthermore, because deeptech innovations tend to have real-world physical impacts (on health, safety, a supply chain or manufacturing line, etc.), deeptech companies generally face far more regulatory and commercial scrutiny. These groups are generally less forgiving of incomplete/buggy offerings, and their assessments can lengthen development cycles. Deeptech companies generally can't take the "ask for forgiveness later" approach that some tech companies (e.g., Uber and Airbnb) have been able to get away with (exhibit 1: Theranos).

    As a result, while there is no single playbook that works across all deeptech categories, the most successful deeptech startups tend to embody a few basic principles:

1. Go after markets where there is a very clear, unmet need: The best deeptech entrepreneurs tend to take very few chances with market risk and only pursue challenges where a very well-defined unmet need (e.g., there are no treatments for Alzheimer's, this industry needs a battery that can last at least 1,000 cycles) blocks a significant market opportunity. This reduces the risk that a (likely long and costly) development effort achieves technical/scientific success without also achieving business success. This is in contrast with tech, where creating or iterating on poorly defined markets (e.g., Uber and Airbnb) is oftentimes at the heart of what makes a company successful.
2. Focus on "one miracle" problems: It's tempting to fantasize about what could happen if you could completely re-write every aspect of an industry or problem, but the best deeptech startups focus on innovating where they won't need the rest of the world to change dramatically in order to have an impact (e.g., staying compatible with existing channels, business models, standard interfaces, and manufacturing equipment). It's challenging enough to advance the state of the art of technology — why make it even harder?
3. Pursue technologies that can significantly over-deliver on what the market needs: Because of the risks involved with developing advanced technologies, the best deeptech entrepreneurs work on technologies where even a partial success can clear the bar for what is needed to go to market. At the minimum, this reduces the risk of failure. But, hopefully, it gives the company the chance to fundamentally transform the market it plays in by being 10x better than the alternatives. This is in contrast to many tech markets, where market success often comes less from technical performance and more from identifying the right growth channels and product features to serve market needs (e.g., Facebook, Twitter, and Snapchat vs. MySpace, Orkut, and Friendster; Amazon vs. brick & mortar bookstores and electronics stores).

All of this isn't to say that there aren't similarities between successful startups in both categories — strong vision, thoughtful leadership, and success-oriented cultures are just some examples of common traits in both. Nor is it to denigrate one versus the other. But, practically speaking, investing or operating successfully in both requires very different guiding principles, which speaks to the heart of why it's relatively rare to see individuals and organizations who can cross over to do both.

    Special thanks to Sophia Wang, Ryan Gilliam, and Kevin Lin Lee for reading an earlier draft and making this better!

Thought this was interesting? Check out some of my other pieces on the tech industry.

  • Migrating WordPress to AWS Lightsail and Going with Let’s Encrypt!

(Update Jan 2021: Bitnami has made available a new tool, bncert, which makes it even easier to enable HTTPS with a Let's Encrypt certificate; the instructions below using Let's Encrypt's certbot still work, but I would recommend anyone looking to enable HTTPS use Bitnami's new bncert process.)

    I recently made two big changes to the backend of this website to keep up with the times as internet technology continues to evolve.

First, I migrated from my previous web hosting arrangement at WebFaction to Amazon Web Services's new Lightsail offering. I have greatly enjoyed WebFaction's super simple interface and fantastic documentation, which seemed tailored to amateur coders like myself (having enough coding and customization chops to do some cool projects but not a lot of confidence or experience in dealing with the innards of a server). But the value for money that AWS Lightsail offers ($3.50/month for a Linux VPS including a static IP, vs. the $10/month I would need to pay to eventually renew my current setup) ultimately proved too compelling to ignore (and for a simple personal site, I didn't need the extra storage or memory). This, coupled with the deterioration in service quality I had been experiencing with WebFaction (many more downtime email alerts from WordPress's Jetpack plugin and general lagginess in the WordPress administrative panel) and the chance to learn more about the world's pre-eminent cloud services provider, made this an easy decision.

Given that Google Chrome now (correctly) marks all websites which don't use HTTPS/SSL as insecure, and that Let's Encrypt has been offering SSL certificates for free for several years, the second big change I made was to embrace HTTPS to partially modernize my website and make it at least not completely insecure. Along the way, I also tweaked my URLs so that all my respective subdomains and domain variants would ultimately point to https://benjamintseng.com/.

For anyone who is also interested in migrating an existing WordPress deployment on another host to AWS Lightsail and turning on HTTPS/SSL, here are the steps I followed (gleaned from some online research and a bit of trial & error). It's not as straightforward as some other setups, but it's very doable if you are willing to do a little bit of work in the AWS console:

• Follow the (fairly straightforward) instructions in the AWS Lightsail tutorial around setting up a clean WordPress deployment. I would skip sub-step 3 of step 6 (directing your DNS records to point to the Lightsail nameservers) until later (when you're sure the transfer has worked, so your domain continues to point to a functioning WordPress deployment).
• Unless you are hosting no custom content (no images, no videos, no JavaScript files, etc.) on your WordPress deployment, I would ignore the WordPress migration tutorial on the AWS Lightsail website (which won't show you how to transfer this custom content over) in favor of this Bitnami how-to guide (Bitnami provides the WordPress server image that Lightsail uses for its WordPress instance). The Bitnami guide takes advantage of the fact that the Bitnami WordPress image includes the All-in-One WP Migration plugin, which can do single-file backups of your WordPress site up to 512 MB for free (larger sites will need to pay for the premium version of the plugin).
• If, like me, you have other content statically hosted on your site outside of WordPress, I'd recommend storing it in WordPress as part of the Media Library, which has gotten a lot more sophisticated over the past few years. It's where I now store the files associated with my Projects.
• Note: if, like me, you are using Jetpack's site accelerator to cache your images/static file assets, don't worry if some of the images appear broken when you visit your site. Jetpack relies on the URL of the asset to load correctly. This should get resolved once you point your DNS records accordingly (literally the next step), and any other issues should go away after you mop up any remaining references to the wrong URLs in your database (see the bullet below where I reference the Better Search Replace plugin).
• If you followed my advice above, now would be the time to change your DNS records to point to the Lightsail nameservers (sub-step 3 of step 6 of the AWS Lightsail WordPress tutorial). Wait a few hours to make sure the DNS settings have propagated, then test out your domain and make sure it points to a page with the Bitnami banner in the lower right (a sign that you're using the Bitnami server image; see below).
    The Bitnami banner in the lower-right corner of the page you should see if your DNS propagated correctly and your Lightsail instance is up and running
• To remove that ugly banner, follow the instructions in this tutorial (use the AWS Lightsail panel to get to the SSH server console for your instance and, assuming you followed the setup above, use the Apache instructions).
• Assuming your webpage and domain all work (preferably without any weird uptime or downtime issues), you can proceed with this tutorial to provision a Let's Encrypt SSL certificate for your instance. It can be a bit tricky, as it entails spending a lot of time in the SSH server console (which you can get to from the AWS Lightsail panel) and tweaking settings in the AWS Lightsail DNS Zone manager, but the tutorial does a good job of walking you through all of it. (Update Jan 2021: Bitnami has made available a new tool, bncert, which makes it even easier to enable HTTPS. While the tutorial above using Let's Encrypt's certbot still works, I would recommend people use Bitnami's new bncert process going forward.)
• I would strongly encourage you to wait until you're sure all the DNS settings have propagated and your instance is not having any strange downtime (as mine did when I first tried this), because if you then have trouble connecting to your page, it won't be immediately clear what is to blame, and you won't be able to take corrective measures.
• I used the plugin Better Search Replace to replace all references to intermediate domains (i.e., the IP addresses for your Lightsail instance that may have stuck around after the initial step in Step 1) or the non-HTTPS domains (i.e., http://yourdomain.com or http://www.yourdomain.com) with your new HTTPS domain in the MySQL databases that power your WordPress deployment (if in doubt, just select the wp_posts table). You can also take this opportunity to direct all your yourdomain.com traffic to www.yourdomain.com (or vice versa). You could do this directly in MySQL, but the plugin lets you make the change across multiple tables very easily and offers a "dry run" mode that finds and counts all the changes it would make before you actually execute them.
• (Update Jan 2021: Bitnami has made available a new tool, bncert, which makes it even easier to enable HTTPS and also automatically handles the redirect. While the instructions below still technically work, I would just use Bitnami's new bncert process going forward.) If you want to redirect all the traffic to www.yourdomain.com to yourdomain.com, you have two options. If your domain registrar is forward-thinking and does simple redirects for you, like Namecheap does, that is probably the easiest path. That is sadly not the path I took, because I transferred my domain over to AWS's Route 53, which is not so enlightened. If you did the same thing or have a domain registrar that is not so forward-thinking, you can tweak the Apache server settings to achieve the same effect. To do this, go into the SSH server console for your Lightsail instance and:
      • Run cd ~/apps/wordpress/conf
• To make a backup you can restore if you screw things up, run cp httpd-app.conf httpd-app.conf.old (copying rather than moving, so the file you're about to edit keeps its existing contents)
• I'm going to use the Nano editor because it's the easiest for a beginner (but feel free to use vi or emacs if you prefer): run nano httpd-app.conf
      • Use your cursor and find the line that says RewriteEngine On that is just above the line that says #RewriteBase /wordpress/
• Enter the following lines:

```apache
# begin www to non-www
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=permanent,L]
# end www to non-www
```
• The first and last lines are just comments so that you can go back later and remind yourself of what you did and where. The middle two lines are where the server recognizes incoming www URL requests and redirects them to the bare domain.
        • With any luck, your file will look like the image below — hit ctrl+X to exit, and hit ‘Y’ when prompted (“to save modified buffer”) to save your work
• Run sudo /opt/bitnami/ctlscript.sh restart to restart your server, then test out the domain in a browser to make sure everything works
        • If things go bad, run mv httpd-app.conf.old httpd-app.conf and then restart everything by running sudo /opt/bitnami/ctlscript.sh restart
    What httpd-app.conf should look like in your Lightsail instance SSH console after the edits

I've only been using AWS Lightsail for a few days, but my server already feels much more responsive. It's also nice to go to my website and not see "not secure" in my browser address bar (it's also apparently an SEO bump for most search engines). And it's great to know that Lightsail is deeply integrated with AWS, which means the additional features and capabilities that have made AWS the industry leader (e.g., load balancers, CloudFront as a CDN, scaling up instance resources, using S3 as a datastore, or even ultimately upgrading to full-fledged EC2 instances) are readily available.

  • Advice VCs Want to Give but Rarely Do to Entrepreneurs Pitching Their Startups

    Source: Someecards

I thought I'd re-post a response I wrote a while ago to a question on Quora, as someone recently asked me: "What advice do you wish you could give but usually don't to a startup pitching you?"

• Person X on your team reflects poorly on your company — This is tough advice to give, as it's virtually impossible during the course of a pitch to build enough rapport and get a deep enough understanding of the interpersonal dynamics of the team to give that advice without unnecessarily hurting feelings or sounding incredibly arrogant / meddlesome.
    • Your slides look awful — This is difficult to say in a pitch because it just sounds petty for an investor to complain about the packaging rather than the substance.
• Be careful when using my portfolio companies as examples — While it's good to build rapport / common ground with your VC audience, using their portfolio companies as examples has an unnecessarily high chance of backfiring. It is highly unlikely that you will know more than an inside investor who is attending board meetings and in direct contact with management, so any errors you make (e.g., assuming a company is doing well when it isn't, or assuming a company is doing poorly when it is doing well / is about to turn the corner) are readily caught and immediately make you seem foolish.
    • You should pitch someone who’s more passionate about what you’re doing — Because VCs have to risk their reputation within their firms / to the outside world for the deals they sign up to do, they have to be very selective about which companies they choose to get involved with. As a result, even if there’s nothing wrong with a business model / idea, some VCs will choose not to invest due simply to lack of passion. As the entrepreneur is probably deeply passionate about and personally invested in the market / problem, giving this advice can feel tantamount to insulting the entrepreneur’s child or spouse.

    Hopefully this gives some of the hard-working entrepreneurs out there some context on why a pitch didn’t go as well as they had hoped and maybe some pointers on who and how to approach an investor for their next pitch.

    Thought this was interesting? Check out some of my other pieces on how VC works / thinks

  • The Four Types of M&A

    I’m oftentimes asked what determines the prices that companies get bought for: after all, why does one app company get bought for $19 billion and a similar app get bought at a discount to the amount of investor capital that was raised?

    While specific transaction values depend a lot on the specific acquirer (i.e. how much cash on hand they have, how big they are, etc.), I’m going to share a framework that has been very helpful to me in thinking about acquisition valuations and how startups can position themselves to get more attractive offers. The key is understanding that, all things being equal, why you’re being acquired determines the buyer’s willingness to pay. These motivations fall on a spectrum dividing acquisitions into four types:

    • Talent Acquisitions: These are commonly referred to in the tech press as “acquihires”. In these acquisitions, the buyer has determined that it makes more sense to buy a team than to spend the money, time, and effort needed to recruit a comparable one. In these acquisitions, the size and caliber of the team determine the purchase price.
    • Asset / Capability Acquisitions: In these acquisitions, the buyer is in need of a particular asset or capability of the target: it could be a portfolio of patents, a particular customer relationship, a particular facility, or even a particular product or technology that helps complete the buyer’s product portfolio. In these acquisitions, the uniqueness and potential business value of the assets determine the purchase price.
    • Business Acquisitions: These are acquisitions where the buyer values the target for the success of its business and for the possible synergies that could come about from merging the two. In these acquisitions, the financials of the target (revenues, profitability, growth rate) as well as the benefits that the investment bankers and buyer’s corporate development teams estimate from combining the two businesses (cost savings, ability to easily cross-sell, new business won because of a more complete offering, etc) determine the purchase price.
    • Strategic Gamechangers: These are acquisitions where the buyer believes the target gives them an ability to transform their business and is also a critical threat if acquired by a competitor. These tend to be acquisitions which are priced by the buyer’s full ability to pay as they represent bets on a future.

    What’s useful about this framework is that it gives guidance to companies who are contemplating acquisitions as exit opportunities:

    • If your company is being considered for a talent acquisition, then it is your job to convince the acquirer that you have built assets and capabilities above and beyond what your team alone is worth. Emphasize patents, communities, developer ecosystems, corporate relationships, how your product fills a distinct gap in their product portfolio, a sexy domain name, anything that might be valuable beyond just the team that has attracted their interest.
    • If a company is being considered for an asset / capability acquisition, then the key is to emphasize the potential financial trajectory of the business and the synergies that can be realized after a merger. Emphasize how current revenues and contracts will grow and develop, how a combined sales and marketing effort will be more effective than the sum of the parts, and how the current businesses are complementary in a real way that impacts the bottom line, and not just as an interesting “thing” to buy.
• If a company is being evaluated as a business acquisition, then the key is to emphasize how pivotal a role it can play in defining the future of the acquirer in a way that goes beyond just what the numbers say about the business. This is what drives valuations like GM's acquisition of Cruise (a leader in driverless vehicle technology) for up to $1B, or Facebook's acquisition of WhatsApp (a messenger app with over 600 million users when it was acquired, many in strategic regions for Facebook) for $19B, or Walmart's acquisition of Jet.com (an innovator in eCommerce that Walmart needs in its war for retail market share with Amazon.com).

    The framework works for two reasons: (1) companies are bought, not sold, and the price is usually determined by the party that is most willing to walk away from a deal (that’s usually the buyer) and (2) it generally reflects how most startups tend to create value over time: they start by hiring a great team, who proceed to build compelling capabilities / assets, which materialize as interesting businesses, which can represent the future direction of an industry.

    Hopefully, this framework helps any tech industry onlooker wondering why acquisition valuations end up at a certain level or any startup evaluating how best to court an acquisition offer.

    Thought this was interesting? Check out some of my other pieces on how VC works / thinks

  • Snap Inc by the Numbers

    A look at what Snap’s S-1 reveals about their growth story and unit economics

If you follow the tech industry at all, you will have heard that consumer app darling Snap Inc. (maker of the app Snapchat) has filed to go public. The Form S-1 that was subsequently made available has left tech-finance nerds like yours truly drooling over the until-recently-super-secretive numbers behind the business.

    Oddly apt banner; Source: Business Insider

Much of the commentary in the press to date has been about how unprofitable the company is (having lost over $500M in 2016 alone). I have been unimpressed with that line of thinking — the bottom line in a given year is hardly the right measure for assessing a young, high-growth company.

While full-time Wall Street analysts will pore over the figures and comparables in much greater detail than I can, I decided to take a quick peek at the numbers to gauge for myself how the business is doing as a growth investment, looking at:

    • What does the growth story look like for the business?
    • Do the unit economics allow for a path to profitability?

    What does the growth story look like for the business?

As I've noted before, consumer media businesses like Snap have two options available to grow: (1) increase the number of users / amount of time spent and/or (2) better monetize users over time.

A quick peek at Snap's DAU (Daily Active Users) counts reveals that path (1) is troubled for them. Using Facebook as a comparable (and using the midpoint of Facebook's quarter-end DAU counts to line up with Snap's average DAU over a quarter) reveals not only that Snap's DAU numbers aren't growing much, but also that growth outside of North America (where Snap should have more room to grow) isn't doing that great either (which is especially alarming, as the S-1 admits Q4 is usually seasonally high for them).

    Last 3 Quarters of DAU growth, by region

A quick look at the data also reveals why Facebook prioritizes Android development and low-bandwidth-friendly experiences — international remains an area of rapid growth, which is especially astonishing considering that over 1 billion Facebook users are from outside of North America. This contrasts with Snap, which needs a huge amount of bandwidth (as a photo- and video-intensive platform) and also (as they admitted in their S-1) de-emphasizes Android development. Couple that with Snap's core demographic (read: old people can't figure out how to use the app), and it becomes hard to see where quick short-term user growth can come from.

As a result, Snap's growth in the near term will have to be driven more by path (2). Here, there is a lot more good news. Snap's quarterly revenue per user more than doubled over the last 3 quarters to $1.029/DAU. While it's a long way off from Facebook's whopping $7.323/DAU (and over $25 if you're just looking at North American users), it suggests that there is plenty of opportunity for Snap to increase monetization, especially overseas, where it's currently able to monetize only about 1/10 as effectively as in North America (compared to Facebook, which monetizes international users at 1/5 to 1/6 of North American levels, depending on the quarter).

    2016 and 2015 Q2-Q4 Quarterly Revenue per DAU, by region
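The monetization metric in this chart is straightforward to reproduce. A sketch with placeholder inputs in the rough ballpark of the figures above (not numbers I'm asserting from the filing):

```python
# Quarterly revenue per DAU, the core monetization metric (placeholder inputs).
quarterly_revenue = 166_000_000  # ad revenue for the quarter ($), illustrative
average_dau = 161_000_000        # average daily active users over the quarter

revenue_per_dau = quarterly_revenue / average_dau
print(f"quarterly revenue per DAU: ${revenue_per_dau:.3f}")  # ~$1.03
```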

Considering that Snap has just started with its advertising business, that it has already convinced major advertisers to build custom content that isn't readily reusable on other platforms, and that its revenue per user is low compared even to Facebook's overseas numbers, I think it's a relatively safe bet that there is a lot of potential for this number to go up.

    Do the unit economics allow for a path to profitability?

    While most folks have been (rightfully) stunned by the (staggering) amount of money Snap lost in 2016, to me the more pertinent question (considering the over $1 billion Snap still has in its coffers to weather losses) is whether or not there is a path to sustainable unit economics. Or, put more simply, can Snap grow its way out of unprofitability?

    Because neither Facebook nor Snap provide regional breakdowns of their cost structure, I’ve focused on global unit economics, summarized below:

    2016 and 2015 Q2-Q4 Quarterly Financials per DAU

What's astonishing here is that neither Snap nor Facebook seems to be gaining much from scale. Not only are their costs of sales per user (the cost of hosting and advertising infrastructure) increasing each quarter, but their operating expenses per user (what they spend on R&D, sales & marketing, and overhead — so not directly tied to any particular user or dollar of revenue) don't seem to be shrinking either. In fact, Facebook's is over twice as large as Snap's — suggesting that it's not just a simple question of Snap growing a bit further to begin to experience returns to scale.

What makes the Facebook economic machine go, though, is that despite the increase in costs per user, its revenue per user grows even faster. The result is that profit per user grows quarter over quarter! In fact, on a per-user basis, Q4 2016 operating profit exceeded Q2 2015 gross profit (revenue less cost of sales, so not counting operating expenses)! No wonder Facebook's stock price has been on a tear!

While Snap has also been growing its revenue per user faster than its cost of sales (turning a gross profit per user in Q4 2016 for the first time), the overall trendlines aren't great, as illustrated by the fact that its operating profit per user has gotten steadily worse over the last 3 quarters. The rapid growth in Snap's costs per user, and the fact that Facebook's costs are larger and still growing, suggest that there is no simple scale-based reason to expect Snap to achieve profitability on a per-user basis. As a result, the only path for Snap to achieve sustainable unit economics will be to pursue huge growth in user monetization.

    Tying it Together

The case for Snap as a good investment really boils down to how quickly and to what extent one believes the company can increase its monetization per user. While the potential is certainly there (as the rapid growth in revenue per user shows), what's less clear is whether the company has the technology or the talent (none of the key executives named in the S-1 has the kind of background in building advertising infrastructure or ecosystems that helped Google, Facebook, and even Twitter dominate the online advertising business) to do it quickly enough to justify the rumored $25 billion valuation they are striving for (a whopping ~38x sales multiple using 2016 Q4 revenue as a run rate [which the S-1 admits is a seasonally high quarter]).
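The run-rate multiple in that parenthetical can be reproduced as follows (a sketch; the Q4 revenue input is an illustrative figure consistent with the per-DAU numbers discussed above, not a number I'm asserting from the filing):

```python
# Deriving the ~38x run-rate sales multiple cited above.
rumored_valuation = 25_000_000_000  # the rumored $25B IPO valuation
q4_2016_revenue = 166_000_000       # illustrative Q4 2016 revenue ($)

run_rate_revenue = q4_2016_revenue * 4          # annualize the quarter
multiple = rumored_valuation / run_rate_revenue
print(f"{multiple:.0f}x run-rate sales")        # ~38x
```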

What is striking to me, though, is that Snap would even attempt an IPO at this stage. In my mind, Snap has a very real shot at being a great digital media company of the same importance as Google and Facebook. And, while I can appreciate the hunger from Wall Street to invest in a high-growth consumer tech company, not having a great deal of visibility or certainty around unit economics, and having only barely begun monetization (with the first quarter in which revenue exceeded cost of sales being a holiday quarter), poses challenges for a management team that will need to manage public market expectations around forecasts and capitalization.

    In any event, I’ll be looking forward to digging in more when Snap reveals future figures around monetization and advertising strategy — and, to be honest, Facebook’s numbers going forward now that I have a better appreciation for their impressive economic model.

    Thought this was interesting or helpful? Check out some of my other pieces on investing / finance.

  • Dr. Machine Learning

    How to realize the promise of applying machine learning to healthcare

    Not going to happen anytime soon, sadly: the Doctor from Star Trek: Voyager; Source: TrekCore

    Despite the hype, it’ll likely be quite some time before human physicians will be replaced with machines (sorry, Star Trek: Voyager fans).

While "smart" technology like IBM's Watson and Alphabet's AlphaGo can solve incredibly complex problems, it is probably not quite ready to handle the messiness of qualitative, unstructured information from patients and caretakers ("it kind of hurts sometimes") who sometimes lie ("I swear I'm still a virgin!"), withhold information ("what does me smoking pot have to do with this?"), or have their own agendas and concerns ("I just need some painkillers and this will all go away").

    Instead, machine learning startups and entrepreneurs interested in medicine should focus on areas where they can augment the efforts of physicians rather than replace them.

    One great example of this is in diagnostic interpretation. Today, doctors manually process countless X-rays, pathology slides, drug adherence records, and other feeds of data (EKGs, blood chemistries, etc) to find clues as to what ails their patients. What gets me excited is that these tasks are exactly the type of well-defined “pattern recognition” problems that are tractable for an AI / machine learning approach.

If done right, software can not only handle basic diagnostic tasks but also dramatically improve accuracy and speed. This would let healthcare systems see more patients, make more money, and improve the quality of care, while letting medical professionals focus on managing other, messier data and on treating patients.

    As an investor, I’m very excited about the new businesses that can be built here and put together the following “wish list” of what companies setting out to apply machine learning to healthcare should strive for:

• Excellent training data and data pipeline: Having access to large, well-annotated datasets today, and the infrastructure and processes in place to build and annotate larger datasets tomorrow, is probably the main defining competitive advantage in this space. While it's tempting for startups to cut corners here, that would be short-sighted, as the long-term success of any machine learning company ultimately depends on this being a core competency.
• Low (ideally zero) clinical tradeoffs: Medical professionals tend to be very skeptical of new technologies. While it's possible to have great product-market fit with a technology that is much better on just one dimension, in practice, to get over the innate skepticism of the field, the best companies will be able to show great data that makes few clinical compromises (if any). For a diagnostic company, that means having better sensitivity and specificity at the same stage in disease progression (ideally demonstrated prospectively and not just retrospectively).
    • Not a pure black box: AI-based approaches too often work like a black box: you have no idea why it gave a certain answer. While this is perfectly acceptable when it comes to recommending a book to buy or a video to watch, it is less so in medicine where expensive, potentially life-altering decisions are being made. The best companies will figure out how to make aspects of their algorithms more transparent to practitioners, calling out, for example, the critical features or data points that led the algorithm to make its call. This will let physicians build confidence in their ability to weigh the algorithm against other messier factors and diagnostic explanations.
• Solve a burning need for the market as it is today: Companies don’t earn the right to change or disrupt anything until they’ve established a foothold in an existing market. This can be extremely frustrating, especially in medicine, given how conservative the field is and the drive in many entrepreneurs to shake up a healthcare system that has many flaws. But the practical reality is that all the participants in the system (payers, physicians, administrators, etc) are too busy with their own issues (i.e. patient care, finding a way to get everything paid for) to just embrace a new technology, no matter how awesome it is. To succeed, machine diagnostic technologies should start, not by upending everything with a radical solution, but by solving a clear pain point (hopefully one with a lot of big dollar signs attached to it!) for a clearly defined customer.
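To make the transparency point concrete, here is a minimal sketch of the kind of output I have in mind, assuming a toy linear risk model. The feature names and weights are entirely hypothetical, and a real product would apply much richer attribution methods to far more complex models:

```typescript
// Minimal sketch: per-feature attribution for a toy linear risk model.
// Feature names and weights are hypothetical, for illustration only.

type FeatureVector = Record<string, number>;

const weights: FeatureVector = {
  noduleDiameterMm: 0.9,
  noduleSpiculation: 1.4,
  patientAge: 0.03,
  smokingPackYears: 0.05,
};
const bias = -4.0;

function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

// Returns the risk score plus each feature's signed contribution,
// so a physician can see *why* the model scored the case highly.
function explainPrediction(x: FeatureVector) {
  const contributions = Object.entries(weights).map(([name, w]) => ({
    name,
    contribution: w * (x[name] ?? 0),
  }));
  const z = bias + contributions.reduce((sum, c) => sum + c.contribution, 0);
  // List the most influential features first.
  contributions.sort((a, b) => Math.abs(b.contribution) - Math.abs(a.contribution));
  return { risk: sigmoid(z), drivers: contributions };
}

const { risk, drivers } = explainPrediction({
  noduleDiameterMm: 11,
  noduleSpiculation: 1,
  patientAge: 64,
  smokingPackYears: 30,
});
console.log(`Risk: ${(risk * 100).toFixed(1)}%`);
drivers.forEach((d) => console.log(`${d.name}: ${d.contribution.toFixed(2)}`));
```

Even something this simple changes the conversation: instead of a bare score, the tool reports which inputs drove the score, and a physician can sanity-check those drivers against clinical judgment.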

It’s for reasons like these that I eagerly follow the development of companies applying machine learning to healthcare, like Google’s DeepMind, Zebra Medical, and many more.

  • Why VR Could be as Big as the Smartphone Revolution

    Technology in the 1990s and early 2000s marched to the beat of an Intel-and-Microsoft-led drum.

    Source: IT Portal

Intel would release new chips at a regular cadence: each cheaper, faster, and more energy efficient than the last. This would let Microsoft push out new, more performance-hungry software, which would, in turn, get customers to want Intel’s next, more awesome chip. Couple that virtuous cycle with millions of households buying their first PCs and getting onto the Internet for the first time, and you had great opportunities to build businesses and products across software and hardware.

But, over time, that cycle broke down. By the mid-2000s, Intel’s technological progress bumped into the limits of what physics would allow with regard to chip performance and cost. Complacency born of its enviable market share, coupled with software bloat in its Windows and Office franchises, had a similar effect on Microsoft. The result was that the Intel and Microsoft drum stopped beating, as the two became unable to give the mass market a compelling reason to upgrade to each subsequent generation of devices.

The result was a hollowing out of the hardware and semiconductor industries tied to the PC market, masked only by the innovation stemming from the rise of the Internet and, in the late 2000s, by the dawn of a new technology cycle in the form of Apple’s iPhone and its Android competitors: the smartphone.

    Source: Mashable

A new, but eerily familiar, cycle began: like clockwork, Qualcomm, Samsung, and Apple (playing the part of Intel) would devise new, more awesome chips, which would feed the creation of new performance-hungry software from Google and Apple (playing the part of Microsoft), which in turn led to demand for the next generation of hardware. Just as with the PC cycle, new and lucrative software, hardware, and service businesses flourished.

But, just as with the PC cycle, the smartphone cycle is starting to show signs of maturity. Apple’s recent slower-than-expected growth has already been blamed on smartphone market saturation. Users are beginning to see each new generation of smartphone as a marginal improvement. There are also eerie parallels between the growing complaints over Apple software quality from even Apple fans and the position Microsoft was in near the end of the PC cycle.

While it’s too early to call the end for Apple and Google, history suggests that smartphones will eventually enter a phase similar to the one the PC industry experienced. This raises the question: what’s next? Many of the traditional answers (connected cars, the “Internet of Things”, wearables, digital TVs) have not yet proven themselves to be truly mass market, nor have they shown the virtuous technology upgrade cycle that characterized the PC and smartphone industries.

    This brings us to Virtual Reality. With VR, we have a new technology paradigm that can (potentially) appeal to the mass market (new types of games, new ways of doing work, new ways of experiencing the world, etc.). It also has a high bar for hardware performance that will benefit dramatically from advances in technology, not dissimilar from what we saw with the PC and smartphone.

    Source: Forbes

The ultimate proof will be whether or not a compelling ecosystem of VR software and services emerges to make this technology more of a mainstream “must-have” (something that, admittedly, the high price of the first-generation Facebook/Oculus, HTC/Valve, and Microsoft products may hinder).

As a tech enthusiast, it’s easy to get excited. Not only is VR just frickin’ cool (it is!), it’s probably the first thing since the smartphone with the mass appeal and virtuous upgrade cycle to bring about the kind of flourishing of products and companies that makes tech so dynamic to be involved with.

Thought this was interesting? Check out some of my other pieces on the tech industry.

  • Laszlo Bock on Building Google’s Culture

    Much has been written about what makes Google work so well: their ridiculously profitable advertising business model, the technology behind their search engine and data centers, and the amazing pay and perks they offer.

    Source: the book

My experience investing in and working with startups, however, has taught me that building a great company is usually less about a specific technical or business-model innovation than about building a culture of continuous improvement and innovation. To get some insight into how Google does things, I picked up Google SVP of People Operations Laszlo Bock’s book Work Rules!

    Bock describes a Google culture rooted in principles that came from founders Larry Page and Sergey Brin when they started the company: get the best people to work for you, make them want to stay and contribute, and remove barriers to their creativity. What’s great (to those interested in company building) is that Bock goes on to detail the practices Google has put in place to try to live up to these principles even as their headcount has expanded.

    The core of Google’s culture boils down to four basic principles and much of the book is focused on how companies should act if they want to live up to them:

1. Presume trust: Many of Google’s cultural norms stem from a view that people are well-intentioned and trustworthy. While that may not seem so radical, at Google this manifested as a level of transparency with employees, and a bias toward saying yes to employee suggestions, that most companies would be uncomfortable with. It raises interesting questions about why companies that say talent is their most important asset treat their people in ways that suggest a lack of trust.
2. Recruit the best: Many an exec pays lip service to this, but what Google has done is institute policies that run counter to standard recruiting practices to try to actually achieve this at scale: templatized interviews and forms (to make the review process more objective and standardized), hiring decisions made by cross-org committees (to ensure a consistently high bar), and heavy use of data to track the effectiveness of different interviewers and interview tactics. While there’s room to disagree about whether these are the best policies (I can imagine hating this as a hiring manager trying to staff up a team quickly), what I admired is that they set a goal (to hire the best at scale) and actually thought through the recruiting practices they need to achieve it.
3. Pay fairly [means pay unequally]: While many executives would agree that superstar employees can be 2-10x more productive, few companies actually compensate their superstars 2-10x more. While it’s unclear to me how effective Google is at rewarding superstars, the fact that they’ve tried to align their pay policies with their beliefs about how people perform is another great example of deviating from the norm (this time in compensation) to follow through on their desire to pay fairly.
    4. Be data-driven: Another “in vogue” platitude amongst executives, but one that very few companies live up to, is around being data-driven. In reading Bock’s book, I was constantly drawing parallels between the experimentation, data collection, and analyses his People Operations team carried out and the types of experiments, data collection, and analyses you would expect a consumer internet/mobile company to do with their users. Case in point: Bock’s team experimented with different performance review approaches and even cafeteria food offerings in the same way you would expect Facebook to experiment with different news feed algorithms and notification strategies. It underscores the principle that, if you’re truly data-driven, you don’t just selectively apply it to how you conduct business, you apply it everywhere.

Of course, not every company is Google, and not every company should have the same set of guiding principles or will come to the same conclusions. Some of the processes that Google practices are impractical elsewhere (e.g., experimentation is harder to set up and draw conclusions from at much smaller companies, and not all professions have variations in output wide enough to drive such wide variations in pay).

    What Bock’s book highlights, though, is that companies should be thoughtful about what sort of cultural principles they want to follow and what policies and actions that translates into if they truly believe them. I’d highly recommend the book!

  • What Happens After the Tech Bubble Pops

    In recent years, it’s been the opposite of controversial to say that the tech industry is in a bubble. The terrible recent stock market performance of once high-flying startups across virtually every industry (see table below) and the turmoil in the stock market stemming from low oil prices and concerns about the economies of countries like China and Brazil have raised fears that the bubble is beginning to pop.

    While history will judge when this bubble “officially” bursts, the purpose of this post is to try to make some predictions about what will happen during/after this “correction” and pull together some advice for people in / wanting to get into the tech industry. Starting with the immediate consequences, one can reasonably expect that:

    • Exit pipeline will dry up: When startup valuations are higher than what the company could reasonably get in the stock market, management teams (who need to keep their investors and employees happy) become less willing to go public. And, if public markets are less excited about startups, the price acquirers need to pay to convince a management team to sell goes down. The result is fewer exits and less cash back to investors and employees for the exits that do happen.
    • VCs become less willing to invest: VCs invest in startups on the promise that future IPOs and acquisitions will make them even more money. When the exit pipeline dries up, VCs get cold feet because the ability to get a nice exit seems to fade away. The result is that VCs become a lot more price-sensitive when it comes to investing in later stage companies (where the dried up exit pipeline hurts the most).
    • Later stage companies start cutting costs: Companies in an environment where they can’t sell themselves or easily raise money have no choice but to cut costs. Since the vast majority of later-stage startups run at a loss to increase growth, they will find themselves in the uncomfortable position of slowing down hiring and potentially laying employees off, cutting back on perks, and focusing a lot more on getting their financials in order.

    The result of all of this will be interesting for folks used to a tech industry (and a Bay Area) flush with cash and boundlessly optimistic:

    1. Job hopping should slow: “Easy money” to help companies figure out what works or to get an “acquihire” as a soft landing will be harder to get in a challenged financing and exit environment. The result is that the rapid job hopping endemic in the tech industry should slow as potential founders find it harder to raise money for their ideas and as it becomes harder for new startups to get the capital they need to pay top dollar.
2. Strong companies are here to stay: While there is broad agreement that there are too many startups with higher valuations than reasonable, what’s also become clear is there are a number of mature tech companies that are doing exceptionally well (e.g. Facebook, Amazon, Netflix, and Google) and a number of “hotshots” which have demonstrated enough growth and strong enough unit economics and market position to survive a challenged environment (e.g. Uber, Airbnb). This will let them continue to hire and invest in ways that weaker peers will be unable to match.
3. Tech “luxury money” will slow but not disappear: Anyone who lives in the Bay Area has a story of the ridiculousness of “tech money” (sky-high rents, gourmet toast, “it’s like Uber but for X”, etc). This has been fueled by cash from the startup world as well as free-flowing VC money subsidizing many of these new services. However, in a world where companies need to cut costs, where exits are harder to come by, and where VCs are less willing to subsidize random on-demand services, a lot of this will diminish. That some of these services are fundamentally better than what came before (e.g. Uber), and that stronger companies will continue to pay top dollar for top talent, will prevent all of this from collapsing (and let’s not forget San Francisco’s irrational housing supply policies). As a result, people expecting a reversal of gentrification and the excesses of tech wealth will likely be disappointed, but it’s reasonable to expect a dramatic rationalization soon in the price and quantity of many “luxuries” that Bay Area inhabitants have become accustomed to.

So, what to do if you’re in / trying to get into / wanting to invest in the tech industry?

• Understand the business before you get in: It’s a shame that market sentiment drives fundraising and exits, because good financial performance is generally a pretty good indicator of the long-term prospects of a business. In an environment where it’s harder to exit and raise cash, it’s absolutely critical to make sure the company is on solid business footing so it can keep going or raise money / exit on good terms.
    • Be concerned about companies which have a lot of startup exposure: Even if a company has solid financial performance, if much of that comes from selling to startups (especially services around accounting, recruiting, or sales), then they’re dependent on VCs opening up their own wallets to make money.
• Have a much higher bar for large, later-stage companies: The companies that will feel the most “pain” the earliest will be those with high valuations and high costs. Raising money at unicorn valuations can make for a sexy press release, but it doesn’t amount to anything if you can’t exit or raise money at an even higher valuation.
    • Rationalize exposure to “luxury”: Don’t expect that “Uber but for X” service that you love to stick around (at least not at current prices)…
• Early stage companies can still be attractive: Companies that are several years away from an exit or from needing to raise large amounts of cash will be insulated in the near term from the pain in the later stage, especially if they are committed to staying frugal and building a disruptive business. Since they are already relatively low in valuation, and since investors know they are discounting off a future valuation (one potentially set after any current market softness), the downward pressures on valuation are potentially lighter as well.

    Thought this was interesting or helpful? Check out some of my other pieces on investing / finance.

  • An “Unbiased Opinion”

    I recently read a short column by gadget reviewer Vlad Savov in The Verge provocatively titled “My reviews are biased — that’s why you should trust them” which made me think. In it, Vlad addresses the accusation he hears often that he’s biased:

    Of course I’m biased, that’s the whole point… subjectivity is an inherent — and I would argue necessary — part of making these reviews meaningful. Giving each new device a decontextualized blank slate to be reviewed against and only asserting the bare facts of its existence is neither engaging nor particularly useful. You want me to complain about the chronically bloopy Samsung TouchWiz interface while celebrating the size perfection of last year’s Moto X. Those are my preferences, my biased opinions, and it’s only by applying them to the pristine new phone or tablet that I can be of any use to readers. To be perfectly impartial would negate the value of having a human conduct the review at all. Just feed the new thing into a 3D scanner and run a few algorithms over the resulting data to determine a numerical score. Job done.”

    [emphasis mine]

As Vlad points out, in an expert you’re asking for advice from, bias is a good thing. Now, whether or not Vlad has unhelpful biases, or is someone whose opinion you value, is a separate question entirely, but if there’s one thing I’ve learned, it’s that an unbiased opinion is oftentimes an uneducated one, and tends to come from panderers who fit one of three criteria:

    1. they think you don’t want them to express an opinion and are trying to respect your wishes
    2. they don’t know anything
    3. they are trying to sell you something, not mutually exclusive with (2)

    The individuals who are the most knowledgeable and thoughtful about a topic almost certainly have a bias and that’s a bias that you want to hear.

  • 3D Printing as Disruptive Innovation

Last week, I attended an MIT/Stanford VLAB event on 3D printing technologies. I had previously been aware of 3D printing (which works basically the way it sounds) as a way of helping companies and startups build quick prototypes, or of letting geeks of the “maker” persuasion make random knickknacks. But it was at the event that I started to recognize the technology’s disruptive potential in manufacturing. While the conference itself was actually more about personal uses for 3D printing, when I thought about the applications in the industrial/business world, it was like watching the introduction to a new chapter or case study from Clayton Christensen, author of The Innovator’s Dilemma (and inspiration for one of the more popular blog posts here :-)), play out right in front of me:

    • Like many other disruptive innovations when they began, 3D printing today is unable to serve the broader manufacturing “market”. Generally speaking, the time needed per unit output, the poor “print resolution”, the upfront capital costs, and some of the limitations in terms of materials are among the reasons that the technology as it stands today is uncompetitive with traditional mass manufacturing.
    • Even if 3D printing were competitive today, there are big internal and external stumbling blocks which would probably make it very difficult for existing large companies to embrace it. Today’s heavyweight manufacturers are organized and incentivized internally along the lines of traditional assembly line manufacturing. They also lack the partners, channels, and supply chain relationships (among others) externally that they would need to succeed.
• While 3D printing today is very disadvantaged relative to traditional manufacturing technologies (most notably in speed and upfront cost), it is extremely good at a few things that make it a phenomenal technology for certain use cases:
• Rapid design to production: Unlike traditional manufacturing techniques, which require significant initial tooling and setup, once you have a 3D printer and an idea, all you need to do is print the darn thing! At the conference, one of the panelists gave a great example: a designer bought an Apple iPad on a Friday, decided he wanted to make his own iPad case, and, despite not getting any help from Apple or having prior knowledge of the specs, was producing and selling the case he had designed that weekend by Monday. Idea to production in three days. Is it any wonder that so many of the new hardware startups are using 3D printing to do quick prototyping?
• Short runs / lots of customization: Chances are, most of the things you use in your life are not one of a kind (pencils, clothes, utensils, dishware, furniture, cars, etc). The reason is that mass production makes it extremely cheap to produce many copies of the same thing. The flip side is that short production runs (where you’re not producing thousands or millions of the same thing), and production where each item has a fair amount of customization or uniqueness, are really expensive. With 3D printing, however, because every item is produced the same way (by the printer), you can produce one item at close to the same per-unit price as producing a million (see the sketch after this list). This makes 3D printing a very interesting technology for markets where customization and short runs are extremely valuable.
      • Shapes/structures that injection molding and machining find difficult: There are many shapes where traditional machining (taking a big block of material and whittling it down to the desired shape) and injection molding (building a mold and then filling it with molten material to get the desired shape) are not ideal: things like producing precision products that go into airplanes and racecars or printing the scaffolds with which bioengineers hope to build artificial organs are uniquely addressable by 3D printing technologies.
• Low labor: The printer takes care of all of it, letting companies cut manufacturing costs and/or refocus their people on the steps in the process that do require direct human intervention.
• And, of course, with the new markets which are opening up for 3D printing, it’s certainly helpful that the size, cost, and performance of 3D printers have improved dramatically and continue to improve, to the point where the panelists were quite serious when they articulated a vision of the future where 3D printers could be as widespread as typical inkjet/laser printers!
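The short-run economics above are easy to sketch. Below is a back-of-the-envelope comparison, with entirely hypothetical numbers, of the amortized per-unit cost of traditional tooled manufacturing versus a flat 3D-printed unit cost; the crossover is the run size below which printing wins:

```typescript
// Back-of-the-envelope unit economics: amortized tooling vs 3D printing.
// All numbers are hypothetical, purely to illustrate the crossover.

const tooling = 50_000;      // upfront cost of molds/tooling ($)
const unitTraditional = 2;   // marginal cost per unit, mass production ($)
const unit3dPrinted = 12;    // marginal cost per unit, 3D printing ($)

// Cost per unit for a run of n units.
const costTraditional = (n: number) => tooling / n + unitTraditional;
const cost3dPrinted = (_n: number) => unit3dPrinted; // flat: no tooling

// Below this run size, printing is cheaper than tooling up.
const crossover = tooling / (unit3dPrinted - unitTraditional);
console.log(`3D printing wins for runs under ~${Math.ceil(crossover)} units`);

for (const n of [10, 100, 1_000, 10_000]) {
  console.log(
    `n=${n}: traditional $${costTraditional(n).toFixed(2)}/unit, ` +
    `printed $${cost3dPrinted(n).toFixed(2)}/unit`
  );
}
```

The exact numbers don’t matter; the shape does. Tooling cost amortizes toward zero on huge runs, while 3D printing’s per-unit cost stays flat, which is exactly why short runs and one-offs are its natural home.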

Ok, so why do we care? While it’s difficult to predict precisely what this technology could bring (it is disruptive, after all!), I think there are a few tantalizing possibilities to consider for how the manufacturing game might change:

• The ability to go rapidly from design to production means you could do fast fashion for everything: in the same way that companies like Zara can produce thousands of different products in a season (and quickly change them to meet new trends/styles), broader adoption of 3D printing could lead to the rise of new companies where design/operational flexibility and speed are king, as the companies best able to fit their products to the flavor-of-the-month gain more traction.
• The ability to do customization means you can manufacture custom parts/products cost-effectively and without holding as much inventory. Production only needs to begin after an order is in hand (no reason to hold extra “copies” of something that may go out of fashion or go bad in storage when you can print stuff on the fly), and the lack of retooling means companies can be a lot more flexible about using customization to win more customers.
• I’m not sure how all the second/third-order effects play out, but this could also put a damper on outsourced manufacturing to countries like China and India: who cares about cheaper manufacturing labor overseas when 3D printing makes it possible to manufacture locally without much labor, and to avoid import duties, shipping delays, and the need to hold on to parts/inventory?

    I think there’s a ton of potential for the technology itself and its applications, and the possible consequences for how manufacturing will evolve are staggering. Yes, we are probably a long way off from seeing this, but I think we are on the verge of seeing a disruptive innovation take place, and if you’re anything like me, you’re excited to see it play out.

  • Boa Constrictors Listen to Your Heart So They Know When You’re Dead

    Source: Paul Whitten

For January, I decided to blog about a paper I heard about on the excellent Nature podcast, describing a deliciously simple and elegant experiment to answer a basic question: given how much time and effort boa constrictors (like the one above, photo by Paul Whitten) need to kill prey by squeezing them to death, how do they know when to stop squeezing?

    Hypothesizing that boa constrictors could sense the heartbeat of their prey, some enterprising researchers from Dickinson College decided to test the hypothesis by fitting dead rats with bulbs connected to water pumps (so that the researchers could simulate a heartbeat) and tracking how long and hard the boas would squeeze for:

    • rats without a “heartbeat” (white)
    • rats with a “heartbeat” for 10 min (gray)
    • rats with a continuous “heartbeat” (black)
Source: Figure 2, Boback et al.

The results are shown in Figure 2 (to the right). The different colored bars show the different experimental groups (white: no heartbeat; gray: heartbeat for 10 min before stopping; black: continuous heartbeat). Figure 2a (top) shows how long the boas squeezed, whereas Figure 2b (bottom) shows the total “effort” exerted by the boas. As the chart makes obvious, the longer the simulated heartbeat went, the longer and harder the boas would squeeze.

    Conclusion? I’ll let the paper speak for itself: “snakes use the heartbeat in their prey as a cue to modulate constriction effort and to decide when to release their prey.”

Interestingly, the paper goes a step further for those of us who aren’t ecology experts and notes that attentiveness to heartbeat would probably be pretty irrelevant in the wild for small mammals (which, ironically, include rats) and birds, which die pretty quickly after being constricted. Where this attentiveness to heart rate is useful is with reptilian prey (crocodiles, lizards, other snakes, etc), which can survive on reduced oxygen for longer. From that observation, the researchers concluded that listening for a heartbeat probably evolved early in evolutionary history, at a time when the main prey for snakes were other reptiles rather than mammals and birds.

In terms of where I’d go next after this: my main point of curiosity is whether boa constrictors are listening/feeling for any other signs of life (e.g. movement or breathing). Obviously, they’re sensitive to heart rate, but if they were given an animal with simulated breathing or movement, would that change their constricting activity as well? After all, I’m sure the creative folks who made an artificial water-pump heart can find ways to build an artificial diaphragm and limb muscles… right?

Paper: Boback et al., “Snake modulates constriction in response to prey’s heartbeat.” Biology Letters. 19 Dec 2011. doi: 10.1098/rsbl.2011.1105

    Check out my other academic paper walkthroughs/summaries

  • Mosquitoes are Drawn to Your Skin Bacteria

This month’s paper (from the open access journal PLoS ONE) is yet again about the impact on our health of the bacteria which have decided to call our bodies home. But, instead of the bacteria living in our gut, this month’s focus is the bacteria which live on our skin.

It’s been known that the bacteria that live on our skin help give us our particular odors. So, the researchers wondered whether the mosquitoes responsible for transmitting malaria (Anopheles) were more or less drawn to different individuals based on the scent that our skin-borne bacteria impart on us (also, for the record, before you freak out about bacteria on your skin, remember that, like the bacteria in your gut, the bacteria on your skin are natural and play a key role in maintaining the health of your skin).

Looking at 48 individuals, they noticed a huge variation in attractiveness to Anopheles mosquitoes (measured by seeing how much the mosquitoes preferred to fly towards a chamber holding a particular individual’s skin extract versus a control), which they were able to trace to two things. The first is the amount of bacteria on your skin. As shown in Figure 2 below, the more bacteria you have on your skin (the higher your “log bacterial density”), the more attractive you seem to be to mosquitoes (the higher your “mean relative attractiveness”).

Source: Figure 2, Verhulst et al.

The second thing they noticed was that the type of bacteria also seemed to be correlated with attractiveness to mosquitoes. Using DNA sequencing technology, they were able to get a mini-census of the sorts of bacteria present on the skin of the different subjects. Sadly, they didn’t show any pretty figures for the analysis they conducted on two common types of bacteria (Staphylococcus and Pseudomonas), but, to quote from the paper:

    The abundance of Staphylococcus spp. was 2.62 times higher in the HA [Highly Attractive to mosquitoes] group than in the PA [Poorly Attractive to mosquitoes] group and the abundance of Pseudomonas spp. 3.11 times higher in the PA group than in the HA group.

Using further genetic analyses, they were also able to show a number of other types of bacteria whose abundance was correlated with one group or the other.

So, what did I think? While there’s a lot of interesting data here, I think the story could’ve been tighter. First and foremost, for obvious reasons, correlation does not mean causation. This was not a true controlled experiment: we don’t know for a fact whether more (or specific types of) bacteria cause mosquitoes to be drawn to a person, or whether something else explains both the amount/type of bacteria and the attractiveness of an individual’s skin scent to a mosquito. Secondly, Figure 2 leaves much to be desired in terms of establishing a strong trendline. Yes, if I squint (and ignore their very leading trendline) I can see a positive correlation, but truth be told, the scatterplot looks like a giant mess, especially if you include the red squares that go with “Not HA or PA”. For a future study, I think it’d be great if they could get around this and show stronger causation with direct experimentation (e.g. extracting the odorants from Staphylococcus and/or Pseudomonas and adding them to a “clean” skin sample).

With that said, I have to applaud the researchers for tackling a fascinating topic from a very different angle. Coverage of malaria is usually focused on how to directly kill or impede the parasite (Plasmodium falciparum). This is the first treatment I’ve seen of the “ecology” of malaria, specifically the ecology of the bacteria on your skin! While the authors don’t promise a “cure for malaria”, you can tell they are excited about what they’ve found and about the potential to find ways to deal with malaria other than killing parasites and mosquitoes, and I look forward to seeing the other ways our skin bacteria impact our lives.

    Paper: Verhulst et al. “Composition of Human Skin Microbiota Affects Attractiveness to Malaria Mosquitoes.” PLoS ONE 6(12). 17 Nov 2011. doi:10.1371/journal.pone.0028991

    Check out my other academic paper walkthroughs/summaries

  • Fat Flora

    Source: Healthy Soul

    November’s paper was published in Nature in 2006, and covers a topic I’ve become increasingly interested in: the impact of the bacteria that have colonized our bodies on our health (something I’ve blogged about here and here).

The idea that our bodies are, in some ways, more bacteria than human (there are roughly 10x more gut bacteria, or flora, than human cells in our bodies), and that those bacteria can play a key role in our health, is not only mind-blowing, it opens up another potential area for medical/life sciences research and future medicines/treatments.

In the paper, a genetics team from Washington University in St. Louis explored a very basic question: are the gut bacteria from obese individuals different from those of non-obese individuals? To study the question, they performed two types of analyses on a set of mice with a genetic defect that leaves them unable to “feel full” (and hence prone to obesity), and on genetically similar mice lacking that defect (the so-called “wild type” control).

    The first was a series of genetic experiments comparing the bacteria found within the gut of obese mice with those from the gut of “wild-type” mice (this sort of comparison is something the field calls metagenomics). In doing so, the researchers noticed a number of key differences in the “genetic fingerprint” of the two sets of gut bacteria, especially in the genes involved in metabolism.

    Source: Figure 3, Turnbaugh et al.

But, what did that mean for the overall health of the animal? To answer that question, the researchers did a number of experiments, two of which I will talk about below. First, they did a very simple chemical analysis (see Figure 3b to the left) comparing the “leftover energy” in the waste (aka poop) of the obese mice to that of the wild-type mice (and, yes, all of this was controlled for the amount of waste). Lo and behold, the obese mice (the white bar) seemed to have gut bacteria that were significantly better at pulling calories out of food, leaving less “leftover energy”.

    Source: Figure 3, Turnbaugh et al.

While this is an interesting result, especially when thinking about some of the causes and effects of obesity, a skeptic might look at that data and say it’s inconclusive about the role of gut bacteria in obesity. After all, obese mice could have all sorts of other changes that make them more efficient at pulling energy out of food. To address that, the researchers did a very elegant experiment involving fecal transplants: that’s right, colonizing one mouse with the bacteria from another mouse (by transferring poop). The figure to the right (Figure 3c) shows the results. After two weeks, despite starting out at about the same weight and eating similar amounts of the same food, wild-type mice that received bacteria from other wild-type mice showed an increase in body fat of about 27%, whereas wild-type mice that received bacteria from the obese mice showed an increase of about 47%! Clearly, the gut bacteria in obese mice are playing a key role in calorie uptake!

In terms of areas of improvement, my main complaint about this study is that it doesn’t go far enough. The paper never gets too deep into what exactly the bacteria in each sample were, and we never really get a sense of the true variation: how much do the bacteria vary from mouse to mouse? Are they completely different bacteria? The same bacteria in different numbers? The same bacteria, each functioning differently? Do two obese mice have the same bacteria? What about a mouse that isn’t quite obese but isn’t quite wild-type either? Furthermore, the paper doesn’t show us what happens if an obese mouse has its bacteria replaced with the bacteria from a wild-type mouse. These are all interesting questions whose answers would really help researchers and doctors understand what is happening.

But, despite all of that, this was a very interesting finding, with major implications for how doctors and researchers think about the ways our complicated flora impact, and are impacted by, our health.

    Paper: Turnbaugh et al., “An obesity-associated gut microbiome with increased capacity for energy harvest.” Nature (444). 21/28 Dec 2006. doi:10.1038/nature05414

    Check out my other academic paper walkthroughs/summaries

  • Antibody-omics

    I’m pretty late for paper of the month, so here we go

“Omics” is the hot buzz-suffix in the life sciences for anything which uses the new sequencing/array technologies we now have available. You don’t study genes anymore, you study genomics. You don’t study proteins anymore (that’s so last century), you study proteomics now. And who studies metabolism? It’s all about metabolomics. There’s even a blog covering this with the semi-irreverent name “Omics! Omics!”.

I picked this month’s paper from Science, from researchers at the NIH, because it was the first time I had ever encountered the term “antibodyome”. As some of you know, antibodies are the “smart missiles” of your immune system: they are built to recognize and attack only one specific target (i.e. a particular protein on a bacterium or virus). This ability is so remarkable that, rather than rely on human-generated constructs, researchers and biotech companies oftentimes choose to use antibodies to make research tools (e.g. using fluorescent antibodies to label specific things) and therapies (e.g. using antibodies against proteins associated with cancer as anti-cancer drugs).

How the immune system does this is a fascinating story in and of itself. In a process called V(D)J recombination, your immune system’s B-cells mix, match, and scramble certain pieces of your genetic code to produce a wide range of antibodies capable of hitting potentially every structure they could conceivably see. And, once they see something which “kind of sticks”, the B-cells undergo a process called affinity maturation, introducing all sorts of mutations in the hope of creating an even better antibody.
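To get a rough sense of the scale this mix-and-match enables, here is a back-of-the-envelope count. The gene-segment numbers are approximate human figures (exact counts vary by source and by which segments are functional), so treat this as an order-of-magnitude sketch:

```typescript
// Rough combinatorial diversity from V(D)J recombination alone.
// Segment counts are approximate human figures; exact numbers vary by source.

const heavy = { V: 40, D: 25, J: 6 };   // heavy-chain gene segments
const light = { V: 40, J: 5 };          // (kappa) light-chain gene segments

const heavyCombos = heavy.V * heavy.D * heavy.J;   // ~6,000
const lightCombos = light.V * light.J;             // ~200
const pairings = heavyCombos * lightCombos;        // ~1.2 million

console.log(`~${pairings.toLocaleString()} antibodies from mix-and-match alone`);
// Junctional diversity and affinity maturation push the real
// repertoire several orders of magnitude higher still.
```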

Which brings us to the paper I picked: the researchers analyzed a couple of particularly effective antibodies targeted at HIV, the virus which causes AIDS. What they found was that these antibodies all bound the same part of the HIV virus, but, when they took a closer look at the 3D structures and the B-cell genetic code which made them, they found that the antibodies were quite different from one another (see Figure 3C below).

    Source: Figure 3C, Wu et al.

What’s more, not only were they fairly distinct from one another, they each showed *significant* affinity maturation: while a typical antibody has 5-15% of its underlying genetic code modified, these antibodies had 20-50%! To get to the bottom of this, the researchers looked at all the antibodies they could pull from the patient (their “antibodyome”, in the same way that a patient’s genome is all of their genes) and, along with data from other patients, were able to construct a genetic “family tree” for these antibodies (see Figure 6C below).

    Source: Figure 6, Wu et al.

The analysis shows that many of the antibodies were derived from the same initial genetic VDJ “mix-and-match”, but that afterwards quite a number of changes were made to that code, arriving at a situation where a diverse set of structures/genetic codes could attack the same spot on the HIV virus.

While I wish the paper had probed deeper with actual experimentation to take this analysis further (e.g. artificially using this method to create other antibodies with similar behavior), it goes a long way towards establishing an early picture of what “antibodyomics” is. Rather than studying the total impact of an immune response, or just the immune capabilities of one particular B-cell/antibody, this sort of genetic approach lets researchers get a detailed yet comprehensive look at where the body’s antibodies are coming from. Hopefully, in the longer term, this also turns into a way for researchers to make better vaccines.

Paper: Wu et al., “Focused Evolution of HIV-1 Neutralizing Antibodies Revealed by Structures and Deep Sequencing.” Science (333). 16 Sep 2011. doi: 10.1126/science.1207532

    Check out my other academic paper walkthroughs/summaries

  • The Marketing Glory of NVIDIA’s Codenames

    While code names are not rare in the corporate world, more often than not, the names tend to be unimaginative. NVIDIA’s code names, however, are pure marketing glory.

Take NVIDIA’s high performance computing product roadmap (below): these are products that use the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines that scientists and researchers can use to crunch numbers. How does NVIDIA describe its future roadmap? It names its chip generations after famous scientists: Tesla (the great electrical engineer who helped bring us AC power), Fermi (the father of the first nuclear reactor), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who helped show that electrical, magnetic, and optical phenomena were all linked).

    Source: Rage3D

    Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 

But what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: its Tegra line of products. Instead of scientists, it uses, well, comic book characters. For release at the end of this year? Kal-El, or, for the uninitiated, the Kryptonian name for Superman. After that? Wayne, as in the alter ego of Batman. Then Logan, as in the name of the X-Men’s Wolverine. And then Stark, as in the alter ego of Iron Man.

    Source: NVIDIA

    Everybody wants a little Iron Man in their tablet.

  • Web vs Native

When Steve Jobs first launched the iPhone in 2007, Apple expected the smartphone application market to move in the direction of web applications. The reasons for this are obvious: people already knew how to build web pages and applications, and the web simplifies application delivery.

Yet in under a year, Apple changed course, shifting the focus of iPhone development from web applications to native applications custom-built (by definition) for the iPhone’s operating system and hardware. While I suspect part of the reason was to lock in developers, the main reason was certainly the inadequacy of available browser/web technology. While we can debate the former, the latter is just plain obvious. In 2007, the state of web development was primitive compared to today. There was no credible HTML5 support. Javascript performance was paltry. There was no real way for web applications to access local resources/hardware capabilities. Simply put, it was probably too difficult for Apple to kludge together an application development platform based solely on open web technologies that would deliver the sort of performance and functionality Apple wanted.

But that was four years ago, and web technology has come a long way. Combine that with the tech commentator-sphere’s obsession with hyping up a rivalry between “native vs HTML5 app development”, and it raises the question: will the future of application development be HTML5 applications or native?

There are a lot of “moving parts” in a question like this, but I believe the question itself is a red herring. Enhancements to browser performance, and the new capabilities that HTML5 brings (offline storage, a canvas for direct graphic manipulation, and tools to access the file system), mean, at least to this tech blogger, that “HTML5 applications” are not distinct from native applications at all: they are simply native applications that you access through the internet. It’s not a different technology vector; it’s just a different form of delivery.
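To make that concrete, here is a trivial browser sketch of two capabilities that used to be the province of native toolkits: persistent local state and direct drawing. The `chart` element id is hypothetical; everything else is the standard Web Storage and Canvas APIs:

```typescript
// Sketch: two "native-like" HTML5 capabilities in a few lines.
// Assumes a page with <canvas id="chart"> and runs in any modern browser.

// 1. Offline storage: persist state locally, no server round-trip.
const visits = Number(localStorage.getItem("visitCount") ?? "0") + 1;
localStorage.setItem("visitCount", String(visits));

// 2. Canvas: direct pixel-level drawing, once the domain of native toolkits.
const canvas = document.getElementById("chart") as HTMLCanvasElement;
const ctx = canvas.getContext("2d");
if (ctx) {
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 10, Math.min(visits * 10, canvas.width - 20), 40);
  ctx.fillText(`You've been here ${visits} time(s)`, 10, 70);
}
```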

Critics of this idea may point out that the performance and interface capabilities of browser-based applications lag far behind those of “traditional” native applications, and thus the two will always be distinct. As of today, they are correct. However, this discounts a few things:

    • Browser performance and browser-based application design are improving at a rapid rate, in no small part because of the combination of competition between different browsers and the fact that much of the code for these browsers is open source. There will probably always be a gap between browser-based apps and native, but I believe this gap will continue to narrow to the point where, for many applications, it simply won’t be a deal-breaker anymore.
• History shows that cross-platform portability and ease of development can trump performance gaps. Once upon a time, all developers worth their salt coded in low-level machine language. But this was a nightmare: it was difficult to do simple things like showing text on a screen, and the code only worked on specific chips, operating systems, and hardware configurations. Languages like C (which is what I learned) abstracted a lot of that away, and, keeping with the trend of moving towards more portability and abstraction, the mobile/web developers of today use tools (Python, Objective C, Ruby, Java, Javascript, etc) that make C look pretty low-level and hard to work with. Each level of abstraction adds a performance penalty, but that has hardly stopped developers from embracing them, and I feel the same will be true of “HTML5”.
• Huge platform economic advantages. There are three huge advantages today to HTML5 development over “traditional native app development”. The first is the ability to have essentially the same application run across any device which supports a browser. Granted, there are performance and user experience issues with this approach, but when you’re a startup, or even a corporate project with limited resources, being able to get wide distribution for early products is a huge advantage. The second is that HTML5 as a platform lacks the control/economic baggage that iOS and even Android have, where distribution is controlled and “taxed” (a 30% cut to Apple/Google for an app download, and a 30% cut of digital goods purchases). I mean, what other reason does Amazon have to move its Kindle application off of the iOS native path and into HTML5 territory? The third is that web applications do not require the latest and greatest hardware to perform amazing feats. Because these apps are fundamentally browser-based, using the internet to connect to a server-based/cloud-based application allows even “dumb devices” to do impressive things by outsourcing some of the work to another system. The combination of these three makes it easier to build new applications and services, and to make money off of them, which will ultimately lead to more and better applications and services for the “HTML5 ecosystem.”

Given Google’s strategic interest in the web as an open development platform, it’s no surprise that they have pushed this concept the furthest. Not only are they working on a project called Native Client to let users achieve “native performance” within the browser, they’ve also built an entire operating system centered around the browser, Chrome OS, and were the first to build a major web application store, the Chrome Web Store, to help with application discovery.

While it remains to be seen if any of these initiatives will end up successful, this is definitely a compelling view of how the technology ecosystem evolves, and, putting on my forward-thinking cap, I would not be surprised if:

    1. The major operating systems became more ChromeOS-like over time. Mac OS’s dashboard widgets and Windows 7’s gadgets are already basically HTML5 mini-apps, and Microsoft has publicly stated that Windows 8 will support HTML5-based application development. I think this is a sign of things to come as the web platform evolves and matures.
2. Continued focus on browser performance may lead to new devices/browsers focused on HTML5 applications. In the 1990s/2000s, there was a ton of attention focused on building Java accelerators in hardware/chips and software platforms whose main function was to run Java. While Java did not take over the world the way its supporters had thought, I wouldn’t be surprised to see a similar explosion just over the horizon focused on HTML5/Javascript performance: maybe even HTML5-optimized chips/accelerators, additional ChromeOS-like platforms, and potentially browsers optimized to run just HTML5 games or enterprise applications.
3. Web application discovery will become far more important. The one big weakness of HTML5 as it stands today is application discovery. It’s still far easier to discover a native mobile app using the iTunes App Store or the Android Market than it is to find a good HTML5 app. But, as the platform matures and the platform economics shift, new application stores, recommendation engines, and syndication platforms will become increasingly critical.

Thought this was interesting? Check out some of my other pieces on the tech industry.

  • Standards Have No Standards

Many forms of technology require standards to work. As a result, it is in the best interest of all parties in the technology ecosystem to participate in standards bodies to ensure interoperability.

The two main problems with getting standards to work can be summed up, as all good things in technology can be, in the form of webcomics.

Problem #1, from XKCD: people/companies/organizations keep creating more standards.

    Source: XKCD

The cartoon takes the more benevolent view of how standards proliferate; the more cynical view is that individuals and corporations recognize that control or influence over an industry standard can give them significant power in the technology ecosystem. I think both the benevolent and the cynical views are always at play, and the result is the continual creation of “bigger and badder” standards which are meant to replace, but oftentimes fail to completely supplant, existing ones. Case in point: as someone who has spent a fair amount of time looking at technologies for enabling greater intelligence/network connectivity in new types of devices (think TVs, smart meters, appliances, thermostats, etc.), I’m still puzzled as to why we have so many wireless communication standards and protocols for achieving it (Bluetooth, Zigbee, ZWave, WiFi, DASH7, 6LowPAN, etc).

Problem #2: standards aren’t purely technical undertakings. They’re heavily shaped by the preferences of the bodies and companies which participate in formulating them, and, like the US’s “wonderful” legislative process, their formulation involves mashing together a large number of preferences, some of which might not be easily compatible with one another. This can turn quite political and generate standards/working papers which are too difficult to support well (like DLNA). Or, as Dilbert sums it up, these meetings are full of people who are instructed to do this:

    Source: Dilbert

    Or this:

    Source: Dilbert

Our one hope is that the industry has enough people and companies who are more invested in the future of the technology industry than in taking unnecessarily cheap shots at one another… It’s a wonder we have functioning standards at all, isn’t it?

Thought this was interesting? Check out some of my other pieces on the tech industry.