
Bad Defense of the English Major in the New York Times

This past Sunday, the New York Times posted an editorial by a writing teacher lamenting the decline of the English major in today’s universities:

The teaching of the humanities has fallen on hard times… Undergraduates will tell you that they’re under pressure — from their parents, from the burden of debt they incur, from society at large — to choose majors they believe will lead as directly as possible to good jobs. Too often, that means skipping the humanities…

In 1991, 165 students graduated from Yale with a B.A. in English literature. By 2012, that number was 62. In 1991, the top two majors at Yale were history and English. In 2013, they were economics and political science.

The writer believes that one result of this decline is that students (and hence graduates) are now less effective at writing clearly and are losing the “rational grace and energy in your conversation with the world around you” which accompanies an appreciation of and familiarity with great literature.

While the title of this post may suggest otherwise, I don’t disagree with the writer on the value of writing and literature. On a personal level, it was only after I left college that I began to appreciate the perspective on life that the (sadly) limited literature I am familiar with afforded me. On a more practical level, I’ve also witnessed firsthand otherwise intelligent individuals struggle to achieve their professional goals as a result of poor writing and communication skills.

However, what jumped out at me in the editorial was less the message about the intrinsic value of English, or the writer’s thoughtful criticisms of how humanities courses are taught today, than how the writer effectively brushed aside the underlying financial reasons pushing students away from declaring English (or another humanities subject) as a major. It’s easy for the writer to offer “a rich sense of the possibilities of language, literary and otherwise” as an answer to the question of “what is an English major good for?” But that sort of sense, while personally valuable, doesn’t pay off student loans. It’s easy to criticize the “new and narrowing vocational emphasis in the way students and their parents think about what to study in college”, but that criticism doesn’t account for how little choice parents and students have in the matter when trying to make their checkbooks balance.

And therein lies the weakness of most impassioned pleas for students to pursue English majors and humanities instead of “more practical” majors: we don’t live in the world of the For Lack of a Better Comic below:

Comic #35

We live in a world where, sadly, students need to find jobs that can cover their debts. And the pragmatic reaction for the humanities educator is either to find a way to make their majors better suited to helping students find and compete for jobs that can cover their financial burdens, or to find new ways to enrich students who have been pushed into that “narrow vocational emphasis.”


Why We All Thought (Wrongly) That Female Fertility Plummeted After 35

The Atlantic published a great article last week debunking the commonly held view that women must have children by 35 or risk a high chance of infertility. It’s a very interesting read, and one that I hope reassures young women who are feeling unnecessary pressure to choose between kids and careers.

But what was especially interesting here was the article’s exploration of why this “fact” became so ingrained in our minds and in the minds of fertility experts. Beyond just bad statistics (apparently the “widely cited statistic that one in three women ages 35 to 39 will not be pregnant after a year of trying is based on … French birth records from 1670 to 1830”), the article highlighted the availability heuristic, in which top-of-mind experiences, even if they are statistically unrepresentative, twist your perceptions:

Women who are actively trying to get pregnant at age 35 or later might be less fertile than the average over-35 woman. Some highly fertile women will get pregnant accidentally when they are younger, and others will get pregnant quickly whenever they try, completing their families at a younger age. Those who are left are, disproportionately, the less fertile.

This sort of bias isn’t restricted to fertility – it’s extremely common among entrepreneurs and investors, who oftentimes fall into the trap of generalizing their own opinions and those of their friends & family to the broader market, forgetting that the views of the middle-aged, highly educated, Silicon Valley, upper-income, tech-savvy folks they know aren’t always indicative of what the broader population (i.e. the broader market) thinks.

Everyone is subject to bias – what’s important is that we constantly ask ourselves if our biases are creeping in.


Brainteasers are a Complete Waste of Time

In hiring, that is.

At least that’s what Google’s Head of People Operations (the Google-y term for HR), Laszlo Bock, said in a recent interview with the New York Times. While most of the article is about the fascinating data-driven approach Bock’s group has taken to improving how Google hires and retains the best employees, what caught my eye was his point about the failure of the Fermi problems (estimation brainteasers that physicist Enrico Fermi was apparently known for enjoying) that Microsoft and Google were famous for asking in job interviews:

On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

[emphasis mine]

I personally loathe brainteasers and have walked away from companies that dared to use those questions on me (I’ve been asked the golf ball question by two interviewers at one company). While they supposedly push the interviewee to demonstrate intellectual horsepower, their complete irrelevance to the important functions of the job, how little light they shed on a candidate’s character, work ethic, or experience, and the fact that many of them are easily looked up online make answers to these questions pretty useless in determining job fit.
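For the curious, this is what answering one of these looks like: a toy back-of-envelope pass at the golf ball question, where every input is a loose assumption (a ~1,000 m³ cabin, a 21.5 mm ball radius, ~64% random sphere packing) rather than a known fact:

```shell
# Toy Fermi estimate: golf balls in an airplane. Every number is an assumption.
awk 'BEGIN {
  pi    = 3.14159265
  cabin = 1000                  # usable cabin volume of a jumbo jet, m^3 (guess)
  ball  = 4/3 * pi * 0.0215^3  # volume of one golf ball (21.5 mm radius), m^3
  pack  = 0.64                  # random sphere packing efficiency
  printf "%d\n", cabin * pack / ball
}'
# prints a number on the order of 15 million
```

The point, of course, is that fluency in this sort of arithmetic says little about whether someone can actually do the job – which is exactly Bock’s data-backed conclusion.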

So, if you’re an interviewer – do your interviewees and your company a favor: skip the Fermi problems and focus, instead, on ways to probe relevant knowledge and a candidate’s cultural fit in a rigorous, repeatable process.


Tipping is Stupid

Maybe it’s because I miss Japan, a land whose culture prides itself on extremely good service but where tipping is practically unheard of (and where I just spent two weeks on my honeymoon). But lately, I’ve been thinking that tipping is stupid.

    • I’ve found that, except in rare instances of extremely good or extremely bad service, the amount of tip given has almost nothing to do with the quality of service – which makes it a terrible system for rewarding good service and punishing bad service.
    • Who you tip, and how much, feels horribly arbitrary. Why do we tip the bagboy but not the people who work at the front desk? Why do we tip cab drivers but not an airplane pilot or a flight attendant? And just how much are we supposed to tip? Why 15% at restaurants…? What about cabs? Bellboys?
    • It encourages establishments to pay their employees less, and to be less smart about how they pay them. At the end of the day, I tip because I know the recipients tend to be in professions which count on tips for a substantial portion of their income. But that raises the question: why should this come in the form of a tip rather than a proper wage? After all, employers are in a far better position to take a holistic view of which employees to compensate, and by how much, to ensure good service – why that should be left to the whims of different customers’ moods and mores is beyond me.

Much to my delight and surprise, while driving home from work, I had the chance to listen to a great story about an establishment called Sushi Yasuda in Manhattan on APM’s Marketplace (the podcast I turn to for news). The quote which caught my attention:

Following the custom in Japan, Sushi Yasuda’s service staff are fully compensated by their salary. Therefore gratuities are not accepted. Thank you.

[Emphasis mine] And, is Sushi Yasuda suffering from bad food and poor service? 987 reviews on Yelp suggest otherwise.

I personally plan to swing by the next time I’m in New York, but I’m hoping more restaurants embrace Sushi Yasuda’s example of paying their workers better and abolishing what, to me, seems like an archaic and irrational practice.


What Accel Thinks About Big Data

Capitalizing on widespread interest in companies built around “Big Data”, VC firm Accel yesterday unveiled its “Big Data Fund 2”, a $100M fund aiming to make investments in technology companies which help their customers make sense of the massive volumes of data that they are now able to gather and generate.

While I personally have gotten a little sick of the “Big Data” moniker (it’s become, like “cloud computing”, just one of many buzzwords that companies slap on their websites and press releases), what jumped out at me in reading the press release and the tech blog coverage was the shift in the fund’s emphasis away from companies commercializing Big Data infrastructure technology and toward companies building “data driven software”.

Now, no VC’s “rules” about a fund are ever absolute – they will find ways to put money into (what they perceive as) good investments, regardless of what they’ve said in press releases – but the message shift jumped out to me as potentially a very bold statement by Accel on how they perceive the state of the “Big Data” industry.

All industries go through phases – in the early days, the focus is on laying the infrastructure and foundation, and the best tech investments tend to be companies working on infrastructure which ultimately serves as a platform for others (for example: Intel [computing], Cisco [internet], and Qualcomm [mobile]). Eventually, the industry moves on to the next phase – where the infrastructure layer becomes extremely difficult for small companies to compete in and the best tech investments tend to be in companies which take advantage of the platform to build new and interesting applications (for example: Adobe or VMware [computing], Amazon.com [internet], and Rovio [mobile]).

Of course, it’s hard to know when that transition happens and, as often happens in tech, the “applications” phase of one industry (e.g., Facebook, Salesforce.com) can serve as the infrastructure phase for another (e.g., social applications, CRM-driven applications). But the mission of Accel’s “Big Data Fund 2” suggests that Accel believes the “Big Data” industry has moved beyond infrastructure and into that second phase – where the most promising early-stage investments are no longer in infrastructure to help companies manage and make use of Big Data, but in applications that generate the value directly.


Small Personal Update

I, sadly, haven’t done a great job of keeping up with this blog this year. I hope to change that over the coming weeks, but I’d be remiss if I didn’t mention…


I got married last month to the love of my life :-). Special thanks to my good friend David Chu for this fantastic shot!


Android Bluetooth (Smart) Blues

Readers of this blog will know that I’m a devout Fandroid, and the past few years – watching Android rise in market share across all segments and geographies, and watching the platform go from a curiosity for nerds and less-well-off individuals to a must-support platform – have been very gratifying.

Yet despite all that, there is one prominent area in which I find iOS so much better that even I – a proud Fandroid venture capitalist – have been forced to encourage the startups I meet and work with to develop iOS-first: support for Bluetooth Smart.


In a nutshell, Bluetooth Smart (previously known as Bluetooth Low Energy) is a new wireless technology which lets electronics connect to phones, tablets, and computers. As its previous name suggests, the focus is on very low power usage, which will let new devices like smart watches, fitness trackers, and low-power sensors go longer without needing to dock or swap batteries – something that I, as a tech geek, am very interested in seeing get built and that I, as a venture capitalist, am excited to help fund.

While Bluetooth Smart has made it much easier for new companies to bring new connected hardware to market, the technology needs device endpoints that support it. And therein lies the problem. Apple added support for Bluetooth Smart in the iPhone 4S and 5 – meaning two generations of iOS products support the new technology. Google, however, has yet to add any such support to the Android operating system – leaving Bluetooth Smart support on the Android side shoddy and highly fragmented, despite many Android devices possessing the hardware necessary to support it.

To be fair, part of this is probably due to the differences in how Apple and Google approached Bluetooth. While Android has fantastic support for Bluetooth Classic (the older flavor of Bluetooth) and has done a great job of making it open and easy for hardware makers to access, Apple made it much more difficult for hardware makers to do novel things with Bluetooth Classic (requiring an expensive and time-consuming MFi license – two things which will trip up any startup). Possibly in response to complaints about that, Apple had the vision to make its Bluetooth Smart implementation much more startup-friendly and, given the advantages of Bluetooth Smart over Bluetooth Classic for low-power applications, many startups have opted to go in that direction.

The result is that for many new connected hardware startups I meet, the only sensible course of action for them is to build for iOS first, or else face the crippling need to either support Android devices one at a time (due to the immaturity and fragmentation in Bluetooth Smart support) or get an MFi license and work with technology that is not as well suited for low power applications. Consequently, I am forced to watch my chosen ecosystem become a second-class citizen for a very exciting new class of startups and products.

I’m hoping that at Google I/O this year (something I thankfully snagged a ticket for :-)), in addition to exciting announcements of new devices and services and software, Google will make time to announce support for Bluetooth Smart in the Android operating system and help this Fandroid VC not have to tell the startups he meets to build iOS-first.


NVIDIA’s At It Again

Although I’m not attending NVIDIA’s GPU Technology conference this year (as I did last year), it was hard to avoid the big news NVIDIA CEO Jen-Hsun Huang announced around NVIDIA’s product roadmap. And, much to the glee of my inner nerd, NVIDIA has continued its use of colorful codenames.

The newest addition to NVIDIA’s mobile lineup (their Tegra line of products) is Parker — named after the alter ego of Marvel’s Spider-Man. Parker joins a family which includes Kal-El (Superman) [the Tegra 3], Wayne (Batman) [the Tegra 4], Stark (Iron Man) [from an earlier roadmap], and Logan (Wolverine) [the Tegra 5].

And as for NVIDIA’s high-performance computing lineup (their Tesla line of products), they’ve added yet another famous scientist: Alessandro Volta, inventor of the battery (and the namesake of the volt, our unit of electric potential difference). Volta joins brilliant physicists Nikola Tesla, Enrico Fermi, Johannes Kepler, and James Clerk Maxwell.

(Images from Anandtech)


Reading the Tea Leaves on PlayStation 4 Announcement

Sony’s announcement of the PlayStation 4 today has gotten a wide array of responses from the internet (including, amusingly, dismay at the fact that Sony never actually showed the console itself). What interested me was less the console itself than what the pretty big changes Sony made over the PlayStation’s previous incarnations reveal about the tech industry. They give a sign of things to come as we await the “XBox 720” (or whatever they call it), Valve’s “Steambox” console, and (what I ultimately think will prevail) the next generation of mobile platform-based consoles like Green Throttle.

  • Sony switched from its old custom, supercomputer-like Cell architecture to a more standard PC technology architecture. This is probably due to the increasingly ridiculous costs of putting together custom chips as well as the difficulties for developers in writing software for exotic hardware: Verge link
  • New controller that includes new interface modalities, capturing some of the user experiences people have grown accustomed to from the mobile world (touch, motion) and, via the new “Eye Camera” (two 1280×800 f/2.0 cameras with a 4-channel microphone array), from Microsoft’s wildly successful Kinect: Verge link
  • Strong emphasis during the announcement on streaming cloud gameplay: It looks like Sony is trying to make the most of its $380M acquisition of Gaikai to
    • offer a demo service letting users try the full versions of games immediately, as opposed to after downloading a large, not-always-available demo
    • drive instant play for downloaded games (because you can stream the game from the cloud while it downloads in the background)
    • provide support for games for the PS3/2/1 without dedicated hardware (and maybe even non-PlayStation games on the platform?)

    Verge link

  • Focus on more social interaction via saving/streaming/uploading gameplay video: the success of sites like Machinima hints at the types of social engagement console gamers enjoy. So given the push toward “social” in the mobile/web gaming world, it makes perfect sense for Sony to embrace this (so much so that the PS4 will apparently have dedicated hardware to handle video compression/recording/uploading in the background), even if it means supporting third-party services like UStream (Verge link)
  • Second screen interactivity: The idea of the console as the be-all-end-all site of experience is now thoroughly dead. According to the announcement, the PlayStation 4 includes the ability to “stream” gameplay to handheld PlayStation Vitas (Verge link) as well as the ability to deliver special content/functionality that goes alongside content to iOS/Android phones and tablets (Verge link). A lot of parallels to Microsoft’s XBox SmartGlass announcement last year and the numerous companies trying to put together a second-screen experience for TVs and set-top boxes

Regardless of whether the PS4 succeeds, these are interesting departures from Sony’s usual extremely locked-down, heavily customized MO, and while there are still plenty of details to come, I think it shows just how much the rise of horizontal platforms, the boom in mobile, the maturation of the cloud as a content delivery platform, and the importance of social engagement have pervaded every element of the tech industry.

(Update: as per Pat Miller’s comments, I’ve corrected some of the previous statements I made about the PlayStation 4’s use of Gaikai technology)


2012 in Blog

I may have not blogged as much as I wanted to in the last quarter of the year, but that won’t stop me from carrying out my annual tradition of sending off the previous year in blog!

Happy New Year everybody! Here’s to a great 2013 and thank you from the bottom of my heart for reading (and continuing to read) my little corner on the internet!


Hard Work That Goes Unnoticed

First off, I want to apologize for not blogging more recently. A combination of major work-related presentations and a desire to carve out time for other, soon-to-be-announced projects made blogging time pretty difficult to find.

Secondly, I want to wish all my friends and readers a very wonderful holiday season and the best wishes for a very happy new year!

Lastly, I wanted to make a note that I just changed hosts, moving everything over from my previous host Bluehost to my new host WebFaction (which gives me much greater flexibility and control over my servers). Sadly, despite working many hours on the transfer, moving servers is one of those things where, provided you did everything right, nobody should notice anything different – the ultimate thankless task. So I’m writing this in part so you now know :-).

In the spirit of the post I made some three years ago when I first moved my blog from Blogger to its own domain, I thought I’d share my experiences with the transfer in the event that this is helpful to someone else who is thinking about moving their existing WordPress setup to a WebFaction server (much of this is adapted from the WebFaction doc: http://docs.webfaction.com/user-guide/inbound.html#inbound-moving-a-wordpress-site).

    1. Point your domain registrar to WebFaction’s name servers. If your domains were registered with Bluehost, this is something I believe you can access from BlueHost’s Domain Manager tab. I had previously transferred all my domains to a registrar called NameCheap, where the access point for this is under “Manage Domains” (which you can see when you mouseover the “Domains” button in the toolbar). After selecting the domain you wish to modify, click on “Domain Name Server Setup”; after selecting the “Specify Custom DNS Servers” radio button, you can enter WebFaction’s name servers in the text boxes.
    2. Use WebFaction’s control panel to register the domain. Click on “Domains/Websites” on the left-hand side and then click the “Add Domains” button on the top of the resulting page.
    3. Use WebFaction’s control panel to build a new “Static/CGI/PHP Application”. Go to “Applications” under the “Domains/Websites” header on the left-hand-side and click on the “Add new application” button. Enter a name and make sure to select “Static” in the first dropdown menu.
    4. Use WebFactions’ control panel to build a new “website” which maps the application you just created with the domain you just registered. Go to “Websites” under the “Domains/Websites” header on the left-hand-side and click on the “Add new website” button. Enter a name, type out the domain you wish to use in the text field labeled “Domains”, and when you hit “Add a web app”, click on “Reuse an existing web app” and select the application you just created.
    5. Export the database from BlueHost. The easiest way to do this is to go into BlueHost and use the PHPMyAdmin tool to log into the administrative interface for the database which corresponds to your WordPress installation and use the export functionality to get a SQL “dump file”. Download that file and then upload it to your WebFaction server/use FTP to pull it off of your BlueHost server.
    6. Create a New MySQL Database in WebFaction. Click on “Databases” on the left-hand-side of the WebFaction control panel and click on “Create New Database.” Enter a name, hit the “Create” button, and take note of the password that was autogenerated and the user name (the name of the database before the mandatory “_” character).
    7. Import the database dump file into the database you created in Step 6. Use an SSH tool to get shell access to your WebFaction server. From the command line, run “mysql -u db_name -p -D db_name < dump_file”, where db_name should be replaced by the name of the database you just created and dump_file by the name of the dump file you uploaded/transferred to your WebFaction server. The command will ask for your database password (which you of course wrote down in the previous step – if not, you can reset it via the “Change database password” link under the “Databases” heading on the left-hand side of the WebFaction control panel) and will complete the import.
    8. Copy the BlueHost directory containing your WordPress install into the WebFaction directory corresponding to the application you created in Step 3. You can do this by manually downloading the files from BlueHost to your computer and then re-uploading them (long, slow, annoying), or you can use wget or FTP to mediate the transfer directly between WebFaction and BlueHost (my recommended choice). Make sure you copy the .htaccess file – otherwise your URLs will break and you will need to recreate one from the WordPress admin panel after you’ve completed the transfer.
    9. Edit the wp-config.php file in the root directory to properly tell WordPress:
        • What the database name is: edit the line that looks like “define(‘DB_NAME’, ‘<database_name>’);” and replace “<database_name>” with the name of the database you created in step 6 (keeping the single quotes)
        • What the right user name is: edit the line that looks like “define(‘DB_USER’, ‘<user>’);” and replace “<user>” with the name of the user from step 6 (keeping the single quotes)
        • What the database password is: edit the line that looks like “define(‘DB_PASSWORD’, ‘<password>’);” and replace “<password>” with the password from step 6 (keeping the single quotes)
        • Where the database is: edit the line that looks like “define(‘DB_HOST’, ‘<host>’);” and replace “<host>” with “localhost” (keeping the single quotes)
    10. Test the sucker! Note: you may need to wait up to 48 hours for the DNS changes to propagate.
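For anyone following along, steps 7 and 9 condense into a few commands. Everything below is a placeholder sketch: “mydb”, “secret”, and the file names are stand-ins for your own values from Step 6, and the sed rewrite is demonstrated against a minimal stand-in file rather than a live install:

```shell
# Step 7 – import the dump (run on the WebFaction server; prompts for password):
#   mysql -u mydb -p -D mydb < dump_file.sql

# Step 9 – the wp-config.php edits as sed one-liners, shown against a small
# sample file so the commands can be tried safely before touching a real site:
cat > wp-config-db-sample.php <<'EOF'
define('DB_NAME', 'old_db');
define('DB_USER', 'old_user');
define('DB_PASSWORD', 'old_pass');
define('DB_HOST', 'old.host.example.com');
EOF

# Rewrite each define() line in place (GNU sed):
sed -i \
  -e "s/define('DB_NAME', '[^']*');/define('DB_NAME', 'mydb');/" \
  -e "s/define('DB_USER', '[^']*');/define('DB_USER', 'mydb');/" \
  -e "s/define('DB_PASSWORD', '[^']*');/define('DB_PASSWORD', 'secret');/" \
  -e "s/define('DB_HOST', '[^']*');/define('DB_HOST', 'localhost');/" \
  wp-config-db-sample.php
```

If you’d rather not point a regex at a real wp-config.php, the four define() lines are short enough to edit by hand, exactly as described in step 9.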

Where do the devices fit?

About a month ago, I got Google’s new Nexus 7 tablet, and have been quite happy with the purchase (not that surprising given my self-proclaimed “Fandroid” status). Android’s Jelly Bean update works remarkably well and the Nexus 7 is wonderfully light and fast.

However, the purchase of the Nexus 7 brought the total number of smart, internet-connected devices I own and use to six:

  • a Samsung Galaxy Nexus, Verizon edition (4.65” Android phone)
  • a Nexus 7 (7” Android tablet)
  • a Motorola Xoom (10” Android tablet)
  • a Chromebook (12” ChromeOS notebook)
  • a ThinkPad T400 for personal use and a ThinkPad T410 for work (both 14” Windows 7 laptops)

Beyond demonstrating my unreasonable willingness to spend money on newfangled gadgets (especially when Google puts its brand on them), owning these devices has been an interesting natural experiment to see just what use cases each device category is best suited for. After all, independent of the operating system you choose, there’s quite a bit of overlap between a 10” tablet and the Chromebook/laptop, between the 7” tablet and the 10” tablet, and between the 7” tablet and the 4.65” phone. Would one device supplant the others? Would they coexist? Would some coexist and others fall by the wayside?

Well, after about a month with the Nexus 7 added to the mix, I can say:

  • I wound up using all the devices, albeit for different things. This was actually quite a surprise to me. Before I bought the Nexus 7, I figured that I would end up either completely replacing the Xoom or find that I couldn’t do without the larger screen. But, I found the opposite happening – that the Nexus 7 took over for some things and the Xoom for others. What things?
    • Smartphone: The smartphone has really become my GPS and on-the-go music listening, photo taking, and quick reading device. Its small size means it fits in my pocket and goes everywhere I go, but its small screen size means I tend to prefer using other devices if they’re around. Because it’s everywhere I go, it’s the most logical device to turn to for picture-taking (despite the Galaxy Nexus’s lackluster camera), GPS-related functionality (checking in, finding directions, etc) and when I want/need to quickly read something (like work email) or listen to music/podcast in the car.
    • 7” tablet: I’ve really taken to the Nexus 7 form factor – and it’s become my go-to-device for reading and YouTube watching. The device is extremely light and small enough to fit in one hand, making it perfect for reading in bed or in a chair (unlike its heavier 10” and laptop-form-factor cousins). The screen is also large enough that watching short-form videos on it makes sense. It is, however, too big to be as mobile as a smartphone (and lacks cellular connectivity, making it useless if there is no WiFi network nearby).
    • 10” tablet: Because of its screen size and heft, my 10” Motorola Xoom has really become my go-to device for movie watching, game playing, and bringing to meetings. While the smaller 7” form factor is fine for short-form videos like the ones you’d see on YouTube, it is much too small to get the visual impact you want while watching a movie or playing a game. The larger screen also gives you more room to play with while taking notes in a meeting, something the smaller screen only makes possible if you like squinting at small fonts. It is, however, at least to this blogger, too big and too heavy to make a great casual reading device, especially when lying in bed 🙂
    • 12” Chromebook: What does a Chromebook have that its smaller tablet cousins don’t? Three things: a keyboard, a mouse, and a full PC flavor of Chrome. The result is that in situations where I want to use Flash-based websites (e.g. the free version of Hulu, Amazon Videos, many restaurant/artist websites), play Flash-based games (e.g. most Facebook games), or access sophisticated web apps which aren’t touch-driven (e.g. WordPress, posting to Tumblr) or which don’t have fully featured apps attached (e.g. Google Drive/Docs), I turn to the Chromebook.
    • 14” Laptop: So where does my 14” laptop fit (and how could I possibly have enough room in my digital life that I’m actively researching options for my next Thinkpad machine)? Simple: it’s for everything else. I track my finances in Excel, make my corporate presentations in PowerPoint, do my taxes in Turbo Tax, compose blog posts on Windows Live Writer, program/develop on text editors and IDEs, write long emails, edit photos and movies… these are all things which are currently impossible or inconveniently hard to do on devices which don’t have the same screen size, keyboard/mouse caliber, operating system, and processing hardware as modern PCs. And, while the use of new devices has exploded as their cost and performance get better, the simple truth is power users will have a reason to have a “real PC” for at least several more years.
  • Applications/services which sync across devices are a godsend. While I’ve posted before about the power of web-based applications, you really learn to appreciate the fact that web applications & services store their information to a central repository in the cloud when you are trying to access the same work on multiple devices. That, combined with Google Chrome not only working on every device I have, but also actively syncing passwords and browser history between devices and showing the open browser tabs I have on other systems, makes owning and using multiple devices a heckuva lot easier.

How do you use the different devices you own? Has any of that usage segmentation surprised you?

(Image credit – GetAndroidStuff.com)


No Digital Skyscrapers

A colleague of mine shared an interesting article by Sarah Lacy from tech site Pando Daily about technology’s power to build the next set of “digital skyscrapers” – Lacy’s term for enduring, 100-year brands in (or made possible by) technology. On the one hand, I wholeheartedly agree with one of the big takeaways Lacy wants the reader to walk away with: that more entrepreneurs need to strive to make a big impact on the world and not settle for quick-and-easy payouts. That is, after all, why venture capitalists exist: to fund transformative ideas.

But the premise of the article is one I fundamentally disagree with – in fact, the very reason I’m interested in technology, the ability of transformative ideas to upend the status quo, is why I don’t think it’s possible to build “100-year digital skyscrapers”.

In fact, I genuinely hope it’s not possible. Frankly, if I felt it were, I wouldn’t be in technology, and certainly not in venture capital. To me, technology is exciting and disruptive precisely because you can’t create long-standing skyscrapers. Sure, IBM and Intel have been around a while — but what they as companies do, what their brands mean, and their relative positions in the industry have radically changed. I just don’t believe the products we will care about or the companies we think are shaping the future ten years from now will be the same as the ones we are talking about today — just as they aren’t the ones we talked about ten years ago, and won’t be the ones we talk about twenty years from now. I’ve done the 10-year comparison before to illustrate the rapid pace of Moore’s Law, but just to be illustrative again: remember, 10 years ago:

    • the iPhone (and Android) did not exist
    • Facebook did not exist (Zuckerberg had just started at Harvard)
    • Amazon had yet to make a single cent of profit
    • Intel thought Itanium was its future (something it’s basically given up on now)
    • Yahoo had just launched a dialup internet service (seriously)
    • The Human Genome Project had yet to be completed
    • Illumina (posterchild for next-generation DNA sequencing today) had just launched its first system product

And, you know what, I bet 10 years from now I’ll be able to make a similar list. Technology is a brutal industry, and it succeeds by continuously making itself obsolete. It’s why it’s exciting, and it’s why I don’t think any long-lasting digital skyscrapers will emerge — and, in fact, why I hope none do.


A Few Months with the Chromebook

(Hello visitors, if you’re interested in this post, you may be interested to know that I have posted my impressions of the newer Chromebook Pixel here)

Last year, I had the chance to attend the New Game conference sponsored by Google to talk about the use of HTML5 in games. The conference itself was fascinating as it brought up many of the themes I’ve mentioned before on this blog about the rise of HTML5 as a tool to build compelling software, but one of the highlights was that Google gave every attendee a free Samsung Series 5 Chromebook to test out and use for development purposes.


I’ve blogged a few times before about Chromebooks and how they represent the next logical step in Google’s belief in the web as the core platform for software delivery (seeing how they’re machines that are built almost entirely around the browser), and I jumped at the chance to test it out.

I initially tested out the Chromebook for a week or two shortly after getting it. To be completely honest, I was underwhelmed. While there were many cool things about the operating system (its always being up to date, the built-in Google Talk experience, and the ability to use Google Docs as a scratchpad, for starters), the machine just felt very slow (likely in part because of the low-end Intel Atom processor inside). The device never seemed to sync properly with my Google account, the lack of a desktop made this feel more like a browser with a keyboard than an operating system, and poor support for offline functionality and handling of peripherals made it feel very constraining. I meant to write up a review on the blog, but I never got around to it, and the machine faded from memory, collecting dust in storage…

Flash forward to May, when Google unveiled a pretty bold revamp of the Chrome OS operating system that lies behind the Chromebook: Aura. Aura replaced what was formerly a within-one-browser-window experience with something that looks and feels a lot more like a traditional operating system, with a taskbar, multiple windows (and hence true multi-tasking), a desktop background, a “system tray/control panel” for ready access to system-wide settings (e.g. battery life, which WiFi network I’m connected to, screen brightness), and an application launcher. My previous problems with syncing with my Google account are gone (and its support for tab syncing – where I can continue browsing a webpage I was reading on another device – makes using this very natural). The experience also just feels faster – both the act of browsing as well as things like how quickly the touchpad responds to commands. Chromebooks now also handle more file types natively (whether downloaded or from removable media), and, with the recently announced offline Google Drive integration, Chromebooks have gotten a lot more useful and have taken another step toward the “web file system” vision I blogged about before.

Much to my surprise, I’ve even found myself turning to my Chromebook regularly as a casual consumption device. Its instant-on boot, browser-centric design, and ready support for multiple user accounts make it a perfect device for watching TV episodes or movies from Google Play, Netflix, or Amazon Videos, or for sharing interesting articles to my Tumblr (things that my touch-centric Galaxy Nexus and Motorola Xoom do less well).

Realistically, there is a set of apps which I’ve found to work much better on Windows/Linux (mainly coding, using Microsoft Excel, and composing blog posts) and which prevents me from using a Chromebook for 100% of my computing needs. But, while important, these take up only a minority of my time on a computer — what actually stops me from using the Chromebook much more actively as a primary machine in my job and day-to-day are two far more pressing items:

  1. Evernote does not work. I am a very active user of Evernote for note-taking and note organization, and its unfailing ability to crash whatever tab is open to it on a Chromebook is a pretty major roadblock for me.
  2. Some web apps don’t play nice because they don’t recognize Chrome OS properly. The key culprit here for me is Microsoft Outlook. A conspiracy theorist might think this was some ploy by Microsoft to get people using Chrome OS to switch to Windows, or by Google to get Outlook users to switch to Google Apps – but at the end of the day, Microsoft’s very nice, new Outlook Web App, which works beautifully in Chrome on my Windows 7 machine, treats the Chromebook as if it were a barely functioning computer running Internet Explorer 6 – leaving me with a crippled web experience for what is my corporate email lifeline. If Google made it possible to spoof the browser identification, or found a way to convince Microsoft to give Chrome OS a passing grade when it comes to serving up web apps, I would be a MUCH happier camper.
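
For context on the browser-identification problem above: web apps typically decide what experience to serve by parsing the User-Agent string the browser sends with every request, and Chrome OS identifies itself with a distinct “CrOS” platform token. The sketch below is purely illustrative — the tiers and patterns are hypothetical, not Outlook Web App’s actual logic:

```python
import re

def pick_experience(user_agent: str) -> str:
    """Illustrative user-agent sniffing: serve a rich or degraded UI
    based on substring matches, the way many 2012-era web apps did."""
    if re.search(r"CrOS", user_agent):       # Chrome OS platform token
        return "degraded"                    # unrecognized platform -> lowest tier
    if re.search(r"Chrome/\d+", user_agent):
        return "rich"
    if re.search(r"MSIE [6-8]", user_agent): # old Internet Explorer
        return "degraded"
    return "basic"

# The same Chrome version can get different treatment purely because
# of the platform token in the string:
chromebook_ua = ("Mozilla/5.0 (X11; CrOS i686 1660.57.0) AppleWebKit/535.19 "
                 "(KHTML, like Gecko) Chrome/18.0.1025.46 Safari/535.19")
windows_ua = ("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.19 "
              "(KHTML, like Gecko) Chrome/18.0.1025.46 Safari/535.19")

print(pick_experience(chromebook_ua))  # degraded
print(pick_experience(windows_ua))     # rich
```

This is also why spoofing the browser identification would help: reporting the Windows-style string from a Chromebook would change nothing about the browser itself, only which branch of logic like this the server takes.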

These issues aside, there is no doubt in my mind that Chrome OS/Chromebooks are a concept worthy of consideration for people who are thinking about buying a new computer for themselves or their loved ones: if you spend most of your time on the computer on the web and don’t need to code or create/consume massive files, these machines are perfect (they boot fast, they update themselves, and they are built with the web in mind). I think this sort of model also will probably work quite well in quite a few enterprise/educational settings, given how easy they are to support and to share between multiple users. This feels to me like an increasingly real validation of my hypothesis of the web as software platform and, while there’s quite a few remaining rough spots, I was very impressed by the new Aura revision and look forward to more refreshes coming out from the Chrome team and more time with my Chromebook :-).

(Image credit – Chromebook – Google)


The Cable Show

A few weeks ago, I attended the 2012 Cable Show – the cable television industry’s big trade show – in Boston to get an “on the ground floor” view of how the leading content owners and cable television/cable technology providers saw the future of video delivery, and I thought I’d share some pictures and impressions.

  • While there is a significant piece of the show that is like a typical technology conference (mainly cable infrastructure/set-top box technology vendors like Motorola, Elemental, Azuki, etc. showing off their latest and greatest), by far the biggest booths are SXSW-style attempts (flashy booths, gimmicks) by the content owners (NBC, Disney, etc.) to get people to notice them. Almost every major content provider booth had a full bar inside, and there were lots of gimmicks (see some of the pictures below — Fox and NBC trotted out some of their celebrities, many booths had photo-booth games to show off their latest shows – like A&E with its show Duck Dynasty – and Turner even invited a lollipop maker to create lollipops in the shape of some of their cartoon characters).
  • The relationship between content owners and cable companies that has built the profits in the industry is being tested by the rise of internet video. Until recently, I had always been confused as to why Hulu and Netflix seemed so restrictive in terms of content availability. It was only upon understanding just how profitable the existing arrangement is between the cable/satellite providers (who are the only ones who can sell access to ESPN, HBO, CNN, etc.) and the content owners (who can charge the cable/satellite providers for each subscriber, even those who don’t watch their particular networks) that I began to understand why you can’t get ESPN or HBO online (unless you have a cable/satellite subscription) — much to the detriment of the consumer. Thankfully, I saw some promising signs:
    • At the Cable Show, every content provider and cable provider was talking about “TV Everywhere”. Nearly every single booth touted some sort of new, more flexible way to deliver content over the internet and to new devices like tablets and phones. Granted, they were still operating within the existing sandbox (you can’t watch it without a cable subscription), but the increasing competitive overlap between the cable TV-over-internet services (like Xfinity TV online) and the content providers’-over-internet services (like HBO GO), I feel, will come to a breaking point as
      • Networks like HBO realize they could get a ton of standalone users and make a ton of standalone money by going direct to consumers
      • Smaller networks increasingly feel squeezed as cable companies give a bigger and bigger cut of total content dollars to networks like HBO and Disney/ESPN/ABC, and resort to going direct to consumers.
    • New “TV networks” are getting real traction. One of the most serious threats, from my perspective, to the cable-content owner dynamic is the rise of new content networks like Revision3, Blip.TV, College Humor, and YouTube’s new $200M initiative to build original high-quality “shows”. Why? Because it shows that you don’t need to use cable/satellite or to be a major content owner to get massive distribution. It’s why Discovery Networks (owners of the Discovery Channel and TLC) bought Revision3. It’s why Hulu, Netflix, and Amazon are funding their own content. After all, Hulu has quite a few made-for-Hulu programs; Netflix has not only created some interesting new shows but even decided to resurrect the canceled TV series Arrested Development; and Amazon just announced its first four original studio projects. Are these going to give HBO’s Game of Thrones a run for its money? Probably not anytime soon — but traction is traction, and the better these alternatives do, the more likely the existing content owners/distributors are to be forced to adapt to the times.

I think this industry is ripe for change — and while it’ll probably come slower than it should, there’s no doubt that we’re going to see massive changes to how the traditional TV industry works.


Advice for Entering College Freshmen

As it’s currently high school graduation season, I’ve been asked by a few friends (who have younger siblings about to graduate) what advice I’d give to new high school graduates about to become college freshmen. While I defer to (older :-D) folks like Guy Kawasaki and Charles Wheelan for some of the deeper insights about how to live one’s life, there is one distinct line of thought that I left with my friends’ siblings and that I wanted to share with all new high school grads/entering college freshmen:

Like most truths about life, this will seem contradictory. First, take your classes seriously. I know it’s not the sexiest bit of advice, but hear me out: college is one of the last places where you will be surrounded by scholars (both faculty and students) and where your one job in life is to learn how to think. Take advantage of that while it lasts, and make every effort to push your mental horizons. Second, and here’s the contradictory part: don’t take your classes TOO seriously. While I don’t mean it’s a good thing to fail, I’d encourage new students to never be afraid of skipping a class or a homework assignment if it means finding time for a friend or making time for a great opportunity. College is more about the friends you make and the things you learn outside of the classroom than the time you spend in it — and that’s why, at the end, I wish I had taken my classes both more seriously and less seriously — in different ways.

I hope this is helpful and congratulations to the class of 2012!

PS: I’d be remiss if I also didn’t mention the post I did a few months ago on general career advice for students 🙂


My Next Business Card

If you’ve been listening to the radio at all in the past month, you’ve likely heard Carly Rae Jepsen’s very cute song “Call Me Maybe” (the twist at the end of the video is worth a watch alone).

Well, I just heard one of the greatest suggestions: business cards based on this song. To that end, I proposed the following rough sketch of my next business card 🙂

Hey I just met you
And this is crazy
But here’s my number
Benjamin Tseng
(XXX) XXX-XXXX

So call me maybe?

Good idea? Or GREATEST IDEA EVER?


American Politics’ Obsession with College

A few months ago, Republican presidential hopeful Rick Santorum attacked Barack Obama’s stated desire to have more Americans pursue higher education. Santorum’s reasons for doing this were fundamentally political: he wanted to portray Obama as a snobby liberal against the image he was hoping to convey of himself as a down-to-earth practical guy who doesn’t want to see money wasted on liberal indoctrination (or whatever it is Santorum thinks happens in colleges…)

While I’m no fan of Santorum’s hypocritical intentions there (anyone else notice how he neglected to mention his own college degree – let alone his MBA and JD?), I do think it’s worth considering the skeptical view toward American politics’ fixation (dare I say obsession?) with driving up college attendance.

This skeptical view is basically encapsulated in a single question: if the goal is to get more Americans to go to college, why don’t we just make high school last eight years rather than four?

If we were somehow able to achieve 100% (or even something like 60-70%) college attendance, an extended high school education (where the last four years might be more advanced and based on applications to different institutions) is basically what you would be creating.

If we think of getting the majority of kids into college in that sense, it raises the question: what would a world like that look like? I’d hazard the following two guesses:

  • First, the costs would be enormous. Even today, with many colleges being independent of the government and with many students bearing the brunt of the cost of college directly, there is huge government involvement in financing. An “eight year high school system”, even if we assumed miraculous levels of efficiency and public-private partnership never before seen in the education system and government, would likely require huge sums spent by taxpayers and students – if only to provide the financial support lower-income families would need to attend higher education.
  • Second, I believe the number of people going on to advanced degrees (Masters, PhDs, MBAs, JDs, MDs, MPHs, etc.) would skyrocket. The reason is simple: if everyone goes to college, then it’s the same as if nobody went to college — the mark of attending college ceases to have any value in setting yourself apart from other people in the eyes of an employer. The funny thing is, one of the reasons I chose the “eight year high school” analogy is precisely because of the parallel that results: college grads would become the equivalent of today’s high school grads — in the same trouble in terms of competing in the workforce, and finding themselves needing to go to “college” (in this case, getting an advanced degree).

One might even argue that a much more educated workforce is worth the cost, but what this little thought experiment shows is that just extending high school by four years (the logical equivalent of achieving much higher rates of college attendance) is not the obvious universal good that most politicians seem to suggest it is. The fact that students need to go to college at all to participate effectively in the workforce, in my opinion, says more about the ineffectiveness of our K-12 system than about the value of college.

I think a more meaningful (and hopefully time-and-cost-effective) solution to our education system’s woes would actually be to address the real problems: (1) how students seem to not get enough out of K-12 to contribute to the workforce and (2) how students are forced to pursue expensive degrees just to compete.

  1. Bring K-12 quality up to what is needed for people to succeed in today’s workforce. I think this means investing in early education – study after study shows that some of the most effective education investments are those made in pre-kindergarten Head Start programs – embracing new technology-enabled approaches like Sal Khan’s brilliant Khan Academy, changing how we train and compensate teachers, and doubling down on teaching employable skills (like some of the ones I mentioned here). None of these are that controversial (although the devil is in the details) – what matters is being committed to increasing the value of K-12 rather than just the years kids spend in school.
  2. Build an actually meaningful system of educational accreditation. Today, one of the most important ways to signal to employers that you can be a decent worker is a piece of paper that costs some $100,000+ called a college diploma. That piece of paper is not only extremely expensive, it also does a terrible job of elaborating on what a person is good at (forcing many people to pursue further degrees). This system of accreditation really only serves colleges and the companies/people who make money off of them (e.g. admissions counselors, prep services, etc.). An accreditation system which actually and meaningfully communicated people’s talents (e.g. this person is extremely good at math even though he did not major in math at a top-50 college; this person is really good at machine work even though she spent most of her last job planning events) would benefit both employers — who would have a better sense of whom they are hiring — and workers — who could be more discriminating about the value of their education and not needlessly participate in the rat race of tallying up schools/programs which only serve as a rubber stamp on one’s ability to pay expensive tuition.

These are not quick-and-easy fixes: they are, after all, major changes to how people/politicians view the world, and they require not only resources but also some very slow-moving institutions to change how they think and operate. But it makes a lot more sense to me to take a hard look at some of the underlying issues than to continue a dogma about how education should work.

(Image credit – SFGate/The Mommy Files)
