The barriers to becoming a software engineer are real. People born into technical families, or who were introduced to programming at an early age, have an easy confidence that lets them tackle new things and keep learning — and, in our eyes, they just keep getting further and further ahead. Last year, I saw this gap and gave up. But all we really need is the opportunity to see that it’s not hopeless. It’s not about what we already know; it’s about how we learn. It’s about the tenacity to sit in front of a computer and google until you find the right answer. It’s about staring at every line of code until you understand what’s going on, or googling until you do. It’s about googling how-tos, examples, and errors until it all begins to make sense.
Everything else will follow.
Practically speaking, nobody can possibly learn or know everything they need to succeed in life. Even the best college or graduate education is incapable of teaching you what you will need to know two or three years out, let alone the practical ins and outs of the specific situations you may face. As a result, what drives success for knowledge workers today is a mix of a few things:
the tenacity to tackle the many problems that you will face
the persistence and skill to figure out the answer — which oftentimes means knowing how to Google well (or Bing or Baidu, if that’s your cup of tea)
“Of course I’m biased, that’s the whole point… subjectivity is an inherent — and I would argue necessary — part of making these reviews meaningful. Giving each new device a decontextualized blank slate to be reviewed against and only asserting the bare facts of its existence is neither engaging nor particularly useful. You want me to complain about the chronically bloopy Samsung TouchWiz interface while celebrating the size perfection of last year’s Moto X. Those are my preferences, my biased opinions, and it’s only by applying them to the pristine new phone or tablet that I can be of any use to readers. To be perfectly impartial would negate the value of having a human conduct the review at all. Just feed the new thing into a 3D scanner and run a few algorithms over the resulting data to determine a numerical score. Job done.”
As Vlad points out, bias is a good thing in an expert you’re asking for advice. Now, whether Vlad has unhelpful biases, or is someone whose opinion you value, is a separate question entirely — but if there’s one thing I’ve learned, it’s that an unbiased opinion is oftentimes an uneducated one, and tends to come from panderers who fit one of three criteria:
they think you don’t want them to express an opinion and are trying to respect your wishes
they don’t know anything
they are trying to sell you something, not mutually exclusive with (2)
The individuals who are the most knowledgeable and thoughtful about a topic almost certainly have a bias and that’s a bias that you want to hear.
It’s hard for a device to get noticed in a world where new phones, tablets, and smartwatches seem to come out every day. But one device unveiled back in March did it for me: Motorola’s new smartwatch, the Moto 360 (see Motorola marketing video below).
So, being a true Fandroid, I bought a Moto 360 (clarification: my wonderful wife woke up at an unseemly hour and bought one for each of us) and have been using it for about a week — my take?
While there’s a lot of room for improvement, I like it.
This is by far the best looking smartwatch out there. Given how important appearance is for a watch, this is by far the most important positive that can be said of the Moto 360 — it just looks good. I was a little worried that the marketing materials wouldn’t accurately represent reality, but that fear turned out to be unfounded. The device not only looks nice up close (its round design just looks so much better than the blocky rectangular designs of pretty much every other smartwatch), it also feels good: a stainless steel body, a solid-feeling glass surface, and a very nice leather strap.
The battery life is nothing to brag about, but it will last you a full day. The key is that the watch display can be used in two modes: (1) the display is always on, which, from what I’ve read, gets something like 12 hours of battery life — not enough for a whole day — or (2) the display only turns on when you trigger it, which, in my experience, gets you something more like 20 hours — enough to get through a typical day. Obviously, I use (2), and what makes this workable is that turning on the screen is quite easy: you can tap the touch-sensitive screen, push the side button, or (although this only works 80% of the time) move your arm into a position where you can look at it. Now, I’d love a watch that could last for months with the screen on before needing a charge, but since I’m already charging my phone every night and the wireless charging dock makes it easy to charge the device, this is an annoyance but hardly a dealbreaker.
The out-of-the-box experience needs some work. While the packaging is beautiful and fits well with how nice the watch itself looks, the Moto 360 unfortunately ships needing to be charged to 80% before it can be used. This is not made clear anywhere — not on the packaging, not in the Android Wear smartphone app that you’re supposed to use to pair with the device, not on the watch display — so let me be explicit: if you buy the Moto 360, charge it up before you download the Android Wear app or try to use it. Otherwise, nothing will happen — something which very much freaked out yours truly when I thought I had gotten a defective unit. Also, while I haven’t heard about this from anyone else, the Moto Connect app that Motorola wanted me to install failed to provision an account for me correctly, leaving me unable to customize the finer details of the watchface designs that come with the watch. Not the end of the world, but definitely a set of problems a company like Motorola shouldn’t be facing.
I’m not sure the pedometer or heart rate sensor are super-accurate, but they’ve pretty much killed any need/desire on my part for a fitness wearable. The fitness functionality on the watch isn’t anything to write home about (it’s a simple step counter and heart rate sensor with basic history and heart-rate goal tracking). I’m also not entirely convinced that the heart rate sensor or the pedometer are particularly accurate (although it’s not like the competition is that great either), but their availability on a device I’m always going to be wearing because of its other functionality may pose a serious risk to fitness wearable companies which only do step tracking or heart rate detection.
Voice recognition is still not quite where it needs to be for me to make heavier use of the voice commands functionality.
The software doesn’t do a ton, but that’s the way it should be. When I first started using Android Wear, I was a little bummed that it didn’t seem to have much functionality: I couldn’t play games on it or browse maps or edit photos (or send my heartbeat or a random doodle to a random person…). But, after a day or two of wearing the device to social gatherings, I came to realize you really don’t want to do everything on your watch. Complicated tasks should be done on your phone or tablet or PC: not only do they have larger screens, they are used in social contexts where that type of activity makes sense. Spending your time trying to do something on your smartwatch looks far more awkward (and probably far more rude) than doing the same thing on your phone or another device. Instead, I’ve come to rely on the Moto 360 as a way of supplementing my phone: it lets me know (by vibrating and quickly lighting up the screen) about incoming notifications (like an email, text, or Facebook message) and new alerts from Google Now (like the local weather or sudden traffic on the road to/from work), and it lets me deal with notifications the way I would on my phone (like playing and pausing music or a podcast, or replying to an email or text using voice commands). This helps me be more present in social settings, as I feel much less anxiety around needing to constantly check my phone for new updates (something I’ve been suffering from ever since my Crackberry days).
Android Wear’s approach makes it easy to claim support for many apps (simply by supporting notifications), but there need to be more interesting apps and watchfaces for the platform to achieve truly mainstream appeal.
All in all, I think the Moto 360 is, hands down, the best smartwatch available right now (I’ll reserve judgement until I get a chance to play with the Apple Watch). It’s a great indicator of what Google’s Android Wear platform can achieve when done well, and I’ve found it has meaningfully changed how I use my phone and eliminated my use of other fitness tracking devices. That being said, there’s definitely a lot of room for improvement: on battery life (especially in a world where the Pebble smartwatch can go nearly a week between charges), on voice recognition accuracy, on the out-of-the-box setup experience, and on getting more apps and watchfaces on board. So, if you’re an early adopter type who’s comfortable with some of these rough edges and with waiting to see what apps/watchfaces come out, and who is interested in some of the software value I described, this would be a great purchase. If not, you may want to wait for the hardware and software to improve for another iteration or two before diving in.
I think the industry still needs a good answer to the average person around “why should I buy a smartwatch?” But, in any event, I’ll be very curious to see how this space evolves as more smartwatches come to market and especially how they change people’s relationships with their other devices.
The teaching of the humanities has fallen on hard times… Undergraduates will tell you that they’re under pressure — from their parents, from the burden of debt they incur, from society at large — to choose majors they believe will lead as directly as possible to good jobs. Too often, that means skipping the humanities…
In 1991, 165 students graduated from Yale with a B.A. in English literature. By 2012, that number was 62. In 1991, the top two majors at Yale were history and English. In 2013, they were economics and political science.
The writer believes that one result of this decline is that students (and hence graduates) are now less effective at writing clearly and are losing the “rational grace and energy in your conversation with the world around you” which accompanies an appreciation of and familiarity with great literature.
While the title of this post may suggest otherwise, I don’t disagree with the writer on the value of writing and literature. On a personal level, it was only after I left college that I began to appreciate the perspective on life that the (sadly) limited literature I am familiar with afforded me. On a more practical level, I’ve also witnessed firsthand otherwise intelligent individuals struggle in achieving their professional goals as a result of poor writing and communication skills.
However, what jumped out at me about the editorial was less the message about the intrinsic value of English or the writer’s thoughtful criticisms of how humanities courses are taught today, and more how the writer effectively brushed aside the underlying financial reasons pushing students away from declaring English (or another humanity) as a major. It’s easy for the writer to offer “a rich sense of the possibilities of language, literary and otherwise” as an answer to the question of “what is an English major good for?” But that sort of sense, while personally valuable, doesn’t pay off student loans. It’s easy to criticize the “new and narrowing vocational emphasis in the way students and their parents think about what to study in college”, but that criticism doesn’t account for how little choice parents and students have in the matter when trying to make their checkbooks balance.
And therein lies the weakness of most impassioned pleas for students to pursue English and the humanities instead of “more practical” majors: we don’t live in the world of the For Lack of a Better Comic below:
We live in a world where, sadly, students need to find jobs that can cover their debts. And the pragmatic reaction for the humanities educator is to either find a way to make their majors better suited to helping students find and compete for jobs that can cover their financial burdens, or find new ways to enrich students who have chosen to pursue the “narrow vocational emphasis” they have been forced into.
Maybe it’s because I miss Japan, a land with a culture which prides itself on extremely good service but where tipping is practically unheard of (and where I just spent two weeks on my honeymoon). But, lately, I’ve been thinking that tipping is stupid.
I’ve found that, except in rare instances of extremely good or extremely bad service, the amount of tip given has almost nothing to do with the quality of service – which makes it a terrible system for rewarding good service and punishing bad service.
It feels horribly arbitrary who you tip and how much. Why do we tip the bagboy but not the people who work at the front desk? Why do we tip cab drivers but not an airplane pilot or a flight attendant? And just how much are we supposed to tip? Why 15% at restaurants…? What about cabs? Bellboys?
It encourages establishments to pay their employees less, and to be less smart about how they pay their employees. At the end of the day, I tip because I know the recipients tend to be in professions which count on tips for a substantial portion of their income. But this fact raises the question: why should this come in the form of a tip rather than a proper wage? After all, employers are in a far better position to take a holistic view of which employees to compensate, and how much, to ensure good service – why that should be left to the whims of different customers’ moods and mores is beyond me.
I personally plan to swing by the next time I’m in New York, but I’m hoping more restaurants embrace Sushi Yasuda’s example of paying their workers better and abolishing what, to me, seems like an archaic and irrational practice.
As it’s currently high school graduation season, I’ve been asked by a few friends (who have younger siblings about to graduate) about the advice I’d give to new high school graduates about to become college freshmen. While I defer to (older :-D) folks like Guy Kawasaki and Charles Wheelan for some of the deeper insights about how to live one’s life, there is one distinct line of thought that I left with my friends’ siblings and that I wanted to share with all new high school grads/entering college freshmen:
Like most truths about life, this will seem contradictory. First, take your classes seriously. I know it’s not the sexiest bit of advice, but hear me out: college is one of the last places where you will be surrounded by scholars (both faculty and students) and where your one job in life is to learn how to think. Take advantage of that while it lasts, and make every effort to push your mental horizons. Second — and here’s the contradictory part — don’t take your classes TOO seriously. I don’t mean that it’s a good thing to fail, but I’d encourage new students never to be afraid of skipping a class or a homework assignment if it means finding time for a friend or making time for a great opportunity. College is more about the friends you make and the things you learn outside of the classroom than the time you spend in it — and that’s why, at the end, I wished I had taken my classes both more seriously and less seriously, in different ways.
I hope this is helpful and congratulations to the class of 2012!
While I’m no fan of Santorum’s hypocritical intentions there (anyone else notice how he neglected to mention his own college degree – let alone his MBA and JD?), I do think it’s worth considering the skeptical view towards American politics’ fixation (dare I say obsession?) with driving up college attendance.
This skeptical view is basically encapsulated in a single question: if the goal is to get more Americans to go to college, why don’t we just make high school last eight years rather than four?
If we were somehow able to achieve 100% (or even something like 60-70%) college attendance, an extended high school education (where the last four years might be more advanced and based on applications to different institutions) is basically what you would be creating.
If we think of getting the majority of kids into college in that sense, it raises the question: what would a world like that look like? I’d hazard the following two guesses:
First, the costs would be enormous. Even today, with many colleges independent of the government and many students bearing the brunt of the cost of college directly, there is huge government involvement in financing. An “eight year high school system”, even assuming miraculous levels of efficiency and public-private partnership never before seen in the education system and government, would likely require huge sums from taxpayers and students alike – if only to provide the financial support lower-income families would need to attend higher education.
Second, I believe the number of people pursuing advanced degrees (Masters, PhDs, MBAs, JDs, MDs, MPHs, etc.) would skyrocket. The reason is simple: if everyone goes to college, then it’s the same as if nobody went to college – the mark of attending college ceases to have any value in setting yourself apart from other people in the eyes of an employer. The funny thing is, one of the reasons I chose the “eight year high school” analogy is precisely because of where it leads: college grads would become the equivalent of today’s high school grads – in the same trouble in terms of competing in the workforce, and finding themselves needing to go to “college” (in this case, getting an advanced degree).
One might even argue that a much more educated workforce is worth the cost, but what this little thought experiment shows is that just extending high school by four years (the logical equivalent of much higher rates of college attendance) is not the obvious universal good that most politicians seem to suggest it is. The fact that students need to go to college at all to participate effectively in the workforce says more, in my opinion, about the ineffectiveness of our K-12 system than about the value of college.
I think a more meaningful (and hopefully time-and-cost-effective) solution to our education system’s woes would actually be to address the real problems: (1) how students seem to not get enough out of K-12 to contribute to the workforce and (2) how students are forced to pursue expensive degrees just to compete.
Bring K-12 quality up to what is needed for people to succeed in today’s workforce. I think this means investing in early education – study after study shows that some of the most effective education investments are those made in pre-kindergarten Head Start programs – embracing new technology-enabled approaches like Sal Khan’s brilliant Khan Academy, changing how we train and compensate teachers, and doubling down on teaching employable skills (like some of the ones I mentioned here). None of these is that controversial (although the devil is in the details) – what matters is being committed to increasing the value of K-12 rather than just the years kids spend in school.
Build an actually meaningful system of educational accreditation. Today, one of the most important ways to signal to employers that you can be a decent worker is a piece of paper that costs some $100,000+ called a college diploma. That piece of paper is not only extremely expensive, it also does a terrible job of elaborating what a person is good at (forcing many people to pursue further degrees). This system of accreditation really only serves colleges and the companies/people who make money off of them (i.e., admissions counselors, prep services, etc.). An accreditation system which meaningfully communicated what people’s talents are (e.g., this person is extremely good at math even though he did not major in math at a top-50 college; or this person is really good at machine work even though she spent most of her last job planning events) would benefit both employers – who would have a better sense of who they are hiring – and workers – who could be more discriminating about the value of their education and avoid the needless rat race of tallying up schools/programs which only serve as a rubber stamp on your ability to pay expensive tuition.
These are not quick-and-easy fixes: they are, after all, major changes to how people and politicians view the world, and they require not only resources but also some very slow-moving institutions changing how they think and operate. But taking a hard look at some of the underlying issues makes a lot more sense to me than continuing a dogma about how education should work.
If you’ve been following the major tech trades, you’ll know that Scott Thompson, former head of eBay’s PayPal unit and CEO of internet giant Yahoo, has been embroiled in an embarrassing scandal about a misrepresentation on his resume. It turns out that Mr. Thompson’s resume specifically says that he had a degree in computer science from Stonehill College – something that turned out to be flat-out false, and which ultimately led to Thompson stepping down as CEO.
I may be in the minority here, but I feel that Thompson lying on his resume was probably not the biggest deal, especially since I doubt that degree made any real difference to why Yahoo hired him. But what was a heck of a lot worse was that the misrepresentation made its way into Yahoo’s 10K — “Mr. Thompson holds a Bachelor’s degree in accounting and computer science from Stonehill College” — a filing to the SEC that Thompson signed, certifying that:
I, Scott Thompson, certify that:
1. I have reviewed this Amendment No. 1 to the Annual Report on Form 10-K of Yahoo! Inc. for the year ended December 31, 2011; and
2. Based on my knowledge, this report does not contain any untrue statement of a material fact or omit to state a material fact necessary to make the statements made, in light of the circumstances under which such statements were made, not misleading with respect to the period covered by this report.
This changed Thompson’s sin from one of “just” padding a resume to one of either (1) not carefully reading one of the most important documents a company can issue and/or (2) outright lying to the government and to Yahoo’s investors – not a good sign for a new (and now ex-) CEO. It also seriously calls into question the Yahoo board of directors’ judgment, in that they failed to do something as simple as running a basic background check on a key hire.
As someone who is an investor (both in my job in venture capital investing in startups and outside of my work in the public market) and has been lucky enough to participate in board meetings for some of our portfolio companies, these are particularly alarming signs. While the underlying lie is not really that big a deal, being able to trust the executives and the board members who are supposed to have your best interests at heart is – and misrepresentations in a regulatory filing speak very poorly to a person’s thoroughness, competence, and/or credibility.
The next Yahoo CEO has a difficult job ahead – not only will he/she need to address the underlying problem of Yahoo not having a coherent vision/strategy and having demoralized workers, he/she will likely need to manage the construction of a better board of directors and the implementation of new policies and procedures to prevent this type of thing from happening again.
I’ve blogged before about the strengths of the web as a software development platform and the extent to which web apps are now practically the same thing as the apps that we run on our computers and phones. But, frankly, one of the biggest things holding back the vision of the web as a full-fledged “operating system” is the lack of a web-centric “file system”. I use the quotes because I’m not referring to the underlying NTFS/ExtX/HFS/etc technology that most people think of when they hear “file system”: I’m referring to basic functionalities that we expect in our operating systems and file systems:
a place to reliably create, read, and edit data
the ability to search through stored information based on metadata
a way to associate data with specific applications and services that can operate on them (i.e. opening Photoshop files in Adobe Photoshop, MP3s in iTunes, etc)
a way to let any application with the right permissions and capabilities to act on that data
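To make the third item on this list concrete, here is a toy sketch (purely hypothetical — the table, names, and function are mine, not any real web API) of the kind of file-type association mechanism such a web “file system” would need, mapping a file’s type to the service that can open it:

```python
# Hypothetical sketch: a minimal file-type association table, the web
# equivalent of "open .psd files in Photoshop". Real systems would key on
# MIME types and registered handlers; this just illustrates the idea.

ASSOCIATIONS = {
    ".psd": "Adobe Photoshop",
    ".mp3": "iTunes",
    ".doc": "Google Docs",
}

def handler_for(filename: str, default: str = "generic viewer") -> str:
    """Return the service associated with a file's extension (case-insensitive)."""
    for ext, app in ASSOCIATIONS.items():
        if filename.lower().endswith(ext):
            return app
    return default

print(handler_for("photo.PSD"))  # Adobe Photoshop
print(handler_for("notes.txt"))  # generic viewer (no association registered)
```

A cloud-first version of this would live server-side, so any authorized web app — not just software installed on one machine — could register itself as a handler.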
Now, a skeptic might point out that the HTML5 specification actually includes a lot of local storage/file handling capabilities, and that services like Dropbox already provide some of this functionality in the form of APIs that third-party apps and services can use – but in both cases, the emphasis is first and foremost on local storage: putting stuff onto, or syncing with, the storage on your physical machine. As long as that’s true, the web won’t be a fully functioning operating system. Web services will routinely have to rely on local storage (which, by the way, reduces the portability of these apps between different machines), and applications will have to be more siloed as they each need to manage their own storage (whether it’s stored on their servers or locally on a physical device).
What a vision of the web as operating system needs is a cloud-first storage service (where files are meant to reside on the cloud and where local storage is secondary) which is searchable, editable, and supports file type associations and allows web apps and services to have direct access to that data without having to go through a local client device like a computer or a phone/tablet. And, I think we are beginning to see that with Google Drive.
The local interface is pretty kludgy: the folder is really just a bunch of bookmark links, emphasizing that this is a web-centric product first and foremost
It offers a lot of useful operating-system-like functionality (like search and revision history) directly on the web, where the files reside
Google Drive greatly emphasizes how files stored on it have associated viewers and can be accessed by a wide range of apps, including some by Google (e.g., attachments in Gmail, opening/editing in Google Docs, and sharing on Google+) and some by third parties like HelloFax, WeVideo, and LucidChart
Whether or not Google succeeds longer-term at turning Google Drive into a true cloud “file system” will depend greatly on its ability to continue to develop the product and manage the potential conflicts involved in providing storage to web application competitors. But suffice to say, I think we’re at what could be the dawn of the transition from web as a software platform to web as an operating system. This is why I feel the companies that should pay closer attention to this development aren’t necessarily the storage/sync providers like Dropbox and Box.net – at least not for now – but companies like Microsoft and Apple, which have a very different vision of how the future of computing should look (much more local software/hardware-centric) and which might not be in as good a position if the web-centric view that Google embodies takes off (as I think and hope it will).
The screen is gorgeous – I had always heard that Samsung’s Super AMOLED screens delivered particularly vivid colors, and it beats the LCD I had on my old DROID2 without any question.
I will stop making fun of big screens – my previous phone, Motorola’s DROID 2, had a 3.7” screen (more or less the same as the iPhone 4). The Galaxy Nexus? A massive 4.65”! I’ll admit: it took a little while to get used to – and don’t get me wrong, there are still moments when I curse my small hands 🙂 – but the difference in terms of extra screen space, ease of typing, etc. is amazing. At SXSW, there was a moment when I had to use my colleague’s iPhone to enter information into a form (because his email client wasn’t working, so he couldn’t send me the link). What had once been normal to me felt like the most cramped little device possible. I now begin to understand why my girlfriend wants the Samsung Galaxy Note’s massive 5.3” screen.
LTE is blazing. I follow wireless news, so I had always logically understood the numbers behind Verizon’s LTE – it was one reason I was always irked that AT&T and T-Mobile called their HSPA+ and other non-LTE/WiMax technologies “4G”. But, having never used an LTE device, I didn’t really understand the speed until I used the LTE on my Galaxy Nexus. The speed is incredible. It’s as fast as, if not faster than, WiFi (depending on connection strength) – how do I know this? I could point you to a speedtest screencap, but a use case is more illustrative: when the DSL died out in my house, it was my Galaxy Nexus to the rescue as we waited on AT&T (ironic, considering I’m on the Verizon network!) to swing by and fix the connection.
LTE is blazing part 2: There are two downsides to LTE which are worth mentioning.
The first is that it burns through your battery extremely quickly and has a tendency to make your phone extremely hot. On both my DROID 2 and an iPhone, it would take prolonged usage of the 3G network before that type of “burn” would kick in.
The second is that, for whatever reason (I can’t tell if it’s the modem/RF in my phone, the Verizon network, Verizon just trying to throttle me 🙁, or some combination), my connection stability has not been great. I get kicked off the LTE network randomly, whereas 3G-only mode (CDMA) has given me much better network stability.
While I miss the physical keyboard of the DROID2, the combination of the larger screen, the faster phone, SwiftKey, and the intuitive in-text-field spell-checker makes typing work. The larger screen means it’s easier to hit the right keys at the right time. The faster phone means no more weird latency between keypresses and those presses actually registering. SwiftKey provides remarkably good autocorrect that also predicts next words, and the new in-text-field spell-checker underlines misspelled words and obvious grammatical errors directly in the text field, letting me choose from a number of alternatives for the best correction.
One thing that took a while to get used to was the odd positioning of the notification light. Whereas before the notification light was in the upper-right corner of the device (as with the Motorola Xoom), on the Galaxy Nexus it is at the bottom of the phone – which is a little strange in my opinion…
Significantly improved performance. That significant UI slickness gap I mentioned in my last post comparing the iPhone 4 to the DROID 2? Basically gone. I don’t know if it’s the new operating system, the new chip, or some combination – but I no longer have iOS envy when it comes to performance.
But the Galaxy Nexus doesn’t have the greatest camera. While the software interface for the camera has been revamped (and significantly improved, in my opinion) and the zero-shutter-lag picture taking is a nice touch, the 5MP resolution and the camera’s color performance just aren’t much to write home about. Thankfully, I’m such a terrible photographer that I don’t think it really matters what camera I have 🙂 so this is kind of a wash for me.
Ice Cream Sandwich keeps much of what I love about Android (refer to this prior post) and adds to it. I’ve had Ice Cream Sandwich on my Motorola Xoom (Android tablet) for some time as well – but it felt more incremental over the Android 3.0 Honeycomb operating system that it replaced than the dramatic change over Android 2.3 Gingerbread operating system that was on my DROID2 and on most Android phones today.
Better notifications: two major changes from the previous version of Android – the first is that individual notifications can be dismissed with a cool swipe gesture which just works wonderfully. The second is that the settings tool can now be easily accessed from the pulldown notification menu.
Resizable and dynamic widgets: With the exception of the occasionally buggy email widget (which, for whatever reason, stops reflecting the status of my corporate inbox), it's amazing to be able to scroll through my email and calendar or play music without opening the apps themselves, to rapidly toggle different wireless features without going into the settings, and to create shortcuts for turn-by-turn navigation to specific addresses.
The new turn-by-turn navigation has a much more natural-sounding voice.
The new Chrome for Android browser, while lacking Flash and the ability to force a desktop user-agent (to get the desktop version of a webpage), is not only extremely slick, it brings quick Google sign-in (saving me a ton of keystrokes for Google apps and other services that require a Google login), instant synchronization with all Chrome browsers across all my devices, and a number of awesome gestures for managing tabs. To be fair, I think Safari on iOS still shows a performance advantage in avoiding rendering artifacts (especially when scrolling while a page is loading), but the much-improved tab management and the synchronization make it, in my opinion, a far better browser than anything else out there.
The new multitasking view makes it easy to see all the apps that are open, shows a screenshot of each (so you know what's going on in those apps), and supports the same simple swipe-to-close gesture as the new notifications menu.
The new contacts app (now called People) and calendar app are significantly improved. Being able to pinch-zoom in the calendar app to shift the viewing frame is very cool and extremely helpful when switching between weekends/days where I have a few long meetings and weekdays where I have many short meetings.
Battery life is still something that needs to improve. Full context: at SXSW, everybody was charging their phones by late afternoon – it didn't matter if you were using Android, iOS, or Windows, everybody was topping up at spare power outlets or via the FedEx guys walking around with phone-charging jackets (no joke!) that you could plug into. But, with that said, there's no doubt in my mind that the iPhone still wins a battery life race hands down. I don't know if this is primarily because of the larger screen & LTE connection on the Galaxy Nexus or because of runaway background processes or fundamental operating system limitations, but if I were Google or one of the Android phone makers, I would focus on tackling what is probably the last real, but still very important, advantage the iPhone has.
Net-net, I think this device is pretty awesome. Sure, the battery life is not where I want it to be, and the camera, the odd positioning of the notification light, and the lack of a physical keyboard are things I find fault with. But the combination of Ice Cream Sandwich, a great screen, and LTE connectivity makes me agree with the Verge's review of the product: “The Galaxy Nexus is the best Android phone ever made… it could be the best smartphone ever produced … Since day one, I’ve been waiting for an Android device that lived up to the promise of such a powerful OS. I think I can stop waiting now.”
In the Pipeline’s Derek Lowe wrote a very thoughtful opinion piece for the ACS (American Chemical Society) journal Medicinal Chemistry Letters in which he does something I encourage all career-minded working people to do: hold up a mirror to his own industry (medicinal chemistry… obviously) and then gaze into his crystal ball to see where it might go in the future:
it is now the absolute worst time ever to be an ordinary medicinal chemist in a high-wage part of the world. The days when you could make a reliable living doing methyl–ethyl–butyl–futile work in the United States or Western Europe are gone, and what mechanism will ever be found to bring them back? There’s still a lot of that work that needs to be done, but it is getting done somewhere else, and as long as “somewhere else” operates more cheaply and reasonably on time, that situation will not change.
This means that the best advice is not to be ordinary. That is not easy, and it is no guarantee, either, but it is the only semisafe goal for which to aim. Medicinal chemists have to offer their employers something that cannot be had more cheaply in Shanghai or Bangalore. New techniques, proficiency with new equipment, ideas that have not become commodified yet: Those seem to be the only form of insurance, and even then, they are not always enough.
In the same way that the medicinal chemists of 5-10 years ago that Derek Lowe is writing about were caught off-guard by the impact of globalization, people in the postal service are watching technologies like email and internet advertising change the foundation of their jobs, people in the healthcare industry are watching new laws and regulations slowly come down the pipeline, and people in the book publishing industry are watching eBooks and eReaders take off. I'm not claiming that these changes were obviously predictable – that's what makes my job in venture interesting! – but changes in science & technology, in globalization, and in demographics have dramatically impacted, and will continue to dramatically impact, every aspect of life and business. And, frankly speaking, it's the people who work in an industry (in the case of medicinal chemistry, guys like Derek Lowe) who have the best shot at gazing into the crystal ball, predicting and understanding the changes that will come down the pipeline, and then figuring out ways to get ahead of them (whether that means changing jobs, learning new skills, etc.).
So, do your 5-to-10-years-from-now self a favor – and gaze into your crystal ball.
I recently came back from a great two week trip to China and Japan. Because I needed an international phone plan/data access, I ended up giving up my beloved DROID2 (which lacks international roaming/data) for two weeks and using the iPhone 4 my company had given me.
Long story short: I still prefer my DROID2 (although to a lesser extent than before).
So, what were my big observations after using the iPhone 4 for two weeks and then switching back to my DROID2?
Apple continues to blow me away with how good they are at:
UI slickness: There’s no way around it – with the possible exception of Android 4.0 Ice Cream Sandwich (which I now have and love on my Motorola Xoom!) – no Android operating system comes close to the iPhone/iPad’s remarkable user interface smoothness. iOS animations are perfectly fluid. Responsiveness is great. Stability is excellent (my DROID2 does, on rare occasions, force-restart – my iPhone has only crashed a handful of times). It’s a very well-oiled machine and free of the frustrations I’ve had at times when I. just. wished. that. darn. app. would. scroll. smoothly.
Battery life: I was at or near zero battery at the end of every day when I was in Asia – so even the iPhone needs improvement in that category. But there’s no doubt in my mind that my DROID2 would have given out earlier. I don’t know what it is about iOS that enables Apple to consistently deliver such impressive battery life, but I did notice a later onset of “battery anxiety” during the day on the iPhone than I would have had on my DROID2.
Apple’s soft keyboard is good – very good – but nothing beats a physical keyboard plus SwiftKey. Not having my beloved Android phone meant I had to learn how to use the iPhone soft keyboard to get around – and I have to say, much to my chagrin, I actually got the hang of it. It’s amazingly responsive and has a good handle on which words to autocorrect, which to leave alone, and even on learning which words were just strange jargon/names but still legitimate. Even back in the US on my DROID2, I find myself trying to use the soft keyboard a lot more than I used to (and discovering, sadly, that it’s not as good as the iPhone’s). However:
You just can’t type as long as you can on a hard physical keyboard.
Every now and then the iPhone makes a stupid autocorrection, and it’s a little awkward to override it (having to hit that tiny “x”).
The last time I did the iPhone/DROID comparison, I talked about how amazing Swype was. While I still think it’s a great product, I’ve now graduated to SwiftKey (see video below) – not only because I have met and love the CEO Jonathan Reynolds, but also because of its uncanny ability to compose my emails/messages for me. It learns from your typing history and from your blog/Facebook/Gmail/Twitter and feeds it all into an amazing text prediction engine which not only predicts the words you are trying to type but also the next word after that! I have literally written emails where half of my words were predicted by SwiftKey.
Notifications in iOS are terrible.
A huge issue for me: there is no notification light on an iPhone. That means the only way for me to know if something new has happened is if I hear the tone the phone makes when a new notification arrives (which I don’t always, because it’s in my pocket or because – you know – something else in life is happening at that moment) or if I happen to be looking at the screen at the moment the notification shows up (same problem). This means I have to repeatedly check the phone throughout the day, which can be a little obnoxious when you’re with people or doing something else and just want to know if an email/text message has come in.
What was very surprising to me was that, despite having the opportunity to learn (and, dare I say, copy) from what Android and WebOS had done, Apple chose quite possibly the weakest approach possible. Not only are notifications not visible from the home screen – requiring me to swipe downward from the top to see if anything’s there – but it’s impossible to dismiss notifications one at a time, it’s really hard (or maybe I just have fat fingers?) to hit the clear button which dismisses blocks of them at a time, some notifications inexplicably don’t disappear even after I hit clear, and it is surprisingly easy to accidentally hit a notification when you don’t intend to (which forces you into a new application – not a big deal if iOS had a cross-application back button… which it doesn’t). Maybe I’m just too used to the Android way of doing things, but while this is way better than the old “in your face” iOS notifications, I found myself very frustrated here.
Cursor positioning feels more natural on Android. I didn’t realize this would bug me until after using the iPhone for a few days. The setup: until Android’s Gingerbread update, highlighting text and moving the caret (where your next letter comes out when you type) was terrible on Android. It was something I didn’t notice in my initial comparison and something I came to envy about iOS: the magnifying glass that pops up when you want to move your cursor and the simple drag-and-drop highlighting of text. Thankfully, with the Gingerbread update, Android completely closes that gap (see image on the right) and improves upon it. Unlike with iOS, I don’t need to long-press on the screen to enter some eerie parallel universe with a magnified view – in Android, you just tap once, drag the arrow to where you want the cursor to be, and you’re good to go.
No widgets in iOS. I can see the iOS fans thinking: “big deal, who cares? they’re ugly and slow down the system!” Fair points – so why do I care? I care because widgets let me quickly turn WiFi/Bluetooth/GPS on or off from the homescreen in Android, whereas in iOS I would be forced to go through a bunch of menus. It means that on Android I can see my next few calendar events, but in iOS I would need to go into the calendar app. It means that on Android I can quickly create a new Evernote note and see my last few notes from the homescreen, but in iOS I would need to open the app. It means that on Android I can see the forecast from the homescreen, but in iOS I would need to open the weather app. It means that on Android I can quickly glance at a number of homescreens to see what’s going on in Google Voice (my text messages), Google Reader, Facebook, Google+, and Twitter, but on iOS I need to open each of those apps separately. In short, I care about widgets because they are convenient and save me time.
Apps play together more nicely on Android. Android and iOS have fundamentally different philosophies on how apps should behave with one another. Considering most of the main iOS apps are also on Android, what do I mean by this? Well, Android has two features which iOS does not: a cross-application back button and a cross-application “intent” system. What this means is that apps are meant to push information/content to each other in Android:
If I want to “share” something, any app of mine that mediates that sharing – whether it’s email, Facebook, Twitter, Path, Tumblr, etc. – is fair game (see image on the right). On iOS, I can only share things through services that the app I’m currently in supports. Want to post something to Tumblr or Facebook or over email from an app that only supports Twitter? Tough luck in iOS. Want to edit a photo/document in an app that isn’t supported by the app you’re in? Again, tough luck in iOS. With the exception of things like web links (where Apple has apps meant to handle them), you can only use the apps/services which are sanctioned by the app developer. In Android, apps are supposed to talk with one another, and Google goes the extra mile to make sure all apps that can handle an “action” are available for the user to choose from.
In iOS, navigating between different screens/features is usually done with a descriptive back button in the upper-left of the interface. This works exactly like the Android back button does, with one exception: these iOS back buttons only work within an application. There’s no way to jump between applications. Granted, there’s less of a need in iOS since there’s less cross-app communication (see previous bullet point), but when you throw in the ability of iOS 5’s new notification system to take you into a new application altogether, and situations where you want to use another service, a cross-application back button becomes quite handy.
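To make the “intent” mechanism above concrete, here’s a minimal sketch of how an Android app hands a share action off to the system using the standard `Intent.ACTION_SEND` API – the class name and the strings being shared are made up for illustration:

```java
import android.app.Activity;
import android.content.Intent;

// Hypothetical activity illustrating Android's cross-app "intent" system.
public class ShareExampleActivity extends Activity {
    void shareLink(String title, String url) {
        // Describe the action ("send this text") rather than naming a target app.
        Intent send = new Intent(Intent.ACTION_SEND);
        send.setType("text/plain");
        send.putExtra(Intent.EXTRA_SUBJECT, title);
        send.putExtra(Intent.EXTRA_TEXT, url);
        // Android finds every installed app that registered a handler for
        // ACTION_SEND (email, Twitter, Tumblr, etc.) and lets the user pick.
        startActivity(Intent.createChooser(send, "Share via"));
    }
}
```

Because the sending app only declares the action, any newly installed app that registers for ACTION_SEND automatically shows up in the chooser – which is exactly why sharing on Android isn’t limited to the services the original developer thought to support.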
And, of course, a deluge of the smaller he-said-she-said details that I observed:
Free turn-by-turn navigation on Android is AWESOME and makes the purchase of the phone worth it on its own (mainly because my driving becomes 100x worse when I’m lost). Not having that in iOS was a pain, although thankfully, because I spent most of my time in Asia on foot, in a cab, or on public transit, it was not as big of a pain.
Google integration (Google Voice, Google Calendar, Gmail, Google Maps) is far better on Android — if you make as heavy use of Google services as I do, this becomes a big deal very quickly.
Chrome to Phone is awesome – being able to send links/pictures/locations from computer to phone is amazingly useful. I only wish someone made a simple Phone-to-Chrome capability where I could send information from my phone/tablet to a computer just as easily.
Adobe Flash performance is, for the record, not great, and for many sites it’s simply a gateway for advertisements. But it’s helpful to be able to open up terrible websites (especially those of restaurants) – and in Japan, many a restaurant had an annoying Flash website which my iPhone could not open.
Because of the growing popularity of Android, app availability between the two platforms is pretty equal for the biggest apps (with just a few noteworthy exceptions like Flipboard). To be fair, many of the Android ports are done haphazardly – leading to a more disappointing experience – but the flip side is that the more open nature of Android also means it’s the only platform where you can use some pretty interesting services like AirDroid (an easy over-WiFi way of syncing and managing your device), Google Listen (a Google Reader-linked over-the-air podcast manager), BitTorrent Remote (use your phone to remotely log in to your computer’s BitTorrent client), etc.
I love that I can connect my Android phone to a PC and it will show up like a USB drive. iPhone? Not so much (which forced me to transfer my photos over Dropbox instead).
My ability to use the Android Market website to install apps over the air to any of my Android devices has made discovering and installing new apps much more convenient.
The iOS mail client (1) doesn’t let you collapse/expand folders and (2) doesn’t let you control which folders to sync to what extents/at what intervals, but the Android Exchange client does. For someone who has as many folders as I do (one of which is a Getting Things Done-esque “TODO” folder), that’s a HUGE plus in terms of ease of use.
To be completely fair – I don’t have the iPhone 4S (so I haven’t played with Siri), I haven’t really used iCloud at all, and the iPhone’s advantages in UI quality and battery life are a big deal. So unlike some of the extremists out there who can’t understand why someone would pick iOS over Android (or vice versa), I can see the appeal of “the other side.” But after using the iPhone 4 for two weeks, and after seeing some of the improvements Ice Cream Sandwich brought to my Xoom, I can safely say that unless the iPhone 5 (or whatever comes after the 4S) brings with it a huge change, I will be buying another Android device next. If anything, I’ve noticed that with each generation, Android devices further close the gap on the main advantages iOS has (smoothness, stability, app selection/quality), while continuing to embrace the philosophy and innovations that keep me hooked.
Many students trying to pick classes/majors in college will end up consulting with their counselors/academic advisors who, in turn, will almost always reply with very generic advice along the lines of: “study what you love”.
But as my girlfriend once pointed out, the problem with asking academic advisors that question is that academic advisors tend to be academics – and in academia, you can make a career out of studying anything. Outside of academia, that is not so true. Look no further than the paradox of record-high unemployment among recent college graduates even as almost every startup I’ve spoken with expresses concerns about finding and retaining qualified employees.
Obviously, our education system is failing to meet the needs of our students and employers. But, other than hoping the system miraculously fixes itself, my advice to students is this: take classes that teach broadly employable skills. You don’t need to take a lot of them, and nobody’s asking you to major in something you don’t want to – college is, after all, about broadening your horizons and studying what interests you. But, in a competitive job market and a turbulent economy, the worker in the best position is the one who can move between industries/jobs easily (getting out of bad jobs/industries and moving into better-paid/more interesting ones) and who can quickly demonstrate value to their boss (so as to become indispensable faster).
So what sort of skills am I referring to? Off the top of my head (I’m sure there are others), three come to mind:
Accounting – All organizations that deal with money need people with accounting chops. From my experience, the executives/employees who are the most versatile across industries are the CFOs — they can plug into almost any business or organization and can quickly help their employers out. You may not want to be an accountant, but in a pinch, having those skills can help you get hired or find work as you figure out your next move.
Programming – Programming as a skill is relatively generalizable. While I wouldn’t necessarily ask an iPhone developer to write an operating system (or vice versa), folks with programming chops can quickly get up to speed on new projects at new companies and, as a result, can quickly crank out functioning code to help their employers.
Statistics – You don’t need to be a math genius to be hireable. But, as computers become faster and more important, more organizations are turning to number crunching as a way to stay competitive. Not only will “data scientists” and statisticians be more in demand, but individuals who are familiar with those tools will be in a better position at their companies and better able to quickly help out a new employer.
The skeptic will point out that a lot of this can be outsourced. And that’s certainly true – but in my experience, there is not only a limit on what companies are willing to outsource, there is also just huge value for any employee who tacks these skills onto what they are already doing. A salesperson who is also good at crunching statistics on whom to sell to next is far more valuable than a “regular” salesperson. A marketing guy with programming chops probably has a better understanding of a product or a technology than a “regular” marketing guy. And an operations guy who also understands the nitty-gritty financial details is going to do a better job than an operations guy who doesn’t. Not to mention: the skills are broadly applicable, so if one company doesn’t have a good spot, there’s always another organization somewhere that will.
I was asked recently by a friend about my thoughts on the “Occupy Wall Street” movement. While people a heck of a lot smarter and more articulate than me have weighed in, most of the commentary has focused on finger-pointing (who’s to blame) and judgment (do they actually stand for anything? “it’s the Tea Party of the Left”).
As corny as it sounds, my first thought after hearing about “Occupy Wall Street” wasn’t about right or wrong or even really about politics: it was about John Steinbeck and his book The Grapes of Wrath. It’s a book I read long ago in high school, but it was one which left a very deep impression on me. While I can’t even remember the main plot (other than that it dealt with a family of Great Depression and Dust Bowl-afflicted farmers who were forced to flee Oklahoma towards California), what I do remember was a very tragic description of the utter confusion and helplessness that gripped the people of that era (from Chapter 5):
“It’s not us, it’s the bank. A bank isn’t like a man. Or an owner with fifty thousand acres, he isn’t like a man either. That’s the monster.”
“Sure,” cried the tenant men, “but it’s our land. We measured it and broke it up. We were born on it, and we got killed on it, died on it. Even if it’s no good, it’s still ours. That’s what makes it ours—being born on it, working it, dying on it. That makes ownership, not a paper with numbers on it.”
“We’re sorry. It’s not us. It’s the monster. The bank isn’t like a man.”
“Yes, but the bank is only made of men.”
“No, you’re wrong there—quite wrong there. The bank is something else than men. It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.”
And therein lies the best description I have ever read of the tragedy of the Great Depression – and of every economic crisis since. The many un- and under-employed people in the US are clearly under a lot of stress. And, like the farmers in Steinbeck’s novel, it’s completely understandable that they want to blame somebody. And so they point to the most obvious culprits: “the 1%”, the bankers and financiers who work on “Wall Street”.
But, I think Steinbeck understood this is not really about the individuals. Obviously, there was a lot of wrongdoing that happened on the part of the banks which led to our current economic “malaise.” But I think for the most part, the “1%” aren’t interested in seeing their fellow citizen unemployed and on the street. Even if you don’t believe in their compassion, their greed alone guarantees that they’d prefer to see the whole economy growing with everyone employed and productive, and their desire to avoid harassment alone guarantees they’d love to find a solution which ends the protests and the finger-pointing. They may not be suffering as much as those in the “99%”, but I’m pretty sure they are just as confused and hopeful that a solution comes about.
The real problem – Steinbeck’s “monster” – is the political and economic system people have created but can’t control. Our lives are driven so much by economic forces and institutions which are intertwined with one another on a global level that people can’t understand why they or their friends and family are unemployed, why food and gas prices are so expensive, why the national debt is so high, etc.
Now, a complicated system that we don’t have control over is not always a bad thing. After all, what is a democracy supposed to be but a political system that nobody can control? What is the point of a strong judiciary but to be a legal authority that legislators/executives cannot overthrow? Furthermore, it’s important for anyone who wants to change the system for the better to remember that the same global economic system which is causing so much grief today is more responsible than any other force for creating many of the scientific and technological advancements that make our lives better and for lifting (and continuing to lift) millions out of poverty in countries like China and India.
But it’s hard not to sympathize with the idea that the system has failed on its promise. What else am I (or anyone else) supposed to think in a world where corporate profits can go up while unemployment stays stubbornly near 10%, where bankers can get paid bonuses only a short while after their industry was bailed out with taxpayer money, and where the government seems unable to do more than bicker about an artificial debt ceiling?
But anyone with even a small understanding of economics knows this is not about a person or even a group of people. To use Steinbeck’s words, the problem is more than a man, it really is a monster. While we may not be able to kill it, letting it rampage is not a viable option either — the “Occupy Wall Street” protests are a testament to that. Their frustration is real and legitimate, and until politicians across both sides of the aisle and individuals across both ends of the income spectrum come together to find a way to “tame the monster’s rampage”, we’re going to see a lot more finger-pointing and anger.
If it hasn’t been clear from posts on this blog or from my huge shared-posts activity feed, I am a huge fan of Google Reader. My reliance on the RSS reader tool from Google is second only to my use of Gmail. It’s my primary source of information and analysis on the world and, because a group of my close friends are actively sharing and commenting on the service, it is my most important social network.
Yes, that’s right. I’d give up Facebook and Twitter before I’d give up Google Reader.
I’ve always been disappointed by Google’s lack of attention to the product, so you would think that, after Google announced they would find a way to better integrate the product with Google+, I would be jumping for joy.
[A]fter reading Sarah Perez and Austin Frakt and after thinking about just how much I use Google Reader every day, I’m beginning to revise my initial forecast. Stay calm is quickly shifting toward full-bore Panic Mode.
(bolding and underlining from me)
Now, for the record, I can definitely see the value of integrating Google+ with Google Reader well. I think the key is to find a way to replace the not-really-used-at-all Sparks feature in Google+ (which seems to have been replaced by a saved-searches feature) with Google Reader, making it easier to share high-quality blog posts/content. So why am I so anxious? Well, looking at the existing products, two big things stand out:
Google+ is not designed to share posts/content – it’s designed to share snippets. Yes, there are quite a few folks (e.g., Steve Yegge, who made the now-famous, accidentally-public rant about Google’s approach to platforms versus Amazon/Facebook/Apple’s approaches to products) who make very long posts on Google+, using it almost as a mini-blog platform. And, yes, one can share videos and photos on the site. However, what the platform has not proven able to share (and what is, fundamentally, one of the best uses/features of Google Reader) is a rich site with embedded video, photos, rich text, and links. This blog post that you’re reading, for instance? I can’t share it on Google+. All I can share is a text excerpt and an image – and that reduces the utility of the service as a reading/sharing/posting platform.
Google Reader is not just “another circle” for Google+; it’s a different type of online social behavior. I gave Google props earlier this year for thinking through online social behavior when building their Circles and Hangouts features, but it slipped my mind then that my use of Google Reader was yet another mode of online social interaction that Google+ did not capture. What do I mean by that? Well, when you put friends in a circle, it means you have grouped that set of friends into one category and think of them as similar enough to want to receive their updates/shared items together, and to send them updates/shared items together. Now, this feels more natural to me than the original Facebook concept (where every friend is equal) or the Twitter concept (where the idea is to just broadcast everything to everybody), but it misses one dynamic: followers may have different levels of interest in different types of sharing. When I share an article on Google Reader, I want to do it publicly (hence the public share page), but only to people who are interested in what I am reading/thinking. If I wanted to share it with all of my friends, I would’ve long ago integrated Google Reader shares into Facebook and Twitter. On the flip side, whether or not I feel socially close to the people I follow on Google Reader is irrelevant: I follow them because I’m interested in their shares/comments. With Google+, this sort of “public, but only for folks who are interested” mode of sharing and reading is not present at all – and it strikes me as worrisome because the idea behind the Google Reader change is to replace its social dynamics with Google+’s.
Now, of course, Google could address these concerns by implementing additional features – and if that were the case, that would be great. But, putting my realist hat on and looking at the tone of the Google Reader blog post and the way Google+ has been developed, I am skeptical. Or, to sum it up in the words of Austin Frakt at the Incidental Economist (again, bolding/underlining is mine):
I will be entering next week with some trepidation. I’m a big fan of Google and its products, in general. (Love the Droid. Love the Gmail. Etc.) However, today, I’ve never been more frightened of the company. I sure hope they don’t blow this one!
Because of the subject matter here, I’ll re-emphasize the disclaimer that you can read on my About page: The views expressed in this blog are mine and mine alone and do not necessarily reflect the views of my current (or past) employers, their employees, partners, clients, and portfolio companies.
If you’ve been following either the cleantech world or politics, you’ll have heard about the recent collapse of Solyndra, the solar company the Obama administration touted as a shining example of cleantech innovation in America. Solyndra, like a few other “lucky” cleantech companies, received loan guarantees from the Department of Energy (like having Uncle Sam co-sign its loans), and is now embroiled in a political controversy over whether or not the administration acted improperly and whether or not the government should be involved in providing such support for cleantech companies.
Given my vantage point from the venture capital space evaluating cleantech companies, I thought I would weigh in with a few thoughts:
The failure of one solar company is hardly a reason to doubt cleantech as an enterprise. In every entrepreneurial industry where lots of bold, unproven ideas are being tested, you will see high failure rates. And, therein lies one of the beauties of a market economy – what the famous economist Joseph Schumpeter called “creative destruction.” That a large solar company like Solyndra failed is not a failing of the industry – if anything it’s a good thing. It means that one unproven idea/business model (Solyndra’s) was pushed out in favor of something better (in this case, more advanced crystalline silicon technologies and new thin film solar technologies) which means the employees/customers of Solyndra can now move on to more productive pastures (possibly another cleantech company which has a better shot at success).
The failure of Solyndra is hardly a reason to doubt the importance of government support for the cleantech industry. I believe that a strong “cleantech” industry is a good thing for the world and for the United States. It’s good for the world in that it represents new, more efficient methods of harnessing, moving, and using energy and is a non-political (and, so, less controversial to implement) approach to addressing the problems of man-made climate change. It’s good for the United States in that it represents a major new driver of market demand that the US is particularly well-suited to address because of its leadership in technology & innovation at a time when the US is struggling with job loss/economic decline/competition abroad. Or, to put it in a more economic way, what makes cleantech a worthy sector for government support is its strategic importance in the future growth of the global economy (potentially like a new semiconductor/software industry which drove much of the technology sector over the past two decades), the likelihood that the private sector will underinvest due to not correctly valuing the positive externalities (social good), and the fact that…
Private sector investors cannot do it all when it comes to supporting cleantech. One of the criticisms I’ve heard following the Solyndra debacle is that the government should leave the support of industries like cleantech to the private sector. While I’m sympathetic to that argument, my experience in the venture investing world is that many private investors are not well equipped to provide all the levels of support that the industry would need. Private investors, for instance, are very bad at (and tend to shy away from) providing basic sciences R&D support – that’s research which is not directly linked to the bottom line (and so is outside of what a private company is good at managing) and, in fact, should be conducted by academics who collaborate openly across the research community. Venture capital investors are also not that well-suited to supporting cleantech pilots/deployments – those checks are very large and difficult to finance. These are two large examples of areas where private investors are unlikely to be able to provide all the support that the industry will need to advance and areas where there is a strong role for the government to play.
With all that said, I think there are far better ways for the government to go about supporting its domestic cleantech industry. Knowing a certain industry is strategic and difficult for the private sector to support completely is one thing – effectively supporting it is another. In this case, I have major qualms about how the Department of Energy is choosing to spend its time. The loan guarantee program not only puts taxpayer dollars at risk directly, it also picks winners and losers – something that industrial policy should try very hard not to do. Anytime you have the ability to pick winners and losers, you will create situations where the selection of winners and losers could be motivated by cronyism/favoritism. It also exposes the government to a very real criticism: shouldn’t a private sector investor like a venture capitalist do the picking? It’s one thing when these are small prize grants for projects – it’s another when large sums of taxpayer dollars are at risk. Better, in my humble opinion, to find other ways to support the industry, like:
Sponsoring basic R&D to help the industry with the research it needs to break past the next hurdles
Facilitating more dialogue between research and industry: the government is in a unique position to encourage more collaboration between researchers, between companies, between researchers AND industry, and across borders. Helping to set up more “meetings of the minds” is a great, relatively low-cost way of helping push an industry forward.
Issuing H-1B visas for smart immigrants who want to stay and create/work for the next cleantech startup: I remain flabbergasted that there are countless intelligent individuals who want to do research/work/start companies in the US that we don’t let in.
Subsidizing cleantech project/manufacturing line finance: It may be possible for the government to use tax law or direct subsidies to help companies lower their effective interest payments on financing pilot line/project buildouts. Obviously, doing this would be difficult as we would want to avoid supporting the financing of companies which could fail, but it strikes me that this would be easier to “get right” than putting large swaths of taxpayer money at risk in loan guarantees.
Taxing carbon/water/pollution: If there’s one thing the government can do to drive research and demand for “green products,” it’s to levy a tax which makes the consequences of inefficiency obvious. Economists call this a Pigovian tax, and the idea is that there is no better way to get people to save energy/water and embrace cleaner energy than making inefficiency costly. (Note: for those of you worried about higher taxes, the tax can be balanced out by tax cuts/rebates so as to not raise the total tax burden on the US, only shift that burden towards things like pollution/excess energy consumption.)
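To make the “shift the burden, don’t raise it” idea concrete, here is a toy sketch of a revenue-neutral tax shift. All the numbers (the tax rate, the emissions figure) are made up for illustration, not actual policy figures:

```python
# A revenue-neutral "tax shift": every dollar raised by the new pollution
# tax is returned through cuts/rebates elsewhere, so the total tax burden
# is unchanged -- only the incentives move.
CARBON_TAX_PER_TON = 20.0    # hypothetical $/ton of CO2 emitted
EMISSIONS_TONS = 5_000       # hypothetical annual emissions of a firm

carbon_revenue = CARBON_TAX_PER_TON * EMISSIONS_TONS  # new tax collected
offsetting_rebate = carbon_revenue                    # returned via other tax cuts

total_burden_change = carbon_revenue - offsetting_rebate
print(total_burden_change)  # 0.0 -- same total burden, but polluting now costs more
```

The point of the sketch: the firm still faces a real marginal price on each ton of pollution (so cutting emissions saves it money), even though the economy-wide tax take stays flat.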
This is not a complete list (nor is it intended to be one), but it’s definitely a set of options which are supportive of the cleantech industry, avoid the pitfall of picking winners and losers in a situation where the market should be doing that, and, except for the last, should not be super-controversial to implement.
Sadly, despite the abundance of interesting ideas and the steady pace of innovation/business model innovation, Solyndra seems to have turned investors and the public more sour towards solar and cleantech more broadly. Hopefully, we get past this rough patch soon and find a way to more effectively channel the government’s energies and funds to bolstering the cleantech industry in its quest for clean energy and greater efficiency.
This is a refreshingly bold move by Google. Frankly, I had expected Google to continue its fairly whiny, defensive path on this for some time as they and the rest of the Android ecosystem cobbled together a solution to the horrendous intellectual property situation they found themselves in. After all, while Android was strategically important to Google as a means of preventing another operating system (like Windows or iOS) from weakening their great influence on the mobile internet, one could argue that most of that strategic value came from just making Android available and keeping it updated. It wasn’t immediately obvious to me that it would make dollars-and-cents sense for Google to spend a lot of cash fighting battles that, frankly, Samsung, HTC, LG, and the others should have been prepared to fight on their own. That Google did this at all sends a powerful message to the ecosystem that the success of Android is critical to Google and that it will even go so far as to engage in “unnatural acts” (Google getting into the hardware business!?) to make it so.
It will be interesting to observe Google’s IP strategy going forward. Although it’s not perfect, Google has taken a fairly pro-open-source stance when it comes to intellectual property. Case in point: after spending over $100M on video codec maker On2, Google moved to make On2’s VP8/WebM codec freely available for others to integrate as an alternative to the license-laden H.264 codec. Sadly, because of the importance of building up a patent armory in this business, I doubt Google will do something similar here – instead, Google will likely hold on to its patent arsenal and either use it as a legal deterrent to Microsoft/Apple/Nokia or find a smart way to license the patents to key partners to help bolster their legal cases. It will be interesting to see how Google changes its intellectual property practices and strategy now that it’s gone through this. I suspect we will see a shift away from the openness that so many of us loved about Google.
I don’t put much stock in speculation that Motorola’s hardware business will just be spun out again, for a number of reasons:
I’m unaware of any such precedent where a large company acquires another large one, strips it of its valuable intellectual property, and then spins it out. Not only do I think regulators/antitrust guys would not look too kindly on such a deal, but I think Google would have a miserable time trying to convince new investors/buyers that a company stripped of its most valuable assets could stand on its own.
Having the Motorola business gives Google additional tools to build and influence the ecosystem. Other than the Google-designed Nexus devices and the requirements Google imposes on its manufacturing partners to support the Android Market, Google actually has fairly little influence over the ecosystem and the specific product decisions that OEMs like Samsung and HTC make. Otherwise, we wouldn’t see so many custom UI layers and so much bloatware bundled on new Android phones. Having Motorola in-house gives Google valuable hardware chops that it probably did not have before (which will be useful in building out new phones/tablets, new use cases like the Atrix’s (not very successful but still promising) webtop, its accessory development kit strategy, and Android@Home), and lets it always have a “backup option” to release a new service/feature if the other OEMs are not being cooperative.
Motorola’s strong set-top box business is not to be underestimated. It’s pretty commonly known that GoogleTV did not go the way that Google had hoped. While it was a bold vision and a true technical feat, I think this is another case of Google not focusing on the product management side of things. Post-acquisition, however, Google might be able to leverage Motorola’s expertise in working with cable companies and content providers to create a GoogleTV that is more attuned to the interests/needs of both consumers and the cable/content guys. And, even if that is not in the cards, Motorola may be a powerful ally in helping to bring more internet video content, like the kind found on YouTube, to more TVs and devices.
On Wednesday, August 31st, DC Comics will launch a historic renumbering of the entire DC Universe line of comic books with 52 first issues, including the release of JUSTICE LEAGUE by NEW YORK TIMES bestselling writer and DC Entertainment Chief Creative Officer Geoff Johns and bestselling artist and DC Comics Co-Publisher Jim Lee. The publication of JUSTICE LEAGUE issue 1 will launch day-and-date digital publishing for all these ongoing titles, making DC Comics the first of the two major American publishers to release all of its superhero comic book titles digitally the same day as in print.
While the decision to do this has been met with some controversy amongst the existing comic book fan community, I think this is a great idea.
But, the truth is I think this is the sort of thing which the comic book industry needs to do to stay relevant. For too long, the industry has taken the easy way out:
Cater to the most hardcore of fans: It’s the classic Innovator’s Dilemma problem: it’s always easier to sell the same customers more and more profitable products than it is to pull in less profitable customers. In the short-term, this is fine – but over the long-term, this can be a disaster as the industry sees its user base dwindle. And, as I’ve mentioned before, as much of a fan as I am, even I’m finding the medium less appealing as this trend plays itself out.
Recycle old stories to make movies, TV shows, and cartoons: if the traditional comic medium itself is in danger (as I think it is if things keep going the way they are), the comics industry has adapted by pursuing movies, TV shows, and cartoons (case in point: Smallville). Now don’t get me wrong – I love that there are so many comic book-related movies. There is nothing a comic book fan wants more than to have other people interested in the characters and the stories (and, if there are fans out there who are anything like me, they revel in being able to answer questions about the backstories and characters involved). But the problem with this approach is twofold. First, this sort of medium is a classic long-tail business: it’s great if you get a hit, but it’s really hard to make sure you have a hit – and, as a result, it’s really hard to bet the future of your business on one. Second, unless I’m mistaken, the vast majority of the people watching these movies and shows are not becoming comic book/collectibles buyers or comic convention goers.
To me, the way forward for the industry is something that is hard and may even partially alienate the existing hardcore fanbase: it’s to disrupt themselves. Yes, it’s easier to keep the hardcore fans happy and buying, but not only is there a lot more money to be made by catering to a wider fanbase – if the big guys don’t disrupt themselves, sooner or later, something else will.
And that’s why I think that DC’s announcement is promising, for what I think the big guys need to do is:
Embrace digital: Yes, your traditional business model is tied to Diamond for distribution. But digital will change this business the way it’s changed the music, movie, and newspaper industries – and unless you are quick to embrace it intelligently, you may find yourself in a very poor position.
Change your publishing schedule: Have your stories come out on a weekly basis, not a monthly one. The way to get people engaged (and to spend money) is to have them visit the comic store/website/digital store regularly. A bad 3-part story takes 3 months to finish with today’s monthly publishing schedule: that’s taking a huge risk that a fan will drop the book and forget to come back after 3 months. If the same 3-part story were finished in 3 weeks, then you have a different equation.
Be smart about product/pricing: Hardcore fans are willing to pay more for more. So, sell them trade paperbacks full of complex, intertwined stories and creator interviews/sketchbooks. Sell them 20-part stories which are full of cameos and references. Sell them special editions. But, for the mainstay storylines that should be accessible? Make them cheaper. After all, they’re the gateway drug to the full-on addiction :-). Think about new pricing models: how about $20 for “all you can eat” for one month on the digital comic store? Or how about buy a mainstay storyline and get 20% off of a related story? There’s plenty of room here.
Rationalize movie vs comic: I’m not a fan of twisting comic book storylines to fit movies. But, similarly, I worry about any casual comic book readers who pick up an issue and think to themselves: “what the heck?” It’s not easy, but the industry does need to find a way to bridge the two while staying truthful to both types of media. One idea: a free comic book “guide” (with movie stub) to smooth over the differences between the movie version and the comic version?
Get back to the character: Too often today, comic book storylines are about packing in every possible way for the world to end into a storyline. While this is something that is cool every so often, doing it too often is overkill and is oftentimes done at the expense of developing the character of the hero, the villain(s), and the supporting characters. Perry White, Aunt May, Alfred Pennyworth, and Jane Foster are not Lex Luthor, the Green Goblin, Joker, or Loki per se, but they are still important and deserve to be more than just the “scenery”.
There’s still more detail to be revealed in DC’s reboot, and it’s still not clear to me how much they’ll live up to what I’ve outlined. Sadly, the pessimist in me is pretty sure they’ll fall short. But, as a fan of the medium, I hope they don’t.
There’s a recent Slate article which is making the rounds, especially amongst those who believe that pharmaceutical and biotechnology companies need to make less money and be more heavily regulated. The core conclusion is that the cost of R&D for a drug is not ~$1 billion as a widely cited study from 2003 established, but actually closer to $40-60 million.
The first is how anyone could have published a study like this – off from the most recent/best estimate by a factor of 20x – without expecting to see clear evidence of it reflected in the reality of the bio/pharma industry. Among the reasons why I think this estimate is ridiculous:
If the estimate were true, we’d see a lot more biotech startups (which tend to raise around that much in advance of Phase II trials) as there’d not only be a lot greater capacity for venture capital investors to fund them but also a greater likelihood that these startups can hit IPO/critical market without being bought out at an earlier stage.
If the estimate were true, I’d expect that instead of an “R&D productivity crisis”, we’d have a glut of new drugs coming out each and every year.
If the estimate were true, I’d expect that a company like Pfizer would, instead of boosting dividends and buying back shares, try to funnel more money into R&D – after all, isn’t it super cheap to develop a drug?
The second is more fundamental – why are people so focused on attacking pharma/biotechs on the purported difficulty/costliness of their R&D? I think folks like Marcia Angell who maintain that all the “real work” happens in government-funded universities and research institutions fail to understand the huge amount of screening, development, testing, and research that goes into turning something that’s only fit for an academic paper into something that’s sufficiently manufacturable, well-tested, and well-characterized to actually be useful at large scale in human beings. But even ignoring that oversight, in my mind this is attacking the wrong facet of the drug industry. From a societal well-being perspective, shouldn’t we want to praise them for their R&D? Maybe it’s duplicative, maybe it doesn’t even add that much value, maybe it’s not even as expensive as they say it is – but I think reasonable people will agree that more R&D dollars can be a good thing. To me, what we should focus on is not their R&D, but the games they play with advertising & marketing, with the intellectual property system, with not properly reporting things, etc. There are plenty of worthy things to attack the industry for – let’s stop attacking them on something we actually want to see happen.
I got back from a trip to Asia late last week. Like all my trips abroad, it was very eye-opening, and I am definitely very grateful for my fund’s cross-Pacific approach for giving me a chance to build a more international perspective on venture capital and business.
A few thoughts:
I was very pleasantly surprised by Beijing. Now, obvious caveat: I stayed at a fancy hotel, threw a swanky corporate party, and was mostly chauffeured around the city on the fund’s dime – so obviously, I’m getting a semi-biased perspective on Beijing. With that said, there’s just no getting around the fact that China is big and booming. Before this trip, my only exposure to China’s rapid growth was in statistics and business reports. So, it registered on only a superficial, intellectual level. But to see it up-close was a completely different matter. Parts of Beijing are as bustling and well-developed as a New York or San Francisco, with well-dressed businesspeople and shoppers moving about quickly to get to their next destination. The infrastructure quality within the city is significantly better than I had pictured (and, likely far better than many poorly-planned cities in the West) and, aside from the air quality and traffic, the city was far cleaner and better organized than I expected – especially when comparing it to my experiences in India.
No business can afford to not have a presence in China. Everyone has their favorite China market statistic – mine, given my venture capital/tech leanings, is this: there are more internet users in China (~400 million) than there are people in the US. While, for obvious reasons, I’ll always have a soft spot for Taiwan, you just can’t get around the size and growth of the Chinese market. We’ll all need to know Chinese and Chinese business culture some day in the same way that everyone today needs to know English and US business culture.
There’s an “energy” around China that you only see in places like Silicon Valley. This is very difficult for me to describe, but in places where there is a lot of entrepreneurial energy, the people have a palpable “hunger” for opportunity that you don’t see elsewhere. You can sense this amongst entrepreneurs in the Silicon Valley, and you can definitely feel this in China. The entrepreneurs that I met were young and relatively inexperienced, but very eager and very “hungry” for a chance to prove themselves. It’s a great feeling, and I think it bodes very well for China’s future.
Tiananmen Square and the Forbidden City are massive. For some reason, I had always thought that they weren’t particularly large. My mistake – and my feet didn’t forgive me for it. It was a gorgeous sight, though, and as someone who wrote a report about the two greatest emperors of the Qing Dynasty, a very rare chance for me to see some of the actual history in front of me.
Tokyo is amazing. This is my second visit to Japan, and as a self-admitted Japan-ophile, really, you don’t get much better Japanese anime, art, service, or food anywhere else than in Tokyo. The skyline is gorgeous (I got to check out the Tokyo Tower skydeck), the electronics store district (Akihabara) is very cool, the karaoke scene is phenomenal, and the food is spectacular. Oh, and I must have a Japanese-style toilet some day. Heated seats. Built-in bidet. Electronic controls (some even with motion sensors so it knows when to lift up the seat). Or, in other words: as close to heaven as you can get while in the bathroom.
Tokyo is like one big, contiguous shopping mall. Everywhere you go: shops, stores, billboards, and posters. I went to a particularly well-known shopping complex in Tokyo’s fashion district (Harajuku) called LaForet Grand Bazaar (see right), and it was like traveling to a different planet, where the population was 95% obsessive shoppers, 5% shouting store clerks. One store I passed had a very interesting system: each shopper could only browse/shop while a 5-minute song was playing and, after the song finished, had to buy what they kept. Even I couldn’t resist the allure: I went to a Uniqlo store in the Ginza district and bought a dress shirt.
Japan needs to change. The Japan-ophile in me wants to see Japan continue to thrive as a major economic force, but speaking to some locals has cemented the conclusions of my own research: I’m concerned about Japan’s future growth prospects. While Japan will maintain its lead in consumer electronics and automobiles for some time, there are definitely signs that this leadership could dwindle. The entrepreneurial spirit that I sensed in China was not as palpable in Japan, where the people place a lot more emphasis on rigid hierarchies and social structures. That translates into less risk-taking, fewer bold strategies, an unwillingness to fire unproductive workers and kill off bad projects, and a business culture which probably places too much emphasis on loyalty and connections rather than results. These are sweeping generalizations, of course – exceptions exist, like Japan’s electric vehicle and mobile social networking industries, which are booming and full of creative spirit. But I do worry that the Japanese economy will continue to experience slow growth or decline if it doesn’t find a way to make the bold changes it will need.