
Tag: Editorial

This generation’s Superman

One of my favorite comic blogs is CBR’s Comics Should be Good. In a recent post, the blog pointed out something which I hadn’t realized before:

Okay, this is just a weird thought that struck me after I got the news that Smallville had been renewed yet again.

I suddenly realized that there are almost as many hours of Smallville on film as there are of all the other Superman TV adaptations combined.

Boggles the mind, doesn’t it? What really pulled me up short was the startling notion that for two or three generations of grade-school kids, Smallville is their primary — maybe only — experience of any kind of Superman story at all.

It makes you wonder: is the correct interpretation that comics is dying and being replaced by a lesser art form? Or that it is simply evolving to tell its stories using a new medium? Or maybe a little bit of both?

My take is that the comics industry made a big mistake years ago by investing in creative directions which became impossible for lay-people to follow. As devoted as I am to the medium, even I find a lot of today's stories difficult to follow and lacking the character work that made the originals so memorable. Take Superman – when's the last time a good Perry White story was written? Or a good Jimmy Olsen? You probably have to go back over 10 years to find them.

With Smallville, not only is the barrier to entry much lower (although after ten years, even Smallville has started to fall into continuity traps), it's also brought back the romantic soap opera and angst-ridden introspection that have served series like the X-Men and Spider-Man so well, all wrapped up with an impressive array of special effects and modern television production on a mostly weekly schedule.

I hope the industry sees this both in terms of lessons to be learned about how to revitalize the original medium (make it more frequent than monthly, add back the supporting cast, reduce the dependence on excessive continuity, add back real character drama), and in terms of how they can continue to adapt their rich stories for the future.

(Image credit)


Education bubble?


One of my favorite RSS feeds is Business Insider's Chart of the Day. This chart came up a few weeks ago and made me think. It's quite staggering that college tuition has outpaced inflation as rapidly as it has (~10x vs. ~4x over 30 years). The natural question: has the value of a college education increased 2.5x? If it has, then there isn't necessarily a bubble. There are three ways to think about the value of a college education in evaluating this question.

  1. The most obvious is the average income comparison between an average high school graduate (only) and an average college graduate (only). Using US government statistics, we find that in 1978, a college-educated graduate made ~55% more than an individual with only a high school education. In 2008 (the last year I have data for), a college-educated graduate made ~87% more – which amounts to a ~60% increase in the gap (and even holds true if I adjust for the change in disposable income using the individual, under-65 poverty line as the base level of expenditure), but doesn't quite match the 2.5x increase in tuition costs over inflation (a rough version of this arithmetic follows this list).
  2. Related to the above are the other effects of college education on lifetime income. It's been demonstrated that college-educated individuals live longer than those who aren't, and it's commonly understood that college is a prerequisite for other income-boosting opportunities like graduate school. One's lifetime income is also much more likely to trend upwards in life with a college education than with only a high school diploma. But there's an interesting wrinkle here: does college education make you live longer and get promoted, or is it just an indirect way of finding individuals who tend to be wealthier and more intelligent? I unfortunately don't have the data (or the time) to calculate the impact, but I would hazard a guess that it's probably unlikely that you are extending your working life by the 56% needed to close the gap between the higher income and the higher tuition.
  3. The last "source of value" for a college education is the subjective value of meeting lifelong friends, having new experiences, and expanding your intellectual horizons. Just because it's extremely intangible doesn't mean there isn't enormous value here. But the question to ask is not whether college has large intangible value (of course it does), but whether you believe that intangible value has increased significantly since 1978 (by enough to cover the rest of the 2.5x, since the increase in lifetime income likely isn't enough to explain the 2.5x increase in tuition cost relative to inflation). I personally think that's being overly aggressive.
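
For the curious, here's a rough back-of-envelope version of the arithmetic above – just a sketch plugging in the approximate figures quoted in this post (the ~10x/~4x growth multiples and the ~55%/~87% wage premiums), not precise government data:

```typescript
// Back-of-envelope check on the approximate figures quoted above.

// Tuition vs. inflation, roughly 1978-2008
const tuitionGrowth = 10;   // college tuition up ~10x
const inflationGrowth = 4;  // general prices up ~4x
const realTuitionIncrease = tuitionGrowth / inflationGrowth; // ≈ 2.5x in real terms

// College wage premium over a high-school-only graduate
const premium1978 = 0.55; // earned ~55% more in 1978
const premium2008 = 0.87; // earned ~87% more in 2008
const premiumGrowthPct = (premium2008 / premium1978 - 1) * 100; // ≈ 58%, i.e. ~60%

console.log(`Real tuition increase: ~${realTuitionIncrease}x`);
console.log(`Growth in the college wage premium: ~${premiumGrowthPct.toFixed(0)}%`);
// The premium widened by ~60%, well short of the ~150% (2.5x) real increase in tuition.
```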

I can't pretend to be the expert on this, but at this point, I'd conclude that whereas the value of a college education has increased dramatically (maybe even by as much as 75-100%), it hasn't gone up enough to justify a 2.5x increase relative to inflation. If you accept that conclusion, then the obvious question is: what is causing tuition to increase so much? Two possible explanations jumped out at me:

  • Tuitions haven’t caught up with the value of a college education: While my conclusion above was that the value of a college education hasn’t increased 2.5x from 1978 until today, one possible explanation of tuition price is that the value of a college education is still higher than what students pay for it, and, if that were true, we would expect tuitions to continue to increase.
  • Tuitions are higher than the value of a college education and are being propped up by a combination of two things:
    1. Tuitions are being boosted by a subsidy cycle: Free money from the government is one of the easiest ways to get price increases. While we often think of the US government's subsidized loans and tax write-offs for college tuition as a means to help more people attend college, an equivalent way of thinking about it is that they give colleges free rein to increase prices without worrying about reducing the number of people who enroll. In a "normal" market, this would be the end of it (slightly higher prices, but more people entering college), but because college education has become such a political affair (every family always thinks it's "too expensive", and every politician promises to make it cheaper), we always get more subsidies from more politicians, which feeds back into the original problem.
    2. Families have bad expectations around the value of a college education: One explanation, which is making the rounds of the policy wonk blogosphere, is that this is all a big bubble. In the same way that people felt tech stocks in the late 90s and real estate in the 2000s were a good buy, it's entirely possible that families have uninformed expectations about the value of a college education and thus believe the higher tuitions are worth it. If this is true, then we could be on a collision course with a generation of families (like in the 80s) suddenly realizing "the emperor (of college tuition) has no clothes!" (which might be precipitated by a long stagnation/decline in the wages of college-educated individuals), followed by a potential crash in tuitions.

What should be done? Truthfully, it depends on which of these conclusions is correct. If it's just that tuitions haven't caught up with the value of a college education, then it makes sense that tuitions are increasing, and it may even make sense to increase tuition grants/subsidized loans (as it likely means not enough people are getting a college education because they couldn't get access to the capital to pay for it).

However, if tuitions are over-valued, then it would be advisable to end the college tuition subsidy cycle and implement policies which could "soften the blow" of a coming crash in college-education value and tuitions.

But to make the right policy decisions and tradeoffs, it's important to first understand which of the two explanations is correct. And that's a whole 'nother set of data and analyses…

(Image credit – Chart of the Day)


My Take on Google/Verizon’s Net Neutrality Proposal

If you’ve been following the tech news at all, you’ll know about the great controversy surrounding the joint Google/Verizon proposal for net neutrality. Recently, Google came out with a defense of its own actions, and I thought I’d weigh in.

First, I think the community overreacted. There is a lot to not like about Google’s stance, but I think there are a few things to keep in mind:

  1. There are political limits to what strict net neutrality promoters can achieve. It's a fact of life that the telco providers have deep pockets (partly a consequence of their government-granted monopoly status) and stand to gain or lose a great deal from the outcome of net neutrality legislation, and thus wield enormous influence over broadband legislation. It's also a fact of life that the path towards net neutrality is more easily served by finding common ground which preserves the most important aspects of net neutrality than by fighting the telco providers kicking and screaming the whole way. What I mean to say is: we should not criticize Google for dealing with a telco or for making compromises on net neutrality. That's an unreasonable stance typically held by people who don't have to actually make policy. With that said, we should criticize Google for making the wrong compromises.
  2. I don't think Android was the issue here. Many people may disagree with me, but I don't believe Android has much direct impact on Google's bottom line. From my perspective, Google's commitment to Android is about two things: (a) preventing Apple from dominating the smartphone market (and potentially the mobile ad market) by empowering a bunch of phone manufacturers to provide devices of comparable (or better) quality and (b) forcing all mobile phone platforms to have decent-enough web surfing/app-running capability (by providing a free alternative which does) so that Google can provide its services effectively on those platforms and serve more ads. If anything, Google's incentives here are better aligned with net neutrality than most companies': it benefits the most if there are more people using the web, and the best way to push that is to encourage greater content diversity. While Google TV may change Android into a true profit center, it wouldn't be for several years, and so I think it'd be a stretch to say it is a big enough deal to significantly impact Google's political policy moves. I can buy the argument that Google pushed a deal with Verizon because they have a closer relationship via Android, but I think suggesting that Google subverted net neutrality as a concession to Verizon on Android is taking it too far.
  3. I think Google did a good job of emphasizing transparency. The proposal emphasizes that telcos need to be held to higher standards of transparency, something which is sorely lacking today, and something which we definitely want to and need to see in the future.

With that said, though, there are definitely things to criticize Google’s agreement on:

  1. Wireless: I can sympathize with the argument that wireless is different from wired networks and could require more aggressive traffic management. I even went so far as to call that out the last time I talked about this. But, given the importance of wireless broadband in the future, it doesn't make any sense to exclude explicit protections around neutrality for wireless. The arguments around competition and early development strike me as naive at best and Verizon PR at worst – whatever provisions exist to protect neutrality for wired networks should be applied in the wireless space. Competition and the development of more open gardens may justify some compromise, but not throwing caution to the wind.
  2. Wording weirdness: I'm concerned that the proposal contains phrasings which seem to give telcos avenues to back out of neutrality, like "prioritization of Internet traffic would be presumed inconsistent with the non-discrimination standard, but the presumption could be rebutted", without clearly explaining what counts as reasonable grounds for rebuttal. Even the parts of the compromise which I accept as valid (i.e., letting telcos do basic network quality-of-service management, prioritize government/emergency traffic, fight off malware/piracy, etc.) were framed in terms of what telcos are permitted to do, without clearly laid-out restrictions (e.g., that network service quality management must be subject to FCC review). For a document meant to safeguard neutrality, it sure seems to go out of its way to stipulate workarounds…
  3. "Additional online services": I understand (and agree with) the intent – carriers may want to provide special services which they want to treat differently to meet their partners'/customers' needs, like a special gaming service or secure money transfer. The language, however, is strange, and it's not immediately clear to me that there aren't "back doors" for the telcos to use to circumvent neutrality restrictions.

Truthfully, I think most of the document rings true as a practical compromise between the interests and needs of telcos (who would bear the brunt of the costs and should be incentivized to improve network quality and provide meaningful services and integration) and the interests of the public. But, I would ask Google or whatever legislator/FCC member who has a voice on this to do two things:

  • Not compromise on content neutrality on any medium. The value of the internet as a medium and as a platform of innovation comes from the ability of people to access all sorts of applications and content without that access being discriminated against by the network operator. Not sticking to that is risking slower innovation and choking off a valuable source of commentary/opinions, especially in a setup where large local players hold enormous market power because of their government-granted monopoly status.
  • Create clear (but flexible enough to be future-proof) guidelines for acceptable behavior, with clear adjudication and clear punishments. No squirrely word weirdness. No "back door" language. You don't need to browbeat the telcos, but you don't need to coddle them either.

Suggestion to Major Blogs and Websites

If I can make a suggestion to American TV studios to move towards a miniseries system, why stop there?

I recently spent a couple of hours organizing and pruning the many feeds that I follow in Google Reader. It's become something of a necessity as my interests and information needs (and the amount of time I have to pursue them) change. But this time, as I tried to figure out which news sites to follow, I found it easier to drop websites which didn't have sub-feeds.

Most major blogs and websites today use RSS (Really Simple Syndication) feeds to let subscribers know when the site's been updated without having to check the site constantly. While this is extremely convenient, the enormous number of updates that major websites like the New York Times issue per day makes subscribing to their RSS feed an exercise in drinking from the firehose.

So, what to do? Thankfully, some major websites (the New York Times included) have figured this out and now offer sub-feeds which carry only a fraction of the total content, so that a subscriber can not only avoid RSS information overload but also get a feed focused on the information that matters to him/her. The New York Times, for instance, lets you get RSS updates from just its tech column, the Bits Blog, or even from just the Venture Capital section of its Dealbook coverage.

Sadly, not every website is as forward-looking as the New York Times. Many sites don't offer any sort of sub-feed at all (much to my dismay). Many sites that do offer them provide only a paltry selection with very limited options.

And, given the choice between an information deluge which I mostly don't want and an alternative information source which gives me only the information I do want, I think the answer is obvious. As a result, with the exception of two feeds, I dropped from Google Reader every blog that posted more than once a day and didn't give me a targeted sub-feed option.

In a world where it's getting harder and harder for publications to hold on to readers, you'd think these sites would learn to offer more flexibility in how their content is pushed (especially when such flexibility is practically free to support if you have even a half-decent web content management system).

But, I guess those sites weren’t interested in keeping me as a reader…

(Image credit)


Suggestion to American TV studios

The past few weeks I’ve been eagerly watching a variety of Japanese television, and I noticed something very peculiar (for an American).

The few Japanese dramas I've seen actually end. They build to an end and then just stop. They don't drag on for season after season, letting individual seasons suffer from actor/actress negotiations and writers having off-years. They don't end each season on yet another ridiculous cliffhanger. They have a well-defined endpoint and, by building to it, they keep the story fresh and force it to have a suitable length.

This isn’t to say that the Japanese dramas I’ve seen don’t go on for multiple seasons. But, I would assert that sequels (should) only happen when there is sufficient audience demand for one and when the storytellers think they have another story to tell.

Contrast that with American TV – the seasons are built not for any plot reason, but because a TV studio needs to have sufficient content to fill the months of September to May/June. Seasons are renewed not because of a deep creative reason or even necessarily because of audience demand, but because of a misguided sense of momentum. This doesn't always turn into a disaster (I believe House MD, despite its traditional season format, has maintained a reasonable level of quality each season through the quality of its casting and writing), but even series that I thoroughly enjoy, like Smallville, have had their fair share of "useless filler" episodes and bad seasons.

In my humble opinion, it’d be far better to adopt the miniseries format. It prevents writers from creating ridiculous plot devices to keep a story going way past its prime (and past when its actors begin leaving for greener pastures), and it maintains a quality of production which only a purpose-driven creative process can lead to.

Given the challenges of the TV business, I'd say it's at least worth a shot for an American TV studio to try.


What is with Microsoft’s consumer electronics strategy?

Regardless of how you feel about Microsoft's products, you have to appreciate the brilliance of their strategic "playbook":

  1. Use the fact that Microsoft’s operating system/productivity software is used by almost everyone to identify key customer/partner needs
  2. Build a product which is usually only a second/third-best follower product but make sure it’s tied back to Microsoft’s products
  3. Take advantage of the time and market share that Microsoft’s channel influence, developer community, and product integration buys to invest in the new product with Microsoft’s massive budget until it achieves leadership
  4. If steps 1-3 fail to give Microsoft a dominant position, either exit (because the market is no longer important) or buy out a competitor
  5. Repeat

While the quality of Microsoft's execution of each step can be called into question, I'd be hard pressed to find a better approach than this one, and I'm sure much of their success can be attributed to finding good ways to repeatedly follow this formula.

It's for that reason that I'm completely bewildered by Microsoft's consumer electronics business strategy. Instead of finding good ways to integrate the Zune, XBox, and Windows Mobile franchises together or with the Microsoft operating system "mothership" the way Microsoft did by integrating its enterprise software with Office or Internet Explorer with Windows, these three businesses largely stand apart from Microsoft's home field (PC software) and even from each other.

This is problematic for two big reasons. First, because non-PC devices are outside of Microsoft’s usual playground, it’s not a surprise that Microsoft finds it difficult to expand into new territory. For Microsoft to succeed here, it needs to pull out all the stops and it’s shocking to me that a company with a stake in the ground in four key device areas (PCs, mobile phones, game consoles, and portable media players) would choose not to use one of the few advantages it has over its competitors.

The second, and most obvious (to consumers at least), is that Apple has not made this mistake. Apple's iPhone and iPod Touch product lines are clear evolutions of its popular iPod MP3 players, which integrate well with Apple's iTunes computer software and iTunes online store. The entire Apple line-up, although each product is a unique entity, has a similar look and feel. The Safari browser that powers the internet experience on Apple computers is, basically, the same one that powers the iPhone and iPod Touch. Similarly, the same online store and software (iTunes) which lets iPods load themselves with music lets iPod Touches/iPhones load themselves with applications.

That neat little integrated package not only makes it easier for Apple consumers to use a product, but the coherent experience across the different devices gives customers even more of a reason to use and/or buy other Apple products.

Contrast that approach with Microsoft's. Not only are the user interfaces and product designs for the Zune, XBox, and Windows Mobile completely different from one another, they don't play well together at all. Applications that run on one device (be it the Zune HD, a Windows PC, an XBox, or a Windows Mobile phone) are unlikely to be able to run on any other. One might be able to forgive this if it were just PC applications which had trouble being "ported" to Microsoft's other devices (after all, apps that run on an Apple computer don't work on the iPhone and vice versa), but even the devices you would expect to work well together (i.e., the Zune HD and the XBox, because they're both billed as gaming platforms, or the Zune HD and Windows Mobile, because they're both portable products) don't. Their application development processes don't line up well. And, as far as I'm aware, the devices have completely separate application and content stores!

While recreating the Windows PC experience on three other devices is definitely overkill, were I in Ballmer's shoes, I would make a few simple recommendations which I think would dramatically benefit all of Microsoft's product lines (and I promise they aren't the standard Apple/Linux fanboy's "build something prettier" or "go open source"):

  1. Centralize all application/content “marketplaces” – Apple is no internet genius. Yet, they figured out how to do this. I fail to see why Microsoft can’t do the same.
  2. Invest in building a common application runtime across all the devices – Nobody’s expecting a low-end Windows Mobile phone or a Zune HD to run Microsoft Excel, but to expect that little widgets or games should be able to work across all of Microsoft’s devices is not unreasonable, and would go a long way towards encouraging developers to develop for Microsoft’s new device platforms (if a program can run on just the Zune HD, there’s only so much revenue that a developer can take in, but if it can also run on the XBox and all Windows Mobile phones, then the revenue potential becomes much greater) and towards encouraging consumers to buy more Microsoft gear
  3. Find better ways to link Windows to each device – This can be as simple as building something like iTunes to simplify device management and content streaming, but I have yet to meet anyone with a Microsoft device who hasn’t complained about how poorly the devices work with PCs.

(Image credit – Ballmer) (Image credit – Zune HD) (Image credit – Apple store)


Web 3.0

About a year ago, I met up with Teresa Wu (of My Mom is a Fob and My Dad is a Fob fame). It was our first "Tweetup", a word used by social media types to refer to meet-ups between people who had previously only been friends over Twitter. It was a very geeky conversation (and what else would you expect from people who referred to their first face-to-face meeting as a Tweetup?), and at one point the conversation turned to our respective visions of "Web 3.0", which we loosely defined as what would come after the current, also-loosely-defined "Web 2.0" wave of today's social media websites.

On some level, trying to describe "Web 3.0" is as meaningless as applying the "Web 2.0" label to websites like Twitter and Facebook. It's not an official title, and there are no set rules or standards on what makes something "Web 2.0". But the fact that there are certain shared characteristics between popular websites today and their counterparts from only a few years ago gives the "Web 2.0" moniker some credible intellectual weight; and the fact that there will be significant investment in a new generation of web companies lends commercial weight to the need for a good conception of "Web 2.0" and a good vision for what comes after it ("Web 3.0").

So, I thought I would get on my soapbox here and list out three drivers which I believe will define what “Web 3.0” will look like, and I’d love to hear if anyone else has any thoughts.

  1. A flight to quality as users start struggling with ways to organize and process all the information the “Web 2.0” revolution provided.
  2. The development of new web technologies/applications which can utilize the full power of the billions of internet-connected devices that will come online by 2015.
  3. Continued browser improvements that enable new and more compelling web applications.

I. Quality over quantity

In my mind, the most striking change in the Web has been the evolution of its primary role. Whereas "Web 1.0" was, generally speaking, oriented around providing information to users, "Web 2.0" has been centered around user empowerment, both in terms of content creation (blogs, YouTube) and information sharing (social networks). Now, you no longer have to be the editor of the New York Times to have a voice – you can edit a Wikipedia page or upload a YouTube video or post your thoughts on a blog. Similarly, you no longer have to be at the right cocktail parties to have a powerful network – you can find like-minded individuals over Twitter or LinkedIn or Facebook.

The result of this has been a massive explosion in the amount of information and content available for people and companies to use. While I believe this has generally been a good thing, it's led to a situation where more and more users are being overwhelmed with information. As with the evolution of most markets, the first stage of the Web was simply about getting more – more information, more connections, more users, and more speed. This is all well and good when most companies/users are starving for information and connections, but as the demand for pure quantity dries up, the attention will eventually shift to quality.
While there will always be people trying to set up the next Facebook or the next Twitter (and a small percentage of them will be successful), I strongly believe the smart money will be on the folks who can take the flood of information now available and turn it into something more useful, whether that's targeting ads or simply helping people who feel they are "drinking from a fire hose". There's a reason Google and Facebook invest so many resources in building ads which are targeted at the user's specific interests and needs. And I feel that the next wave of Web startups will be about more than simply tacking "social" and "online" onto an existing application. It will require developing applications that can actually process the wide array of information into manageable and useful chunks.

II. Mo’ devices, mo’ money

A big difference between how the internet was used 10 years ago and how it is used today is the number of devices which can access it, led by new smartphones, gaming consoles, and set-top boxes. Even cameras have been released with the ability to access the internet (as evidenced by Sony's Cybershot G3). While those of us in the US think of the internet as mainly a computer-driven phenomenon, in much of the developing world and in places like Japan and Korea, computer access to the internet pales in comparison to access through mobile phones.

The result? Many of these interfaces to the internet are still somewhat clumsy, as they were built to mimic PC-type access on a device which is definitely not a PC. While work by folks at Apple and Google (with the iPhone and Android browsers) and at shops like Opera (with Opera Mini) and Skyfire has smoothed some of the rougher edges, there is only so far you can go in mimicking a computer experience on a device that lacks the memory, processing power, and screen size of a larger PC.

This isn't to say that I think the web browsing experience on an iPhone or some other smartphone is bad – I'm actually incredibly impressed by how well the PC browsing experience transferred to the mobile phone, and I believe web developers should not be forced to make completely separate web pages for separate devices. But I do believe the real potential of these new internet-ready devices lies in what makes each device unique. Instead of more attempts to copy the desktop browsing experience, I'd like to see more websites use the iPhone's GPS to give location-specific content, or use the accelerometer to control a web game. I want to see social networking sites use a gaming console owner's latest scores or screenshots. I want to see cameras use the web to overlay the latest Flickr comments on the pictures you've taken or to do augmented reality. I want to see set-top boxes seamlessly mix television content with information from the web. To me, the true potential of having 15 billion internet-connected devices is not 15 billion PC-like devices, but 15 billion devices each with its own features and capabilities.
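
To make that a bit more concrete, here's a minimal sketch of what a web page "using the device's own capabilities" could look like, written against the standard browser geolocation and device-orientation APIs (the content-loading and game-control handlers are hypothetical placeholders, not any particular site's code):

```typescript
// Minimal sketch: a web page using device-specific capabilities instead of
// just mimicking the desktop. Relies on the standard browser Geolocation API
// and the "deviceorientation" event; the handlers below are hypothetical.

function showNearbyContent(lat: number, lon: number): void {
  // Placeholder: fetch and render content relevant to the user's location.
  console.log(`Loading content near ${lat.toFixed(3)}, ${lon.toFixed(3)}...`);
}

// Location-specific content via GPS (or network-based) positioning
if ("geolocation" in navigator) {
  navigator.geolocation.getCurrentPosition(
    (pos) => showNearbyContent(pos.coords.latitude, pos.coords.longitude),
    (err) => console.warn("Location unavailable:", err.message)
  );
}

// Tilt-based input for a web game, via the accelerometer/gyroscope
window.addEventListener("deviceorientation", (event) => {
  const tiltLeftRight = event.gamma ?? 0; // degrees, roughly -90 to 90
  const tiltFrontBack = event.beta ?? 0;  // degrees, roughly -180 to 180
  // Placeholder: feed the tilt values into the game's controls.
  console.log(`tilt: x=${tiltLeftRight.toFixed(1)}, y=${tiltFrontBack.toFixed(1)}`);
});
```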

III. Browser power

While the Facebooks and Twitters of the world get (and deserve) a lot of credit for driving the Web 2.0 wave of innovation, a lot of that credit actually belongs to the web standards and browser development pioneers who made these innovations possible. Web applications like the office staples Gmail and Google Docs would have been impossible without new browser techniques like AJAX and more powerful Javascript engines like Chrome's V8, Webkit's JavascriptCore, and Mozilla's SpiderMonkey. Applications like YouTube and Picnik and Photoshop.com depend greatly on Adobe's Flash working well with browsers. In many ways, then, it is web browser technology that is the limiting factor in the development of new web applications.
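
For readers who haven't written web code, here's a tiny sketch of the AJAX pattern that makes apps like Gmail feel like desktop software – the page asks the server for data in the background and updates itself in place, with no full reload. (The "/api/inbox" endpoint and the "inbox" element are hypothetical stand-ins, not Gmail's actual internals.)

```typescript
// Minimal sketch of the AJAX pattern: fetch data asynchronously and update the
// page in place, with no full page reload. The endpoint and element ID here
// are hypothetical examples.

function refreshInbox(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/inbox");
  xhr.onload = () => {
    if (xhr.status === 200) {
      const messages: { from: string; subject: string }[] = JSON.parse(xhr.responseText);
      const list = document.getElementById("inbox");
      if (list) {
        // Rebuild only the inbox list; the rest of the page is left untouched.
        list.innerHTML = messages
          .map((m) => `<li><strong>${m.from}</strong>: ${m.subject}</li>`)
          .join("");
      }
    }
  };
  xhr.send();
}

// Poll for new mail every 30 seconds without reloading the page.
setInterval(refreshInbox, 30_000);
```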

Is it any wonder, then, that Google, who views web applications as a big piece of its quest for web domination, created a free browser (Chrome) and two web-capable operating systems (ChromeOS and Android), and is investigating ways for web applications to access the full processing power of the computer (Native Client)? The result of Google’s pushes as well as the internet ecosystem’s efforts has been a steady improvement in web browser capability and a strong push on the new HTML5 standard.

So, what does this all mean for the shape of “Web 3.0”? It means that, over the next few years, we are going to see web applications dramatically improve in quality and functionality, making them more and more credible as disruptive innovations to the software industry. While it would be a mistake to interpret this trend, as some zealots do, as a sign that “web applications will replace all desktop software”, it does mean that we should expect to see a dramatic boost in the number and types of web applications, as well as the number of users.

Conclusion

I'll admit – I kind of cheated. Instead of giving a single coherent vision of what the next wave of Web innovation will look like, I hedged my bets by outlining where I see major technology trends taking the industry. But, in the same way that "Web 2.0" wasn't a monolithic entity (Facebook, WordPress, and Gmail have some commonalities, but you'd be hard pressed to say they're just different variants of the same thing), I don't think "Web 3.0" will be either. Or maybe all the innovations will be mobile-phone-specific, context-sensitive, super-powerful web applications…

(Image credit) (Image credit – PhD comics) (Image credit – mobile phone) (Image credit – Browser wars)


Resume/cover letter pet peeves

This may come a little late for those of you who are already in the middle of recruiting season, but having gone over in excess of 100 applications, I felt it was my duty to at least try to make a few things clear about what I absolutely hate when I'm doing resume reads (apologies if the tone is a bit aggressive, but I'm really tired of running into consulting job applications with these problems):

  • Not following the directions – This is top of the list for me, and it almost warrants a complete disqualification of an applicant from my perspective. This is your one chance to prove to the company you’re applying to that you’re a great choice – and you can’t even follow the clearly stated directions? If the instructions say, “submit two applications through two different websites” – I don’t care if one website is poorly designed, if you want the job you’re going to submit it twice. If the instructions say “include your SAT score”, and you don’t because of some sort of moral objection – I don’t care, because apparently you don’t want this job enough to overcome that objection. Please, people. Most companies aren’t trying to make this difficult.
  • Cover letters that make you sound like someone I'd rather throw darts at than work with – Don't get me wrong. To get hired you'll probably have to do some gloating. And there's nothing wrong with using your cover letter to try to explain away a deficiency or two in your application. But when your cover letter does nothing but convey that you think you're better than all the people around you, I find myself asking, "do I really want to work with someone like you?" and answering "No, I'm going to pass on this one."
  • Purpose statements – Your purpose is self-evident. It's to get a job at the firm you're applying to. If your resume doesn't establish that you have the qualifications and passion to do the job, either change your resume so that it does (which makes the "purpose statement" a waste of time/space) or don't apply for the job.
  • Skill: I’m great at Microsoft Office – Unless you are super-super-kickass at using Word and PowerPoint and Excel (and I mean, you could teach the hardcore investment bankers and consultants a thing or two), don’t mention this. Nobody really cares (when’s the last time you heard a company hire a banker/consultant/analyst because of their Office skills?), and everybody knows you’re just pretending to have more computer skills than you actually have.

Wiki-power

A week or two ago, I had a conversation with a couple of coworkers about the use of blogs/social media to gather information (and hence to justify why I spend so many hours on Google Reader). They were fairly skeptical of the ability of blogs to do the same job that the New York Times or the Economist does.

Although we didn’t settle the debate (it takes time to convince the uninitiated), I had three basic responses:

  1. Speed – Services like Twitter are now so fast that there is even some talk about leveraging Twitter as an early warning system/communication system for disasters.
  2. Insight – News agencies don't always provide insight or analysis. They relay talking points and sound bites. They wrap it up with fancy "wrapping paper", but they don't reliably provide useful insight. Blogs can be a great source of insightful commentary and background — especially for things that are out of scope or out of reach for many traditional news sources.
  3. Reputation – One issue my coworkers had was that nobody was regulating what bloggers said. “Why should you trust what a blogger has to say?” I replied, “Why should you trust what the New York Times is saying?” The answer to the original dilemma, of course, is to only read blogs which you trust. “But how do you know who to trust?” You don’t. But, while you might not know if you can trust a single random journalist from a single newspaper, thanks to the power of blogging, I can quickly read blog entries by Ezra Klein, Greg Mankiw, Megan McArdle, and Tyler Cowen and not only get four insightful accounts (often with sources for me to get more information) from people I trust more than a random reporter for a newspaper, but compare their accounts and perspectives to formulate my own informed opinion. Not so easy to do with even a newspaper editorial section.

Moreover, it's not like the traditional media aren't using Twitter/Wikipedia/blogs to do their own research: (HT: PhysOrg)

An Irish student’s fake quote on the Wikipedia online encyclopaedia has been used in newspaper obituaries around the world, the Irish Times reported.

Shane Fitzgerald, 22, a final-year student studying sociology and economics at University College Dublin, told the newspaper he placed the quote on the website as an experiment when doing research on globalisation.

Fitzgerald told the newspaper he picked Wikipedia because it was something a lot of journalists look at and it can be edited by anyone.

“I didn’t expect it to go that far. I expected it to be in blogs and sites, but on mainstream quality papers? I was very surprised about,” he said.

(Image Credit)


Paradigm Shift@Home

I recently made a post on Bench Press about the potential for distributed computing (projects like Folding@Home and SETI@Home which combine computing power from volunteers over the internet to do supercomputer-style calculations) to help any initiative needing extra number-crunching power, the steps that the scientific and distributed computing communities can take to help get us there, and what I think is a valuable paradigm shift in science that the distributed computing approach represents:

What impresses me the most about projects like Folding@Home and SETI@Home is that they have defined some brilliant new ways to do science:

  • Use the internet – It's a common theme on Bench Press, but with more and more people having faster and faster access to the internet, the potential for distributed computing becomes greater and greater. As Folding@Home demonstrated, such approaches can produce computing systems as powerful as (or potentially more powerful than) leading supercomputer systems at a fraction of the cost.
  • Mobilize the public – We've discussed ways for the scientific community to reach out to the public, like using social media and creating interactive applications/tools for the public to use, but efforts like Folding@Home illustrate a way to not only reach out to the public but to get them invested in science. In a world where high school science teachers find it difficult to get teens interested in science, initiatives like Folding@Home have created a system where teams of individuals compete on who can contribute the most to the effort! Instead of simply hoping that the public will continue to fund and listen, why not borrow a page from the many existing cancer walk-a-thons and make it easy for the public to get involved?
  • Leverage new technology – It may not come as a surprise to our readers that a significant amount of the computational power behind Folding@Home comes from graphics cards and PlayStation 3s. But, while many "mainstream" supercomputers ignored the new power afforded by these new chip types, Folding@Home developed software so that volunteers could quickly and easily use these powerful chips to boost their Folding@Home scores. The Folding@Home initiative also developed software to take advantage of innovations AMD and Intel included in their chips (new multi-core architectures and special instructions to speed up calculations). Is it any wonder, then, that Sony, NVIDIA, and AMD have all publicly announced support for the initiative with their products?


For more details on distributed computing and some of my thoughts on how the scientific community can better adopt these, check out the post at http://blog.benchside.com/2008/12/distribute-compute/


Independent Taiwan

As many of this blog's readers know, I was born in Taiwan but raised in the United States. I am a bit ashamed to admit this, but it wasn't until college that I began to get a real sense of what being Taiwanese meant – the culture, the history, the customs. Sadly, it wasn't until I started doing research on technology companies that I got a sense of the importance of Taiwan in the global economy.

And it wasn’t until even more recently that I got a real sense of Taiwanese politics. Taiwan is basically split between two parties, the Kuomintang (KMT) – the party of Chiang Kai-shek – and the Democratic Progressive Party (DPP).

Technically, if one were to classify the two parties in Western terms, the KMT would count as the “conservative/right-wing” party and the DPP would be the “liberal/left-wing” party. But, while this difference is real, the main issue that divides the two parties is their stance on Taiwanese “independence”.

The reason I put “independence” in quotes is because the subject is a very nuanced one. Currently, Taiwan is in a state of de facto independence. Although neither China nor the United States will go so far as to say it openly, there is fairly wide acceptance that the Taiwanese government is “sovereign” in the sense that its democratic government rules Taiwan without any real question. The “independence” that divides the KMT and the DPP, however, goes beyond this independent-even-though-nobody-will-say-it status quo. It’s the question of whether or not Taiwan is truly a separate political and cultural entity from China altogether. And, because of the KMT’s origins in China, the KMT is the party which most favors reconciliation with China and greater integration, while the DPP favors stronger terms of independence.

And, while I have many problems with the DPP’s positions and base of support, I am completely opposed to the KMT party line for four main reasons:

  1. The government of Mainland China is a repressive regime with little regard for human rights. The only way I can even begin to understand why people would think that Taiwan would be better off as a part of China is if they didn’t pay attention to the news: Tibet, Tiananmen Square, Uighur Muslims, silencing of political protest, disregard for the health of their own people and trading partners, excessive pollution, support for genocide, the list goes on and on. Yes, plenty of other countries have their fair share of human rights issues, and it’s a perfectly valid point to say that Taiwan at various stages of its past had similar problems which they eventually solved, but that doesn’t change the fact that the Chinese government today is less desirable than an open, democratic one, and anyone who thinks that Taiwan ought to subject itself to such a rule either has no clear idea what the Chinese government has been up to or has something against Taiwanese freedom.
  2. A not-very-similar society and culture is hardly a reason for Taiwan to belong to China. To say that Taiwan ought to reunite with China because of strong cultural ties would be the same as arguing that the United States and India should be colonies of Great Britain. Yes, they speak the same language (although there are many in Taiwan who prefer Taiwanese or Japanese), and some of the same pop music is played in both countries (Asian pop superstar Jay Chou is from Taiwan), but that’s hardly a decent reason to just surrender one’s national identity and government to someone else, especially when the cultures (e.g. phrases, foreign influences, even the writing of characters) have several big differences.
  3. The KMT has a murderous history which the people of Taiwan should want to distance themselves from. This will piss off many KMT supporters, but Chiang Kai-Shek was a contemptible man who butchered his own people and let them starve while he enriched himself. When the Chinese people turned against him and sided with Mao Zedong's Communists, instead of learning from his mistakes, Chiang repeated them on the island of Taiwan, installing a brutal military rule. The KMT seized all available property and, during the infamous 2-28 incident, butchered political dissidents and native Taiwanese. For years, they suppressed the local culture/dialect, demanding instead that students be educated as if they were mainlanders (Chiang's plan all along was to re-take the mainland). That the KMT wants to look backwards on these "good old days" strikes me as a somewhat ridiculous basis for foreign policy (not to mention the irony of the party of Chiang Kai-Shek wanting to negotiate "surrender" with the party of Mao Zedong).
  4. The best way to improve Taiwanese economic growth is to achieve independence. KMT supporters oftentimes float the idea that the Taiwanese economy depends on tighter integration with China. While this is certainly true, there is nothing which says that more trade and immigration between China and Taiwan has to mean that Taiwan becomes a part of China. France and Germany have more or less completely free trade and immigration, yet you would be hard-pressed to find a Frenchman who thinks that France should be made a part of Germany. On the contrary, because of Taiwan's dependence on trade, the issue of independence is especially important. How do you trade or do business when no countries recognize your laws or authority? How do you flourish when few will grant visas to your businesspeople? When your customers find it difficult to travel into your country? Or when pressure from China can cause your telephone area codes to suddenly change?

The DPP, in my opinion, is a backwards party content to play class and identity politics (fomenting racial/cultural backlash against the mainland and the wealthier, more cosmopolitan base of the KMT), argue over trivial things like the official flag of the country and whether the map of Taiwan should be depicted with North-South on the vertical or the horizontal axis (to de-emphasize its location next to China), and play to narrow-minded anti-trade/anti-immigration isolationists.

But, despite all of this, I believe that the issue of the hour for Taiwan is independence. And I believe that, because of Taiwan’s relative strength and China’s focus today on economic growth and integration with the global political community, the time for pushing independence is now. Maybe, later, when the need for independence is less and when (hopefully) China becomes more democratic and open, the dialogue and the priorities might change. But, until then, I see the DPP representing the lesser of two evils.

(Image source)


A Hundred for Circuit City

A guest post (the first!) by my good buddy Anthony, who, after a couple of minutes of brainstorming with me on which companies to offer $100 for and what we'd do to save them, came up with this little bit:

Hi everybody, I’m Anthony, Ben’s partner in low-ball offers to disastrous companies. A couple of weeks have gone by and unfortunately it appears that no one has been willing to accept our $100 offer to run the next failing bank or company division (Damn that Dogbert…).

Now, Ben and I are realists; we understand that companies may be reluctant to take advantage of our offer. But when you change CEOs only to find yourself looking at a share price of $0.20 (down 95.24% in 2008) and scrambling for ways to avoid Chapter 11, it just might be time for a major change. Yes, we're talking to you, Circuit City.

Circuit City's financial situation has deteriorated steadily since 2006. Revenue gains have been marginal in comparison to Best Buy's rapid growth. In 2007, Circuit City incurred a loss of $8 million, in stark contrast to Best Buy's $1.37 billion in profits. If that weren't bad enough, in the most recent quarter (ending August 2008) Circuit City racked up over $200 million in losses. Total debt has increased steadily over the past three quarters, and same-store sales (a number showing sales that don't come from new stores, making it a good sign of how well Circuit City is managing sales growth) fell 7.7% in FY2008. Adding insult to injury, Bernard Sands recently pulled its recommendation for vendors to sell goods to Circuit City over concerns the retailer would be unable to pay. On the other hand, their top competitor, Best Buy, is flourishing, with revenue growing by double digits over the past four years. Clearly, something needs to be done.

This is where Ben and I come in. For the low, low price of $100 (probably worth more than the company should be right now), the two of us propose nothing less than a complete revamp of Circuit City’s store format and business strategy.

Since "business as usual" simply means continuing to take on Best Buy on its own home turf, we believe the best method for reinvigorating sales is to provide consumers with a new consumer electronics store experience — one that acknowledges the rapid pace of development in consumer electronics and provides the consumer with a practical yet flexible buying experience. This new store setup will differentiate Circuit City stores from the plethora of typical electronics retailers by emphasizing "platforms" rather than individual devices. The reason for this is that the rapid pace of technology makes it difficult for the typical consumer to always make fully informed purchasing decisions. This means that consumers may buy products which aren't compatible with or don't work well with one another (e.g. HDTVs and various speaker receivers). The wide range of electronics that these stores need to carry also makes it very difficult for store clerks to understand all the options that consumers may want.

What do we propose? Take a page straight out of IKEA's playbook — instead of organizing the store by device (e.g. a TV section, an MP3 player section, etc.), organize the store into "platforms" — here is a hardcore gamer's living room setup, here is a budget home office setup, here is a student setup, here is an always-on-the-go setup, etc. In each case, instead of highlighting specific devices, we would encourage Circuit City and its employees to highlight a particular electronics experience customized for a specific use. This would help Circuit City's relations with its suppliers, as the suppliers are already attempting to target different products to different types of customers, and it would serve as a useful service for customers who have no clear idea of which devices are tailored for them, or how the devices can work together. Also, in much the same way that IKEA lets you customize specific pieces of the furniture, this store experience gives customers flexibility by presenting choices which don't interfere with how the electronics work together (e.g. different colors, a slightly higher-end or cheaper substitute device, etc.).

This new customer orientation also suggests a structure for a new, more useful website. Ben and I propose to integrate Circuit City’s website into its store business in a way that no store has ever done before. Whereas most companies operate their stores and websites separately, we would force Circuit City to not only organize the website in the same way that the store is (to help simplify things for customers), but to also tie them so closely together that a customer could quickly and easily use the website to schedule repairs and pickups, check on the availability of products in their local stores in real-time, download product information (e.g. manuals, flyers, drivers, etc.), and even check out ways to customize or replace specific pieces (e.g. a different color game controller, a slightly higher performance sub-woofer, etc.).

Neither of these initiatives is easy, nor do they represent all of the ideas that Ben and I have, but they are a good first step towards saving Circuit City's sinking ship. As Circuit City spokesman Bill Cimino told the WSJ, "[t]he management team, board of directors, and its strategic financial advisers are conducting a comprehensive review of all aspects of our business to determine the best methods of accelerating our turnaround".

They haven't gotten back to us yet, but I'm sure, if they want their company to succeed, they'll give it some thought.

(Image source)

Standing offer

My friend Anthony and I are taking a stand. We were miffed when Motorola didn’t even consider our offer of $1 for Motorola’s handset division (seriously, why give it to Qualcomm’s ex-COO?). We were annoyed when our offer of $2 for Lehman Brothers was ignored (and now look where they are).

So, we’re going to draw a line in the sand right here. Right now.

Anthony and I will offer $100 (that’s 50 times what we were going to offer for Lehman – and they probably weren’t even worth that) to run the next failing bank or company division.

Do we have much management experience? No. Hardcore MBA with financial experience or brilliant technical expertise? Not really.

But, come on. Experts ran Fannie/Freddie, Lehman Brothers, Bear Stearns, AIG, Merrill Lynch, Washington Mutual, Motorola’s handset division, Pfizer’s heart medication group, … the list goes on and on. And look where they all are now!

It’s time for some new blood.


Playing with Monopoly

With the recent challenges to Google's purchase of DoubleClick, Microsoft's endless courtship of Yahoo, and the filing of more papers in the upcoming Intel/AMD case, the question of "why should the government break up monopolies?" becomes much more relevant.

This is a question that very few people ask, even though it is oftentimes taken for granted that the government should indeed engage in anti-trust activity.

The logic behind modern anti-trust efforts goes back to the era of the railroad, steel, and oil trusts of the Gilded Age, when massive and abusive firms engaged in collusion and anti-competitive behavior to fix prices and prevent new entrants from entering the marketplace. As any economist will be quick to point out, one of the secrets to the success of a market economy is competition – whether it be workers competing with workers to be more productive or firms competing with firms to deliver better and cheaper products to their customers. When you remove competition, there is no longer any pressing reason to guarantee quality or keep costs down.

So – we should regulate all monopolies, right? Unfortunately, it's not that simple. The logic that competition is always good is greatly oversimplified, as it glosses over two key things:

  1. It’s very difficult to determine what is a monopoly and what isn’t.
  2. Technology-driven industries oftentimes require large players to deliver value to the customer.

What’s a Monopoly?


While we would all love monopolies to have clear and distinguishable characteristics – maybe an evil-looking man dressed in all black, laughing sinisterly as his diabolical plans destroy a pre-school? – the fact of the matter is that it is very difficult for an economist or businessperson to tell what counts as a monopoly and what doesn't, for four key reasons:

  1. Many of the complaints and lawsuits brought against “monopolies” are brought on by competitors. Who is trying to sue Intel? AMD. Who complained loudly about Microsoft’s bundling of Internet Explorer into Windows? Netscape.
  2. “Market share” has no meaning. In a sense, there are a lot of monopolies out there. Orson Scott Card has a 100% market share in books pertaining to the Ender’s Game series. McDonald’s has a 100% market share in Big Macs. This may seem like I’m just playing with semantics, but this is actually a fairly serious problem in the business world. I would even venture that a majority of growth strategy consulting projects are due to the client being unable to correctly define the relevant market and relevant market share.
  3. What’s “monopoly-like” may just be good business. Some have argued that Microsoft and Intel are monopolies in that they are bullies to their customers, aggressively pushing PC manufacturers to only purchase from them. But how is this any different from a company that offers aggressive volume discounts? Or one that hires the best-trained negotiators? Or one that knows how to produce the best products and demands a high price for them? Sure, Google is probably “forcing” its customers to pay more to advertise on Google, but if Google’s services and reach are the best, what’s wrong with that?
  4. “Victims” of monopolies may just be lousy at managing their business. AMD may argue that Intel’s monopoly power is hurting its bottom line, but at the end of the day, Intel isn’t directly to blame for AMD’s product roadmap mishaps, or for its disastrous acquisition of ATI. Google isn’t directly to blame for Microsoft’s inability to compete online.

Big can be good?

This may come as a shock, but there are certain cases where large monolithic entities are actually good for the consumer. Most of these involve technological innovation. Here are a few examples:

  • Semiconductors – The digital revolution would not have been possible without the fast, power-efficient, and tiny chips which act as the brains of our digital devices. What is oftentimes not understood, however, is the immense cost and time required to build new chips. It takes massive companies with huge budgets to build tomorrow’s chips. It’s for this reason that most chip companies don’t run their own fabrication plants and are steadily slowing down their R&D/product roadmaps as it becomes increasingly costly to design and build chips.
  • Pharmaceuticals – Just as with semiconductors, drug development is very costly, time-consuming, and risky. Few of today’s biotech startups can actually bring a drug to market on their own — oftentimes they hope to stay alive just long enough to partner with or be bought by a larger company with the money and experience to jump through the necessary hoops to take a drug from bench to bedside.
  • Software platforms – Everybody has a bone to pick with Microsoft’s shoddy Windows product line. But what few people recognize is how much the software industry benefited from the role that Microsoft played early on in the computer revolution. By quickly becoming the dominant operating system, Windows made it easier for software companies to reach wide audiences: instead of designing 20 versions of every application or game to run on 20 operating systems, developers only had to design one. This, of course, isn’t saying that we need an OS monopoly right now to build a software industry, but it is fair to say that Microsoft’s early “monopoly” was a boon to the technology industry.

The problem with today’s anti-trust rules and regulations is that they are legal rules and regulations, not economic ones. In that way, while they may protect against many of the abuses of the Gilded Age (by preventing firms from getting 64.585% market share and preventing them from monopolistic action 1 through 27), they also unfortunately act as deterrents to innovation and good business practice.

Regulators instead need to take a broader, more holistic view of anti-trust. Rather than relying on market share litmus tests and sob stories from the Netscapes of the world, they need to focus first on determining whether the offender in question is acting in a harmfully anticompetitive way at all, and second on whether there is credible economic value in the institutions they seek to regulate.

Leave a Comment

"Science" fiction

image There is no question in my mind that the high quality of life experienced by Americans today and the dominant position in the world which the United States currently enjoys are in large part due to the nation’s leadership in science and technology.

But despite the importance of science today, and its relevance in maintaining this favorable status quo for America tomorrow, Americans show a bizarre lack of understanding of basic science. This manifests itself in a more serious way in the sheer number of individuals who believe that evolution and creationism are equivalent in validity and/or who believe that vaccines cause autism — both are quite wrong; both are a testament to how little many Americans comprehend of science; and both are signs that this lack of understanding has real political impacts.

On the less serious side, however, this also manifests itself in a number of odd “scientific” beliefs which, despite having no basis in reality, are actually widely held (from LiveScience):

Top 10 Most Popular Myths in Science

  1. Humans use only 10% of their brains.
  2. The Great Wall of China is the only manmade structure visible from space.
  3. It takes 7 years to digest gum.
  4. Yawning is “contagious.”
  5. Water drains backwards in the Southern Hemisphere due to the Earth’s rotation.
  6. There is no gravity in space.
  7. Chickens can live without a head.
  8. Eating a poppy seed bagel mimics opium use.
  9. A penny dropped from the top of a tall building could kill a pedestrian.
  10. Hair and fingernails continue growing after death.

And now you are enlightened. Go forth and spread your newfound scientific “wisdom.”

Leave a Comment

10K for life

image I’m not sure if this is typical for other consultants, but I spend a lot of time reading through corporate annual and quarterly reports (called 10-Ks and 10-Qs, respectively, after the SEC form names). These reports give lots of information, including a description of the business (useful for technology and biotech/pharma companies which can be difficult for the layperson to understand), a snapshot of performance for the period being reported, the relevant historical comparisons for current performance (e.g. the year before, the same quarter from the year before, etc.), and a list of risk factors for the business.

These reports are produced by public companies (and by some private companies, no doubt) for the benefit of investors who want to know exactly what they are investing in. But I also believe that the very process of making these reports is good for management, as it forces them to think very hard about their strategy, their competitive environment, and their ability to execute.

This is why, despite scoffing when I first heard about a consultant at my firm who compiles an annual report for himself (complete with a letter to the shareholder — himself), I have recently started compiling these reports. Yes, I know this is incredibly nerdy, but hear me out. Four reasons why everyone should think about making personal annual and quarterly financial reports:

  1. It forces you to track your finances regularly. You simply can’t put together annual or quarterly or semiannual reports without making some effort to regularly check your finances. This is good as it alerts you to irregularities (e.g. credit card fraud) and helps to make sure that you are sticking to your financial goals (e.g. save 10% of my income every month).
  2. It lets you quickly see mistakes in your judgment. Hindsight is 20/20, but only if you look. By thinking about your past year, or quarter, or whatever period you decide to make these reports on, you are forced to think about what you could have done better. Only by routinely reflecting on your decisions and being honest with yourself can you make better decisions in the future.
  3. It gives you an idea of where your finances are going. This has been very helpful for me as I plan out big purchases (e.g. vacations, electronics, etc.) and think about how much of my savings to put into investments month-after-month.
  4. It helps you plan for the future. This, in my mind, is the best and most important reason to do these financial reports. I made one for the 6 months since I graduated from college, and by tallying up my purchases and my income and my investments, I found that I was better off than I had thought I was. As a result, I am planning to increase the amount of money I invest in equities for this coming year. I also looked at my purchases and realized that, by making a few changes in what I buy for lunch, I could easily cut down my expenses by several percentage points.

It’s not necessary to copy the form that corporate annual reports come in, and it’s not necessary to do monthly reports or to make them especially pretty. What is important is to pick a schedule which sounds reasonable (I suggest every 3 months as a good balance between doing it too often and not doing it often enough) and to pick a form which is reasonably easy to produce but still forces you to write down your past track record and future plans (it could even be scribbles on a notepad, or a quick script like the one sketched below, if that works for you).
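For the tallying itself, nothing fancy is needed. Here is a minimal sketch in Python (the CSV layout, file name, category column, and the 10% savings target are all illustrative assumptions on my part, not a prescription) of a script that turns a list of transactions into a bare-bones quarterly summary:

    # A bare-bones quarterly tally. Assumes a CSV with a header row of
    # date,category,amount where positive amounts are income and negative
    # amounts are spending; the date column is ignored on the assumption
    # that the file already covers exactly one reporting period.
    import csv
    from collections import defaultdict

    SAVINGS_TARGET = 0.10  # e.g. "save 10% of my income"

    def summarize(path):
        income = 0.0
        spending_by_category = defaultdict(float)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                amount = float(row["amount"])
                if amount >= 0:
                    income += amount
                else:
                    spending_by_category[row["category"]] += -amount

        total_spending = sum(spending_by_category.values())
        saved = income - total_spending
        print(f"Income:   {income:10.2f}")
        print(f"Spending: {total_spending:10.2f}")
        for category, amount in sorted(spending_by_category.items(), key=lambda kv: -kv[1]):
            print(f"  {category:<12} {amount:10.2f}")
        rate = saved / income if income else 0.0
        print(f"Saved:    {saved:10.2f} ({rate:.0%} of income; target {SAVINGS_TARGET:.0%})")

    if __name__ == "__main__":
        summarize("transactions_q1.csv")  # hypothetical file name

Run something like this once a quarter and paste the output into whatever report format you’ve settled on; the habit of looking is what matters, not the tooling.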

Or, if you’re more artistically inclined, you can do what Podravka, a Croatian food company, does, which is to make an annual report that is only readable after you bake it (hat tip: Eric).

But that’s just for extra credit…

2 Comments

Buck-passing

President Harry S. Truman popularized the phrase “the buck stops here.” It’s a simple notion: at the end of the day, the responsibility for problems and failures has to land on someone’s shoulders. In my mind, it’s the bare essence of what being a leader is — taking responsibility for screwups and failures.

The world, however, doesn’t seem to see things that way. Politicians, instead of stepping up to bat on issues ranging from policy failure to corruption, are much more likely to “pass the buck” to the media, to “the vast [right/left]-wing conspiracy”, or to members of the other party. Rarely do they admit wrong-doing and then immediately outline steps to remedy any problems that they might have caused. Many corporate executives, when accused of incompetence or tried in court for perjury and fraud, fail to own up to their mistakes and play ignorant, and yet they are perfectly willing to take the credit when broad market forces outside of their control are responsible for their “good leadership.”

That’s why it’s somewhat refreshing to see that when rumors of sexual abuse at her school arose, Oprah quickly made her own responsibility the issue. Even though the mere existence of the school is a testament to her generosity (and, arguably, her ego), and even though she probably had very little to do with the hiring, the screening, or the actual wrong-doing involved, she quickly placed the administration on leave, issued an official apology, hired her own investigative team on top of whatever official local investigation was being conducted, and, in a move that is rare coming from a celebrity, gave the students access to her personal phone number and email address.

Oprah deserves credit for doing this: when faced with bad news on her watch, she (a) quickly apologized, (b) promised immediate action, (c) launched an additional investigation of her own, and (d) went out of her way to be accessible to the victims.

Tell me one good reason why the CEOs of Citibank, Merrill Lynch, and the numerous other financial companies that have contributed to the subprime crisis shouldn’t act in exactly the same way.

Leave a Comment

Consulto-nomics

While doing research for a secret internal project, I stumbled across The Boston Consulting Group On Strategy, one of the many books that various consulting firms put out on a regular basis to establish themselves as relevant experts on business strategy. The book is a compilation of opinion and analysis pieces by various BCG (Boston Consulting Group) partners over the years, including a few interesting pieces by BCG founder Bruce Henderson.

Henderson is interesting in that he is one of the “founding fathers”, if you will, of management consulting. BCG’s history is full of powerful insights: the Experience Curve, the growth-share matrix, “The Rule of Three and Four”, among others. To this day, the public persona of the firm is one of a puzzle-solving, innovative-thinking culture dedicated to analyzing and solving complex business scenarios.

Yet, despite his role at the beginning of consulting, Henderson’s writing is distinctively not consultant-like. Nowhere are the excessive TLAs (three-letter acronyms) or the impressive-sounding but utterly empty sentences and phrases which seem to mark today’s consultant. Instead, he is simple. To the point. Decidedly not long-winded.

I like him already.

One of the points that he makes pertains to economists. Henderson argues that economics leaves one very ill-prepared for the business world, because economists deal in abstracted, perfect conditions which bear no resemblance to the real world. In the business world, there is no perfect competition, information symmetry, or purely rational agent of the kind the economist often relies on in his or her thinking and theorizing.

In the fall semester of my last year of college, I took a course on evolutionary game theory. While we looked at very complex bells and whistles (spatial components, strategies, memory, etc.), at the end of the day we were just analyzing the infamous prisoner’s dilemma, a game with only two possible moves, and it was surprising how complex the analysis could be: it was done with mathematical rigor, in a complete fashion, with hypothesis testing and reasonably advanced computational analysis.
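To give a flavor of how little machinery it takes to set up even the simplest version of that game, here is a minimal sketch (not the course’s code; the three strategies and the payoff numbers are just the standard textbook choices, picked for illustration) of an iterated prisoner’s dilemma round-robin:

    # A minimal iterated prisoner's dilemma round-robin between three
    # canonical strategies. Payoffs follow the usual T > R > P > S ordering
    # (the specific numbers are the standard textbook values).
    import itertools

    PAYOFF = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
        ('C', 'C'): 3,  # reward for mutual cooperation
        ('C', 'D'): 0,  # sucker's payoff
        ('D', 'C'): 5,  # temptation to defect
        ('D', 'D'): 1,  # punishment for mutual defection
    }

    def always_cooperate(opponent_history):
        return 'C'

    def always_defect(opponent_history):
        return 'D'

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy whatever the opponent did last round.
        return opponent_history[-1] if opponent_history else 'C'

    def play(strategy_a, strategy_b, rounds=200):
        """Play two strategies against each other; return their total scores."""
        seen_by_a, seen_by_b = [], []  # each side only sees the opponent's past moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    strategies = {'AlwaysC': always_cooperate, 'AlwaysD': always_defect, 'TitForTat': tit_for_tat}
    totals = {name: 0 for name in strategies}
    for (name_a, strat_a), (name_b, strat_b) in itertools.combinations(strategies.items(), 2):
        score_a, score_b = play(strat_a, strat_b)
        totals[name_a] += score_a
        totals[name_b] += score_b

    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {total}")

Even in this toy version, with only three strategies and a fixed number of rounds, the bookkeeping grows quickly once you start layering on populations, generations, spatial structure, and statistics, which is exactly what the course did.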

But, ultimately, it was still just a two-move game (tic-tac-toe, by comparison, offers an average of about five possible moves per turn) built on idealized assumptions. The business world doesn’t have two-move games. It doesn’t let you make nice assumptions. It doesn’t give you the time to conduct complete analysis. You don’t have a Cray supercomputer at your desk and hours to code up a hard numerical analysis. You have a deadline. You have gray areas. You have incomplete data. You have to deal with people and their volatile personalities and emotions.

So, yes, a proof, an experiment, a mathematical model — these are certainly beautiful and interesting. But, you just can’t do that in business. You have to prioritize. You have to manage your time. You have to work together to tackle a big, amorphous task. You have to make guesses. You have to sometimes leap before you look. You have to beg, to intimidate, to coax, to laugh and to cry and to smile.

Leave a Comment

All Roads Lead to . . .

If you had told me four years ago that I would be working in consulting, I would have responded with a basic question: “What’s consulting? And, why am I doing it?”

As recently as a year ago, I was positive that I would be pursuing a PhD in Systems Biology (or something similar such as Computational Biology or Mathematical Biology). The field was deeply exciting to me. It was (and still is) full of untapped potential. I spoke eagerly with professors Erin O’Shea and Michael Brenner about how I could prepare myself and what I could study. Having worked in the lab of professor Tom Maniatis for almost two years at that point, and having been exposed to the joys of doing collaborative scientific work, I was fairly certain that being a graduate student doing research full-time was what I wanted.

With almost a sense of smugness, I looked down on the more “business-y types”. I thought what they were doing lacked rigor and was hence not worthy of my time: mere mental child’s play compared to the intellectual excitement of trying to decode complex gene networks and understand how invisible molecules could determine whether we were healthy or sick.

So what happened? Well, I can think of four main reasons. The first and most immediate was that I was part of the organizing committee behind the 2006 Harvard College Asian Business Forum, which was the HPAIR (Harvard Project for Asian and International Relations) business conference. The experience was very rewarding and eye-opening, but more than that, it was an impetus to follow the paths of the excited delegates, many of whom were early-career professionals looking into business careers like consulting and finance.

The second factor was a growing awareness of what life in academia meant. Yes, I was well aware of the struggles that junior academics had to go through on their way toward tenure. But at the same time, toward the end of the summer, with several experiments facing setbacks and growing doubts in my mind about my ability to be a good researcher, I began looking at other alternatives.

The third consideration stems from the fact that I have always been interested in application. My approach toward science has always been rooted in searching for possible applications, whether commercial or in the public interest. Even the reason that I chose to specialize in Systems Biology stemmed from a belief that traditional molecular and cellular techniques would face sharply diminishing returns when it comes to finding the causes of and cures for diseases. Having lived almost all of my pre-college life in Silicon Valley, I was geared toward seeing fruitful science as science that moved from “bench to bedside,” and my highest aim was to turn brilliant ideas into profitable ones.

The final factor, of course, is that it’s always exciting to try something new, especially something competitive — and even though I cursed recruiting at times, it could feel like a fun competition. Although I did not expect to receive a job offer from any firm, I did better than I had anticipated in the interview process and received an offer which I simply found too interesting to turn down.

All roads, at least for me, led to consulting.

2 Comments

Does Studying Economics Make People More Conservative?

For Lisa:

An Introductory Economics student asks Greg Mankiw, “Does Econ Make People More Conservative?”

The student asks:

My school offers two main elective history courses for seniors: Government and Economics. Due to scheduling limitations, not many kids are able to take both. I’ve noticed something interesting as the year has progressed. The students who are taking the government course are increasingly endorsing leftist ideologies while the economics students are becoming increasingly right wing. For instance, my school’s paper recently ran an editorial that ‘complained’ that too many of Lawrenceville’s finest were going into investment banking, and not into seemingly ‘socially beneficial’ careers. What is your view on government intervention on economic equality and the like? Do all economics students show republican (or right of center) tendencies?

To my surprise, Mankiw actually says “I believe the answer is, to some degree, yes”, and he outlines three reasons:

First, in some cases, students start off with utopian views of public policy, where a benevolent government can fix all problems. One of the first lessons of economics is that life is full of tradeoffs. That insight, completely absorbed, makes many utopian visions less attractive. Once you recognize, for example, that there is a tradeoff between equality and efficiency, as economist Arthur Okun famously noted, many public policy decisions become harder.

Second, some of the striking insights of economics make one more respectful of the market as a mechanism for coordinating a society. Because market participants are motivated by self-interest, a person might naturally be suspect of market-based societies. But after learning about the gains from trade, the invisible hand, and the efficiency of market equilibrium, one starts to approach the market with a degree of admiration and, indeed, awe.

Third, the study of actual public policy makes students recognize that political reality often deviates from their idealistic hopes. Much income redistribution, for example, is aimed not toward the needy but toward those with political clout.

And of course:

Nonetheless, studying economics does not by itself determine one’s political ideology. I know good economists who are distinctly right of center and good economists who are distinctly left of center. In my department at Harvard, I would guess that Democrats outnumber Republicans among the faculty (although there is surely more political balance in the economics department than in most other departments at the university).

Leave a Comment