
Tag: cloud computing

My Takeaways from GTC 2012

If you’ve ever taken a quick look at the Bench Press blog that I post to, you’ll notice quite a few posts about the promise of using the graphics chips (GPUs) that NVIDIA and AMD build for gamers to do scientific research and high-performance computing. Well, last Wednesday, I had a chance to enter the Mecca of GPU computing: the GPU Technology Conference.


If it sounds super geeky, it’s because it is :-). But, in all seriousness, it was a great opportunity to see what researchers and interesting companies were doing with the huge amount of computational power embedded inside GPUs, as well as some of NVIDIA’s latest and greatest technology demos.

So, without further ado, here are some of my reactions after attending:

    • NVIDIA really should just rename this conference the “NVIDIA Technology Conference”. NVIDIA CEO Jen-Hsun Huang gave the keynote, the conference itself is organized and sponsored by NVIDIA employees, NVIDIA has a strong lead in the ecosystem in terms of applying the GPU to things other than graphics, and most of the non-computing demos were NVIDIA technologies leveraged elsewhere. I understand that they want to brand this as a broader ecosystem play, but let’s be real: this is like Intel calling their “Intel Developer Forum” the “CPU Technology Forum” – let’s call it what it is, ok? 🙂
    • Lots of cool uses for the technology, but we definitely haven’t reached the point where it is truly “mainstream.” On the one hand, I was blown away by the abundance of researchers and companies showcasing interesting applications for GPU technology. The poster area was full of interesting uses of the GPU in the life sciences, social sciences, mathematical theory/computer science, financial analysis, geological science, astrophysics, etc. (for a flavor of the data-parallel code all of these applications build on, see the first sketch after this list). The exhibit hall was full of companies pitching hardware design and software consulting services, and of organizations showing off sophisticated calculations and visualizations that they weren’t able to do before. These are great wins for NVIDIA – they have found an additional driver of demand for their products beyond high-end gaming. But this makeup of attendees should also be alarming to NVIDIA, because it means the applications for the technology so far are fundamentally niche-y, not mainstream. This isn’t to say they aren’t valuable (clearly many financial firms are willing to pay almost anything for a little more quantitative power to do better trades). But the real explosive potential, in my mind, is the promise of having “supercomputers inside every graphics chip” – a deep democratization of computing power that won’t be realized if the main users sit only at the highest end of financial services and research. NVIDIA needs to help the ecosystem find ways to get there if it wants to turn its leadership in alternative uses of the GPU into a meaningful and differentiated business driver.
    • NVIDIA made a big, risky bet on enabling virtualization technology. In his keynote, NVIDIA CEO Jen-Hsun Huang announced with great fanfare (as is usually his style) that NVIDIA has brought virtualization to the GPU – making it possible for multiple users to share the same graphics card over the internet. Why is this potentially a big risk? Because it means that if you want good graphics performance, you no longer have to buy an expensive graphics card for your computer – you can simply plug into a graphics card hosted somewhere else on the internet, whether for gaming (using a service like GaiKai or OnLive), for virtual desktops (where all of the hard work is done by a server and you’re just seeing the screen image, much like you would watch a video on Netflix or YouTube), or for remote rendering services (if you work in digital movie editing); the second sketch after this list illustrates the basic shape of that server-side loop. So why do it? I think NVIDIA likely sees a large opportunity in selling graphics chips, which have, to date, been mostly a PC thing, into servers that are now being built and teed up to do online gaming, online rendering, and virtual desktops. I think this is also motivated by the fact that the most mainstream and novel uses of GPU technology have been about putting GPU power into “the cloud” (hosted somewhere on the internet): GaiKai wants to use this for gaming, Elemental wants to use this to help deliver videos to internet video viewers, and rendering farms want to use this so that movie studios don’t need to buy high-end workstations for all their editing/special-effects guys.
    • NVIDIA wants to be more than graphics-only. At the conference, three things jumped out at me as not being quite congruent with the rest of the conference. The first was that there were quite a few booths showing off people using Android tablets powered by NVIDIA’s Tegra chips to play high-end games. Second, NVIDIA proudly showed off one of those new Tesla cars with its graphical touchscreen-driven user interface inside (also powered by NVIDIA’s Tegra chips).
      Third, this was kind of hidden away in a random booth, but a company called SECO that builds development boards showed off a nifty board combining NVIDIA’s Tegra chips with its high-end graphics cards to build something they called the CARMA Kit – a low-power, high-performance computing beast.
      NVIDIA has talked before about its plans with “Project Denver” to build a chip that can displace Intel’s hold on computer CPUs, and this shows they’re trying to turn that vision into reality – instead of just being the graphics card inside a game console, they’re making tablets which can play games, making the processor that runs the operating system for a car, and finding ways to take their less powerful Tegra processor and pair it up with a little GPU-supercomputer action.
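
Here’s the first sketch I promised above: a minimal CUDA example (my own illustration of the standard SAXPY kernel, not code from any particular GTC session) showing the data-parallel style that the research applications in the poster hall build on. Each GPU thread handles exactly one array element, and thousands of threads run at once – that’s the “supercomputer inside every graphics chip” in a nutshell:

```cuda
// Minimal CUDA SAXPY sketch: y = a*x + y, computed element-by-element
// across thousands of concurrent GPU threads.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes exactly one element of the result.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                    // 1M elements
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy to the GPU, run the kernel, copy the result back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 4.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

The pattern is the same whether you’re pricing options or simulating galaxies: express the problem as thousands of independent little computations and let the GPU chew through them in parallel.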

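And the second sketch: a conceptual illustration of the server side of the virtualization/remote-graphics story. To be clear, renderFrame and encodeAndSend here are stand-ins I made up for illustration – real services like GaiKai or OnLive use actual game renderers and hardware video encoders – but the shape of the loop is the point: the server’s GPU does the heavy lifting, and the client only ever receives finished frames, like a video stream:

```cuda
// Conceptual remote-rendering loop: the GPU renders a frame, the server
// encodes it and ships it to a thin client. Illustration only.
#include <cstdio>
#include <cuda_runtime.h>

const int W = 640, H = 360;                   // framebuffer size

// Stand-in for a real render pass: writes a per-frame gradient.
__global__ void renderFrame(unsigned char *fb, int w, int h, int frame) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        fb[y * w + x] = (unsigned char)((x + y + frame) & 0xff);
}

// Stand-in for video encode + network send to the thin client.
void encodeAndSend(const unsigned char *frame, int n) {
    printf("sent frame %d (first byte = %d)\n", n, frame[0]);
}

int main() {
    unsigned char *dFb;                       // device framebuffer
    static unsigned char hFb[W * H];          // host staging buffer
    cudaMalloc(&dFb, W * H);

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    for (int f = 0; f < 3; ++f) {             // the server-side loop
        renderFrame<<<grid, block>>>(dFb, W, H, f);  // GPU does the hard work
        cudaMemcpy(hFb, dFb, W * H, cudaMemcpyDeviceToHost);
        encodeAndSend(hFb, f);                // client just sees pixels
    }
    cudaFree(dFb);
    return 0;
}
```
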
If it’s not apparent, I had a blast, and I look forward to seeing more from the ecosystem!


Addendum to iPhone/DROID2

Having written a long treatise on how the DROID 2 and iPhone 4 stack up against one another, I thought it would be good to add another post on where I think both phones are deficient, in the hopes that folks in the smartphone industry will listen intently and make my next phone choice clearer. Note: I’ve focused this list on things that I think are actually doable, rather than far-off wishes that are probably beyond our current technology (e.g., week-long battery life, Star Trek-like voice commands):

  • Usage profiles: One of the biggest pains with smartphones is that they are difficult to customize. The limited screen real estate and the difficulty of relying on keyboard shortcuts mean that settings are buried under multiple menus. This is fine if you really only use your phone in one way, or if you only need to change one or two sets of settings. It is not useful if, like me, you want your phone to act a specific way at work but a fairly different way in the car or at home. In that case, both Android and iPhone are severely lacking. The Android Tasker app allows me to create numerous profiles (I’ve created in-car, in-meeting, and home/office profiles, plus separate weekend and weeknight profiles for notifications and email sync) – so it is well worth the $6 price – but it is not as elegant a solution as it would be if it were integrated into the OS and could expose additional functionality.
  • Seamless computer-to-phone: Because smartphones have small screens, weak processors, and semi-awkward input interfaces, there are some things (e.g., research, making presentations/documents, number crunching) that I prefer to do on a larger computer. This doesn’t mean, however, that I want my smartphone to be a completely separate entity from my computer. Quite the opposite – what I really want to see is a more seamless integration of computer and phone. At the most basic level, that means I want my bookmarks/browser history/favorite music easily synced between phone and computer. At a more sophisticated level, it means I want to be able to read/edit the same material (from the same place I left off) regardless of where I am or what device I’m using. If I’m running an application on my PC, I want to be able to pick up where I left off in a reduced-screen version of that application on my phone. Google’s Chrome-to-Phone, Mozilla’s Firefox Sync, and applications like DropBox just barely scratch the surface of this – and if someone figured out a highly effective way to do this (it would probably be Apple, Google, or Microsoft), they’d instantly have my business.
  • Email functions: Honestly, guys. Why is it that I cannot: (a) sort my email oldest to newest or (b) create new folders/labels from within your mail application? Blackberry could at least do (a).
  • Every app/screen should support landscape mode: This is one of my biggest pet peeves (more so with the iPhone than the DROID). Why is it that the homescreen of these devices doesn’t support landscape view (the DROID2 does but only if I pull the keyboard out)? Why is it that the iPhone App Store, Yelp, and Maps apps don’t support landscape mode? And why is it that I can’t lock the iPhone in landscape mode, but only in portrait mode? Apple, how about, instead of reviewing iPhone apps for what you deem to be “inappropriate content”, you force developers to support both portrait and landscape mode?

(Image credit)


I know enough to get myself in trouble

One of the dangers of being a consultant looking at tech is that he can get lost in jargon. A few weeks ago, I did a little research on some of the most cutting-edge software startups in the cloud computing space (the idea that you can use a computing feature/service without actually knowing anything about the technology infrastructure used to provide it – e.g., Gmail and Yahoo Mail on the consumer side, and services like Amazon Web Services and Microsoft Azure on the business side). As a result, I’ve looked at the product offerings from guys like Nimbula, Cloudera, Clustrix, Appistry, Elastra, and MaxiScale, to name a few. And while I know enough about cloud computing to understand, at a high level, what these companies do, the use of unclear terminology sometimes makes it very difficult to pierce the “fog of marketing” and really get a good understanding of the various products’ strengths and weaknesses.

Is it any wonder that, at times, I feel like this Dilbert cartoon?


Yes, it’s all about that “integration layer” …

My take? A great product should not need to hide behind jargon.

(Link: Dilbert cartoon)
