
Tag: NVIDIA

Pace of Mobile

Two slides from NVIDIA’s presentation at CES (captured by the excellent Anandtech team) were particularly stunning to me in terms of illustrating how quickly the mobile revolution is advancing.

This first slide highlights NVIDIA’s main product announcement/claim: that starting with their current-generation product, the Tegra K1 (cue NVIDIA PR: it was so advanced that they couldn’t just call it Tegra 5 :-)), their mobile graphics architecture would be the same one they currently sell in their PC products (Kepler).

[Slide: NVIDIA’s Kepler GPU architecture coming to the Tegra K1]

That’s not really a new claim — after all, it had been announced previously that Logan (the comic-book-inspired codename for the Tegra K1) was supposed to have Kepler technology inside. What is interesting is when it’s presented in the following way:

[Slide: how long it took Unreal Engine 3 vs. Unreal Engine 4 technology to reach mobile]

According to NVIDIA, it took 8 years for the PC technology that supported the Unreal Engine 3 game engine to make it to smartphones (in and of itself an impressive feat if you think about it), but only two years for Unreal Engine 4.

Obviously, there are a lot of caveats here (this is, after all, a press announcement to drum up excitement) – even if the GPU architecture is 100% the same, we have no idea what kind of real-world performance or power consumption we’ll get out of this (so word to the wise: ignore a lot of the “core count” crap, it’s not really apples-to-apples with anything). But it’s a great indicator of how quickly smartphones and tablets are usurping the role of primary computing device for the world and how hard that is pushing the broader technology industry to keep up.

More great content on this (and more) at Anandtech

(Images captured by Anandtech team during NVIDIA CES press conference liveblog)


NVIDIA’s At It Again

Although I’m not attending NVIDIA’s GPU Technology conference this year (as I did last year), it was hard to avoid the big news NVIDIA CEO Jen-Hsun Huang announced around NVIDIA’s product roadmap. And, much to the glee of my inner nerd, NVIDIA has continued its use of colorful codenames.

The newest addition to NVIDIA’s mobile lineup (their Tegra line of products) is Parker — named after the alter ego of Marvel’s Spider-Man. Parker joins a family which includes Kal-El (Superman) [the Tegra 3], Wayne (Batman) [the Tegra 4], Stark (Iron Man), and Logan (Wolverine) [the Tegra K1].

And as for NVIDIA’s high-performance computing lineup (their Tesla line of products), they’ve added yet another famous scientist: Alessandro Volta, the inventor of the battery (and the reason our unit of electric potential difference is the “volt”). Volta joins Nikola Tesla, Enrico Fermi, Johannes Kepler, and James Clerk Maxwell.

(Images from Anandtech)


My Takeaways from GTC 2012

If you’ve ever taken a quick look at the Bench Press blog that I post to, you’ll notice quite a few posts that talk about the promise of using graphics chips (GPUs) – the kind NVIDIA and AMD make for gamers – for scientific research and high-performance computing. Well, last Wednesday, I had a chance to enter the Mecca of GPU computing: the GPU Technology Conference.


If it sounds super geeky, it’s because it is :-). But, in all seriousness, it was a great opportunity to see what researchers and interesting companies were doing with the huge amount of computational power that is embedded inside GPUs, as well as to see some of NVIDIA’s latest and greatest technology demos.

So, without further ado, here are some of my reactions after attending:

    • NVIDIA really should just rename this conference the “NVIDIA Technology Conference”. NVIDIA CEO Jen-Hsun Huang gave the keynote, the conference itself is organized and sponsored by NVIDIA employees, NVIDIA has a strong lead in the ecosystem in terms of applying the GPU to things other than graphics, and most of the non-computing demos were NVIDIA technologies leveraged elsewhere. I understand that they want to brand this as a broader ecosystem play, but let’s be real: this is like Intel calling its “Intel Developer Forum” the “CPU Technology Forum” – let’s call it what it is, ok? 🙂
    • Lots of cool uses for the technology, but we definitely haven’t reached the point where the technology is truly “mainstream.” On the one hand, I was blown away by the abundance of researchers and companies showcasing interesting applications for GPU technology. The poster area was full of interesting uses of the GPU in life science, social sciences, mathematical theory/computer science, financial analysis, geological science, astrophysics, etc. The exhibit hall was full of companies pitching hardware design and software consulting services and organizations showing off sophisticated calculations and visualizations that they weren’t able to do before. These are great wins for NVIDIA – they have found an additional driver of demand for their products beyond high-end gaming. But, this makeup of attendees should be alarming to NVIDIA – this means that the applications for the technology so far are fundamentally niche-y, not mainstream. This isn’t to say they aren’t valuable (clearly many financial firms are willing to pay almost anything for a little bit more quantitative power to do better trades), but the real explosive potential, in my mind, is the promise of having “supercomputers inside every graphics chip” – that’s a deep democratization of computing power that is not realized if the main users are only at the highest end of financial services and research, and I think NVIDIA needs to help the ecosystem find ways to get there if they want to turn their leadership position in alternative uses of the GPU into a meaningful and differentiated business driver.
    • NVIDIA made a big, risky bet on enabling virtualization technology. In his keynote, NVIDIA CEO Jen-Hsun Huang announced with great fanfare (as is usually his style) that NVIDIA has made the GPU virtualizable – making it possible for multiple users to share the same graphics card over the internet. Why is this potentially a big risk? Because it means that if you want good graphics performance, you no longer have to buy an expensive graphics card for your computer – you can simply plug into a graphics card that’s hosted somewhere else on the internet, whether for gaming (using a service like Gaikai or OnLive), for virtual desktops (where all of the hard work is done by a server and you’re just seeing the screen image, much like you would watch a video on Netflix or YouTube), or for remote rendering services (if you work in digital movie editing). So why do it? I think NVIDIA likely sees a large opportunity in selling graphics chips – which have, to date, been mostly a PC thing – into servers that are now being built and teed up to do online gaming, online rendering, and virtual desktops. I think this is also motivated by the fact that the most mainstream and novel uses of GPU technology have been about putting GPU power into “the cloud” (hosted somewhere on the internet). Gaikai wants to use this for gaming, Elemental wants to use this to help deliver videos to internet video viewers, and rendering farms want to use this so that movie studios don’t need to buy high-end workstations for all their editing/special effects people.
    • NVIDIA wants to be more than graphics-only. At the conference, three things jumped out at me as not being quite congruent with the rest of the conference. The first was that there were quite a few booths showing off people using Android tablets powered by NVIDIA’s Tegra chips to play high-end games. Second, NVIDIA proudly showed off one of those new Tesla cars with its graphical, touchscreen-driven user interface inside (also powered by NVIDIA’s Tegra chips).
      Third – and this was kind of hidden away in a random booth – a company called SECO that builds development boards showed off a nifty board combining NVIDIA’s Tegra chips with its high-end graphics cards to build something they call the CARMA Kit: a low-power, high-performance computing beast.
      While NVIDIA has talked before about its plans with “Project Denver” to build a chip that can break Intel’s hold on computer CPUs, these moves show they’re trying to turn that vision into reality: instead of just being the graphics card inside a game console, they’re powering tablets which can play games, they’re making the processor that runs the operating system for a car, and they’re finding ways to take their less powerful Tegra processor and pair it up with a little GPU-supercomputer action.

If it’s not apparent, I had a blast and look forward to seeing more from the ecosystem!


Qualcomm Trying to Up its PR with Snapdragon Stadium

I’m very partial towards “enabling technologies” – the underlying technology that makes stuff tick. That’s one reason I’m so interested in semiconductors: much of the technology we see today has its origins in something that a chip or semiconductor product enabled. But, despite the key role they (and other enabling technologies) play in creating the products that we know and love, most people have no idea what “chips” or “semiconductors” are.

Part of that ignorance is deliberate – chip companies exist to help electronics/product companies, not steal the spotlight. The only exception to that rule that I can think of is Intel, which has spent a fair amount over the years on its “Intel Inside” branding and the numerous Intel Inside commercials that have popped up.

While NVIDIA has been good at generating buzz amongst enthusiasts, I would maintain that no other semiconductor company has quite succeeded at matching Intel in terms of getting public brand awareness – an awareness that probably has helped Intel command a higher price point because the public thinks (whether wrongly or rightly) that computers with “Intel inside” are better.

Well, Qualcomm looks like it wants to upset that. Qualcomm makes chips that go into mobile phones and tablets and has benefited greatly from the rise in smartphones and tablets over the past few years, getting to the point where some might say it has a shot at being a real rival for Intel in terms of importance and reach. But for years, the most your typical non-techy person might have heard about the company is that it holds the naming rights to San Diego’s Qualcomm Stadium – home of the San Diego Chargers and former home of the San Diego Padres.

On December 16th, in what is probably a very interesting test of whether it can boost consumer awareness of the Snapdragon product line it’s aiming at the next generation of mobile phones and tablets, Qualcomm announced it will rename Qualcomm Stadium to Snapdragon Stadium for 10 days (coinciding with the San Diego County Credit Union Poinsettia Bowl and the Bridgepoint Education Holiday Bowl) – check out the pictures from the Qualcomm blog below!

[Photos: Snapdragon Stadium signage, from the Qualcomm blog]

Will this work? Well, if the goal is to get millions of people to buy phones with Snapdragon chips inside overnight, the answer is probably no. Running this sort of rebranding for only 10 days, for games that aren’t the Super Bowl, just won’t deliver that kind of PR boost. But as a test of whether consumer branding efforts raise consumer awareness about the chips that power their phones – and potentially demand for “those Snapdragon whatchamacallits” in particular? This might be just what the doctor ordered.

I, for one, am hopeful that it does work – I’m a sucker for seeing enabling technologies and the companies behind them like Qualcomm and Intel get the credit they deserve for making our devices work better, and, frankly, having more people talk about the chips in their phones/tablets will push device manufacturers and chip companies to innovate faster.

(Image credit: Qualcomm blog)


The Marketing Glory of NVIDIA’s Codenames

This is an old tidbit, but nevertheless a good one that has (somehow) never made it to my blog. I’ve mentioned before the private equity consulting world’s penchant for silly project names, but while code names are not rare in the corporate world, more often than not, the names tend to be unimaginative. NVIDIA’s code names, however, are pure marketing glory.

Take NVIDIA’s high performance computing product roadmap (below) – these are products that use the graphics processing capabilities of NVIDIA’s high-end GPUs and turn them into smaller, cheaper, and more power-efficient supercomputing engines which scientists and researchers can use to crunch numbers (check out entries from the Bench Press blog for an idea of what researchers have been able to do with them). How does NVIDIA describe its future roadmap? It uses the names of famous scientists: Tesla (the great electrical engineer who helped bring us AC power), Fermi (the father of the nuclear reactor), Kepler (one of the first astronomers to apply physics to astronomy), and Maxwell (the physicist who helped show that electrical, magnetic, and optical phenomena are all linked).

[Image: NVIDIA CUDA GPU roadmap]

Who wouldn’t want to do some “high power” research (pun intended) with Maxwell? 🙂

But, what really takes the cake for me are the codenames NVIDIA uses for its smartphone/tablet chips: its Tegra line of products. Instead of scientists, it uses, well, comic book characters (now you know why I love them, right?) :-). For release at the end of this year? Kal-El, or for the uninitiated, that’s Superman’s Kryptonian name. After that? Wayne, as in the alter ego of Batman. Then Logan, as in the X-Men’s Wolverine. And then Stark, as in the alter ego of Iron Man.

[Image: NVIDIA Tegra roadmap]

Everybody wants a little Iron Man in their tablet :-).

And, now I know what I’ll name my future secret projects!

(Image credit – CUDA GPU Roadmap) (Image credit – Tegra Roadmap)


Keep your enemies closer

One of the most interesting things about technology strategy is that the lines of competition between different businesses are always blurry. Don’t believe me? Ask yourself this: would anyone 10 years ago have predicted that:

I’m betting not too many people saw these coming. Well, a short while ago, the New York Times Tech Blog decided to chart some of this out, highlighting how the boundaries between some of the big tech giants out there (Google, Microsoft, Apple, and Yahoo) are blurring:

[Chart: New York Times Tech Blog visualization of the overlapping businesses of Google, Microsoft, Apple, and Yahoo]

It’s an oversimplification of the complexity and the economics of each of these business moves, but it’s still a very useful depiction of how tech companies wage war: they keep their enemies so close that they eventually imitate their business models.

(Chart credit)


Innovator’s Business Model

A few weeks back, I wrote a quick overview of Clayton Christensen’s explanation for how new technologies/products can “disrupt” existing products and technologies. In a nutshell, Christensen explains that new “disruptive innovations” succeed not because they win in a head-to-head comparison with existing products (e.g. laptops versus desktops), but because they have three things:

  1. Good enough performance in one area for a certain segment of users (e.g. laptops were generally good enough to run simple productivity applications)
  2. Very strong performance on an unrelated feature which eventually becomes very important for more than one small niche (e.g. laptops were portable, desktops were not, and that became very important as consumers everywhere started demanding laptops)
  3. The potential to improve by riding the industry learning curve to the point where they can compete head-to-head with an existing product (e.g. laptops can now be as fast as, if not faster than, most desktops)

But, while most people think of Christensen’s findings as applied to product and technology shifts, this model of how innovations overtake one another can be just as easily applied to business models.

A great example of this lies in the semiconductor industry. For years, the dominant business model for semiconductor companies was the Integrated Device Manufacturer (IDM) model – a business model whereby semiconductor companies both designed and manufactured their own products. The primary benefit of this was tighter integration of design and manufacturing. Semiconductor manufacturing is highly sophisticated, requiring all sorts of specialized processes, chemicals, and equipment, and there is a great deal of interplay between a company’s designs and its manufacturing process. Having both design and manufacturing under one roof allowed IDMs to create better products more quickly, as they were able to exploit that interplay and more readily correct problems as they arose. IDMs were also able to tweak their manufacturing processes to push specific features, letting IDMs differentiate their products from their peers’.

But, a new semiconductor model emerged in the early 1990s – the fabless model. Unlike IDMs, fabless companies don’t own their own semiconductor factories (called fabs – hence the name “fabless”) and outsource their manufacturing to either IDMs with spare manufacturing capacity or dedicated contract manufacturers called foundries (the two largest of which are based in Taiwan).

At first, the industry scoffed at the fabless model. After all, these companies could not tightly link their designs to manufacturing, had to rely on the spare capacity of IDMs (who would readily take it away if they needed it) or on foundries in Taiwan, China, and Singapore which lagged the leading IDMs in manufacturing capability by several years.

But, the key to Christensen’s disruptive innovation model is not that the “new” is necessarily better than the “old,” but that it is good enough on one dimension and great on other, more important dimensions. So, while fabless companies were at first unable to keep up in terms of bleeding edge manufacturing technology with the dominant IDMs, the fabless model had a significant cost advantage (due to fabless companies not needing to build and operate expensive fabs) and strategic advantage, as their management could focus their resources and attention on building the best designs rather than also worrying about running a smooth manufacturing setup.

The result? Fabless companies like Xilinx, NVIDIA, Qualcomm, and Broadcom took the semiconductor industry by storm, growing rapidly and bringing their allies, the foundries, along with them to achieve technological parity with the leading IDMs. This model has been so successful that, today, much of the semiconductor space is either fabless or pursuing a fab-lite model (where they outsource significant volumes to foundries, while holding on to a few fabs only for certain products), and TSMC, the world’s largest foundry, is considered to be on par in manufacturing technology with the last few leading IDMs (i.e. Intel and Samsung). This gap has been closed so impressively, in fact, that former IDM-technology leaders like Texas Instruments and Fujitsu have now decided to rely on TSMC for their most advanced manufacturing technology.

To use Christensen’s logic: the fabless model was “good enough” on manufacturing technology for a niche of semiconductor companies, but great in terms of cost. This cost advantage helped the fabless companies and their allies, the foundries, to quickly move up the learning curve and advance in technological capability to the point where they disrupted the old IDM business model.

This type of disruptive business model innovation is not limited to the semiconductor industry. A couple of weeks ago, The Economist ran a great series of articles on the mobile phone “ecosystem” in emerging markets. The entire time I was reading it, I was struck by the numerous ways in which the rise of the mobile phone in emerging markets was creating disruptive business models. One in particular caught my eye as something very similar to the fabless semiconductor story: the so-called “Indian model” of managing a mobile phone network.

Traditional Western/Japanese mobile phone carriers like AT&T and Verizon set up very expensive networks using equipment that they purchase from telecommunications equipment providers like Nokia-Siemens, Alcatel-Lucent, and Ericsson. (In theory,) the carriers are able to invest heavily in their own networks to roll out new services and new coverage because they own their own networks and because they are able to charge customers, on average, ~$50/month. These investments (in theory) produce better networks and services which reinforce their ability to charge premium dollar on a per customer basis.

In emerging markets, this is much harder to pull off, since customers don’t have enough money to pay $50/month. The “Indian model”, which began in emerging countries like India, is a way for carriers in low-cost countries to adapt to the cost constraints imposed by customers’ inability to pay high $50/month bills, and is generally thought to consist of two pieces. The first involves having multiple carriers share large swaths of network infrastructure, something which many Western carriers shied away from due to intellectual property fears and questions of who would pay for maintenance/traffic/etc. The second is to outsource network management to equipment providers (Ericsson helped to pioneer this model, in much the same way that the foundries helped the first fabless companies take off) — again, something traditional carriers shied away from given the lack of control a firm would have over its own infrastructure and services.

Just as in the fabless semiconductor company case, this low-cost network management business model has many risks, but it has enabled carriers in India, Africa, and Latin America to focus on getting and retaining customers, rather than building expensive networks. The result? We’re starting to see some Western carriers adopt “Indian model” style innovations. One of the most prominent examples of this is Sprint’s deal to outsource its day-to-day network operations to Ericsson! Is this a sign that the “Indian model” might disrupt the traditional carrier model? Only time will tell, but I wouldn’t be surprised.

(Image credit) (Image credit – Foundry market share) (Image credit – mobile users via Economist)


HotChips 101

This post is almost a week overdue thanks to a hectic work week. In any event, I spent last Monday and Tuesday immersed in the high performance chip world at the 2009 HotChips conference.

Now, full disclosure: I am not an electrical engineer, nor was I even formally trained in computer science. At best, I can “understand” a technical presentation in a manner akin to how my high school biology teacher explained his “understanding” of the Chinese language: “I know enough to get in trouble.”
But despite all of that, I was given a rare look at a world that few non-engineers ever get to see, and yet it is one which has a dramatic impact on the technology sector given the importance of these cutting-edge chip technologies in computers, mobile phones, and consumer electronics.

And here’s my business-strategy-minded, non-expert enthusiast view of the six big highlights I took away from the conference – the ones that best inform technology strategy:

  1. We are 5-10 years behind on the software development technology needed to truly get performance out of our new chips. Over the last decade, computer chip companies discovered that simply ramping up clock speeds (the Megahertz/Gigahertz number that everyone talks about when describing how fast a chip is) was not going to cut it as a way of improving computer performance (because of power consumption and heat issues). As a result, instead of making the cores (the processing engines) on a chip faster, chip companies like Intel resorted to adding more cores to each chip. The problem with this approach is that performance becomes highly dependent on software developers being able to create software which can figure out how to separate tasks across multiple cores and share resources effectively between them – something which is “one of the hardest if not the hardest systems challenge that we as an industry have ever faced” (courtesy of UC Berkeley professor Dave Patterson); see the threading sketch after this list for a taste of even the easy case. The result? Chip designers like Intel may innovate to the moon, but unless software techniques catch up, we won’t get to see any of that. Is it any wonder, then, that Intel bought multi-core software technology company RapidMind, or that other chip designers like IBM and Sun are so heavily committed to creating software products to help developers make use of their chips? (Note: the image credited below is an Apple ad of an Intel bunny suit smoked by the PowerPC chip technology Apple used to use)
  2. Computer performance may become more dependent on chip accelerator technologies. The traditional performance “engine” of a computer was the CPU, a product which has made the likes of Intel and IBM fabulously wealthy. But the CPU is a general-purpose “engine” – a jack of all trades, but a master of none. In response to this, companies like NVIDIA, led by HotChips keynote speaker Jen-Hsun Huang, have begun pushing graphics chips (GPUs), traditionally used for gaming or editing movies, as specialized engines for computing power. I’ve discussed this a number of times over at the Bench Press blog, but the basic idea is that instead of using the jack-of-all-trades-and-master-of-none CPU, a system should use specialized chips to address specialized needs. Because a lot of computing power is burnt doing work that is heavy on the mathematical tasks a GPU is suited for, or the signal processing work that a digital signal processor might be better at, or the cryptography work that a cryptography accelerator is better suited for, this opens the doorway to the use of other chip technologies in our computers. NVIDIA’s GPU push is one of the most mature, as they’ve spent a number of years developing a platform they call CUDA (see the CUDA sketch after this list), but there was definitely a clear message: as the performance that we care about becomes more and more specialized (like graphics or number crunching or security), special chip accelerators will become more and more important.
  3. Designing high-speed chips is now less and less about “chip speed” and more and more about memory and input/output. An interesting blog post by Gustavo Duarte highlighted something very fascinating to me: your CPU spends most of its time waiting for things to do. So much time, in fact, that the best way to speed up your chip is not to speed up your processing engine, but to speed up getting tasks into your chip’s processing cores. The biological analogy to this is something called a perfect enzyme – an enzyme that works so fast that its speed is limited by how quickly it can get ahold of things to work on. As a result, every chip presentation spent ~2/3 of the time talking about managing memory (where the chip stores the instructions it will work on) and managing how quickly instructions from the outside (like from your keyboard) get to the chip’s processing cores. In fact, one of the IBM POWER7 presentations spent almost the entire time discussing the POWER7’s use and management of embedded DRAM technology to speed up how quickly tasks can get to the processing cores.
  4. Moore’s Law may no longer be as generous as it used to be. I mentioned before that one of the big “facts of life” in the technology space is the ability of the next product to be cheaper, faster, and better than the last – something I attributed to Moore’s Law (the observation that the number of transistors that can be packed onto a chip doubles roughly every two years). At HotChips, there was a fascinating panel discussing the future of Moore’s Law, mainly asking (a) will Moore’s Law continue to deliver benefits, and (b) what happens if it stops? The answers were not very uplifting. While there was a wide range of opinions on how much we’d be able to squeeze out of Moore’s Law going forward, there was broad consensus that the days of just letting Moore’s Law simultaneously lower your costs, reduce your energy bill, and increase your performance were over. The amount of money it costs to design next-generation chips has grown exponentially (one panelist cited a cost of $60 million just to start a new custom project), and the amount of money it costs to operate a semiconductor factory has skyrocketed into the billions. And, as one panelist put it, constantly riding the Moore’s Law technology wave has forced the industry to rely on “tricks” which reduced the delivery of all the benefits that Moore’s Law was typically able to bring about. The panelists warned that future chip innovations were going to be driven more and more by design and software rather than by blindly following Moore’s Law, and that unless new ways to develop chips emerged, the chip industry could find its own progress slowing.
  5. Power management is top of mind. The second keynote speaker, EA Chief Creative Officer Richard Hilleman, noted something which gave me significant pause. He said that in 2009, China would probably produce more electric cars in one year than have ever been produced in all of history. The impact on the electronics industry? It will soon be very hard to find and very expensive to buy batteries. This, coupled with the desire of consumers everywhere for longer battery life in their computers, phones, and devices, means that managing power consumption is critical for chip designers. In each presentation I watched, I saw the designers roll out a number of power management techniques – the most amusing of which was employed by IBM’s new POWER7 uber-chip. The POWER7 implements four different low-power modes (so that the system can tune its power consumption), humorously named: doze, nap, sleep, and “Rip van Winkle”.
  6. Chip designers can no longer just build “the latest and greatest”. There used to be one playbook in Silicon Valley – build what you did a year ago, but make it faster. That playbook is fast becoming irrelevant. No longer can Silicon Valley just count on people to buy bigger and faster computers to run the latest and greatest applications. Instead, people are choosing to buy cheaper computers to run Facebook and Gmail, which, while interesting and useful, no longer need the CPU or monitor with the greatest “digital horsepower.” EA’s Richard Hilleman noted that this trend is especially important in the gaming industry. Where before the industry focused on hardcore gamers who spent hours and hours building their systems and playing immersive games, today it is keen on building games with clever mechanics (e.g. a Guitar Hero or a game for the Nintendo Wii) for people with short attention spans who aren’t willing to spend hours holed up in front of their televisions. Instead of focusing on pure graphical horsepower, gaming companies today want to build games which can be social experiences (like World of Warcraft) or which can be played across many devices (like smartphones or over social networks). With stores like GameStop on the rise, gaming companies can no longer count on just selling games; they need to think about how to sell “virtual goods” (like upgrades to your character/weapons) or in-game advertising (a Coke billboard in your game?) or encourage users to subscribe. What this all means is that, to stay relevant, technology companies can no longer just gamble on their ability to make yesterday’s product faster – they have to make it better too.
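To make the software challenge in highlight #1 concrete, here’s a minimal threading sketch – my own illustration, not anything presented at HotChips – of the easy version of the multi-core problem: splitting one regular task (summing a big array) across however many CPU cores a machine reports. The hard part the panelists described is doing this for irregular, interdependent workloads, where the split and the resource sharing are nowhere near this clean.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;                 // ~16M elements of made-up data
    std::vector<std::uint32_t> data(n, 1);

    // Ask the OS how many hardware threads (cores) we have to work with.
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::uint64_t> partial(cores, 0);
    std::vector<std::thread> workers;

    // Hand each core an equal slice; each thread writes only its own partial
    // sum, so no locking is needed for this embarrassingly parallel case.
    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = t * n / cores;
            std::size_t end   = (t + 1) * n / cores;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end,
                                         std::uint64_t{0});
        });
    }
    for (auto& w : workers) w.join();

    std::uint64_t total = std::accumulate(partial.begin(), partial.end(),
                                          std::uint64_t{0});
    std::cout << "sum = " << total << " using " << cores << " threads\n";
    return 0;
}
```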
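And to make the accelerator idea in highlight #2 concrete, here’s a minimal CUDA sketch of the kind of offload CUDA enables: a SAXPY (y = a·x + y) over a million elements, with one lightweight GPU thread per array element. The array size and launch configuration are arbitrary choices for illustration – this is a toy of mine, not anything NVIDIA showed.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each GPU thread handles exactly one element of the arrays.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                         // 1M elements
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    // Allocate GPU memory and copy the inputs over to the device.
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one value (3*1 + 2 = 5).
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("y[0] = %.1f (expect 5.0)\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The point of the sketch is the shape of the work, not the specific math: thousands of identical, independent operations are exactly what a GPU’s many simple cores chew through faster and more efficiently than a general-purpose CPU.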

There was a lot more that happened at HotChips than I can describe here (and I skipped over a lot of the more techy details), but those were six of the most interesting messages that I left the conference with, and I am wondering if I can get my firm to pay for another trip next year!

Oh, and just to brag: while at HotChips, I got to check out a demo of the potential blockbuster game Batman: Arkham Asylum running with NVIDIA’s 3D Vision product! And I have to say, I’m very impressed by both – and am now very tempted by NVIDIA’s “buy a GeForce card, get Batman: Arkham Asylum free” offer.

(Image credit: Intel bunny smoked ad) (Image credit: GPU computing power) (Image Credit: brick wall) (Image – Rip Van Winkle) (Image – World of Warcraft box art)
