
It is almost impossible to exaggerate the enthusiasm for deep-learning-based AI among most of the computer science community and big chunks of the tech industry. Talk to nearly any CS professor and you get an overwhelming sense that just about every problem can now be solved, and every task automated. One even quipped, "The only thing we need to know is which job you want us to eliminate next." Clearly there is a lot of hubris baked into these attitudes. But with the rapid advances in self-driving vehicles, warehouse robots, diagnostic assistants, and speech and facial recognition, there is certainly plenty of reason for computer scientists to get cocky.

And no one is better at being cocky than Nvidia CEO Jen-Hsun Huang. On stage, he is always something of a breathless whirlwind, and as he recapped the recent, largely Nvidia-powered advances in AI and what they portend for the future, it reminded me of a late-night infomercial, or maybe Steve Jobs revealing one more thing. In this case, though, Nvidia has a lot more than one thing up its sleeve. It is continuing to push forward with its AI-focused hardware, software, and solutions offerings, many of which were either announced or showcased at this year's GTC.

Nvidia's AI hardware lineup: Tesla P100 GPU and DGX-1 supercomputer join the M40 and M4

For anyone who still thinks of Nvidia as a consumer graphics card company, the DGX-1 should put that idea to rest. A $129,000 supercomputer with eight tightly coupled, state-of-the-art Pascal-architecture GPUs, it is nearly 10 times faster at supervised learning than Nvidia's flagship unit a year ago. For those who want something a little less cutting-edge, and a lot less expensive, Nvidia offers the M40 for high-end training and the M4 for high-performance, low-power AI runtimes.

If you want access to these high-end GPUs you'll likely also need a high-end rig, like this Zero model being shown off by Rave at Nvidia GTC 2016

Nvidia's AI developer tools: ComputeWorks, Deep Learning SDK, and cuDNN 5

With cuDNN 5 and a Tesla GPU, Recurrent Neural Networks can run up to 6 times as fast

Nvidia has supported AI, and especially neural net, developers for a while with its Deep Learning SDK. At GTC Nvidia announced version 5 of its neural network libraries (cuDNN). In addition to supporting the new Tesla P100 GPU, the new version promises faster performance and reduced memory usage. It also adds support for Recurrent Neural Networks (RNNs), which are particularly useful for applications that work with time series data, like audio and video signals (speech recognition, for example).
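cuDNN itself exposes a C API, so nothing below is Nvidia's actual interface; but as a rough illustration of the kind of recurrence it now accelerates, here is a minimal NumPy sketch of a vanilla RNN forward pass over a time series (all names and dimensions are made up):

```python
import numpy as np

# Minimal vanilla-RNN forward pass over a time series. This illustrates
# the recurrence cuDNN 5 accelerates; it is not cuDNN's actual API, and
# all names and sizes here are illustrative.
input_size, hidden_size, seq_len = 64, 128, 100

W_xh = np.random.randn(hidden_size, input_size) * 0.01   # input-to-hidden weights
W_hh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

x = np.random.randn(seq_len, input_size)  # e.g. a sequence of audio frames
h = np.zeros(hidden_size)                 # initial hidden state

for t in range(seq_len):
    # Each step folds the current input into a running summary of the
    # past, which is why RNNs suit speech and other sequential signals.
    h = np.tanh(W_xh @ x[t] + W_hh @ h + b_h)
```

That step-to-step dependence is what makes RNNs costly to run, and it is where Nvidia claims its up-to-6x speedup.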

CuDNN isn't a competitor to the large neural net developer tools. Instead, it serves as a base layer for accelerated implementations of popular tools like Google TensorFlow, UC Berkeley's Caffe, University of Montreal's Theano, and NYU's Torch. However, Nvidia does have its own neural net runtime offering, the Nvidia GPU Inference Engine (GIE). Nvidia claims over 20 images per second, per watt for GIE running on either a Tesla M4 or Jetson TX1. CuDNN 5, GIE, and the updated Deep Learning SDK are all being made available as part of an update to Nvidia's ComputeWorks.
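That per-watt figure is easier to interpret as raw throughput. A quick back-of-the-envelope conversion, using approximate public board-power figures rather than anything from the talk:

```python
# Rough throughput implied by Nvidia's "20 images/sec per watt" GIE claim.
# The wattages below are approximate public TDP figures, not numbers from
# the article.
claimed_rate_per_watt = 20  # images per second per watt

for device, watts in [("Tesla M4 (TDP ~50-75 W)", 50), ("Jetson TX1 (~10 W)", 10)]:
    print(f"{device}: ~{claimed_rate_per_watt * watts} images/sec")
```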

TensorFlow in particular got a big shout-out from Huang during his keynote. He applauded that it was open source (as several of the other tools are) and was helping "democratize AI." Because the source is accessible, Nvidia was able to adapt a version for the DGX-1, which he and Google's TensorFlow lead Rajat Monga showed running (well, showed a monitor session logged into a server someplace that was running it).
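For readers who haven't touched it, TensorFlow in that era built a computation graph first and then executed it in a session. A minimal sketch using the 1.x API of the time (this is generic TensorFlow, not Nvidia's DGX-1 port, which isn't public):

```python
# Minimal TensorFlow example in the graph-and-session style of the
# 2016-era 1.x API.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(a, b)  # builds the graph; nothing runs yet

with tf.Session() as sess:  # execution happens here, on a GPU if present
    print(sess.run(product))
```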

The always-fascinating poster session in the GTC entrance hall featured literally dozens of different research efforts based on using Nvidia GPUs and one of these deep-learning engines to crack some major scientific problem. Even the winner of the ever-popular Early Stage Companies contest was a deep-learning application: Startup Sadako is teaching a robot to identify and sort recyclable items in a waste stream using a learning network. Another crowd favorite at the event, BriSky, is a drone company, but relies on deep learning to program its drones to automatically perform complex tasks such as inspections and monitoring.

JetPack lets you build things that use all that great AI

MIT's sidewalk-friendly personal transport vehicle at Nvidia GTC 2016

Programming a problem-solving neural network is one thing, but for many applications the final product is a physical vehicle, machine, or robot. Nvidia's JetPack SDK, the power behind the Jetson TX1 developer kit, provides not only an Ubuntu-hosted development toolchain, but also libraries for integrating computer vision (Nvidia VisionWorks and OpenCV4Tegra), as well as Nvidia GameWorks, cuDNN, and CUDA. Nvidia itself was showcasing some of the cool projects that the combination of the JetPack SDK and Jetson TX1 developer kit have made possible, including an autonomous scaled-down race car and an autonomous (full-size) three-wheeled personal transport vehicle, both based on work done at MIT.
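To give a flavor of what those vision libraries are for: a Jetson-based robot typically grabs camera frames and pushes them through an accelerated image-processing pipeline. A minimal generic OpenCV sketch of that loop (not Nvidia sample code; the camera index and edge thresholds are placeholders):

```python
# Minimal camera-to-vision loop of the sort OpenCV4Tegra accelerates on
# the Jetson TX1. Camera index and thresholds are illustrative.
import cv2

cap = cv2.VideoCapture(0)  # onboard or USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # simple edge map a robot might use
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```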

How neural networks and GPUs are pushing the boundaries of what computers can do

Huang also pointed to other current examples of how deep learning, made possible by advances in algorithms and increasingly powerful GPUs, is changing our perception of what computers can do. Berkeley's Brett robot, for instance, can learn tasks like putting clothes away, assembling a model, or screwing a cap on a water bottle by simple trial and error, without explicit programming. Similarly, Microsoft's image recognition system has achieved much higher accuracy than the human benchmark that was the gold standard until as recently as last year. And of course, AlphaGo's mastery of one of the most mathematically complex board games has generated quite a bit of publicity, even among people who don't typically follow AI or play Go.
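Brett's "trial and error" is reinforcement learning: try actions, observe rewards, and update estimates of which actions pay off. Berkeley's actual system uses deep networks, but the core loop can be sketched with a toy tabular Q-learner (everything here is illustrative, not Berkeley's code):

```python
# Toy tabular Q-learning: the trial-and-error idea behind robots like
# Brett, reduced to a five-state corridor where reaching the end pays off.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    # Toy environment: action 1 moves right, action 0 stays put.
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore sometimes; otherwise exploit what has been learned.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state
```

After enough episodes the agent reliably moves right without ever being given explicit instructions, which is the same principle, scaled way down.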

Has Nvidia really created a super-human? It thinks so

In line with its chin-out approach to new technologies, massive banners all over the GTC proclaimed that Nvidia's AI software learned to be a better driver than a human in "hours." I assume they are referring to the 3,000 miles of training that Nvidia's DAVENET neural network received before it was used to create the demo video we were shown. The statement reeks of hyperbole, of course, since we didn't see DAVENET do anything especially exciting, or avoid any truly dangerous situations, or display any particular gift. But it was shown navigating a variety of on- and off-road routes. If it was truly trained to do that by letting it drive 3,000 miles (over the course of six months, according to the video), that is an amazing accomplishment. I'm sure it is only a taste of things to come, and Nvidia plans to be at the heart of them.
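Nvidia didn't publish DAVENET's architecture at the show, but what it describes is a supervised setup: a convolutional network that maps camera frames to steering angles, trained on logged human driving. A heavily hedged Keras sketch of that general idea (every layer size and name below is a guess, not DAVENET):

```python
# Rough sketch of learning a steering angle from camera frames, the kind
# of supervised training the demo implies. Architecture is illustrative.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(24, 5, strides=2, activation="relu",
                        input_shape=(66, 200, 3)),  # front-camera frames
    keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(1),  # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")

# Stand-in data; the real system trained on ~3,000 miles of human driving.
frames = np.random.rand(32, 66, 200, 3).astype("float32")
angles = np.random.rand(32, 1).astype("float32")
model.fit(frames, angles, epochs=1, verbose=0)
```

The notable part isn't the architecture, it's the training signal: hours of a human driving stand in for any hand-written rules about lanes or obstacles.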