NVIDIA Introduces Jetson TX2 For Edge Machine Learning With High-Quality Customers

Expanding on their Jetson TX1 and TK1 products for embedded computing, NVIDIA announced last week their Jetson TX2 platform—a credit card-sized hardware and software platform designed to deliver AI computing at the edge. NVIDIA touts Jetson TX2 as delivering “unprecedented deep learning capabilities,” and given the form factor, the company may be right: it paves the way for a number of cutting-edge uses, from highly intelligent factory robots and commercial drones to AI-enabled cameras for smart cities. NVIDIA has been firing on all cylinders lately in datacenter machine learning, and I think this release, if it performs as promised, will solidify their place at the top of the machine learning class for certain categories of devices. NVIDIA announced the TX2 at an event I attended last week in San Francisco, alongside many tier 1 vendors and startups showing some interesting use cases.

Jetson, by design, isn’t targeted at every embedded device; it’s for those non-mobile devices that need strong deep neural network performance at a given power draw. The TX2 is a significant step up from its predecessor. Operating in its maximum performance mode, NVIDIA says it will deliver twice the performance of Jetson TX1 while using less than 15 watts of power. Running in its maximum energy efficiency mode, NVIDIA says it can achieve twice the energy efficiency of TX1 while drawing less than 7.5 watts. In short, Jetson TX2 should be able to run larger, deeper neural networks. As we have seen in the server and self-driving automotive worlds, this should pay off in the form of smarter devices capable of better accuracy and quicker response times, well suited for intensive tasks such as facial and speech recognition, image classification, navigation, and more.

A tiny powerhouse

On the technical side of things, Jetson TX2 boasts a powerful 256-core Pascal architecture-based GPU, along with a dual-core 64-bit custom NVIDIA Denver 2 CPU paired with a quad-core ARM Cortex-A57. It comes with 8GB of LPDDR4 memory (58.3 gigabytes per second of bandwidth) and 32 gigabytes of eMMC storage. It also has Bluetooth, 802.11ac WLAN connectivity, and gigabit Ethernet for networking. Jetson TX2’s 12 CSI lanes (2.5 Gbps per lane) are capable of supporting as many as 6 cameras, and it can encode and decode 4K by 2K video at 60fps. These features are wrapped up in a remarkably tiny package—50mm x 87mm, around the size of a credit card.

Jetson TX2 is also supported by JetPack 3.0, one of the most comprehensive SDKs available for embedded AI computing. NVIDIA says JetPack will simplify the integration of AI across many applications. For deep neural network work, it supports TensorRT (a neural network inference engine) and cuDNN 5.1 (a GPU-accelerated library of deep learning primitives). Additionally, it supports the latest graphics drivers and APIs, as well as CUDA 8. What I really like about what NVIDIA has done with JetPack is that the company has kept improving its performance, so the total solution actually gets better with time. Most embedded solutions are “one and done” and don’t get software performance upgrades. I have personally experienced this with the NVIDIA SHIELD tablet, where even though the hardware hasn’t changed in years, its performance has improved significantly over time.

Customer testimonials

I have been to many trade shows and have seen many demos of the “next big things” in technology. What really struck me here was the quality of the partners NVIDIA rolled out for the announcement. Sure, there were startups, but there were also industrial giants with substantial resources who likely could have stitched the technology together themselves but opted not to.

The early response from NVIDIA’s customers is looking pretty good so far. Just to name a few: FANUC, the leading supplier of robotic automation, whose robots I literally see everywhere in next-gen factories, lauds Jetson as a “powerful platform to enable AI at the edge.” Toyota’s Human Support Robots are powered by Jetson, and the company has praised the flexibility of the platform and the potential it holds for robotics. Cisco Systems is using Jetson to add facial and speech recognition features to its Spark portfolio for workplace collaboration. There has also been a good response from the realm of academia, with MIT, Stanford University, and the FIRST Robotics Competition all singing Jetson’s praises. Even the Lowe’s LoweBot from Fellow Robots was at the event to show off NVIDIA’s new technology.

Available Soon

Preorders for the Jetson TX2 have already started in the U.S. and Europe, with the Developer Kit (the carrier board and the Jetson TX2 module) going for $599; the kits will begin shipping on March 14th. NVIDIA plans to offer the module for $399 in the second quarter for bulk purchases of 1,000 or more, and the company has knocked down the price of the original Jetson TX1 Developer Kit to $499. In my opinion, this is all relatively affordable given the ML performance and efficiency the platform brings to the table.

Wrapping Up

All in all, I’d say the Jetson TX2 looks very promising, especially if it manages to outperform the TX1 as much as NVIDIA says it will. NVIDIA has a loyal customer base that puts ML at the center of what it builds, and I think many will be happy to come along for the ride with this new, more powerful addition to the portfolio. I agree that the added performance and efficiency it brings to the table could, as NVIDIA proclaims, unlock “a new class of intelligent machines,” and I’m intrigued to see what advancements in deep learning come as a result. I will continue to monitor the rollout, but given NVIDIA’s current track record, I think the Jetson TX2 will do well wherever non-mobile ML matters.