
Nvidia Upgrades Low-Cost Jetson AI Computer—More Power for Half the Price


Good news for AI developers and hobbyists: Nvidia just made it a lot cheaper to build AI-powered robots, drones, smart cameras and other gadgets that need a brain. The company’s new Jetson Orin Nano Super, announced Tuesday and available now, packs more processing muscle than its predecessor while costing half as much at $249.

The palm-sized computer delivers a 70% performance boost, reaching 67 trillion operations per second for AI tasks. That’s a significant jump from earlier models, especially for powering things like chatbots, computer vision, and robotics applications.

“This is a brand new Jetson Nano Super. Almost 70 trillion operations per second, 25 watts and $249,” Nvidia CEO Jensen Huang said in an official video reveal from his kitchen. “It runs everything the HGX does, it even runs LLMs.”

Memory bandwidth also got a major upgrade, increasing to 102 gigabytes per second, 50% faster than the previous generation of the Jetson. This improvement means the device can handle more complex AI models and process data from up to four cameras simultaneously.

The device comes with Nvidia’s Ampere architecture GPU and a 6-core ARM processor, allowing it to run multiple AI applications at once. That gives developers room to combine more varied capabilities, such as building compact models for robots that handle environment mapping, object recognition, and voice commands on a modest processing budget.
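As a rough illustration of that kind of workload (a sketch under assumptions, not Nvidia sample code), the snippet below polls several camera feeds from Python with OpenCV and hands each frame to a placeholder model call; the camera indices and the run_inference function are hypothetical stand-ins for whatever detector a project actually uses.

```python
# Hypothetical sketch: polling several camera feeds on an edge device.
# Camera indices and run_inference() are placeholders, not Nvidia sample code.
import cv2

CAMERA_IDS = [0, 1, 2, 3]  # up to four sensors, matching the board's camera support

def run_inference(frame):
    """Placeholder for an on-device model call (e.g., an object detector)."""
    return []

caps = [cv2.VideoCapture(i) for i in CAMERA_IDS]
try:
    while True:
        for cam_id, cap in zip(CAMERA_IDS, caps):
            ok, frame = cap.read()
            if not ok:
                continue  # skip a dropped frame rather than stalling the loop
            detections = run_inference(frame)
            print(f"camera {cam_id}: {len(detections)} detections")
finally:
    for cap in caps:
        cap.release()
```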

Existing Jetson Orin Nano owners aren’t left out in the cold, either. Nvidia is releasing software updates that improve the performance of its earlier Jetson Orin hardware.

The numbers behind Nvidia’s new Jetson Orin Nano Super tell an interesting story. With just 1,024 CUDA cores, it looks modest compared to the RTX 2060’s 1,920 cores, the RTX 3060’s 3,584, or the RTX 4060’s 3,072. But raw core count doesn’t tell the whole story.

While gaming GPUs like the RTX series guzzle between 115 and 170 watts of power, the Jetson sips a mere 7 to 25 watts. Even at its 25-watt maximum, that’s roughly a fifth of what an RTX 4060, the most efficient of the bunch, draws, and the gap widens further in the Jetson’s lower power modes.

Memory bandwidth numbers paint a similar picture. The Jetson’s 102 GB/s might look underwhelming next to the RTX cards’ 300+ GB/s, but it’s optimized specifically for AI workloads at the edge, where efficient data processing matters more than raw throughput.

That said, the real magic happens in AI performance. The device cranks out 67 TOPS (trillion operations per second) for AI tasks—a number that’s hard to compare directly with RTX cards’ TFLOPS since they measure different types of operations.

But in practical terms, the Jetson can handle tasks like running local AI chatbots, processing multiple camera feeds, and controlling robots, all at once, on a power budget that could barely spin a gaming GPU’s cooling fan. In rough terms, that puts it neck and neck with an RTX 2060 at a fraction of the cost and a fraction of the power consumption.

Its 8GB of shared memory may seem low, but it can actually make the board more capable than a standard RTX 2060 at running local AI models like Flux or Stable Diffusion: those models can throw an “out of memory” error on the desktop card’s 6GB of VRAM, or force it to offload part of the work to system RAM, which slows down inference, the AI’s “thinking” process.
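For context on that offloading trade-off, here is a minimal sketch of what it looks like on a desktop card using Hugging Face’s diffusers library, assuming diffusers, accelerate, and a CUDA build of PyTorch are installed; the model ID is just an example. The enable_model_cpu_offload call moves idle parts of the pipeline to system RAM so the whole model never has to fit in VRAM, at the cost of slower generation.

```python
# Minimal sketch of CPU offloading with Hugging Face diffusers
# (assumes diffusers, accelerate, and CUDA-capable PyTorch; model ID is an example).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Keep idle submodules in system RAM and move them to the GPU only when needed,
# trading generation speed for a smaller VRAM footprint -- the trade-off described above.
pipe.enable_model_cpu_offload()

image = pipe("a photo of a robot on a workbench").images[0]
image.save("robot.png")
```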

The Jetson Orin Nano Super also supports a range of small and large language models, including those with up to 8 billion parameters, such as Llama 3.1. Running quantized versions of these models, it generates roughly 18-20 tokens per second. That’s slow by desktop standards, but good enough for some local applications, and an improvement over the previous generation of Jetson AI hardware.
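As a hedged example of what running one of those quantized models locally can look like, here is a sketch using the llama-cpp-python bindings; the GGUF file name is a placeholder, and nothing here is specific to Nvidia’s own software stack.

```python
# Sketch: running a quantized 8B model locally with llama-cpp-python.
# The GGUF path is a placeholder; n_gpu_layers=-1 asks the library to place
# every layer on the GPU where memory allows.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-instruct-q4_k_m.gguf",  # placeholder quantized model file
    n_gpu_layers=-1,
    n_ctx=4096,
)

output = llm(
    "Summarize what an edge AI board is in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```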

Given its price and characteristics, the Jetson Orin Nano Super is primarily designed for prototyping and small-scale applications. For power users, businesses or applications requiring extensive computational resources, the device’s capabilities may feel limiting compared to higher-end systems that cost much more and require a lot more power.

Edited by Andrew Hayward
