Autonomous machines use AI to solve some of the world's most challenging tasks. The NVIDIA® Jetson™ platform gives you the means to develop and deploy AI-powered robots, drones, IVA applications, and other autonomous machines that think for themselves.
THE JETSON FAMILY
For AI at the edge and autonomous machines
Learn more about the Jetson family.
The NVIDIA® Jetson AGX Orin™ Developer Kit makes it easy to get started with the Jetson AGX Orin module. Its compact size, rich set of connectors, and up to 275 TOPS of AI performance make this developer kit ideal for prototyping advanced AI-powered robots and other autonomous machines.
The Jetson TX2 module packs the latest technology for AI and visual computing into a supercomputer the size of a credit card. Its small form factor and low power consumption make it ideal for intelligent edge devices such as robots, drones, smart cameras, and portable medical devices.
The Jetson TX2i features a variety of standard hardware interfaces that make it easy to integrate into a wide range of products and form factors. It is supported by the full NVIDIA JetPack SDK, which includes the board support package (BSP) along with libraries for deep learning, computer vision, GPU computing, multimedia processing, and more to accelerate your software development.
These system-on-modules support multiple concurrent AI application pipelines with an NVIDIA Ampere architecture GPU, next-generation deep learning and vision accelerators, high-speed IO, and fast memory bandwidth.
The NVIDIA Jetson Nano module opens up amazing new possibilities for edge computing. It offers accelerated compute performance of up to 472 GFLOPS, can run many advanced neural networks in parallel, and provides the power to process data from multiple high-resolution sensors, a prerequisite for full AI systems. It is also production-ready and supports all major AI frameworks.
This advanced system-on-module is powered by the NVIDIA Xavier SoC and is designed for cost-effective and performance-driven autonomous machine applications. The heterogeneous accelerated computing architecture delivers advanced computing performance and enables AI at the edge.
The NVIDIA Jetson Nano Developer Kit delivers the compute performance to run modern AI workloads at unprecedented size, power, and cost. Developers, learners, and makers can now run AI frameworks and models for applications such as image classification, object detection, segmentation, and speech processing.
The NVIDIA Jetson Orin NX 16GB module delivers up to 100 TOPS of AI performance in the smallest Jetson form factor, with configurable power between 10W and 25W. That is roughly 3x the performance of the NVIDIA Jetson AGX Xavier and 5x that of the NVIDIA Jetson Xavier NX, making it ideal for small form factor, low-power products such as drones and handheld devices.
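These multiples follow directly from the peak TOPS figures in the comparison table further down; a quick back-of-the-envelope check (purely illustrative):

```python
# Peak AI performance figures (TOPS) as listed in the Jetson comparison table.
ORIN_NX_16GB = 100
AGX_XAVIER = 32
XAVIER_NX = 21

vs_agx_xavier = ORIN_NX_16GB / AGX_XAVIER  # roughly 3x
vs_xavier_nx = ORIN_NX_16GB / XAVIER_NX    # roughly 5x

print(f"Orin NX 16GB vs. AGX Xavier: {vs_agx_xavier:.1f}x")
print(f"Orin NX 16GB vs. Xavier NX:  {vs_xavier_nx:.1f}x")
```

Note that TOPS figures are peak INT8 throughput; realized speedups depend on the workload.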
NVIDIA Jetson Xavier NX brings supercomputer performance to the edge in a compact system-on-module (SOM) smaller than a credit card. It offers new cloud-native support and accelerates the NVIDIA software stack with more than 10x the performance of its widely adopted predecessor, Jetson TX2.
NVIDIA Jetson TX2 NX is the next step up in AI performance for entry-level embedded and edge products. It offers up to 2.5x the performance of Jetson Nano and shares form factor and pin compatibility with both Jetson Nano and Jetson Xavier NX.
The NVIDIA Jetson TX2 4GB provides powerful computing performance in a small credit card form factor to run modern AI workloads. It can be configured to operate between 7.5 and 15 watts, making it ideal for various intelligent edge devices such as robots, drones, smart cameras, and wearable medical devices.
The 64GB Jetson AGX Xavier module enables autonomous AI machines that run on as little as 10W and deliver up to 32 TOPS. Customers can use the 64GB of memory to store multiple AI models, run complex applications, and improve their real-time pipelines.
NVIDIA Jetson AGX Xavier Industrial delivers the highest performance for embedded industrial AI and functional safety applications in an energy-efficient, rugged form factor. With up to 20x the performance and 4x the memory of the Jetson TX2i, this system-on-module enables customers to leverage the latest AI models for their most demanding use cases.
Jetson Xavier NX 16GB delivers up to 21 TOPS for running advanced AI workloads, consumes only 10 watts of power, and has a compact form factor smaller than a credit card. Customers can use the 16GB memory with their real-time AI pipelines to run complex applications with multiple neural networks and process data from high-resolution sensors in parallel.
Jetson Nano | Jetson TX2 Series | Jetson Xavier NX Series | Jetson AGX Xavier Series | Jetson Orin NX Series | Jetson AGX Orin Series
---|---|---|---|---|---
TX2 NX | TX2 (4GB) | TX2 | TX2i | Jetson Xavier NX (16 GB) | Jetson Xavier NX | Jetson AGX Xavier (64 GB) | Jetson AGX Xavier | Jetson AGX Xavier Industrial | Jetson Orin NX 8 GB | Jetson Orin NX 16 GB | Jetson AGX Orin 32 GB | Jetson AGX Orin 64 GB
AI Performance | 472 GFLOPS | 1.33 TFLOPS | 1.26 TFLOPS | 21 TOPS | 32 TOPS | 30 TOPS | 70 TOPS | 100 TOPS | 200 TOPS | 275 TOPS
GPU | NVIDIA Maxwell™ GPU with 128 cores | NVIDIA Pascal™ GPU with 256 cores | NVIDIA Volta™ GPU with 384 cores and 48 Tensor Cores | NVIDIA Volta™ GPU with 512 cores and 64 Tensor Cores | NVIDIA Volta™ GPU with 512 cores and 64 Tensor Cores | NVIDIA Ampere architecture GPU with 1792 cores and 56 Tensor Cores | NVIDIA Ampere architecture GPU with 2048 cores and 64 Tensor Cores
CPU | Quad-core Arm® Cortex®-A57 MPCore processor | Dual-core Denver 2 64-bit CPU and quad-core Arm Cortex-A57 MPCore processor | 6-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 6 MB L2 + 4 MB L3 | 8-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 8 MB L2 + 4 MB L3 | 8-core NVIDIA Arm® Cortex®-A78AE v8.2 64-bit CPU, 1.5 MB L2 + 4 MB L3 | 6-core NVIDIA Arm® Cortex®-A78AE v8.2 64-bit CPU, 2 MB L2 + 4 MB L3 | 8-core NVIDIA Arm® Cortex®-A78AE v8.2 64-bit CPU, 2 MB L2 + 4 MB L3 | 12-core NVIDIA Arm® Cortex®-A78AE v8.2 64-bit CPU, 3 MB L2 + 6 MB L3
DL Accelerator | - | - | 2x NVDLA | 2x NVDLA | 1x NVDLA v2 | 2x NVDLA v2 | 2x NVDLA v2
Vision Accelerator | - | - | 2x PVA | 2x PVA | 1x PVA v2 | 1x PVA v2
Safety Cluster Engine | - | - | - | - | 2x Arm Cortex-R5 in lockstep | - | - | - | -
Memory | 4 GB 64-bit LPDDR4 25.6 GB/s | 4 GB 128-bit LPDDR4 51.2 GB/s | 8 GB 128-bit LPDDR4 59.7 GB/s | 8 GB 128-bit LPDDR4 (ECC support) 51.2 GB/s | 16 GB 128-bit LPDDR4x 59.7 GB/s | 8 GB 128-bit LPDDR4x 59.7 GB/s | 64 GB 256-bit LPDDR4x 136.5 GB/s | 32 GB 256-bit LPDDR4x 136.5 GB/s | 32 GB 256-bit LPDDR4x (ECC support) 136.5 GB/s | 8 GB 128-bit LPDDR5 102.4 GB/s | 16 GB 128-bit LPDDR5 102.4 GB/s | 32 GB 256-bit LPDDR5 204.8 GB/s | 64 GB 256-bit LPDDR5 204.8 GB/s
Storage | 16 GB eMMC 5.1 | 16 GB eMMC 5.1 | 32 GB eMMC 5.1 | 32 GB eMMC 5.1 | 16 GB eMMC 5.1 | 32 GB eMMC 5.1 | 64 GB eMMC 5.1 | Supports external NVMe | 64 GB eMMC 5.1
Camera | Up to 4 cameras, 12 lanes MIPI CSI-2, D-PHY 1.1 (up to 18 Gbit/s) | Up to 5 cameras (12 via virtual channels), 12 lanes MIPI CSI-2, D-PHY 1.2 (up to 30 Gbit/s) | Up to 6 cameras (12 via virtual channels), 12 lanes MIPI CSI-2, D-PHY 1.2 (up to 30 Gbit/s) | Up to 6 cameras (24 via virtual channels), 14 lanes MIPI CSI-2, D-PHY 1.2 (up to 30 Gbit/s) | Up to 6 cameras (36 via virtual channels), 16 lanes MIPI CSI-2 + 8 lanes SLVS-EC, D-PHY 1.2 (up to 40 Gbit/s), C-PHY 1.1 (up to 62 Gbit/s) | Up to 6 cameras (36 via virtual channels), 16 lanes MIPI CSI-2, D-PHY 1.2 (up to 40 Gbit/s), C-PHY 1.1 (up to 62 Gbit/s) | Up to 4 cameras (8 via virtual channels*), 8 lanes MIPI CSI-2, D-PHY 1.2 (up to 20 Gbit/s) | Up to 6 cameras (16 via virtual channels*), 16 lanes MIPI CSI-2, D-PHY 2.1 (up to 40 Gbit/s), C-PHY 2.0 (up to 164 Gbit/s)
Video encoding | 1x 4K30 (H.265) 2x 1080p60 (H.265) | 1x 4K60 (H.265) 3x 4K30 (H.265) 4x 1080p60 (H.265) | 2x 4K60 (H.265) 10x 1080p60 (H.265) 22x 1080p30 (H.265) | 4x 4K60 (H.265) 16x 1080p60 (H.265) 32x 1080p30 (H.265) | 2x 4K60 (H.265) 12x 1080p60 (H.265) 24x 1080p30 (H.265) | 1x 4K60 (H.265) 3x 4K30 (H.265) 6x 1080p60 (H.265) 12x 1080p30 (H.265) | 1x 4K60 (H.265) 3x 4K30 (H.265) 6x 1080p60 (H.265) 12x 1080p30 (H.265) | 2x 4K60 (H.265) 4x 4K30 (H.265) 8x 1080p60 (H.265) 16x 1080p30 (H.265) | ||||||
Video decoding | 1x 4K60 (H.265) 4x 1080p60 (H.265) | 2x 4K60 (H.265) 7x 1080p60 (H.265) 14x 1080p30 (H.265) | 2x 8K30 (H.265) 6x 4K60 (H.265) 22x 1080p60 (H.265) 44x 1080p30 (H.265) | 2x 8K30 (H.265) 6 x 4K60 (H.265) 26x 1080p60 (H.265) 52x 1080p30 (H.265) | 2x 8K30 (H.265) 4x 4K60 (H.265) 18x 1080p60 (H.265) 36x 1080p30 (H.265) | 1x 8K30 (H.265) 2x 4K60 (H.265) 4x 4K30 (H.265) 9x 1080p60 (H.265) 18x 1080p30 (H.265) | 1x 8K30 (H.265) 2x 4K60 (H.265) 4x 4K30 (H.265) 9x 1080p60 (H.265) 18x 1080p30 (H.265) | 1x 8K30 (H.265) 3x 4K60 (H.265) 7x 4K30 (H.265) 11x 1080p60 (H.265) 22x 1080p30 (H.265) | ||||||
PCIe | 1 x4 (PCIe Gen2) | 1 x1 + 1 x2 (PCIe Gen2) | 1 x1 + 1 x4 or 1 x1 + 1 x1 + 1 x2 (PCIe Gen2) | 1 x1 (PCIe Gen3) + 1 x4 (PCIe Gen4) | 1 x8 + 1 x4 + 1 x2 + 2 x1 (PCIe Gen4, root port and endpoint) | 1 x4 + 3 x1 (PCIe Gen4, root port and endpoint) | Up to 2 x8 + 1 x4 + 2 x1 (PCIe Gen4, root port and endpoint)
Networking | 10/100/1000 BASE-T Ethernet | 10/100/1000 BASE-T Ethernet, WLAN | 10/100/1000 BASE-T Ethernet | 1x GbE | 1x GbE, 2x 10 GbE
Display | 2 multi-mode DP 1.2/eDP 1.4/HDMI 2.0, 1 x2 DSI (1.5 Gbit/s per lane) | 2 multi-mode DP 1.2/eDP 1.4/HDMI 2.0, 1 x2 DSI (1.5 Gbit/s per lane) | 2 multi-mode DP 1.2/eDP 1.4/HDMI 2.0, 2 x4 DSI (1.5 Gbit/s per lane) | 2 multi-mode DP 1.4/eDP 1.4/HDMI 2.0, no DSI support | 3 multi-mode DP 1.4/eDP 1.4/HDMI 2.0, no DSI support | 1x 8K60 multi-mode DP 1.4a (+MST)/eDP 1.4a/HDMI 2.1 | 1x 8K60 multi-mode DP 1.4a (+MST)/eDP 1.4a/HDMI 2.1
Power | 5 W | 10 W | 7.5 W | 15 W | 10 W | 20 W | 10 W | 15 W | 20 W | 10 W | 20 W | 30 W | 20 W | 40 W | 10 W | 15 W | 20 W | 10 W | 15 W | 25 W | 15 W | 20 W | 50 W | 15 W | 30 W | 50 W (up to 60 W max.)
Mechanical | 69.6 mm x 45 mm, 260-pin SO-DIMM connector | 69.6 mm x 45 mm, 260-pin SO-DIMM connector | 87 mm x 50 mm, 400-pin connector, integrated thermal transfer plate | 69.6 mm x 45 mm, 260-pin SO-DIMM connector | 100 mm x 87 mm, 699-pin connector, integrated thermal transfer plate | 69.6 mm x 45 mm, 260-pin SO-DIMM connector | 100 mm x 87 mm, 699-pin Molex Mirror Mezz connector, integrated thermal transfer plate
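As a worked example of reading the table above, the sketch below filters a handful of modules by a required TOPS budget and a power ceiling. The `pick_module` helper and its small spec list are hypothetical illustrations, not an NVIDIA API; the figures are taken from the table (Jetson Nano's 472 GFLOPS is an FP16 figure and not directly comparable to the INT8 TOPS numbers).

```python
# Illustrative (module, peak TOPS, minimum power-mode watts) tuples from the
# comparison table above; Nano's 472 GFLOPS is expressed as 0.472 TOPS-equivalent.
MODULES = [
    ("Jetson Nano", 0.472, 5),
    ("Jetson Xavier NX (16 GB)", 21, 10),
    ("Jetson AGX Xavier", 32, 10),
    ("Jetson Orin NX 16 GB", 100, 10),
    ("Jetson AGX Orin 64 GB", 275, 15),
]

def pick_module(min_tops, max_watts):
    """Return the first listed module meeting both budgets, or None."""
    for name, tops, watts in MODULES:
        if tops >= min_tops and watts <= max_watts:
            return name
    return None

print(pick_module(min_tops=50, max_watts=15))  # Jetson Orin NX 16 GB
```

Swapping in different budgets shows the trade-off directly: tightening `max_watts` to 5 W rules out everything except Jetson Nano.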
NVIDIA ISAAC USHERS IN A NEW ERA
OF AUTONOMOUS MACHINES
Acceleration solutions for deep learning on Jetson Orin
Designed for extremely demanding conditions
ROBUST JETSON SYSTEMS