
NVIDIA GPU Infrastructure for Running Omniverse

1. GPU Architecture Required for Omniverse


Omniverse runs on NVIDIA RTX technology.

Therefore, a GPU must have both RT Cores and Tensor Cores for features such as real-time rendering, sensor simulation, and path tracing to function properly.

Among Omniverse’s core features, the following are either impossible to run without RT Cores or suffer from severe performance degradation:

  • RTX real-time rendering
  • Path Tracing
  • RTX Lidar, RTX Camera
  • Large-scale USD scene acceleration
  • GPU PhysX–based physics simulation
  • Sensor RTX–based robotic environment simulation in general

In conclusion, Omniverse cannot deliver proper performance or functionality on GPUs without RT Cores.

2. Why AI-Only GPUs Are Not Suitable for Omniverse


AI-focused GPUs (B300, B200, H100, H200, A100, V100) do not have RT Cores.

As a result, the following issues arise:

  • Extremely low rendering performance in Omniverse
  • RTX-based sensor simulation not supported
  • Path tracing not supported
  • Real-time visual rendering is effectively impossible
  • Graphics-based visualization in Isaac Sim not feasible
  • Almost no GPU acceleration for large USD scene loading

While applications may technically “run,” the performance level is not usable in real-world projects.

These GPUs excel at AI training, but they are not suitable for digital twins or simulation workloads.

3. Why RTX-Based GPUs Are Optimized for Omniverse


RTX GPUs (L40S, RTX PRO 6000, RTX 6000 Ada, etc.) include both RT Cores and Tensor Cores, and they provide hardware-accelerated support for the following:

  • Real-time RTX ray tracing
  • RTX sensor simulation
  • Large-scale USD scene rendering
  • Deep learning–based denoising
  • GPU PhysX–based physics simulation
  • Real-time, responsive simulation in environments with AMRs, humanoids, and robot arms

In short, Omniverse, Isaac Sim, digital twins, industrial visualization, and sensor simulation workloads require RTX-class GPUs.

4. GPU Product Lineup (RT Core Presence)

| Category | GPU Model | RT Cores | Omniverse Support | Isaac Sim (Visualization/Sensors) | Isaac Lab (Headless Training) | Key Characteristics |
|---|---|---|---|---|---|---|
| AI-Only Training GPUs | B300 | None | Not supported | Not supported | Supported | Blackwell AI-only, ultra-fast LLM/RL/DL training, 270GB HBM3e (B300 Ultra) |
| | B200 | None | Not supported | Not supported | Supported | High-performance AI training, 192GB HBM3e |
| | H200 | None | Not supported | Not supported | Supported | Increased VRAM vs H100 (141GB HBM3e), optimized for AI training |
| | H100 | None | Not supported | Not supported | Supported | Large-scale AI training, 80GB HBM3 |
| | A100 | None | Not supported | Not supported | Supported | Flagship AI training GPU, 80GB HBM2e |
| | V100 | None | Not supported | Not supported | Supported | Previous-generation AI accelerator, 32GB HBM2 |
| RTX-Based GPUs | RTX PRO 6000 (Blackwell) | Yes (4th Gen) | Supported | Fully supported | Supported | Optimal for Omniverse/robot simulation, 96GB GDDR7 ECC, 24,064 CUDA Cores |
| | L40S (Ada) | Yes (3rd Gen) | Supported | Fully supported | Supported | Top-tier data center RTX for simulation/graphics, 48GB GDDR6 |
| | L40 (Ada) | Yes (3rd Gen) | Supported | Supported | Supported | Workstation/server GPU capable of Omniverse, 48GB GDDR6 |
| | RTX 6000 Ada | Yes (3rd Gen) | Supported | Supported | Supported | Enterprise workstation GPU, 48GB GDDR6 |
| | A6000 | Yes (2nd Gen) | Supported | Supported | Supported | Ampere-generation workstation flagship, 48GB GDDR6 |
| | A40 | Yes (2nd Gen) | Supported | Supported | Supported | Server-grade RTX GPU, 48GB GDDR6 |
| | A5000 | Yes (2nd Gen) | Supported | Supported | Supported | Mid-range RTX GPU capable of graphics/simulation, 24GB GDDR6 |
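
To make the table actionable, here is a minimal Python sketch (assuming PyTorch is installed) that classifies the GPU a node exposes as RTX-class or AI-only; the model lists are a simplification of the table above, not an exhaustive NVIDIA catalog.

```python
# Hedged sketch: classify the detected GPU as RTX-class (has RT Cores) or AI-only,
# based on the lineup above. The model sets are simplified, not exhaustive.
import torch

RT_CORE_MODELS = ("RTX PRO 6000", "RTX 6000 Ada", "L40S", "L40", "A6000", "A40", "A5000")
AI_ONLY_MODELS = ("B300", "B200", "H200", "H100", "A100", "V100")

def classify_gpu(device_index: int = 0) -> str:
    name = torch.cuda.get_device_name(device_index)  # e.g. "NVIDIA L40S"
    if any(model in name for model in RT_CORE_MODELS):
        return f"{name}: RTX-class -> Omniverse / Isaac Sim capable"
    if any(model in name for model in AI_ONLY_MODELS):
        return f"{name}: AI-only -> headless Isaac Lab training only"
    return f"{name}: not in the lookup table above"

if torch.cuda.is_available():
    print(classify_gpu())
```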

5. RTX PRO 6000 (Blackwell Server Edition)


RTX PRO 6000 is NVIDIA’s latest Blackwell-based professional GPU.

It is a general-purpose, high-performance GPU that can handle AI, simulation, graphics, and digital-twin workloads on a single card.

In particular, it is optimized for RTX-based simulations because it integrates both 4th Gen RT Cores and 5th Gen Tensor Cores, which are essential for Omniverse and Isaac simulations.

Key Features

  • Based on the Blackwell architecture
  • 24,064 CUDA Cores
  • 5th Gen Tensor Cores (for AI and simulation acceleration)
  • 4th Gen RT Cores (for real-time ray tracing and RTX sensor simulation)
  • 96GB ECC GDDR7 memory
  • PCIe Gen5 x16
  • Supports multi-GPU configurations and data center server environments
  • A versatile professional GPU that handles AI + graphics + simulation together

Why It Is Ideal for Omniverse/Isaac

  • RT Core–based real-time RTX rendering and RTX Lidar/Camera support
  • Accelerated rendering of large USD scenes
  • Large VRAM (96GB GDDR7) for complex digital-twin environments
  • Fast setup of robot simulations, AMR sensor reproduction, and physics-based environments
  • Optimized for mixed workloads combining graphics, AI, and simulation

6. Choosing a GPU for Robot Training (Isaac Sim & Isaac Lab)

GPU requirements for NVIDIA’s robotics tools vary significantly depending on whether rendering/sensors are involved.

1) Isaac Sim (Visualization + RTX Sensors + Physics)


Required capabilities:

  • GPU-accelerated PhysX
  • RTX-based sensor simulation (RTX Lidar, RTX Camera, etc.)
  • Real-time rendering (RTX visual computing)

Required GPUs:

  • RTX PRO 6000
  • L40S
  • RTX 6000 Ada
  • A6000 / A40

AI-only GPUs (B200/B300/H100/H200/A100/V100) have no practical support or extremely poor efficiency for visualization, sensor simulation, and RTX-based rendering, so they are not suitable for running Isaac Sim simulations.


2) Synthetic Data Generation


Synthetic data generation replaces real-world data collection with simulation to automatically generate large amounts of labeled data for training sensor-based AI models such as camera and LiDAR perception.


NVIDIA Isaac Sim includes built-in features dedicated to this synthetic data pipeline.

Notably, synthetic data pipelines can automatically generate the following outputs without manual labeling (a minimal code sketch follows this list):

  • RGB images
  • Semantic segmentation
  • Instance segmentation
  • Bounding boxes
  • Depth
  • Point clouds
  • LiDAR returns
  • 2D/3D keypoints
  • Occlusion information
  • Multi-view datasets with material/lighting variations
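
As a concrete illustration of how these outputs are produced, the following is a minimal sketch using Isaac Sim's Replicator API (omni.replicator.core); it must be run inside Isaac Sim, and the output path, camera pose, and enabled annotators are illustrative assumptions rather than a verified recipe.

```python
# Hedged sketch: generate labeled synthetic frames with Omniverse Replicator.
# Run inside Isaac Sim (script editor or a standalone SimulationApp session).
import omni.replicator.core as rep

camera = rep.create.camera(position=(0, 0, 5))                        # illustrative camera
render_product = rep.create.render_product(camera, resolution=(1024, 1024))

# Writer that saves RGB plus several of the annotations listed above.
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir="/tmp/synthetic_dataset",        # illustrative output path
    rgb=True,
    semantic_segmentation=True,
    instance_segmentation=True,
    bounding_box_2d_tight=True,
    distance_to_camera=True,                    # depth-style output
)
writer.attach([render_product])

# Randomize the camera pose each frame and capture a fixed number of frames.
with rep.trigger.on_frame(num_frames=100):
    with camera:
        rep.modify.pose(position=rep.distribution.uniform((-2, -2, 3), (2, 2, 6)))

rep.orchestrator.run()
```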

Why Synthetic Data Is Necessary

  • Real-world data collection is difficult (cost, environmental risk, time)
  • Automated labeling
  • Ability to deliberately generate large numbers of rare edge cases
  • Unlimited generation of diverse lighting, materials, and positions in a controlled manner
  • Capability to model camera/LiDAR sensor pipelines identically to real hardware

In short, synthetic data is a core technology for robot perception training, enabling full control over data diversity, scale, and difficulty.

https://developer.nvidia.com/blog/build-synthetic-data-pipelines-to-train-smarter-robots-with-nvidia-isaac-sim

3) Isaac Lab (Headless Training — No Rendering)


Isaac Lab can run in headless (non-visual) mode for reinforcement learning and robot training.

  • RTX is not required
  • No RT Cores needed
  • You can disable sensor simulation and use only GPU PhysX

Therefore, the following AI-only GPUs can be used for Isaac Lab training:

  • B300
  • B200
  • H200
  • H100
  • A100
  • V100

In headless mode, the focus is solely on:

  • PhysX
  • Parallel simulation
  • Policy training
  • Model updates

In this context, AI accelerator GPUs can actually be more efficient.
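
As a minimal sketch of what headless operation looks like in code, the snippet below starts an Isaac Lab style application without a renderer; note that the import path and arguments vary between Isaac Lab releases (older versions use omni.isaac.lab.app), so treat this as illustrative rather than a verified recipe.

```python
# Hedged sketch: start Isaac Lab headlessly (no viewport, no RTX renderer).
# Import path and arguments differ across Isaac Lab releases; illustrative only.
from isaaclab.app import AppLauncher

app_launcher = AppLauncher(headless=True)  # rendering disabled; GPU PhysX still runs
simulation_app = app_launcher.app

# ... create vectorized environments and run the RL training loop here ...

simulation_app.close()
```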

Platform considerations: JAX-based SKRL training in Isaac Lab runs CPU-only by default on aarch64 architectures (e.g., DGX Spark). However, if JAX is built from source, GPU support is possible (though this configuration is not yet validated within Isaac Lab).
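
A quick way to confirm which backend JAX actually sees on such a host is shown below; these are standard JAX calls with no Isaac Lab specifics assumed.

```python
# Minimal check of the JAX backend on the host.
# On aarch64 systems (e.g., DGX Spark) with the default wheel, this typically
# reports only CPU devices unless JAX was installed or built with GPU support.
import jax

print(jax.devices())          # e.g. [CpuDevice(id=0)] or [CudaDevice(id=0), ...]
print(jax.default_backend())  # 'cpu' or 'gpu'
```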

7. Architecture Strategies for Running AI Training and Omniverse Together

Because Omniverse-based digital twins/sensor simulations and large-scale AI training rely on different GPU architectures, it is difficult to cover all workloads with a single GPU.

Depending on purpose, scale, and budget, the following three configurations are commonly used.


1) Small-Scale Testing and Development Phase

“Single RTX GPU Workstation Setup”

An RTX GPU (e.g., RTX PRO 6000, RTX 6000 Ada) can handle all of the following on a single workstation:

  • Running Omniverse / Isaac Sim
  • RTX sensor simulations such as RTX Lidar and RTX Camera
  • Robot behavior visualization
  • Light to moderate AI training or inference (small to mid-sized models)

Characteristics:

  • Ideal for testing, PoC, and early-stage development
  • Easy to validate both digital twins and AI models on the same machine
  • Lower overall hardware cost
  • Sufficient when rendering/visualization–centric workloads dominate

Limitations:

  • Highly intensive AI workloads such as LLM training, large-scale RL, or training with thousands of parallel environments run inefficiently
  • Even with ample VRAM, training performance falls significantly behind that of AI-only GPUs

Recommended GPUs for this configuration:

  • RTX PRO 6000
  • RTX 6000 Ada
  • L40S
  • A6000

Conclusion:

A general-purpose development workstation suitable for “Omniverse + mid-scale AI workloads.”


2) Medium–Large-Scale Robot/RL (AI) Training + Simulation

“Hybrid Server with RTX GPU + AI GPU”

This is the most widely used architecture in real research labs and companies.

Workload separation:

  • RTX GPU → Omniverse / Isaac Sim / sensor simulation
  • AI GPU → Isaac Lab reinforcement learning / LLM / deep learning training

Advantages:

  • Omniverse RTX visualization and Isaac Lab headless parallel training each run at peak performance on a dedicated GPU
  • AI GPUs (B300/B200/H100/H200/A100) maximize training throughput and parallelism
  • RTX GPUs handle graphics/sensor/physics simulations, reducing delays and contention

Example configuration:

  • GPU0: RTX PRO 6000 → Omniverse/simulation
  • GPU1–8: H100 or B200 → RL/control/AI model training
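
In practice, this split can be enforced per process with the standard CUDA_VISIBLE_DEVICES environment variable, as in the minimal sketch below; the two launch scripts named here are hypothetical placeholders, not real NVIDIA commands.

```python
# Hedged sketch: pin the simulator and the trainer to separate GPUs using
# CUDA_VISIBLE_DEVICES. The launch scripts below are hypothetical placeholders.
import os
import subprocess

sim_env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")                  # GPU0: RTX PRO 6000 -> Omniverse / Isaac Sim
train_env = dict(os.environ, CUDA_VISIBLE_DEVICES="1,2,3,4,5,6,7,8")  # GPU1-8: H100 or B200 -> RL / AI training

sim = subprocess.Popen(["./launch_isaac_sim.sh"], env=sim_env)                 # hypothetical script
trainer = subprocess.Popen(["./launch_headless_training.sh"], env=train_env)  # hypothetical script

trainer.wait()   # wait for training to finish
sim.terminate()  # then shut down the simulator
```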

Conclusion:

The most balanced architecture that guarantees both AI training speed and Omniverse graphics performance.


3) Large-Scale Production Deployment

“Dual-Server Architecture (Dedicated Servers)”

In this setup, environments are fully separated.

  1. Omniverse-Dedicated Server (RTX GPUs)
    • RTX-based sensors
    • Large-scale USD rendering
    • Visualization of AMRs, humanoids, robot arms, etc.
    • Real-time twin viewers
  2. AI-Training-Dedicated Server (AI GPUs)
    • Isaac Lab headless parallel training
    • Policy learning (PPO, SAC, etc.)
    • LLM and large-scale perception model training

Advantages:

  • No interference between workloads (simulation and training never contend for the same GPU)
  • Maximum scalability
  • Stable for enterprise production and long-term projects
  • Low-latency digital twins
  • Maximum parallelism for AI training

This configuration aligns with NVIDIA’s officially recommended architecture for enterprise and robotics customers.

Conclusion:

The enterprise-standard architecture for running large-scale digital twins and robot AI training together.

8. NVIDIA Brev: The Ideal Option for Pre-Purchase GPU Testing and Simulation Validation

NVIDIA Brev (Brev.dev) is a cloud GPU platform that lets you rent diverse GPU servers by the hour and start using them immediately.

Before purchasing hardware, it is extremely useful for testing which GPUs run Omniverse/Isaac Sim/Isaac Lab best in an environment similar to your production setup.

Brev is particularly strong in the following scenarios:

  • Pre-validating that RTX-based Omniverse runs smoothly
  • Testing high-end GPUs like RTX PRO 6000 or RTX 6000 Ada before purchase
  • Experimenting with Isaac Sim/Isaac Lab configurations directly in the cloud
  • Experiencing reinforcement learning speeds on AI training GPUs such as H100/A100
  • Prototyping GPU scaling and server layouts
  • Quickly benchmarking GPUs without large upfront expenses

Representative GPUs Available on Brev (as of 2025)


Available GPUs differ by provider on Brev, but in general you can access the following:

RTX-Based (Suitable for Omniverse)

  • RTX PRO 6000
  • RTX 6000 Ada
  • RTX A6000
  • L40S
  • L40

AI-Training-Focused (Suitable for Headless Isaac Lab and LLM Training)

  • A100
  • H100
  • H200 (in some regions)
  • B200/B300 (gradually expanding)

In other words, Brev provides both RTX-class and AI-class GPUs, allowing you to prototype your intended on-premise server architecture in the cloud first, with a nearly identical configuration.

