
Tech Blog

Dive deep into technical topics, coding tutorials, and cutting-edge technology insights.

Building an Omniverse Extension with Omniverse Code and Publishing to GitHub


NVIDIA Omniverse Code allows developers to create, test, and deploy their own Extensions. In this post, we walk through how to build a custom Extension using Omniverse Code and then publish it to GitHub for distribution. The guide covers everything from setting up the development environment to making your Extension publicly accessible through the Omniverse Extensions window.

1. Development Environment Setup

1-1. Install and Launch Omniverse Code
Use Omniverse Launcher to download and launch Omniverse Code.

1-2. Install Visual Studio Code
To modify the Extension and push it to GitHub, install VS Code: https://code.visualstudio.com

1-3. Install Git
Git is required to version-control your code and publish it to GitHub: https://git-scm.com/downloads

1-4. Create a GitHub Account
If you haven't already, sign up at GitHub to host your Extension repository: https://github.com/

1-5. Create an Extension Template
Open Omniverse Code and create a new Extension using the built-in Extension Template generator. (A rough sketch of the code this template generates appears at the end of this post.)

1-6. Set the Extension Name
Specify the root folder name and the Extension name when prompted.

1-7. Publish to GitHub
After the Extension is created, VS Code launches automatically. Open the Source Control tab in the left sidebar and click the Publish to GitHub button.

1-8. Choose Repository Visibility
You can set your repository to Private or Public. If you want it to appear in the Omniverse Extension list, it must be Public.

2. Making the Extension Appear in the Omniverse Extensions Window

2-1. Add the omniverse-kit-extension Topic
In your GitHub repository, add the topic omniverse-kit-extension. This tag tells Omniverse to include your repository in its Extension list.

2-2. Create a Release
1. From the GitHub sidebar, go to Releases → Create a new release.
2. Enter a tag (e.g., v1.0) and click Create new tag.
3. Hit Publish release.

2-3. Change the Repository to Public
Go to Settings → Danger Zone and change the visibility to Public. This is required for Omniverse to automatically detect and show the Extension.

Final Thoughts

This post covered the full process of building an Omniverse Extension, publishing it to GitHub, and making it available via the Omniverse Extensions window.

Summary of steps:
- Set up the environment (Omniverse Code, VS Code, Git, GitHub)
- Create a new Extension using the built-in template
- Push the Extension to GitHub
- Add the appropriate topic and publish a release
- Set the repository to Public for visibility

This approach lets you easily develop, share, and manage custom Extensions. It's ideal for personal projects, team collaboration, or open-source contributions to the Omniverse community.
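For reference, below is a minimal sketch of what the generated Extension module typically looks like. The module, class, and window names are illustrative placeholders, not the exact code Omniverse Code generates for you.

```python
import omni.ext
import omni.ui as ui


class MyExampleExtension(omni.ext.IExt):
    """A minimal Kit extension that opens a small window when enabled."""

    def on_startup(self, ext_id):
        # Called when the extension is enabled; ext_id identifies this extension instance.
        print(f"[my.example.extension] startup: {ext_id}")
        self._window = ui.Window("My Example Extension", width=300, height=200)
        with self._window.frame:
            with ui.VStack():
                ui.Label("Hello from my custom Extension!")

    def on_shutdown(self):
        # Called when the extension is disabled; release UI resources here.
        print("[my.example.extension] shutdown")
        if self._window:
            self._window.destroy()
            self._window = None
```

Once this folder is pushed to GitHub (step 1-7), tagged with a release, and made public, others can discover and install the Extension directly from the Omniverse Extensions window.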

March 28th, 2025

More Tech Blog Posts

Isaac Lab – Writing an Asset (Robot) Configuration


In Isaac Lab, defining a robot's configuration is a core part of creating simulation environments and running reinforcement learning tasks. This configuration is written using the ArticulationCfg class, which specifies how the robot is loaded, its initial state, and how its joints are actuated.

In this post, we'll walk through how to create a custom robot configuration in Isaac Lab using ArticulationCfg, breaking down its key components with examples. This guide is based on the official tutorial: https://isaac-sim.github.io/IsaacLab/main/source/how-to/write_articulation_cfg.html

What is ArticulationCfg?

The ArticulationCfg object in Isaac Lab is the central configuration for defining a robot as an articulated asset in the simulator. It consists of three main components:
- Spawn – how and where the robot is loaded into the scene
- Initial State – the robot's starting position, orientation, and joint angles
- Actuators – how the robot's joints are controlled (effort, position, or velocity)

Once defined, this configuration can be reused across multiple simulation environments and learning tasks.

Defining the Spawn Configuration

The spawn section defines how the robot is imported into the simulation. You can load the robot from either a USD file or a URDF file. Key parameters include:
- usd_path: path to the robot's USD file
- rigid_props: physical properties of the robot (mass, joint limits, etc.)
- articulation_props: settings such as self-collision and solver iteration counts

This section ensures that the robot's body and joints are initialized with proper physical properties in the simulator.

Setting the Initial State

The initial_state section defines the robot's position and joint configuration when it first spawns into the simulation:
- pos: world-space position [x, y, z]
- rot: optional orientation (e.g., a quaternion)
- joint_pos: a list or tensor of initial joint positions

This helps ensure the robot starts in a meaningful pose, such as a standing position or a zeroed configuration.

Defining Actuators

To control the robot, you must define actuator configurations for its joints. Isaac Lab supports:
- Implicit actuator models (built-in PD or torque control)
- Explicit actuator models (custom control strategies)

Actuators specify how target forces, positions, or velocities are applied to each joint. For reinforcement learning, it is critical that the actuators are configured to match your policy's output.

Importing and Using an Articulation Configuration

Once defined, your custom ArticulationCfg can be imported and used like this:

from omni.isaac.lab_assets import HUMANOID_CFG

Then referenced in a scene using:

humanoid: ArticulationCfg = HUMANOID_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

- replace() overrides the robot's prim path to match the current simulation scene structure
- {ENV_REGEX_NS} ensures the robot is placed within a specific environment namespace (useful for multi-environment RL)

Testing Robot Motion with run_simulator()

To validate the configuration, you can test the robot by applying random joint efforts in the simulation loop:

efforts = torch.randn_like(robot.data.joint_pos)
robot.set_joint_effort_target(efforts)

This applies random effort (torque/force) values to each joint, letting you visually confirm that the robot reacts as expected in simulation.
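Expanding on the two lines above, a minimal test loop might look like the sketch below. This is not the exact run_simulator() from the official tutorial; it assumes robot is an already-spawned Isaac Lab Articulation and sim is an initialized SimulationContext.

```python
import torch


def run_simulator(sim, robot, sim_dt: float = 0.01, num_steps: int = 500) -> None:
    """Drive the robot with random joint efforts to visually sanity-check the configuration."""
    for _ in range(num_steps):
        # Sample one random effort (torque/force) per joint and send it to the simulator.
        efforts = torch.randn_like(robot.data.joint_pos)
        robot.set_joint_effort_target(efforts)
        robot.write_data_to_sim()
        # Advance physics by one step and refresh the robot's internal buffers.
        sim.step()
        robot.update(sim_dt)
```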
Summary

In this post, we walked through how to write and use an Articulation Configuration (ArticulationCfg) in Isaac Lab to define a robot for simulation and learning. The configuration includes:
- Spawn – importing the robot into the scene and applying physical properties
- Initial State – setting the starting pose and joint values
- Actuators – defining how the robot's joints are controlled

Once configured, your robot can be instantiated in multiple environments, used in interactive scenes, and trained with reinforcement learning techniques.

Key takeaways:
- ArticulationCfg is the foundation of any robot in Isaac Lab
- Use @configclass to define reusable and readable config structures
- Always test with run_simulator() to validate joint control and motion

This knowledge lays the groundwork for building complex robot learning pipelines inside Isaac Lab.
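For reference, here is a hedged sketch of what a complete configuration might look like, combining the spawn, initial state, and actuator pieces discussed above. The USD path and joint-name patterns are placeholders, and the omni.isaac.lab module paths may differ between Isaac Lab versions.

```python
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.actuators import ImplicitActuatorCfg
from omni.isaac.lab.assets import ArticulationCfg

MY_ROBOT_CFG = ArticulationCfg(
    # Spawn: where the asset comes from and its physical properties.
    spawn=sim_utils.UsdFileCfg(
        usd_path="/path/to/my_robot.usd",  # placeholder path
        rigid_props=sim_utils.RigidBodyPropertiesCfg(max_depenetration_velocity=10.0),
        articulation_props=sim_utils.ArticulationRootPropertiesCfg(
            enabled_self_collisions=False,
            solver_position_iteration_count=4,
        ),
    ),
    # Initial state: pose and joint angles at spawn time.
    init_state=ArticulationCfg.InitialStateCfg(
        pos=(0.0, 0.0, 1.0),     # start 1 m above the ground plane
        joint_pos={".*": 0.0},   # regex pattern: zero every joint
    ),
    # Actuators: implicit PD control on all joints, gains taken from the USD asset.
    actuators={
        "all_joints": ImplicitActuatorCfg(
            joint_names_expr=[".*"],
            stiffness=None,
            damping=None,
        ),
    },
)
```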

Mar 28, 2025

Isaac Lab – Using the Interactive Scene


Isaac Lab, built on NVIDIA Isaac Sim, is a modular reinforcement learning framework that simplifies how you configure and run simulation environments. In this post, we explore how to use the InteractiveScene class in Isaac Lab to efficiently build and manage simulation environments, including floors, lighting, and robots, with minimal code.

With InteractiveScene, you no longer have to define each scene element manually. Instead, you can spawn everything automatically from config classes and even control the number of environments at runtime using CLI arguments, which is ideal for reinforcement learning experiments.

Reference: https://isaac-sim.github.io/IsaacLab/main/source/tutorials/02_scene/create_scene.html

What is InteractiveScene?

[Figure: InteractiveScene concept]

The scene.InteractiveScene class lets you spawn a complete simulation environment, including the ground plane, lighting, robots, and sensors, from configuration classes. It simplifies what would otherwise require dozens of manual steps, and it makes the scene modular, reusable, and easy to scale.

Setting the Number of Environments via CLI

To control the number of environments at execution time, add a CLI argument such as --num_envs (see the sketch below):
- type=int ensures the input is an integer
- default=2 means two environments are created if the flag is omitted
- This is very useful for parallelized training or benchmarking

Understanding the Isaac Lab Project Structure

Key directories in the Isaac Lab repository:
- source/standalone: demos, tutorials, and RL pipelines
- source/extensions: core features and configuration assets
  - omni.isaac.lab: main simulation logic
  - omni.isaac.lab_assets: robot/environment config files
  - omni.isaac.lab_tasks: task definitions for robots (e.g., locomotion, manipulation)

Core Classes and Configs

The essential classes used to configure simulation elements are:
- ArticulationCfg: movable robots or arms
- AssetBaseCfg: fixed assets such as the ground or walls
- InteractiveSceneCfg: master config for lights, robots, and the ground
- InteractiveScene: the main class that generates the actual scene
- SimulationContext: controls simulation playback (step, pause, etc.)

Isaac Lab uses the @configclass decorator to define structured configs that are validated and used at runtime.

Example: CartpoleSceneCfg

A simple scene config for a cartpole robot (see the sketch below) works as follows:
- The ground plane is created at an absolute path
- The light provides ambient illumination
- The cartpole robot is placed at a unique relative path using {ENV_REGEX_NS}

Absolute vs. Relative Prim Paths

In Omniverse USD scenes:
- Use absolute paths (e.g., /World/...) for shared, global assets such as lights or the ground
- Use relative paths (e.g., {ENV_REGEX_NS}/...) for environment-specific assets such as robots or tables

For example, if --num_envs=32, you get 32 unique environments, each with its own robot or object, when using relative prim paths.

Creating Multiple Environments with the CLI

You can run the simulation with a custom environment count like this:

./isaaclab.sh -p my_scene.py --num_envs 32

This spawns 32 environments, and each robot (or sphere) is generated independently. This pattern is essential for scaling up to parallel training environments in RL.
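Below is a condensed sketch that ties these pieces together, loosely following the official create_scene tutorial linked above. The import paths use the omni.isaac.lab naming current at the time of writing and may differ in other Isaac Lab versions; CARTPOLE_CFG is the cartpole configuration shipped with the assets extension.

```python
import argparse

from omni.isaac.lab.app import AppLauncher

# CLI: choose how many environments to spawn at runtime.
parser = argparse.ArgumentParser(description="InteractiveScene example.")
parser.add_argument("--num_envs", type=int, default=2, help="Number of environments to spawn.")
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()

# The simulator app must be launched before importing the rest of Isaac Lab.
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.lab.scene import InteractiveScene, InteractiveSceneCfg
from omni.isaac.lab.sim import SimulationContext
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets import CARTPOLE_CFG


@configclass
class CartpoleSceneCfg(InteractiveSceneCfg):
    """Ground plane and light at absolute paths, one cartpole per environment at a relative path."""

    ground = AssetBaseCfg(prim_path="/World/defaultGroundPlane", spawn=sim_utils.GroundPlaneCfg())
    dome_light = AssetBaseCfg(prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=3000.0))
    cartpole: ArticulationCfg = CARTPOLE_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")


# Build the scene for the requested number of environments and run an idle loop.
sim = SimulationContext(sim_utils.SimulationCfg(dt=0.01))
scene = InteractiveScene(CartpoleSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0))
sim.reset()

while simulation_app.is_running():
    sim.step()

simulation_app.close()
```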
Conclusion

In this post, we covered how to use Isaac Lab's InteractiveScene class to build modular simulation environments. Here's what we learned:
- InteractiveScene spawns all scene elements automatically
- Configs use @configclass to structure reusable settings
- Use absolute paths for shared assets and relative paths for per-environment assets
- CLI arguments like --num_envs allow dynamic scaling for multi-agent or multi-robot simulations

Isaac Lab provides a robust and flexible simulation backend for reinforcement learning, and these tools make it easy to manage scalable, reproducible environments.

Mar 28, 2025

Isaac Lab – Creating an Empty Simulation Scene


Isaac Lab is a reinforcement learning (RL) framework built on top of NVIDIA Isaac Sim, designed to simplify the creation and management of simulated robotics environments. In this post, we'll walk through the most basic example in Isaac Lab: an empty simulation scene.

Although simple, this example introduces essential components such as:
- CLI configuration
- SimulationCfg setup
- SimulationContext initialization
- Running a rendering + physics loop

Mastering this foundation will help you scale to more complex simulations later on.

Reference: https://isaac-sim.github.io/IsaacLab/main/source/tutorials/00_sim/create_empty.html

Using the CLI in Isaac Sim

Isaac Lab is primarily designed to run from a command-line interface (CLI) to optimize performance and scalability. The AppLauncher enables customized runtime configurations through CLI arguments. Common flags include:
- --headless: run without a GUI
- --device cuda: use a CUDA-capable GPU
- --enable_cameras: activate camera sensors
- --livestream: enable live streaming
- --verbose / --info: debug and log output
- --experience: load a predefined experience file

Handling CLI Arguments with argparse

You can use Python's argparse to handle CLI inputs and then launch the simulation app through AppLauncher (a full sketch appears at the end of this post). This initializes the Omniverse simulation app and gives you control over its behavior.

Simulation Configuration and Context

To launch a simulation, you need two key components:

1. SimulationCfg
Isaac Lab's default simulation configuration class. It defines:
- device: target device (e.g., "cpu" or "cuda:0")
- dt: physics time step (e.g., 0.01 = 100 Hz)
- physics settings such as gravity, collision rules, and friction

2. SimulationContext
This wraps the actual simulation engine and uses the config to initialize the simulation backend. You can later call methods such as sim.play(), sim.pause(), or sim.step() to control the simulation flow.

Setting the Camera View

To visualize the simulation properly, set the camera position with sim.set_camera_view():
- First argument: camera position [x, y, z]
- Second argument: look-at target position

This determines the initial viewpoint in the scene.

Resetting the Simulation

Before you start simulating, always call:

sim.reset()

This initializes the simulation timeline and avoids errors caused by uninitialized physics handles. Failing to reset may lead to crashes or invalid behavior.

Main Simulation Loop

Once everything is initialized, use a loop to run the simulation:

while simulation_app.is_running():
    sim.step()
    print("Step completed")

- sim.step() executes one physics frame
- print() provides step-by-step logs (especially useful in headless mode)

This loop is the heartbeat of your simulation.

Conclusion

This tutorial walked through the foundational steps for creating an empty simulation scene in Isaac Lab. While basic, it introduces key components that will be reused and expanded in more complex examples:
- Running Isaac Sim from the CLI with AppLauncher
- Using SimulationCfg and SimulationContext
- Setting up camera views
- Executing the simulation loop in a clean structure

Mastering this setup will prepare you for more advanced tasks such as:
- Spawning robots
- Adding articulated joints and sensors
- Integrating reinforcement learning policies
- Managing large-scale multi-agent environments
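For reference, here is a condensed, hedged version of the full script described above, following the same structure as the official create_empty tutorial. Import paths reflect the omni.isaac.lab naming and may vary across Isaac Lab versions.

```python
import argparse

from omni.isaac.lab.app import AppLauncher

# Parse CLI arguments (AppLauncher adds flags such as --headless and --device).
parser = argparse.ArgumentParser(description="Empty Isaac Lab scene.")
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()

# Launch the Omniverse app; Isaac Lab modules can only be imported afterwards.
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

from omni.isaac.lab.sim import SimulationCfg, SimulationContext

# Configure the simulation: 100 Hz physics on the requested device.
sim_cfg = SimulationCfg(dt=0.01, device=args_cli.device)
sim = SimulationContext(sim_cfg)

# Camera position [x, y, z] and the point it looks at.
sim.set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])

# Always reset before stepping to initialize physics handles.
sim.reset()

while simulation_app.is_running():
    sim.step()
    print("Step completed")

simulation_app.close()
```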

Mar 28, 2025

Isaac Lab – Running Demo Scripts and Understanding Core Keywords


Isaac Lab is a powerful reinforcement learning framework built on top of NVIDIA Isaac Sim, designed to streamline the creation, testing, and evaluation of robotic learning environments. In this post, we'll walk through how to run one of the key demo scripts provided by Isaac Lab and explain the core components of its simulation pipeline. Specifically, we'll explore how to run the Showroom Demos locally and break down the functions for environment setup, robot spawning, scene design, and randomized motion initialization.

What is the Isaac Lab Showroom?

Isaac Lab provides a collection of demo scripts known as the Showroom Demos, which showcase different robot types and behaviors in simulation. See the official Isaac Lab Showroom Demos documentation for the full list.

Available Demo Scripts

Some of the built-in demo scripts you can run include:
- quadrupeds.py – simulates quadrupedal robots
- arms.py – simulates robot arm manipulators
- hands.py – demonstrates robot hand simulations

1. Installing Isaac Lab and Running a Demo

To get started:
- Clone the Isaac Lab GitHub repository
- Navigate to the directory, e.g., /isaac/isaacLab
- Run a demo script, e.g., ./isaaclab.sh -p source/standalone/demos/quadrupeds.py

2. Script Overview: quadrupeds.py

The script uses robot configurations from omni.isaac.lab_assets, which define the physical properties and control setup of various quadrupeds. This modular setup lets you quickly test different robot types with minimal changes to the code.

3. The define_origins() Function

This function defines the initial positions of all environments (i.e., individual robot instances). It returns a tensor of shape (num_envs, 3) representing the x, y, z coordinates:
- num_cols: number of environments per row
- num_rows: computed from the total number of environments

This layout allows dozens or even hundreds of robots to be placed in a grid on a single simulation floor.

4. The design_scene() Function

This function constructs the simulation scene, including:
- A ground plane
- Light sources
- Environment-wide lighting intensity

While these elements can be added manually in Isaac Sim's UI, automating them in code ensures reproducibility across training runs.

5. Initializing Origins and Running the Simulator

The origins assign the positions of all robots used for training in one pass. The run_simulator() function starts the simulation once all components are defined. You can also configure the observation viewpoint with sim.set_camera_view() to set the camera angle.

6. Randomizing Robot Actions at Start

Each robot is spawned into the simulation automatically, and joint positions can be initialized with random noise (see the sketch at the end of this post). This helps avoid overfitting and encourages learning generalizable behaviors.

7. Changing the Randomization Range: 0.1 → 5.1

The standard deviation of the noise has a significant effect:
- 0.1: minor random deviations → stable behavior
- 5.1: large variability → more chaotic but diverse movement

Choosing an appropriate scale depends on the complexity of the learning task and your training objectives.

Conclusion

In this post, we reviewed the quadrupeds.py demo script from Isaac Lab and broke down its major components:
- Environment layout with define_origins()
- Scene creation with design_scene()
- Randomized behavior for training robustness
- Joint initialization and simulation control

Isaac Lab is a powerful tool for:
- Multi-agent simulation
- Reinforcement learning (RL) training environments
- Joint control and locomotion testing
- Camera and rendering pipeline integration

If you're planning to develop or test RL policies with Isaac Sim, Isaac Lab gives you a clean and extensible foundation.
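To make the grid layout and the noise scale concrete, here is a hedged sketch of how define_origins() and the joint randomization might be implemented. It is not the exact code from quadrupeds.py; the helper names and spacing value are illustrative, and robot is assumed to be a spawned Isaac Lab Articulation.

```python
import math

import torch


def define_origins(num_envs: int, spacing: float = 1.25) -> torch.Tensor:
    """Place environments on a grid; returns a (num_envs, 3) tensor of x, y, z origins."""
    num_cols = math.ceil(math.sqrt(num_envs))   # environments per row
    num_rows = math.ceil(num_envs / num_cols)   # rows needed for all environments
    origins = torch.zeros(num_rows * num_cols, 3)
    xx, yy = torch.meshgrid(
        torch.arange(num_rows, dtype=torch.float32),
        torch.arange(num_cols, dtype=torch.float32),
        indexing="ij",
    )
    origins[:, 0] = spacing * (xx.flatten() - (num_rows - 1) / 2)  # centered x positions
    origins[:, 1] = spacing * (yy.flatten() - (num_cols - 1) / 2)  # centered y positions
    return origins[:num_envs]


def randomize_joint_positions(robot, noise_std: float = 0.1) -> None:
    """Perturb joints around the default pose; a larger noise_std gives more chaotic starts."""
    joint_pos = robot.data.default_joint_pos + noise_std * torch.randn_like(robot.data.default_joint_pos)
    robot.write_joint_state_to_sim(joint_pos, robot.data.default_joint_vel)
```

Swapping noise_std from 0.1 to 5.1 reproduces the behavioral difference described in section 7.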

Mar 28, 2025

Copyright 2025. POLLUX All rights reserved.