Physical AI with reThings Jetson Orin Platform
Shape the future of robotics development - simple and accessible to everyone

Leading Models & Platforms for Embodied AI

Synthetic Data Generation with
NVIDIA Cosmos

Cosmos leverages pre-trained world foundation models, fine-tuned using autoregressive and diffusion architectures trained on 9,000 trillion tokens from diverse sources, including synthetic environments and video data. By enabling developers to transform 3D-rendered scenes into photorealistic videos, Cosmos facilitates the simulation of complex scenarios, such as humanoid robots executing advanced tasks or autonomous vehicles driving in realistic conditions. This approach creates vast, diverse datasets that accelerate robust AI training and development.

Realistic Robot Simulation in
NVIDIA Isaac Sim

Isaac Sim is a simulation platform designed to validate robotic skills in highly realistic digital environments. It leverages GPU power to simulate sensors like cameras, LiDAR, and contact sensors, generating realistic data for testing. With tools like Replicator for synthetic data, OmniGraph for environment orchestration, and Isaac Lab for reinforcement learning, Isaac Sim empowers developers to create and train robotic systems efficiently. By matching PhysX parameters to real-world dynamics, it ensures accurate training and testing, enabling confident deployment of robots in real-world applications.

Enhance Operation Precision with
ACT

Action Chunking with Transformers (ACT) is an algorithm developed by Stanford's ALOHA team to improve robot precision in complex tasks. ACT addresses a central challenge in imitation learning: small prediction errors accumulate over time, leading to significant deviations. By predicting sequences of actions (chunks) instead of single steps, ACT reduces the effective task horizon, mitigating compounding errors. This approach enables robots to execute fine-grained manipulations, such as opening containers or inserting batteries, with higher success rates.
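The compounding-error argument can be illustrated with a toy model (a sketch only, not the ACT implementation; the per-prediction noise level is an illustrative assumption, while the chunk size of 100 mirrors the paper's default):

```python
import random

def rollout_error(horizon, chunk_size, per_prediction_error=0.01, seed=0):
    """Toy model of compounding error: each policy call injects a small
    prediction error, so predicting a whole chunk of actions per call
    means fewer error injections over the same task horizon."""
    rng = random.Random(seed)
    error = 0.0
    steps = 0
    while steps < horizon:
        # One policy call covers chunk_size timesteps.
        error += abs(rng.gauss(0.0, per_prediction_error))
        steps += chunk_size
    return error

single = rollout_error(horizon=400, chunk_size=1)    # 400 policy calls
chunked = rollout_error(horizon=400, chunk_size=100) # only 4 policy calls
print(f"single-step error: {single:.3f}, chunked error: {chunked:.3f}")
```

With the same noise per prediction, the chunked rollout accumulates far less total error simply because the policy is queried 100x less often over the episode.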

Generating Robot Behaviors with
Diffusion Policy

Diffusion Policy is an approach that represents a robot's visuomotor policy as a conditional denoising diffusion process to generate behaviors. By learning the gradient of the action-distribution score, it iteratively refines actions through stochastic Langevin dynamics during inference. This method effectively handles multimodal action distributions, adapts to high-dimensional action spaces, and demonstrates impressive training stability. Benchmark tests across 15 tasks from four different robot manipulation datasets show that Diffusion Policy consistently outperforms existing state-of-the-art robot learning methods, with an average improvement of 46.9%.
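A minimal sketch of the Langevin-style refinement loop, assuming a toy 1-D action space and a hand-written score function standing in for the learned denoising network:

```python
import random

def denoise_action(score_fn, steps=50, step_size=0.1, noise_scale=0.05, seed=0):
    """Stochastic Langevin-style refinement: start from a noise sample and
    repeatedly step along the score (gradient of the log action density),
    as Diffusion Policy does at inference time."""
    rng = random.Random(seed)
    a = rng.gauss(0.0, 1.0)  # start from a random action sample
    for _ in range(steps):
        a += step_size * score_fn(a) + noise_scale * rng.gauss(0.0, 1.0)
    return a

# Toy unimodal demo: a "learned" score that points toward the expert action 0.7.
expert = 0.7
score = lambda a: -(a - expert)
refined = denoise_action(score)
print(f"refined action: {refined:.2f}")
```

In the real method the score is conditioned on visual observations and defined over whole action sequences; here a single scalar action keeps the iterative-refinement idea visible.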


Build Open-source Robots

SO-ARM101/100 Robot Arm Kit

Choose the SO-ARM101/100 robot arm kit to complete an imitation learning robotics project! Each set includes two unassembled robot arms, one leader arm and one follower arm, composed of motors, adapter boards, and cables.

3 STEPS to get hands-on with LeRobot imitation learning! Check out the full SO-ARM wiki docs for detailed guidance.

->Data Collection

Collect demonstration data from the follower arm, which mirrors the leader arm as you guide it by hand, or collect data in simulation through NVIDIA Isaac Lab
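The leader/follower recording loop can be sketched as follows; `StubArm`, `read_joints`, and `write_joints` are hypothetical placeholders for illustration, not the LeRobot API:

```python
class StubArm:
    """Hypothetical stand-in for a serial-bus servo arm (names are
    illustrative placeholders, not the LeRobot driver interface)."""
    def __init__(self):
        self.joints = [0.0] * 6  # six joint positions, as on the SO-ARM

    def read_joints(self):
        return list(self.joints)

    def write_joints(self, target):
        self.joints = list(target)

def record_episode(leader, follower, steps=100, hz=30):
    """Mirror the leader arm on the follower and log (observation, action)
    pairs -- the raw material for imitation learning."""
    episode = []
    for _ in range(steps):
        action = leader.read_joints()   # the human moves the leader arm
        follower.write_joints(action)   # the follower mirrors it
        obs = follower.read_joints()    # a real setup also logs camera frames
        episode.append({"observation": obs, "action": action})
        # time.sleep(1 / hz)  # uncomment (and import time) for real pacing
    return episode

data = record_episode(StubArm(), StubArm())
print(len(data), "frames recorded")
```

On real hardware the same loop runs at a fixed control rate, and the recorded episodes are saved in the LeRobot dataset format for training.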

->Model Training

Train ACT, Diffusion Policy, Pi0, or GR00T N1 models and choose the best-performing one

->Edge Deployment

Deploy the validated model on an NVIDIA Jetson Orin edge device and complete a pick-and-place task at the edge
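A closed-loop deployment sketch, assuming placeholder `policy`, `robot`, and `camera` objects (a real Jetson pipeline would load the trained checkpoint and read actual cameras):

```python
import time

def control_loop(policy, robot, camera, hz=30, max_steps=300):
    """Edge deployment sketch: read sensors, run the trained policy, send
    actions, all at a fixed control rate. Every callable is a placeholder."""
    period = 1.0 / hz
    for step in range(max_steps):
        start = time.monotonic()
        obs = {"image": camera(), "state": robot.read_joints()}
        action = policy(obs)           # trained ACT/Diffusion Policy/etc.
        robot.write_joints(action)
        if robot.task_done():
            return step
        # sleep off the remainder of the control period
        time.sleep(max(0.0, period - (time.monotonic() - start)))
    return max_steps

class StubRobot:
    """Toy robot that reports success after 10 actions, for demonstration."""
    def __init__(self):
        self.joints, self.count = [0.0] * 6, 0
    def read_joints(self):
        return list(self.joints)
    def write_joints(self, a):
        self.joints, self.count = list(a), self.count + 1
    def task_done(self):
        return self.count >= 10

steps = control_loop(lambda obs: obs["state"], StubRobot(),
                     camera=lambda: None, hz=1000)
print("finished in", steps, "steps")
```

Keeping the loop on a fixed period matters on the Jetson: if inference takes longer than the control period, the follower's motion becomes jerky, which is why quantized or TensorRT-optimized checkpoints are typically used at the edge.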

LeKiwi Mobile Manipulator Kit

Open-source, accessible, and versatile—LeKiwi makes real-world robotics experimentation practical and fun!

Take a further step from the SO-ARM: now you can build the robot arm onto a low-cost 3D-printed mobile base! LeKiwi is a low-cost, 3D-printed mobile manipulator powered by a Raspberry Pi 5, NVIDIA Jetson Orin, or laptop.

It’s never been easier to automate daily tasks at home or in the lab:

  • all in Python
  • omnidirectional movement, dual RGB camera input supported
  • full integration with Hugging Face’s LeRobot framework
  • able to achieve full end-to-end control with VLA (Vision-Language-Action) models
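End-to-end VLA control can be sketched as a single mapping from (camera frame, text command) to an action vector; the backbone below is a toy stand-in, not Pi0 or GR00T N1:

```python
def vla_policy(image, instruction, backbone):
    """Vision-Language-Action sketch: one multimodal backbone maps a camera
    frame plus a natural-language command directly to robot actions.
    `backbone` is a placeholder for a real VLA model."""
    tokens = instruction.lower().split()
    return backbone(image, tokens)

# Hypothetical backbone: ignores pixels and keys actions off a command word.
COMMANDS = {"pick": [0.1, 0.2, 0.0], "place": [0.0, -0.2, 0.1]}
toy_backbone = lambda img, toks: next(
    (COMMANDS[t] for t in toks if t in COMMANDS), [0.0, 0.0, 0.0])

action = vla_policy(image=None, instruction="Pick up the cube",
                    backbone=toy_backbone)
print(action)  # -> [0.1, 0.2, 0.0]
```

The point of the end-to-end formulation is that no hand-written command table exists in practice: a real VLA backbone learns the language-to-action grounding from demonstration data.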

Seeed x LeRobot Embodied AI Hackathon

Join the tour: our embodied AI hackathon series kicked off in Shenzhen, China in Dec. 2024, where we explored the Hugging Face LeRobot platform with the SO-ARM100/101 kit, a 6-axis robot arm, and went through the full experience of assembling, calibrating, and teleoperating the leader and follower arms for imitation learning projects.

Get Ready for Next Workshop!

The Seeed Embodied AI Hackathon is calling all robotics developers and enthusiasts globally! Open to all project topics.

  • Join as a contestant to build out your idea
  • Join as a ranger to help host our hackathons in your local community

Feel free to contact us at [email protected], or just pick a session that fits your schedule and register now!

reCap of Seeed Embodied AI Hackathon

K-Scale Labs is a humanoid robotics company in Palo Alto, focused on creating infrastructure that empowers anyone to build their own humanoid robots using cutting-edge machine learning models.

Seeed joined this open-source community to deliver more technical support and hardware design, bringing engineers, hackers, and innovators together to shape the future of embodied AI and make it safe, useful, and accessible. Whether it’s

  • Mechanical design
  • Hardware compatibility
  • Enriched software platforms

We are committed to democratizing tools for humanoid innovation.

2024 k-hacks0.2

The Humanoid Robot Hackathon hosted by K-Scale, Seeed Studio, Hugging Face

Join the excitement as every hacker brings robots to life! Watch them make coffee, dance, box, and even perform ALOHA teleoperation. This is a great opportunity to connect with the community, contribute to enhancing KOS, inference, simulation, and libraries, and showcase your skills.

Ready to co-create? Join K-Scale Discord community to participate in building and customizing K-scale’s latest robot models!

Featured Product

reComputer Jetson Robotics Series

Robotics Brain Reference Design for Rapid Validation and Policy Training & Deployment

  • Supports 19-54 V battery power input, alongside 6× USB, 2× CAN (one with an XT30 (2+2) connector, one with JST), 1× GMSL, 2× I2C, etc.
  • MAXN performance up to 157 TOPS to boost your application while efficiently handling 40 W heat dissipation
  • Pre-installed JetPack 6.2, Linux OS BSP ready

Powered by Orin Nano / Orin NX, with 34-157 TOPS AI performance

reComputer Mini Jetson Series

Tiny Edge AI Computer for Robotics

  • Dual-channel CAN, multi-channel Ethernet / GMSL expansion
  • 4x USB ports, expandable up to 8, designed for multi-sensor connections
  • Supports 9-60 V DC input and can connect directly to a 48 V battery supply

Powered by Orin Nano / Orin NX, with 20-100 TOPS AI performance

reServer Industrial J501 Carrier Board for Jetson AGX Orin

Build the most powerful embedded AI systems with up to 275 TOPS AI performance

Advanced Vision AI Capabilities for autonomous machines

  • Supports 8K60 and 3x4K60 video decoding
  • 16 lanes of MIPI CSI-2, with an optional GMSL extension board supporting up to 8 GMSL cameras

Extensive Connectivity

  • Gigabit Ethernet, 10G Ethernet, USB 3.1, M.2 Key E/Key B/Key M, CAN, and RS232/422/485

JetPack 6 BSP ready: build autonomous machines with the latest Jetson Platform Services.

reComputer Jetson Super Series

Supercharge your reComputer with up to a 1.7x AI performance boost

  • Delivers up to 157 TOPS AI performance while efficiently handling 40 W heat dissipation
  • Rich set of I/Os including 2x Ethernet, mini-PCIe, 4x USB, 4x CSI, CAN, M.2 Key M/E
  • Pre-installed JetPack 6.2, Linux OS BSP ready

Join our discussion

Let us know what we can help with, and let's get it rolling together!