How to manage the robot development cycle?
Partner story: Cogniteam, making the ROS development cycle manageable
Cogniteam was an early adopter of Player/Stage, the ROS predecessor. Back then, vendors were hesitant to join in the Robot Operating System buzz. Generic protocols, software packages, and visualization tools were something that each company would develop internally, again and again. Linux was considered good for academia and hackers. Back then, making a driver work would usually mean compiling your own Linux kernel, reading through some obscure forums by the light of a candle, or as my lab professor would say, “Here be dragons.” By the time you could see real image data streaming through your C++ code, your laptop graphics driver would usually stop working due to incompatible dependencies and Ubuntu would crash on boot.
More than a decade has passed since then. ROS has come into the picture, making data visualization, SLAM algorithms, and navigating robots something that anyone with some free time and a step-by-step tutorial can follow, test, and customize. Robot sensor and platform vendors themselves have embraced ROS and release Git repositories with ready-made ROS nodes — the very nodes they used to test and develop the hardware.
A shift in robotics development
Buying off-the-shelf components and building your own robot has never been easier, and that is before we even talk about simulation tools and the cloud.
Today’s ROS-based robot is typically replaced wholesale: the next robot arrives with a whole new OS and a full-blown new ROS release. But what happens to the old robot and the old code? Remember when phones behaved that way? Before FOTA (Firmware Over The Air). Before app stores. Before Android.
Whole setups are hard to break down and reassemble, and program flows are not transferable. We know: in 2012 we released some basic behavior tree decision-making code to ROS, and it took until ROS 2 for a behavior engine to first appear as a standard ROS component. Have you ever tried to reconfigure move_base between robots, set up TFs, and retune thresholds for negative and positive obstacles when the sensor type or position changed? To update the robot’s simulation model? To make sure its dependencies are met across the various ROS versions provided by the vendor?
Sounds like we are back to square one, doesn’t it?
Breaking the cycle
Nimbus started with those examples in mind, as a way to break the cycle by providing tools to develop, package, deploy, and manage cloud-connected robots. Nimbus uses containerized applications as software components. These components can be organized, connected, and reassembled by code, through a console interface, or from a web GUI, so that anyone, even without ROS-specific know-how, can understand and see the building blocks that compose the robot’s execution. Deconstructing the mission into containerized blocks also unties the problematic coupling of OS and ROS versions: the isolation lets various ROS distributions run on the same robot, including ROS 1 and ROS 2 components together.
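Nimbus’s own packaging format is not shown here, but the idea of container isolation decoupling ROS distributions from the host OS can be sketched with a generic Docker Compose file. This is a minimal illustration only: the service names are placeholders, the images are the official OSRF `ros` images on Docker Hub, and the two containers run independently (bridging ROS 1 and ROS 2 topics would additionally need something like `ros1_bridge`):

```yaml
# Illustrative sketch: two ROS distributions isolated on one host.
# Neither container depends on the host's ROS installation (or lack of one).
services:
  ros1-node:
    image: ros:noetic-ros-core          # ROS 1 (Noetic) userland in a container
    command: >
      bash -c "source /opt/ros/noetic/setup.bash && roscore"
  ros2-node:
    image: ros:humble-ros-core          # ROS 2 (Humble) on the same machine
    command: >
      bash -c "source /opt/ros/humble/setup.bash && ros2 topic list"
```

Because each container carries its own ROS distribution and dependencies, swapping or upgrading one component does not force a reinstall of the robot’s base OS or of the other components.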
Components can now be replaced easily, making it simpler to test alternative algorithms, and robot access can be shared between operators and developers for remote access at any time. This requires no manual installation on the robot itself: all installations are managed by an agent running as a service on the robot. Multiple users can see live data or access and change the robot configuration. Using the Nimbus backbone is like having a whole DevOps team inside your team.
The robot configuration, and the tools for viewing and editing it, are also important aspects of Nimbus. By building the robot model in Nimbus (configuring the robot’s sensors and drivers), Nimbus can keep track of driver versions, monitor the devices, generate TFs (coordinate transformation services for components), and auto-generate a simulation for your code, enabling you to change sensor location or type and test alternative scenarios in simulation, all without any coding. Nimbus also provides introspection and visualization tools, with analytics coming soon, to ease the development of robots and bring on the robotic revolution.
Log in, start developing right now, and stay tuned for what is to come.
Seeed is glad to partner with Cogniteam, aiming to deliver the easiest-ever robot development process, from prototyping to production, including configuration, testing, deployment, and operations management. Try Nimbus with Seeed’s Jetson platform carrier boards and Jetson SUB kit, and attach sensors such as RPLiDAR and cameras to build your robotic application from scratch. You can also seamlessly connect your existing ROS projects to Nimbus. Based on the open-source Robot Operating System (ROS), Nimbus is truly a ‘plug and play’ solution.
Developers can find everything in one place, including computing hardware and sensors, and simply drag and drop what they need.
Using Nimbus software, build robots capable of executing complex missions in dynamic environments, where it is impossible to foresee every decision in advance.
Developers can monitor robot performance in real time. By communicating with each other directly or through the cloud, robots can build on their own experience, make decisions collectively, share tasks, and re-plan to tackle changing scenarios in real time.
Nimbus supports the following hardware, available from Seeed:
- NVIDIA® Jetson Nano™ Developer Kit
- NVIDIA® Jetson Xavier™ NX Developer Kit
- Jetson SUB Mini PC
- A203 (Version 2) Carrier Board
- A205 Carrier Board
- A206 Carrier Board
- RPLiDAR A2M8 360 Degree Laser Scanner Kit – 12M Range
- RPLiDAR A1M8-R6 360 Degree Laser Scanner Kit – 12M Range
- RPLiDAR A3M1 360 Degree Laser Scanner Kit – 25M Range