BRAV-7123 Enables Mass Production of Service Humanoid Robots, Driving Industry-Wide Intelligent Transformation Through Strategic Collaboration

With the rapid advancement of artificial intelligence and robotics, the markets for service robots, industrial automation, and special-purpose robots are seeing unprecedented growth. However, delivering powerful AI computing performance, complex multi-sensor fusion, and highly reliable real-time control within tight size and power budgets has become a critical challenge for robotics companies moving from prototypes to mass production.


A leading humanoid robot manufacturer in China has brought its next-generation service humanoid robots, built on JHCTECH’s BRAV-7123 edge computing platform, into large-scale mass production. This milestone, the result of close collaboration between the two companies, marks a significant breakthrough in both robotic intelligence and large-scale commercialization.


Industry Background: Power and System Integration as the Foundation for Practical Robotics

Service robots are rapidly moving from laboratory prototypes into real-world settings such as homes, healthcare facilities, and food-service environments. Beyond being able to “see clearly” and “understand,” robots must also move with stability and respond with low latency. This evolution places exceptionally high demands on the overall performance of the embedded computing platform, including:

✔ High computing power to support real-time visual recognition, voice interaction, and path planning;

✔ Rich interfaces to adapt to multi-sensor fusion and multi-actuator collaboration;

✔ Stable and reliable industrial-grade design to ensure continuous 24/7 operation.


Hardware Empowerment: BRAV-7123 — A High-performance "Brain" Purpose-Built for Robotics

The BRAV-7123 is built around an NVIDIA Jetson Orin NX module, delivering from 70 up to 157 TOPS of AI compute depending on the module configuration, and pairs this with a wealth of expansion interfaces and a robust industrial design, fully meeting the robot's requirements for an onboard computing platform. In this deployment, the platform serves as the central hub for the robot's three core subsystems:

⭐Perception System: Equipped with stereo cameras, wide-angle cameras (and depth cameras in selected scenarios), microphone arrays, and tactile/six-axis force sensors.

⭐Actuation System: Integrated with mobile chassis platforms, collaborative robotic arms, and dexterous hands/end-effectors.

⭐Auxiliary System: Includes IMUs for internal state sensing, an emergency-stop system, and collision-avoidance sensors.

At the interface and connectivity level, the platform provides high-speed data transmission through two Gigabit Ethernet ports. An optional PCIe x4 expansion supports either four USB 3.0 ports or four Gigabit Ethernet ports, enabling up to six USB 3.0 or GbE connections for seamless integration of vision cameras and high-performance sensors.

In addition, two CAN FD and two RS485 industrial buses deliver low-latency communication for robotic arm servo drives, power battery systems, force sensors, and other industrial subsystems. The platform also features an M.2 expansion slot supporting 5G and WiFi 6 wireless connectivity, enabling cloud collaboration, remote monitoring, and OTA upgrades.
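
As a rough illustration of how a controller might talk to a servo drive over one of these CAN FD buses, the minimal Python sketch below uses Linux SocketCAN via the python-can library. The channel name, node ID, and payload layout are hypothetical placeholders, not the customer's actual protocol.

```python
# Minimal sketch: send one CAN FD frame to a servo drive over SocketCAN.
# Assumes the interface was brought up beforehand, e.g.:
#   ip link set can0 up type can bitrate 500000 dbitrate 2000000 fd on
import can

bus = can.Bus(interface="socketcan", channel="can0", fd=True)

# Hypothetical joint-position command; ID and payload are illustrative only.
msg = can.Message(
    arbitration_id=0x141,
    data=bytes([0xA4, 0x00, 0x2C, 0x01, 0x50, 0x46, 0x00, 0x00]),
    is_fd=True,
    is_extended_id=False,
)
bus.send(msg)
bus.shutdown()
```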


Software Collaboration: End-to-End Enablement from System Optimization to Application Deployment

Beyond hardware performance, JHCTECH’s engineering team worked closely with the customer on low-level BSP development and system optimization, significantly accelerating the transition from R&D to mass production.

Full AI Stack Porting

JHCTECH assisted the customer in porting a comprehensive AI software stack, including CUDA, cuDNN, OpenCV, and TensorRT, as well as mainstream large-model–related packages such as Transformers, FlashAttention, PyTorch, and vLLM.
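
A quick sanity check along the lines of the sketch below is a common way to verify such a ported stack on the Orin NX; it only confirms that PyTorch sees the GPU, that TensorRT imports, and that a tensor operation runs on the CUDA/cuDNN path.

```python
# Sanity-check sketch for the ported AI stack on a Jetson module.
import torch
import tensorrt as trt

assert torch.cuda.is_available(), "CUDA not visible to PyTorch"
print("PyTorch:", torch.__version__)
print("TensorRT:", trt.__version__)
print("Device:", torch.cuda.get_device_name(0))

# Exercise the GPU compute path end to end with a simple matmul.
x = torch.randn(1024, 1024, device="cuda")
y = x @ x.t()
torch.cuda.synchronize()
print("GPU matmul OK:", tuple(y.shape))
```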

Robot Operating System

The robotic system is built on ROS 2 Humble, which provides modular communication mechanisms (nodes, topics, and services), enabling efficient and decoupled collaboration among perception, planning, and control modules. ROS 2 also manages robot models, coordinate transformations (TF), and point cloud data. For example, safe path planning and autonomous navigation—from the kitchen to the laundry room and then to the living room—are implemented using the Nav2 framework, significantly shortening development cycles and time to market.
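
As a sketch of what issuing such a navigation goal can look like, the snippet below uses Nav2's Simple Commander API from ROS 2 Humble. The map-frame waypoint coordinates are hypothetical stand-ins for named locations such as the laundry room.

```python
# Minimal Nav2 navigation-goal sketch (nav2_simple_commander, ROS 2 Humble).
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # wait for localization and planners

goal = PoseStamped()
goal.header.frame_id = "map"
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 3.5   # hypothetical "laundry room" waypoint
goal.pose.position.y = -1.2
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()  # distance remaining, recoveries, etc.

print("Navigation result:", navigator.getResult())
rclpy.shutdown()
```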

Visual Processing Pipeline

On the perception side, NVIDIA DeepStream is deployed to build a multi-stream AI video processing pipeline, enabling efficient processing of visual data from multiple cameras. Object detection models such as YOLO are also utilized to identify and localize objects including tables, cups, shirts, pants, and washing machine control buttons.
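
A single-camera version of such a pipeline might be assembled as sketched below with GStreamer's parse_launch. The camera device, stream-mux resolution, and the nvinfer configuration file (which would point at a YOLO engine) are illustrative assumptions, not the production setup.

```python
# Sketch of a DeepStream inference pipeline: camera -> mux -> nvinfer -> OSD.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=yolo_detector_config.txt ! "  # hypothetical config
    "nvvideoconvert ! nvdsosd ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()  # run until interrupted
finally:
    pipeline.set_state(Gst.State.NULL)
```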

Semantic Understanding and Manipulation

A 3D pose estimation algorithm is used to estimate the position and orientation of the object being grasped (such as a pot handle or collar) in 3D space, guiding the robotic arm on how to grasp it. Scene graphs generated by CLIP are used to understand semantic relationships such as "folded clothes are on a chair" and "dirty clothes are in a basket."
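
As a simplified illustration of the CLIP side, the sketch below ranks candidate relation phrases against a camera frame using the Hugging Face transformers CLIP checkpoint; the production scene-graph pipeline is naturally more involved than this zero-shot ranking, and the file name and phrases are placeholders.

```python
# Zero-shot relation scoring with CLIP: pick the most plausible description.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame.jpg")  # hypothetical camera frame
candidates = [
    "folded clothes are on a chair",
    "dirty clothes are in a basket",
    "a cup is on the table",
]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_phrases)
probs = logits.softmax(dim=-1)[0]

for phrase, p in zip(candidates, probs):
    print(f"{p.item():.2f}  {phrase}")
```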


Mass Production Assurance: End-to-End Support from R&D to Deployment

During mass production, JHCTECH ensured a stable supply chain and rigorous quality control, guaranteeing that every hardware batch met customer requirements. This end-to-end support significantly reduced manufacturing complexity, allowing the customer to focus on robot functionality development and market expansion. The successful mass production of the BRAV-7123 represents not only the outcome of close technical collaboration between both parties, but also a key milestone in advancing service robots toward practical and intelligent real-world deployment.


Learn more about the BRAV-7123 series

2026-01-23