Why China’s new humanoid robot standards could change the industry

By the time you finish reading this sentence, another humanoid robot may have rolled off a production line somewhere in China. That isn’t hyperbole.

On March 30, 2026, Shanghai-based Agibot announced it had produced its 10,000th humanoid robot – a milestone the company reached after scaling from 5,000 to 10,000 units in just three months. Meanwhile, rival UBTech plans to ramp up output to 5,000 units in 2026 and 10,000 in 2027.

The humanoid robot industry is no longer a futuristic fantasy. It’s a mass-production reality unfolding right now – in Chinese factories, logistics centers and, increasingly, commercial spaces like the McDonald’s in Shanghai that recently began testing robot servers.

But here’s the uncomfortable question that comes with this rapid scaling: What happens when something goes wrong?

Industrial incidents highlight safety risks

Concerns about robot safety are not theoretical. Incidents involving industrial robots, while relatively rare, illustrate what can happen when complex systems fail or behave unpredictably.

In July 2023, Peter Hinterdobler, a technician at Tesla’s Fremont plant, was injured while working on a robot that had been moved from its usual position on the Model 3 production line. According to reports, the robotic arm activated unexpectedly during maintenance, striking him and causing serious injuries. He is now pursuing legal action against Tesla and robot manufacturer Fanuc.

Other incidents have also been reported in recent years. In 2021, a Tesla engineer at the company’s Texas facility was injured during an interaction with a robot on the factory floor. Earlier, in 2015, a worker at an automotive parts plant in Michigan died following an accident involving an industrial robot.

These cases occurred in controlled industrial environments, where robots typically operate behind safety barriers and under strict procedures. Even in such settings, however, failures can occur through a combination of mechanical issues, software errors, or human factors.

The emergence of humanoid robots introduces a different set of challenges. Unlike traditional industrial machines, humanoids are designed to operate in close proximity to people, often without physical separation. As their capabilities expand – in strength, mobility, and autonomy – ensuring predictable and safe behavior becomes increasingly important.

The new rulebook

This is precisely why China’s Ministry of Industry and Information Technology (MIIT) published its first national standard system for humanoid robots and embodied intelligence in late February 2026.

Formally called the “Humanoid Robot and Embodied Intelligence Standard System (2026 Edition)”, it is organized around six pillars:

  1. foundational standards;
  2. neuromorphic computing;
  3. limbs and components;
  4. system integration;
  5. application scenarios; and
  6. safety and ethics.

The framework was developed by the MIIT’s Humanoid Robots and Embodied Intelligence Standardization Technical Committee (HEIS, designation MIIT/TC8), a body comprising over 120 researchers, executives, and policymakers from leading robotics companies, research institutes, and industry users.

Liang Liang, the committee’s secretary-general, described the vision succinctly. By unifying technical specifications, the system is intended to “reduce coordination and adaptation costs across the industrial chain, promote modularization, and avoid low-level redundant work”. But beneath that technocratic language lies a more urgent goal: making powerful machines safe before they enter our homes.

The safety framework – What’s actually in the standard

The standard tackles safety on three levels:

  1. Physical Safety (Hardware): Mandates specifications for structural integrity, emergency stop mechanisms, thermal management (preventing batteries from overheating), and force limiting – ensuring a robot arm cannot crush a human finger.
  2. Behavioral Safety (Software): Requires that robots have predictable responses to failure, a concept known as the “minimal risk condition”. If a robot loses connection to its control system or encounters an unfamiliar situation, it must default to a safe state – freezing in place or slowly lowering its arms – rather than thrashing unpredictably (see the sketch after this list).
  3. Ethical & Operational Safety: As Liang Liang told China Daily, with humanoid robots set to enter “thousands of households, safety will be the primary factor”. The framework includes guidelines on when a robot may make autonomous decisions versus when human intervention is required.
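
To make the behavioral-safety idea concrete, here is a minimal Python sketch of a “default to a safe state” fallback loop. The robot interface (`last_heartbeat`, `situation_is_known`, `execute_next_command`, `enter_safe_state`) and the specific fallback states are illustrative assumptions, not anything the standard itself prescribes – the standard sets requirements, not implementations.

```python
from enum import Enum, auto
import time


class SafeState(Enum):
    """Hypothetical minimal-risk conditions a humanoid might fall back to."""
    FREEZE_IN_PLACE = auto()
    LOWER_ARMS_SLOWLY = auto()


def control_loop(robot, heartbeat_timeout_s: float = 0.2):
    """Toy control loop: run normal commands only while everything looks healthy.

    `robot` is an assumed interface; this is a sketch of the fallback logic only.
    """
    while True:
        link_ok = (time.monotonic() - robot.last_heartbeat()) < heartbeat_timeout_s
        if not link_ok:
            # Lost connection to the control system: stop moving rather than guess.
            robot.enter_safe_state(SafeState.FREEZE_IN_PLACE)
        elif not robot.situation_is_known():
            # Unfamiliar situation: reduce stored energy in the arms, then wait for a human.
            robot.enter_safe_state(SafeState.LOWER_ARMS_SLOWLY)
        else:
            robot.execute_next_command()
```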

Yet even the architects of the standard acknowledge its limits. Peng Zhihui, co-founder of Agibot and a deputy director of the HEIS committee, noted that in industrial scenarios, “nearly 80 percent of tasks where humans excel but traditional automation struggles are strongly related to tactile sensing” – and the lack of standardized tactile sensors remains a critical bottleneck.

In other words, a robot may know it shouldn’t crush a human hand, but without reliable tactile sensing, it may not realize it is crushing one until it’s too late.

A levels system for humanoids? It already exists

One of the most successful standards in technology today is SAE International’s J3016 standard for autonomous driving – the “Levels 0 to 5” framework that gives consumers an intuitive understanding of how much control a car cedes to its computer. (SAE, by the way, is US-based – the Society of Automotive Engineers, founded in 1905.)

In May 2025, nearly a year before the national framework was launched, the Beijing Humanoid Robot Innovation Center (backed by MIIT) published what is believed to be the world’s first Humanoid Robot Intelligence Grading standard (T/CIE 298-2025). It uses what it calls a “4-Dimension, 5-Level” framework:

The 4 Dimensions:

  1. P – Perception & Cognition: Can the robot sense and understand its environment?
  2. D – Decision & Learning: Can it plan tasks and learn from experience?
  3. E – Execution & Performance: Can it move precisely, balance, and manipulate objects?
  4. C – Collaboration & Interaction: Can it work safely with humans and other robots?

The 5 Levels (L1–L5):

Level | Name | What It Means
L1 | Basic Capability | Simple, pre-programmed actions; no adaptation
L2 | Perception Capability | Can sense the environment but limited decision-making
L3 | Conditional Autonomy | Can handle specific tasks autonomously under supervision
L4 | High Autonomy | Operates independently in defined scenarios; human backup available
L5 | Full Autonomy | Full independence in any environment; no human needed

The standard includes 22 primary indicators and more than 100 technical provisions, including a “general safety baseline” and mappings to typical application scenarios. The creators explicitly acknowledged borrowing from autonomous vehicle grading logic – but adapted it for machines that walk, grasp, and share space with humans.
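
One way to picture the “4-Dimension, 5-Level” idea is as a simple data structure: a per-dimension score from 1 to 5. The sketch below is purely illustrative – the field names, and the choice to take the weakest dimension as the overall grade, are assumptions for the sake of the example, not something T/CIE 298-2025 specifies.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IntelligenceGrade:
    perception: int     # P – Perception & Cognition
    decision: int       # D – Decision & Learning
    execution: int      # E – Execution & Performance
    collaboration: int  # C – Collaboration & Interaction

    def __post_init__(self):
        for name, level in vars(self).items():
            if not 1 <= level <= 5:
                raise ValueError(f"{name} must be between L1 and L5, got {level}")

    @property
    def overall(self) -> int:
        # Assumption for illustration: a robot is only as capable as its weakest dimension.
        return min(self.perception, self.decision, self.execution, self.collaboration)


warehouse_unit = IntelligenceGrade(perception=3, decision=2, execution=4, collaboration=3)
print(f"Overall grade: L{warehouse_unit.overall}")  # -> Overall grade: L2
```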

A proposal: Adding ‘Level 0’ – and why harm potential matters

The SAE framework includes Level 0 (No Automation) – a car where the human driver does everything. For humanoids, Level 0 would represent a purely mechanical robot with no autonomous behavior at all, operated entirely by remote control or pre-programmed sequences.

This gives the public a baseline they can psychologically identify with and feel they understand. It’s the “dumb” robot – safe because it cannot act independently.

But autonomy is only half the risk equation. The other half is harm potential – how much damage the robot could cause if something goes wrong.

An L5 (fully autonomous) household companion robot that weighs 10 kilograms and moves slowly poses a very different risk from an L5 industrial humanoid that can lift 50 kilograms and sprint at 15 km/h.

Realistically, no commercial robot manufacturer would voluntarily accept a public “harm potential” rating – it would terrify buyers. But regulators need to think in these terms.

A two-axis model – Autonomy Level (L0–L5) and Harm Potential (H1–H3) – would give safety inspectors a framework for certification. An L5-H3 robot (fully autonomous, high harm potential) would require redundant emergency stops, mandatory human-supervised testing, and perhaps even geofencing to prevent operation in public spaces.
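
As a rough illustration of how a regulator might encode such a two-axis model, the Python sketch below pairs an autonomy level with a harm class and looks up certification controls. The classes and the controls attached to them are invented for this example; they do not come from the Chinese standard, the Beijing grading standard, or SAE J3016.

```python
# Sketch of the proposed two-axis model: autonomy (L0-L5) x harm potential (H1-H3).
# The specific controls attached to each harm class are illustrative assumptions.
CONTROLS_BY_HARM = {
    "H1": ["single emergency stop", "self-certification"],
    "H2": ["redundant emergency stops", "third-party lab testing"],
    "H3": ["redundant emergency stops", "mandatory human-supervised trials",
           "geofencing away from public spaces"],
}


def required_controls(autonomy_level: int, harm_class: str) -> list[str]:
    """Return the certification controls for a given (autonomy, harm) pair."""
    if not 0 <= autonomy_level <= 5:
        raise ValueError("autonomy_level must be between 0 (L0) and 5 (L5)")
    if harm_class not in CONTROLS_BY_HARM:
        raise ValueError("harm_class must be one of H1, H2, H3")

    controls = list(CONTROLS_BY_HARM[harm_class])
    if autonomy_level >= 4 and harm_class == "H3":
        # Highly autonomous, high-harm machines get the strictest regime.
        controls.append("periodic re-certification after software updates")
    return controls


print(required_controls(5, "H3"))  # the L5-H3 case described above
```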

The geopolitics of standards

Standards are never just technical documents. They are also strategic tools in global competition. A report from the Netherlands Institute of International Relations (Clingendael) notes that China has transformed “from a reactive standards-taker into a proactive standards-maker” since 2018, using initiatives like “China Standards 2035” to embed Chinese technical specifications into global supply chains.

Can standards be used to lock out foreign competitors? Not directly – that would violate WTO rules on technical barriers to trade. You cannot refuse approval to a foreign company that demonstrably meets the standard.

But you can make the process of proving compliance expensive, time-consuming, and reliant on local testing facilities. You can also design standards around proprietary technologies in which Chinese firms hold key patents.

This is why mutual recognition agreements – like those under the Belt and Road Initiative’s “standards connectivity” program – are so strategically important. The country that writes the rulebook often wins the race, even when the rules themselves are openly published.

No guarantees – but a framework

Can the new Chinese standard guarantee that a humanoid robot will never crush a human skull? No. No standard can offer an absolute guarantee. Machines fail. Software has bugs. Humans make mistakes.

What the standard can do is create a transparent framework for assessing risk, evaluating capabilities, and holding manufacturers accountable. It can mandate that robots have emergency stops, force limits, and predictable failure modes. It can require testing and documentation. It can – and does – set a general safety baseline that all manufacturers must meet.

As Wang Xingxing, founder and CEO of Unitree Robotics and a deputy director of the HEIS committee, put it: “To enable humanoid robots to genuinely work, particularly on long-sequence tasks, industry-wide standards are absolutely essential.”

With production ramping from thousands to tens of thousands – Agibot’s acceleration from 5,000 to 10,000 units took just three months – the industry has no time to waste. The robots are coming. The only question is whether we will have the rules in place before they arrive.