AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the proof. A cascade of announcements from the world's largest telecom vendors, chipmakers, and operators didn't simply restate the vision for AI-RAN: they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations.
For enterprise and IT decision-makers, the signal is clear: the architectural shift underway in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised.
Nvidia and a global coalition lock in on AI-RAN and 6G
The week's most consequential announcement came from Nvidia, which secured commitments from more than a dozen global operators and technology companies, including BT Group, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, T-Mobile, Cisco, and Booz Allen, to build 6G on open, secure, and AI-native software-defined platforms.
The initiative, framed as a shared commitment to ensure future connectivity infrastructure is intelligent, resilient, and trustworthy, is backed by ongoing collaborations with governments across the US, UK, Europe, Japan, and Korea.
Jensen Huang, Nvidia's founder and CEO, set the stakes plainly: "AI is redefining computing and driving the largest infrastructure buildout in human history, and telecommunications is next." The company is a founding member of the AI-RAN Alliance, which now has over 130 participating companies, and has joined the FutureG Office-led OCUDU Initiative in the US to accelerate open, software-defined, AI-native 6G architectures.
Nvidia also launched a set of open-source tools targeting network operators: a 30-billion-parameter Nemotron Large Telco Model (LTM), developed with AdaptKey AI and fine-tuned on telecom datasets including industry standards and synthetic logs; an open-source guide co-published with Tech Mahindra for building AI agents that reason like NOC engineers; and new Nvidia Blueprints for RAN energy efficiency and network configuration.
The energy blueprint integrates VIAVI's TeraVM AI RAN Scenario Generator to simulate energy-saving policies in a closed loop before touching live networks. Real-world adoption of the network configuration blueprint is already underway: Cassava Technologies is deploying it for an autonomous network platform across Africa's multi-vendor mobile environment, while NTT DATA is using it with a tier-one operator in Japan to manage traffic surges after network outages.
Nokia and operators take AI-RAN over the air
Nokia announced significant progress in its strategic AI-RAN partnership with Nvidia, completing functional tests of its anyRAN software on Nvidia's GPU-accelerated AI-RAN platform with T-Mobile US, Indosat Ooredoo Hutchison (IOH), and SoftBank Corp. The results matter because they moved validation out of controlled lab environments and into live, over-the-air conditions.
At T-Mobile's AI-RAN Innovation Centre in Seattle, Nokia's AirScale Massive MIMO radio in the 3.7GHz band ran concurrent AI and RAN workloads, including video streaming, generative AI queries, and AI-powered video captioning, on a single Nvidia Grace Hopper 200 server alongside commercial 5G.
IOH achieved Southeast Asia's first AI-RAN-powered Layer 3 5G call at MWC, with AI and RAN workloads running concurrently on shared GPU infrastructure. As IOH President Director and CEO Vikram Sinha put it: "This isn't just about proving that the technology works. It's about ensuring that every Indonesian, wherever they are, can benefit from the digital and AI era."
SoftBank's demonstration went further, showing how spare compute capacity identified by its AITRAS orchestrator can run third-party AI workloads: a glimpse of how operators might eventually monetise RAN infrastructure beyond connectivity.
Nokia's expanded AI-RAN ecosystem now includes Dell Technologies, Quanta, Supermicro, and Red Hat OpenShift for orchestration, giving operators a widening range of commercial off-the-shelf options. Nokia shares rose 5.4% on the day of the announcement.
Ericsson takes a different road to AI-native networks
Ericsson arrived at MWC 2026 with a distinctly different approach, and it's one worth understanding. While Nokia has bet on Nvidia GPU acceleration (backed by a US$1 billion Nvidia investment), Ericsson unveiled ten new AI-ready radios built on its own purpose-built silicon, featuring neural network accelerators embedded directly into its Massive MIMO hardware. No Nvidia GPUs required.
The portfolio includes AI-managed beamforming, AI-powered outdoor positioning, instant coverage prediction using AI models, and a latency-prioritised scheduler delivering up to seven times faster response times. Ericsson's argument is built on total cost of ownership: custom silicon, it contends, delivers better TCO and power efficiency than external GPU hardware, with the added benefit of supply chain independence.
Per Narvinger, head of Ericsson's mobile networks business, has been direct that this view is unlikely to change. At MWC, Ericsson also announced a sweeping collaboration with Intel spanning compute, cloud technologies, and AI-driven RAN and packet core use cases, to accelerate ecosystem readiness for AI-native 6G. "6G will not be merely an iteration of mobile technology. It is the infrastructure that will distribute AI across devices, the edge, and the cloud," said Ericsson President and CEO Börje Ekholm.
Intel CEO Lip-Bu Tan framed the partnership as a path to open, power-efficient networks grounded in AI inference, with future Ericsson Silicon built on Intel's most advanced process nodes.
SK Telecom, SoftBank, and the operator rebuild
Beyond the vendor announcements, two operators used MWC 2026 to articulate how deeply AI-RAN fits into their broader infrastructure strategies.
SK Telecom CEO Jung Jai-hun outlined a full-stack AI-native rebuild, from its network core to customer service systems, including plans to scale its sovereign AI foundation model from 519 billion to over one trillion parameters, and to build a new AI data centre in Korea in collaboration with OpenAI.
The company is also expanding autonomous network operations, using AI to automate wireless quality management, traffic control, and network equipment operations, with AI-RAN technology central to improving speed and reducing latency.
SoftBank, meanwhile, demonstrated its Autonomous Agentic AI-RAN (AgentRAN) system at MWC in collaboration with Northeastern University's INSI, Keysight Technologies, and zTouch Networks.
The system uses SoftBank's Large Telecom Model to translate natural-language operator goals into real-time 5G and 6G network configurations: a significant step toward networks that manage themselves based on intent rather than manual instruction.
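To make the intent-driven idea concrete, here is a deliberately simplified sketch. It is not SoftBank's AgentRAN interface, and every function name and parameter below is hypothetical: real systems use a large telecom model rather than keyword matching. The sketch only illustrates the shape of the translation step, from a natural-language goal to concrete RAN parameters.

```python
# Hypothetical illustration of intent-to-configuration translation.
# A production system would use an LLM fine-tuned on telecom data;
# keyword matching here stands in for that step.

def intent_to_config(intent: str) -> dict:
    """Map a natural-language operator goal to example RAN parameters."""
    config = {"scheduler": "default", "tx_power_dbm": 43}
    text = intent.lower()
    if "latency" in text:
        # Prioritise low-latency traffic in the scheduler.
        config["scheduler"] = "latency_prioritised"
    if "energy" in text or "power" in text:
        # Trade peak capacity for energy savings.
        config["tx_power_dbm"] = 38
    return config

print(intent_to_config("Minimise latency for gaming traffic"))
# {'scheduler': 'latency_prioritised', 'tx_power_dbm': 43}
```

The point of the pattern is the closed loop: the operator states an outcome, a model proposes parameter changes, and an orchestrator applies and monitors them, rather than an engineer editing configurations by hand.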
A hardware ecosystem takes shape around AI-RAN
One of the clearest signs that AI-RAN is maturing from concept to commercial infrastructure is the breadth of hardware companies now building purpose-built products for it. At MWC 2026, Quanta Cloud Technology announced commercial off-the-shelf AI-RAN products supporting Nvidia ARC platforms and Nokia software.
Supermicro extended support across the full Nvidia AI-RAN portfolio, including ARC-Pro and RTX 6000-based configurations. MSI unveiled its unified AI-vRAN platform with dynamic GPU allocation between 5G and AI workloads.
Lanner Electronics launched its AstraEdge AI server lineup, the ECA-6710 and ECA-5555, purpose-built to co-locate AI inference, RAN functions, and high-performance packet processing at cell sites. AMD, not to be left out, positioned its EPYC 8005 edge platform and Open Telco AI initiative at MWC as an alternative compute path for operators moving from AI pilots to production.
What this means beyond the network
For enterprise decision-makers, the implications of this week's announcements extend beyond telecom infrastructure procurement. AI-RAN networks that evolve continuously through software, rather than requiring costly hardware refresh cycles, mean connectivity infrastructure increasingly resembles cloud infrastructure in its pace of change and flexibility.
The embedding of GPU compute within the RAN opens the prospect of enterprise AI workloads running at the network edge, closer to where data is generated. And as Nvidia's State of AI in Telecom report noted, 77% of respondents expect a significantly faster deployment timeline for AI-native wireless architecture than for previous network generations.
The architecture debate between Ericsson's custom-silicon path and Nokia-Nvidia's GPU-accelerated approach is also worth watching: not because one will definitively win, but because it reflects a genuine question about where AI inference should sit in network hardware, and at what cost. That question will shape operator procurement decisions and vendor relationships for years.
What MWC 2026 made unmistakable is that AI-native networks are no longer a research agenda. The field trials are live, the hardware is shipping, and the coalitions are forming. The question for enterprises and operators alike is no longer whether this transition will happen, but how fast, and who leads it.
See also: MWC 2026: SK Telecom lays out plan to rebuild its core around AI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
