Auto China 2026 Observations | Is the Auto Industry Collectively Abandoning "Vase AI"?

Edited by Yara from Gasgoo

Gasgoo Munich - If 2023 was the dawn of the automotive large language model, then 2026 is the year of disenchantment. Automakers are no longer content with letting in-car AI write poems or tell jokes; instead, they are demanding answers to a more pragmatic question: What can this "brain" inside the car actually *do* for the user?

Behind this inquiry lies a paradigm shift from "Generative AI" to "Agentic AI."

At this year's show, a wave of intelligent agent solutions from Lenovo, Volcengine, Tencent, SenseAuto, and MediaTek all point to one conclusion: cars are evolving from passive "voice assistants" into "humanoid agents" capable of active perception, autonomous decision-making, and end-to-end task execution.

This transformation runs far deeper than simply swapping in a larger screen or adding a more powerful chip.

Is AI finally getting down to business?

Step into the core exhibition halls, and "intelligent agent" is the buzzword on the stands of almost every mainstream automaker and tech supplier.

But a closer look reveals a fundamental difference between this wave of in-car AI and the iterations of the past two or three years.

Previously, in-car AI large models played the role of a knowledgeable but helpless passenger. Users asked; it answered. Users commanded; it executed. Despite improved voice recognition and more natural dialogue, the underlying logic remained identical to traditional voice control—still a "command-based" or "chat-based" interaction.

Industry observers have sharply noted that these features often amount to little more than gimmicks in real-world driving scenarios.

The watershed moment of 2026 lies in giving intelligent agents "hands and feet." They must possess the full-chain capability to understand intent, decompose tasks, orchestrate resources, execute actions, and complete the loop.
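As a rough sketch, that full chain can be strung together as a minimal pipeline. Everything below is illustrative: the function names, intents, and task lists are hypothetical stand-ins, not any vendor's actual design.

```python
# Illustrative agentic full chain: intent -> task decomposition -> execution -> loop closure.
# All names and rules here are hypothetical, not a real in-car API.

def understand_intent(utterance: str) -> str:
    # A production system would use a large model; a keyword rule stands in here.
    return "trip_comfort" if "long drive" in utterance else "chat"

def decompose(intent: str) -> list[str]:
    # Break the high-level intent into ordered sub-tasks.
    plans = {
        "trip_comfort": ["plan_route", "queue_rear_seat_entertainment", "precondition_cabin"],
        "chat": ["reply"],
    }
    return plans[intent]

def execute(task: str) -> str:
    # Orchestrate resources: each sub-task maps to a vehicle or ecosystem capability.
    return f"{task}: done"

def run_agent(utterance: str) -> list[str]:
    intent = understand_intent(utterance)
    results = [execute(task) for task in decompose(intent)]
    results.append("loop_closed")  # report completion back to the user
    return results

print(run_agent("starting a long drive with kids in the back"))
```

The point of the sketch is the shape, not the rules: a command-based assistant stops after one `execute` call, while an agent owns the whole chain from fuzzy intent to confirmed completion.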

Zhong Xuedan, Vice President of Tencent Smart Mobility, offered a succinct summary: "In the second half of automotive intelligence, the competition isn't about who stacks the most AI features. It's about who can first integrate large models, full vehicle capabilities, and the service ecosystem into an intelligent agent hub that is perceptible, plannable, executable, and continuously evolving."

Yet Zhong offered a starker assessment: the industry's hype over large models from the past year needs to cool down.

In an interview with Gasgoo ahead of the show, he was blunt: "Over the past year, there’s been a lot of hype about large models, but in reality, it hasn't solved any problems. You’d get better answers just asking Yuanbao on your phone. There’s no need to force large models into the car. In the vehicle, value comes from Agents grounded in sensor data, vehicle-specific functions, and scenarios tied to the car itself."

His remarks highlight a critical industry pain point: simply stuffing a large model into a car is pointless. The key is using agents to solve real problems in specific scenarios.

Image Credit: Tencent

What constitutes the technical foundation for this shift? At this year's auto show, several players offered their answers.

On the opening day, Volcengine unveiled a new automotive AI solution based on Agentic AI architecture. Its core breakthrough lies in using a unified "AI brain" to deeply link key functional domains like vehicle control, navigation, and autonomous driving, achieving a complete closed loop of "perception, reasoning, execution, memory, and learning."

Unlike traditional "question-and-answer" systems, this setup possesses autonomous agency. During long-distance drives, it can automatically switch between singing, storytelling, or cartoon modes based on the status of rear-seat passengers—eliminating the need for the driver to issue repetitive commands.

Even more striking is its market penetration. At the launch, a Volcengine executive disclosed the latest figures: over 50 automotive brands and 145 models have integrated the Doubao large model. The total installed base exceeds 7 million vehicles, completing more than 30 million cabin interactions and service loops daily.

MediaTek, meanwhile, introduced an even more ambitious concept at the show: "AI-Defined Vehicle" (AIDV).

"AIDV is not just about using AI to realize functions," explained Zhang Yutai, MediaTek Vice President and General Manager of the Automotive Platform Business Unit. "More importantly, the models must be able to iterate rapidly, ensuring the vehicle's 'smart brain' stays online and updated in real time."

This means AI is no longer a mere add-on; it is the core soul of the entire vehicle architecture.

MediaTek's Dimensity Auto Cockpit Platform C-X1, built on a 3nm process and delivering up to 400 TOPS of all-modal AI computing power, serves as the on-device compute foundation for this type of "proactive intelligent agent cockpit."

Image Credit: MediaTek

Lenovo Auto Computing, however, has charted a more distinct path.

On the first day of the show, the company unveiled its "Auto Computing 2.0" strategy, headlined by the Auto AI Box and OneAI automotive intelligent agent platform. Built on the NVIDIA DRIVE AGX Thor-Z platform, the solution delivers 360 TOPS@FP8 of AI computing power for cabin agent applications, supports on-device deployment of up to 30B multimodal large models, and slashes interaction latency from seconds to milliseconds.

Xu Liang, Vice President of Lenovo, described the 2.0 strategy as an evolution from a "computing platform" to an "intelligent agent platform," transforming the automobile into a "personal mobile AI hub"—the next logical step after smartphones and PCs.

A clear industry consensus emerges from this barrage of announcements: 2026 is widely viewed as the watershed year when in-car AI makes the leap from generative to agentic.

Large models are ceasing to be mere "vases" in the cockpit and are beginning to shoulder actual responsibilities.

But to grasp where the tipping point for this change lies, one must examine the maturity of two conditions.

Zhong offered a precise breakdown: "Moving from dialogue to execution depends on two things. First is the capability of the technical foundation itself. The first task of an on-board large model is optimizing dialogue and improving experience. But to become executable, the model must evolve to be safe and controllable. Second is the evolution of engineering paradigms over the past six months—frameworks like Manus impose constraints and controls on agents to ensure more stable output."

He further noted that the convergence of these two conditions makes the leap from "saying" to "doing" possible. "On one hand, the model has made engineering strides. On the other, you need robust ecosystem connectivity. If we have the capability but find we can't access anything when trying to execute, that won't work either."

This breakdown reveals a critical logic: The arrival of intelligent agents in cars by 2026 is not a singular technological breakthrough, but the result of three elements aligning simultaneously: model capability, engineering paradigms, and ecosystem connectivity.

'One Brain, Multiple Forms' and the Ecosystem Race: Diverging Paths for Vehicle Agents

If one word were to summarize the debate over AI agent technology routes at this show, it would be "One Brain, Multiple Forms"—using a unified core agent to drive diverse capabilities across different scenarios and terminals.

Yet Gasgoo observes that beneath this broad direction, vendors' chosen paths are diverging significantly.

The On-Device Faction: Compute in the 'Box'

SenseAuto is a typical representative of this faction. The Sage Box it debuted at the show builds a three-layer architecture from the Sage on-device model, the Qianji system, and the New Member native agent. Its core selling points are "zero token cost," "Always On response," and "One Brain, Multiple Forms."

SenseAuto's New Member agent achieves a critical leap from "chatting" to "working." It supports fuzzy intent navigation, plans personalized routes by combining user memory with contextual information, and can simultaneously identify and process commands from multiple passengers to execute tasks with a single click.

Looking further ahead, SenseAuto has introduced the SenseAuto Go Robotaxi solution, fusing the cabin and driving domains. Partnering with T3 Mobility, it plans to launch trial operations this year, extending agent capabilities from the cockpit to full-stack autonomous driving.

The logic behind on-device deployment is clear: data privacy, network dependency, and latency are three chronic weaknesses of cloud-based large models. By moving computing power into the vehicle—whether via Lenovo's Auto AI Box or SenseAuto's Sage Box—companies are essentially trying to replace the "cloud brain" with a "local brain."

This approach places extreme demands on chip computing power, explaining why chipmakers and computing platforms have had an unprecedentedly high profile at this year's show.

Regarding the division of labor between edge and cloud, Zhong offered his view: the edge handles immediate response and basic safety, while the cloud tackles complex scenarios. "Edge computing power will grow stronger, and so will edge model capabilities. But more complex scenarios will inevitably rely on the cloud; the edge simply doesn't have the capacity."
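That division of labor can be pictured as a simple dispatch rule. The sketch below is purely illustrative; the task categories, names, and fallback policy are assumptions, not any vendor's architecture.

```python
# Hypothetical edge/cloud dispatch for a hybrid in-car AI stack.
# Task categories and the fallback policy are illustrative only.

EDGE_TASKS = {"wake_word", "window_control", "emergency_stop_phrase"}   # immediate response, basic safety
CLOUD_TASKS = {"trip_planning", "open_domain_chat", "restaurant_booking"}  # complex, ecosystem-backed

def route(task: str, network_ok: bool) -> str:
    if task in EDGE_TASKS:
        return "edge"           # must work offline at millisecond latency
    if task in CLOUD_TASKS and network_ok:
        return "cloud"          # heavier model, richer services
    return "edge_degraded"      # graceful fallback when the network drops

print(route("emergency_stop_phrase", network_ok=False))  # edge
print(route("open_domain_chat", network_ok=True))        # cloud
print(route("trip_planning", network_ok=False))          # edge_degraded
```

The third case is why a hybrid architecture reads as long-term coexistence rather than a transition: even a cloud-first task needs a defined edge behavior when connectivity fails.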

This implies that a hybrid edge-cloud architecture is not a transitional solution, but a technical framework for long-term coexistence.

The Ecosystem Platform Faction: Letting Agents Grow on Services

In stark contrast stands Tencent's "ecosystem building" strategy.

At the 2026 TIME DAY event on the eve of the auto show, Tencent officially launched the "Travel Full-Scenario Agent Open Platform," comprehensively upgrading its seven cabin agent products.

Tencent's approach is not to build an "all-knowing brain," but to graft its WeChat mini-program ecosystem, payment capabilities, and map services into the cabin scene. This endows agents with tangible "doing" power: ordering takeout, booking restaurants, making payments, and navigation—all in a one-stop closed loop.

In a sense, Tencent is addressing a specific industry pain point: even after AI large models are widely "on board," the functions in the cabin that can truly be called "usable" have not broken past the "command" and "chat" layers.

Dowson Tang, Senior Executive Vice President of Tencent, pinpointed the core issue: "As an intelligent carrier that integrates software and hardware and connects strongly to the physical world, the automobile is a natural landing ground for agent scenarios."

The implication is clear: an intelligent agent is not just a matter of "dialogue" capability, but "connection" capability. Whoever can successfully plug real-world services outside the car into the cabin will be the one who lets the agent truly "get things done."

The Full-Vehicle Domain Faction: One Agent to Rule Them All

Geely, meanwhile, showcased a more aggressive full-vehicle route at the show. Its "1+2+N" multi-agent framework uses a vehicle-level super agent named Eva to coordinate both smart driving and the cockpit, linking subsystems like the chassis and energy management for millisecond-level synergy. This extends the AI agent's management scope from the cabin to the vehicle's physical controls.

Qiu Xiaoxin, CEO of Aixin Yuanzhi (AX), further validated this trend: "As on-device AI agents develop, the interior of future cars will no longer see the cockpit and assisted driving operating independently. Instead, a unified 'Agent Subject' will emerge to coordinate different intelligent capabilities within the car, creating a more complete, integrated experience."

In other words, the current phase, where cabin agents and driving agents develop separately, may be merely transitional. The endgame is a "single brain" taking over all in-car intelligence.

The Deep Logic Behind the Diverging Paths

It is worth pondering that the outcome of this battle of routes may not be a zero-sum game.

The on-device faction solves the problem of being "smart without a network"; the ecosystem faction solves "how much can be done after being smart"; and the full-domain faction solves the "unified experience from cabin to driving." Logically, these three are complementary.

What truly decides the winner may not be whose technical solution looks flashier, but who can first crack the commercial closed loop of "scalable mass production, sustainable evolution, and low-cost replication."

As Zhang Junyi, CFO of SenseAuto, pointed out: traditional automotive parts companies often face valuation constraints when expanding into AI+auto businesses, whereas native AI companies entering the automotive sector find it easier to win capital recognition. This difference in how capital markets value the two tracks will profoundly affect the long-term investment capacity of different players.

Image Credit: SenseAuto

Stock prices, financing ability, and R&D talent costs—these "non-technical factors" are becoming the invisible arbiters of success in the agent race.

China's Home Court in the Agent Era: Reconstructing the Global Supply Chain and the Anxiety Within

If the story of AI agents at this show were limited to Chinese companies taking center stage, it would miss a more profound dimension: multinational giants are catching up with astonishing speed, and their underlying logic has fundamentally shifted.

Volkswagen Group unveiled its "Full-Domain Agent AI" technology roadmap at the show, announcing plans to bring the technology to mass-production models in 2026 and fully equip new models under its CEA architecture with agent AI capabilities.

Image Credit: Volkswagen

BMW, leveraging Alibaba's Qianwen large model, introduced three AI agents customized for the Chinese market: the "Car Usage Expert," "Travel Companion," and "Encyclopedia Master," covering vehicle usage, travel, and knowledge Q&A scenarios.

Mercedes-Benz also showcased AI-empowered intelligent cockpits and driver assistance achievements built on its MB.OS operating system.

A journalist from German broadcaster ZDF offered a pointed observation from the show floor: "Chinese electric vehicles are getting better, with stronger performance and bolder designs, reshaping the global auto market at an unprecedented speed."

Al Jazeera's assessment was even more direct: "The competitiveness of Chinese automakers has long since moved beyond price to a contest of innovation concepts. This auto show sometimes feels more like a technology exposition."

This shift in narrative power is no accident.

From a policy perspective, the 2026 Government Work Report proposed "creating a new form of intelligent economy" for the first time, explicitly requiring the "accelerated promotion of new-generation intelligent terminals and agents." Minister of Industry and Information Technology Li Lecheng said at the "Two Sessions" that the ministry would "fully advance the tackling and iteration of new-generation AI products, including autonomous vehicles and humanoid robots." With "AI+" written into the government report for three consecutive years and "intelligent agents" listed as a policy keyword for the first time, top-level design is providing unprecedented institutional support for the industry.

Market data tells a similar story. A research report indicates the global automotive AI agent market will reach approximately $1.62 billion in 2025 and is expected to hit $13.98 billion by 2032, a compound annual growth rate of about 42.5%. China's share is projected to exceed 35%, positioning it to dominate the overall automotive AI market.
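Those growth figures can be sanity-checked with the standard compound annual growth rate formula, CAGR = (end/start)^(1/years) - 1. The article does not state which forecast window the report assumed, so both plausible readings are shown:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` periods."""
    return (end / start) ** (1 / years) - 1

# $1.62 billion (2025) growing to $13.98 billion (2032)
print(f"7-year window (2025-2032): {cagr(1.62, 13.98, 7):.1%}")  # ~36.1%
print(f"6-year window (2026-2032): {cagr(1.62, 13.98, 6):.1%}")  # ~43.2%
```

The stated ~42.5% is closest to a six-year (2026-2032) window; over a full seven years the same endpoints imply roughly 36% annual growth.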

An even more compelling statistic: the penetration rate of combined (L2) driver-assistance functions among new passenger cars in China has already surpassed 60% and is expected to exceed 70% by 2026. Intelligent driver assistance has shifted from an "option" to a "standard feature."

Yet, beneath the "China's Home Court" narrative, anxiety is equally visible.

Amidst the noise, a deeper challenge is surfacing: as AI agents move from concept to mass production, and from tech demos to daily driving, the challenge is no longer the technical question of "can it do it?" but the experiential question of "is it worth using?"

J.D. Power's "2025 China Intelligent Cockpit Study" notes that proactive service capability has become the core track for differentiating cabin experience. Agents capable of predicting needs, retaining long-term memory, and multitasking will fundamentally reshape the human-vehicle relationship. Conversely, agents that fail to do so will quickly become just another forgotten "selling point."

Looking back from the vantage point of the 2026 Beijing Auto Show, the thematic leap from "New Automotive" to "Smart Future" feels less like a natural technological progression and more like a collective "preemptive sprint" driven by intense competitive pressure.

The story of AI agents in cars is just beginning. The true test of this technology's mettle will not be the fancy demonstrations on the show stands, but the real-world usage data from millions of mass-produced vehicles over the coming year.

Only when the driver behind the wheel no longer needs to issue commands, because the car can anticipate their needs, will the second half of the intelligence race truly begin.

Gasgoo not only offers timely news and profound insights about the China auto industry, but also helps suppliers and purchasers with business connections and expansion via multiple channels and methods. Buyer service: buyer-support@gasgoo.com Seller service: seller-support@gasgoo.com

All Rights Reserved. Do not reproduce, copy and use the editorial content without permission. Contact us: autonews@gasgoo.com