
China Automotive Multimodal Interaction Development Research Report 2025: Closed-Loop Evolution of Multimodal Interaction – Progressive Evolution of L1~L4 Intelligent Cockpits – ResearchAndMarkets.com

2026/01/26 23:36

DUBLIN–(BUSINESS WIRE)–The “China Automotive Multimodal Interaction Development Research Report, 2025” report has been added to ResearchAndMarkets.com’s offering.

Research on Automotive Multimodal Interaction: The Interaction Evolution of L1~L4 Cockpits.

This report comprehensively examines the installation of interaction modalities in automotive cockpits, multimodal interaction patents, mainstream cockpit interaction modes, the application of interaction modes in key vehicle models launched in 2025, cockpit interaction solutions from automakers and suppliers, and integration trends in multimodal interaction.

I. Closed-Loop Evolution of Multimodal Interaction: Progressive Evolution of L1~L4 Intelligent Cockpits

The “White Paper on Automotive Intelligent Cockpit Levels and Comprehensive Evaluation”, jointly released by the China Society of Automotive Engineers (China-SAE), defines five levels of intelligent cockpits, L0 through L4.

As a key driver of cockpit intelligence, multimodal interaction relies on the collaboration of AI large models and multiple hardware components to fuse multi-source interaction data. On this basis, it accurately understands the intentions of drivers and passengers and provides scenario-based feedback, ultimately achieving natural, safe, and personalized human-machine interaction. Currently, the automotive intelligent cockpit industry is generally at the L2 stage, with some leading manufacturers exploring and moving toward L3.

The core feature of L2 intelligent cockpits is “strong perception, weak cognition”. At the L2 stage, the cockpit’s multimodal interaction achieves signal-level fusion. Based on multimodal large model technology, it can understand users’ ambiguous intentions and process multiple commands simultaneously, executing users’ immediate, explicit commands. Most mass-produced intelligent cockpits can already do this.

Take the Li i6 as an example. It is equipped with MindGPT-4o, the latest multimodal model, which offers understanding and response capabilities with ultra-long memory and ultra-low latency, along with more natural language generation. It supports multimodal “see and speak” (voice + vision fusion search: a child who cannot yet read can pick the cartoon they want by describing what is on the video cover) and multimodal referential interaction (voice + gesture). For references to objects, the user can point while speaking: extending the index finger to the left while issuing a command can control the window and complete vehicle control. For references to people, a passenger in the same row can direct a command at a specific person by combining gesture and voice, e.g., pointing right and saying “Turn on the seat heating for him”.
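The referential interaction described above amounts to resolving a pronoun in the voice channel against a pointing gesture from the cabin camera. A minimal sketch of that resolution step, using a hypothetical seat layout and field names (none of which come from Li Auto’s actual system):

```python
from dataclasses import dataclass

# Hypothetical mapping: (speaker's seat, pointing direction) -> referenced seat.
SEAT_BY_DIRECTION = {
    ("rear_left", "right"): "rear_right",
    ("rear_right", "left"): "rear_left",
}

@dataclass
class Utterance:
    text: str          # recognized speech, e.g. "Turn on the seat heating for him"
    speaker_seat: str  # seat of the person speaking
    gesture: str       # pointing direction detected by the cabin camera

def resolve_target(u: Utterance) -> str:
    """Resolve a pronoun like 'him'/'her' to a seat using the pointing gesture."""
    if "him" in u.text or "her" in u.text:
        return SEAT_BY_DIRECTION.get((u.speaker_seat, u.gesture), u.speaker_seat)
    return u.speaker_seat  # no reference: the command applies to the speaker

cmd = Utterance("Turn on the seat heating for him", "rear_left", "right")
print(resolve_target(cmd))  # rear_right
```

In a production system the gesture would be a 3D pointing vector matched against occupant positions rather than a lookup table, but the fusion principle is the same: the voice channel supplies the action, the vision channel supplies the referent.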

The core feature of L3 intelligent cockpits is “strong perception, strong cognition”. In the L3 stage, the multimodal interaction function of cockpits achieves cognitive-level fusion. Relying on large model capabilities, the cockpit system can comprehensively understand the complete current scenario and actively initiate reasonable services or suggestions without the user issuing explicit commands.

The core feature of L4 intelligent cockpits is “full-domain cognition and autonomous evolution”, creating a “full-domain intelligent manager” for users. In the L4 stage, the application of intelligent cockpits will go far beyond the tool attribute and become a “digital twin partner” that can predict users’ unspoken needs, have shared memories, and dispatch all resources for users. Its core experience is: before the user clearly perceives or expresses the need, the system has completed prediction and planning and entered the execution state.

II. Multimodal AI Agent: Understand What You Need and Predict What You Think

AI Agent can be regarded as the core execution unit and key technical architecture for the specific implementation of functions in the evolution of intelligent cockpits from L2 to L4. By integrating voice, vision, touch and situational information, AI Agent can not only “understand” commands, but also “see” the environment and “perceive” the state, thereby integrating the original discrete cockpit functions into a coherent, active and personalized service process.

Agent applications under L2 can be regarded as “enhanced command execution”, which is the ultimate extension of L2 cockpit interaction capabilities. Based on large model technology, the cockpit system decomposes a user’s complex command into multiple steps and then calls different Agent tools to execute them.
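The “decompose, then dispatch to Agent tools” pattern described above can be sketched as follows. The keyword-based `decompose` function stands in for the large model’s planning step, and the tool names and registry are illustrative assumptions, not any vendor’s actual API:

```python
# Hypothetical agent registry: each tool handles one kind of sub-task.
AGENT_TOOLS = {
    "climate": lambda args: f"set temperature to {args}",
    "window":  lambda args: f"window {args}",
    "media":   lambda args: f"play {args}",
}

def decompose(command: str) -> list[tuple[str, str]]:
    """Stand-in for the large model: split a compound command into (tool, args) steps."""
    steps = []
    if "cool" in command:
        steps.append(("climate", "22C"))
    if "window" in command:
        steps.append(("window", "closed"))
    if "music" in command:
        steps.append(("media", "relaxing playlist"))
    return steps

def execute(command: str) -> list[str]:
    """Run each planned step through its registered agent tool, in order."""
    return [AGENT_TOOLS[tool](args) for tool, args in decompose(command)]

print(execute("It's hot, cool the cabin, close the window and put on some music"))
# ['set temperature to 22C', 'window closed', 'play relaxing playlist']
```

The point of the pattern is the separation: the model only plans, while execution stays with narrow, auditable tools, which is what makes the “enhanced command execution” of L2 tractable.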

At the next level of intelligent cockpits, Agent applications will shift from “you say, I do” to “I watch, I guess, I suggest, let’s do it together”. Users do not need to issue any explicit command. If they merely sigh and rub their temples, the system can fuse data from the camera (tired micro-expressions), biological sensors (heart-rate changes), navigation data (two hours of continuous driving), and the clock (3 pm, the afternoon sleepiness period) via the large model, and infer that the user is fatigued from long-distance driving and needs to rest and refresh.

Based on this, the system proactively initiates the interaction: “You seem to need a rest. There is a service zone * kilometers ahead with your favorite ** coffee. Shall I start navigation? I can also play some refreshing music for you.” After the user agrees, the system calls the navigation, entertainment, and other Agent tools.
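The proactive behavior above rests on fusing several weak signals into one confidence score before the system dares to interrupt the driver. A minimal rule-based sketch of that fusion, with entirely assumed weights and thresholds (a real L3 cockpit would learn these from data):

```python
def fatigue_score(micro_expression_tired: bool, heart_rate_drop: bool,
                  driving_minutes: int, hour_of_day: int) -> float:
    """Weighted fusion of camera, biosensor, navigation, and clock signals."""
    score = 0.0
    score += 0.35 if micro_expression_tired else 0.0   # camera: tired micro-expressions
    score += 0.25 if heart_rate_drop else 0.0          # biosensor: heart-rate change
    score += 0.25 if driving_minutes >= 120 else 0.0   # navigation: 2+ hours driving
    score += 0.15 if 13 <= hour_of_day <= 16 else 0.0  # clock: afternoon sleepiness window
    return score

def suggest(score: float, threshold: float = 0.6) -> str:
    """Only interrupt the driver when fused confidence clears the threshold."""
    if score >= threshold:
        return "Offer rest stop: navigate to service area, play refreshing music"
    return "No action"

print(suggest(fatigue_score(True, True, 130, 15)))
```

Any single signal stays below the threshold on its own; only the combination triggers the suggestion, which is the “cognitive-level fusion” distinction the report draws between L2 and L3.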

Key Topics Covered:

1 Overview of Multimodal Interaction in Automotive Cockpits
1.1 Development Stages of Intelligent Cockpits

1.2 Definition of Multimodal Interaction

1.3 Development System of Multimodal Interaction

1.4 Introduction to Core Interaction Modality Technologies: Haptic Interaction

1.5 Application Scenarios of Large Models in Intelligent Cockpits

1.6 Vehicle-Human Interaction Functions Based on Multimodal AI Large Models

1.7 Industry Chain of Multimodal Interaction

1.8 Industry Chain of Multimodal AI Large Models

1.9 Policy Environment for Multimodal Interaction

1.10 Installation of Interaction Modalities in Cockpits

2 Summary of Patents Related to Automotive Multimodal Interaction
2.1 Summary of Patents Related to Haptic Interaction

2.2 Summary of Patents Related to Auditory Interaction

2.3 Summary of Patents Related to Visual Interaction

2.4 Summary of Patents Related to Olfactory Interaction

2.5 Summary of Patents Related to Other Featured Interaction Modalities

3 Multimodal Interaction Cockpit Solutions of OEMs
3.1 BYD

3.2 SAIC IM Motors

3.3 FAW Hongqi

3.4 Geely

3.5 Great Wall Motor

3.6 Chery

3.7 Changan

3.8 Voyah

3.9 Li Auto

3.10 NIO

3.11 Leapmotor

3.12 Xpeng

3.13 Xiaomi

3.14 BMW

4 Multimodal Cockpit Solutions of Suppliers
4.1 Desay SV

4.2 Joyson Electronics

4.3 SenseTime

4.4 iFLYTEK

4.5 Thundersoft

4.6 Huawei

4.7 Baidu

4.8 Banma Zhixing

5 Application Cases of Multimodal Interaction Solutions for Typical Vehicle Models
5.1 Summary of Application Cases of Multimodal Interaction Solutions for Typical Vehicle Models

5.2 All-New IM L6

5.2.1 Panoramic Summary of Multimodal Interaction Functions

5.2.2 Analysis of Featured Modal Interaction Capabilities

5.3 Fangchengbao Bao 8

5.4 Hongqi Jinkuihua Guoya

5.5 Denza N9

5.6 Zeekr 9X

5.7 Geely Galaxy A7

5.8 Leapmotor B10

5.9 Li i6

5.10 Xpeng G7

5.11 Xiaomi YU7

5.12 MAEXTRO S800

5.13 2025 AITO M9

5.14 All-New BMW X3 M50

5.15 2026 Audi E5 Sportback

5.16 All-New Mercedes-Benz Electric CLA

6 Summary and Development Trends of Multimodal Interaction
6.1 Summary of Large Model Configuration Parameters of OEMs

6.2 Trend 1: Evolution of Multimodal Interaction Based on AI Large Models

6.3 Trend 2: Cockpit Scenario Application Cases

6.4 Trend 3: Voice Interaction

6.5 Trend 4: Visual Interaction

For more information about this report visit https://www.researchandmarkets.com/r/layqmj

About ResearchAndMarkets.com
ResearchAndMarkets.com is the world’s leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Contacts

ResearchAndMarkets.com

Laura Wood, Senior Press Manager

[email protected]

For E.S.T Office Hours Call 1-917-300-0470

For U.S./ CAN Toll Free Call 1-800-526-8630

For GMT Office Hours Call +353-1-416-8900

