Personalized In-Car User Interfaces Using Machine Learning and Variable Fonts

Published by Kushtrim Hamzaj
July 10, 2025

As vehicles become more digitized, the demand for intelligent, adaptive in-car interfaces is growing. This use case proposes a personalized in-car interface that updates its visual elements, especially variable fonts, based on the driver’s identity. The project’s main goal is to enhance the personalization of modern cars’ ML-driven user interfaces by integrating variable font technology. The use case was also presented at the 9th International Conference on Design and Digital Communication.

A Brief History

Major automotive manufacturers such as Mercedes-Benz, MINI, BMW, and Audi have already started incorporating machine learning into their in-car user interface systems. For example, Mercedes-Benz has incorporated touch and voice control into its MBUX system to deliver a personalized user experience (Tarnowski, Haidenthaler, Pohl, & Pross, 2022). Similarly, BMW has introduced voice assistant personalities to improve automotive user interfaces (Braun, Mainz, Chadowitz, Pfleging, & Alt, 2019). MINI already offers a feature called “Experience Modes”, which allows drivers to choose between different driving display preferences. Audi has incorporated machine learning into its “Multi Media Interface” for features like predictive navigation and a configurable virtual cockpit (Geyer et al., 2020).


User Interface Design

The user interface for this use case is built around two displays found in most modern cars: a driver instrument cluster and a head-up display. The prototype was developed with accessible web-based technologies (HTML, CSS, and JavaScript), as professional automotive development platforms such as Android Automotive OS and QNX were unavailable within the time frame of this study. For facial recognition, the prototype used the FaceAPI machine learning model from ml5.js, which had also been used earlier in the research for the face-recognition experiment.

Driver Instrument Cluster:
Displays key real-time vehicle information such as speed, RPM, fuel level, and warning indicators.

Head-up Display:
Projects key real-time vehicle information directly onto the windshield, within the driver’s line of sight.

Typeface specimens created for the variable font; the design supports a wide range of weights and contrast levels.

Overview of the prototype’s development environment.
The core of the prototype’s functionality combines FaceAPI, a face-recognition model provided by ml5.js and built on TensorFlow.js, with a variable font.
The ml5.js FaceAPI is a JavaScript-based machine learning model that enables real-time face detection and analysis directly in the browser using webcam input. It can detect faces, identify facial landmarks (such as eyes, nose, and mouth), recognize basic facial expressions (like happy or sad), and generate unique descriptor vectors for face identification and comparison.
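The descriptor vectors mentioned above can be compared by distance to decide whether a detected face matches a saved driver profile. The sketch below illustrates the idea with plain JavaScript; the profile names and the 0.6 distance threshold are illustrative assumptions, not values from the prototype.

```javascript
// Euclidean distance between two descriptor vectors of equal length,
// such as those produced by a face-recognition model.
function euclideanDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Returns the closest saved profile, or null if no profile is close
// enough -- in which case the system would prompt to create a new one.
function identifyDriver(descriptor, profiles, threshold = 0.6) {
  let best = null;
  let bestDist = Infinity;
  for (const profile of profiles) {
    const dist = euclideanDistance(descriptor, profile.descriptor);
    if (dist < bestDist) {
      bestDist = dist;
      best = profile;
    }
  }
  return bestDist <= threshold ? best : null;
}
```

Real face descriptors are much longer vectors (typically 128 values), but the matching logic is the same.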

Main Features of the Prototype (Personalization through VF and ML)

A key feature of the prototype is its ability to personalize the in-car experience by recognizing who is driving. When a driver gets into the car, they can either stick with the default settings or personalize their experience using features like face recognition and adjustable fonts. By tapping the “Begin Personalization” button, the car’s front-facing camera activates and uses machine learning to identify the driver. If it is a new face, the system prompts the user to create a profile for future use. Once recognized, drivers can pick from four preset themes or customize their own by adjusting settings like font size, weight, colors, and background. One standout feature lets drivers set different visual styles for different speeds: bold, high-contrast fonts at high speeds for better visibility, and more refined styles when driving slower. All preferences are saved to the driver’s profile and automatically applied the next time the driver is recognized.
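The save-and-reapply flow described above can be sketched as a small storage layer. This is a minimal illustration, not the prototype’s actual code: the field names and default values are assumptions, and the storage backend is injected (the prototype would likely use the browser’s `localStorage`; a plain `Map` behaves the same way here).

```javascript
// Illustrative default theme applied to unrecognized drivers.
const DEFAULT_THEME = {
  fontSize: 16,
  fontWeight: 400,
  textColor: "#ffffff",
  background: "#101418",
};

// Persist a driver's preference overrides under a per-driver key.
function saveProfile(storage, driverId, prefs) {
  storage.set(`driver:${driverId}`, JSON.stringify(prefs));
}

// Load saved preferences, merged over the defaults; unknown drivers
// simply get the default theme.
function loadProfile(storage, driverId) {
  const raw = storage.get(`driver:${driverId}`);
  return raw ? { ...DEFAULT_THEME, ...JSON.parse(raw) } : { ...DEFAULT_THEME };
}
```

Merging saved overrides over the defaults means a profile only needs to store what the driver actually changed.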

Infinite Interface Variations Through Variable Fonts

By offering granular control over font parameters and color schemes, the system enables millions of possible visual configurations—essentially allowing for an infinite spectrum of font combinations. For example, the variable font can dynamically respond to driving speed. At lower speeds, where cognitive load is reduced, the interface presents a more detailed layout with lighter font weights and additional elements. As the vehicle accelerates, the font becomes bolder and higher in contrast, while non-essential elements are hidden to minimize distraction and help the driver focus on the road. In addition to speed-based changes, the overall interface can also adjust based on the driver’s previously saved personalization settings. This includes not only font properties but also background colors, accent hues, and layout preferences applied throughout the entire interior display.
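The speed-responsive behavior described above amounts to mapping vehicle speed onto a variable font axis. A minimal sketch, assuming a weight (`wght`) axis and illustrative speed and weight ranges rather than the prototype’s exact values:

```javascript
// Map vehicle speed to a variable font weight by linear interpolation.
// Speeds outside the range are clamped so the weight stays on the axis.
function speedToFontVariation(speedKmh, minSpeed = 0, maxSpeed = 180,
                              minWeight = 300, maxWeight = 800) {
  const clamped = Math.min(Math.max(speedKmh, minSpeed), maxSpeed);
  const t = (clamped - minSpeed) / (maxSpeed - minSpeed);
  const weight = Math.round(minWeight + t * (maxWeight - minWeight));
  // In the browser, this string would be assigned to an element's
  // style.fontVariationSettings to update the rendered font live.
  return `"wght" ${weight}`;
}
```

Because variable font axes are continuous, the same interpolation could drive other axes (such as width or optical size), or blend in the driver’s saved preferences as the interpolation endpoints.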

Conclusion and Future Work

With the final implementation of the use case, it can be stated that the proposed synergy of variable fonts and machine learning can significantly improve the personalization of digital interfaces in a practical context. While initial feedback on the use case was positive, further research is required to evaluate the effectiveness of variable fonts in adaptive design environments. This includes techniques such as usability testing, A/B testing, task performance analysis, think-aloud protocols, eye-tracking studies, semi-structured interviews, and diary studies, applied to each case independently. These methods could further reveal how users and designers perceive each use case and how it performs in real contexts of use.

References:

Tarnowski, T., Haidenthaler, R., Pohl, M., & Pross, A. (2022). Mercedes-Benz MBUX Hyperscreen Merges Technologies into Digital Dashboard Application. Information Display, 38(3), 12–17. doi: 10.1002/msid.1300

Geyer, J., Kassahun, Y., Mahmudi, M., Ricou, X., Durgesh, R., Chung, A. S., . . . Schuberth, P. (2020). A2D2: Audi Autonomous Driving Dataset. arXiv. doi: 10.48550/arXiv.2004.06320
