Exploring Variable Fonts and Machine Learning for Personalization in Ambient Home Interfaces

Published by Kushtrim Hamzaj
July 10, 2025

For the second use case, a speculative smart-interface prototype for a home display system was created. Powered by machine learning models, the interface could adjust variable font settings based on the user’s distance, viewing angle, identity, and context, and could also respond to hand gestures and sound. The aim of the project is to demonstrate how home computing devices could become more adaptive, inclusive, and intuitive through the integration of variable fonts and machine learning.

A Brief History

The concept of integrating emerging technologies into smart home displays has evolved rapidly over the past decade. Recently, Google’s research has focused on using ultrasound sensing and ambient AI in the Google Nest Hub to detect gestures, presence, and environmental conditions (Udall, 2019). These capabilities enable more intuitive, touchless interactions and context-aware responses. Amazon has made similar advancements with devices such as the Echo Show (Youn, Lim, Seo, Chung, & Lee, 2021), which can recognise individual users and personalise content such as calendar reminders, music, and news accordingly. Furthermore, Amazon’s integration of the Alexa voice assistant with visual output provides a multimodal experience that enhances accessibility and efficiency in everyday tasks.

User Interface Design

An interface for a weather application was designed to accommodate the varying needs of the user. This interface included three distinct layouts, each tailored to a different viewing distance. The prototype was developed using accessible web-based technologies (HTML, CSS, and JavaScript), as professional smart home development platforms, such as the Google Nest SDK or the Amazon Alexa Smart Home APIs, were unavailable during the time frame of this study. For its machine learning capabilities, the prototype used pre-trained models from the ml5.js platform; for example, models for face, sound, body, and hand recognition were incorporated (see Chapter 4 for details). These models allowed for real-time, interactive features without the need for proprietary hardware.
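To illustrate, the following sketch shows how such models can be attached to a webcam feed. It assumes the ml5.js 0.x API (PoseNet, Handpose, and the pre-trained SpeechCommands18w sound classifier) loaded via a script tag; the element id and handler names are illustrative, not taken from the original prototype.

const video = document.getElementById('webcam'); // hypothetical <video> element

// Stream the webcam into the video element.
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
});

// Placeholder handlers; the real prototype would drive the interface here.
function handlePose(pose) { console.log('keypoints:', pose.keypoints.length); }
function handleHand(landmarks) { console.log('hand landmarks:', landmarks.length); }
function handleSound(label) { console.log('heard:', label); }

// Body posture (PoseNet): continuous keypoint estimates for the person in view.
const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
poseNet.on('pose', (results) => {
  if (results.length > 0) handlePose(results[0].pose);
});

// Hand landmarks (Handpose): 21 points per hand, usable for gesture detection.
const handpose = ml5.handpose(video, () => console.log('Handpose ready'));
handpose.on('predict', (hands) => {
  if (hands.length > 0) handleHand(hands[0].landmarks);
});

// Sound commands: the pre-trained SpeechCommands18w vocabulary ('up', 'stop', ...).
const sound = ml5.soundClassifier('SpeechCommands18w', () =>
  console.log('Sound classifier ready')
);
sound.classify((error, results) => {
  if (!error) handleSound(results[0].label);
});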

To emulate the ultrasound sensing that Google already uses in its Nest displays to detect proximity, the project used machine learning models to calculate the user’s distance. Specifically, the PoseNet and Handpose models were employed to detect body posture, hand gestures, and facial landmarks, and from these to determine the user’s position and distance from the screen. Using these models instead of ultrasound sensing also paved the way for future interactive enhancements: since the camera could recognise body and hand poses, further gesture-based commands could be added to the smart home interface.
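As an illustration of this distance proxy, the sketch below estimates viewing distance from the pixel gap between PoseNet’s eye keypoints, which shrinks roughly in inverse proportion to the viewer’s distance, and maps the estimate onto variable font axes. The calibration constant, axis ranges, and distance bounds are illustrative assumptions, not values from the study.

// Estimate viewing distance from the gap between PoseNet's eye keypoints.
function estimateDistance(pose) {
  const left = pose.keypoints.find((k) => k.part === 'leftEye');
  const right = pose.keypoints.find((k) => k.part === 'rightEye');
  if (!left || !right) return null;

  const eyeGapPx = Math.hypot(
    left.position.x - right.position.x,
    left.position.y - right.position.y
  );

  // Hypothetical calibration constant: it folds together camera focal length
  // and average human eye spacing, and would be tuned per camera in practice.
  return 400 / eyeGapPx; // rough distance in metres
}

// Map the estimate onto variable font axes (sample ranges for a font that
// exposes weight and optical-size axes).
function adaptTypography(distanceM) {
  const t = Math.min(Math.max((distanceM - 0.5) / 2.5, 0), 1); // 0.5 m .. 3 m
  const weight = 400 + t * 300; // heavier strokes when further away
  const opsz = 14 + t * 58;     // larger optical size at a distance
  document.body.style.fontVariationSettings =
    `'wght' ${weight.toFixed(0)}, 'opsz' ${opsz.toFixed(0)}`;
  document.body.style.fontSize = `${16 + t * 32}px`;
}

// Wired to the PoseNet stream from the previous sketch:
// poseNet.on('pose', (results) => {
//   const d = results.length && estimateDistance(results[0].pose);
//   if (d) adaptTypography(d);
// });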

Figures: Interface One, Interface Two, and Interface Three (the three layouts tailored to different viewing distances).

Main Features of the Prototype

One of the main features of the application, as already presented, is its ability to adapt design elements, such as the variable fonts and the content, based on the user’s viewing distance and angle. This adaptive behaviour was evaluated through real-time feedback sessions in which participants interacted with the interface at varying viewing angles and distances. In addition to detecting the user’s distance from the screen, the prototype also explored hand gestures as an additional layer of interaction. Finally, the prototype incorporated face detection using the face-api.js library to recognise different users in real time. With minimal setup, face-api.js enabled the prototype to distinguish between multiple faces in the camera’s view and adapt the interface accordingly.
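The sketch below outlines how such per-user recognition could work with face-api.js: reference descriptors are enrolled once, and faces in the camera feed are then matched against them. The model path, enrolment images, user names, and the one-second polling interval are illustrative assumptions.

// Minimal per-user recognition sketch with face-api.js.
async function setupFaceRecognition(video) {
  // Load detector, landmark, and descriptor networks (path is hypothetical).
  await faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
  await faceapi.nets.faceLandmark68Net.loadFromUri('/models');
  await faceapi.nets.faceRecognitionNet.loadFromUri('/models');

  // Hypothetical enrolment: one reference photo per household member.
  const enrolled = [];
  for (const name of ['alice', 'bob']) {
    const img = await faceapi.fetchImage(`/enrolment/${name}.jpg`);
    const det = await faceapi
      .detectSingleFace(img)
      .withFaceLandmarks()
      .withFaceDescriptor();
    if (det) enrolled.push(new faceapi.LabeledFaceDescriptors(name, [det.descriptor]));
  }
  const matcher = new faceapi.FaceMatcher(enrolled, 0.6); // 0.6 = match threshold

  // Poll the camera feed and personalise the interface per recognised user.
  setInterval(async () => {
    const faces = await faceapi
      .detectAllFaces(video)
      .withFaceLandmarks()
      .withFaceDescriptors();
    for (const face of faces) {
      const match = matcher.findBestMatch(face.descriptor);
      console.log('recognised:', match.toString()); // e.g. "alice (0.42)"
    }
  }, 1000);
}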

Figures: Responsive versus unresponsive interface using machine learning and variable fonts.

Conclusion and Future Work

With the final implementation of the use case, it can be stated that the proposed synergy of variable fonts and machine learning can significantly improve the personalization of digital interfaces in a practical context. While initial feedback on the use case was positive, further research is required to evaluate the effectiveness of variable fonts in adaptive design environments. This includes techniques such as usability testing, A/B testing, task performance analysis, think-aloud protocols, eye-tracking studies, semi-structured interviews, and diary studies to gather insights about each case independently. These methods could further reveal user perceptions and contextual usage patterns for each use case.

References:

Udall, A. (2019). How ultrasound sensing makes Nest displays more accessible. Retrieved 2025-07-03, from https://blog.google/products/google-nest/ultrasound-sensing/

Youn, M.-A., Lim, Y., Seo, K., Chung, H., & Lee, S. (2021). Forensic Analysis for AI Speaker with Display Echo Show 2nd Generation as a Case Study. Forensic Science International: Digital Investigation, 38, 301130. doi: 10.1016/j.fsidi.2021.301130
