Small Eyes, Big Problem: How a Smart Car Got It All Wrong
In a strange yet amusing twist from China’s tech scene, a man named Mr. Li found himself repeatedly accused of drowsy driving—by a car. Not by police, not by family, but by his sister’s Xiaomi SU7 Max, an electric vehicle loaded with smart safety features. One of those features, a fatigue detection system, seemed a little too alert.
According to reports, the system flagged Mr. Li more than 20 times, insisting he looked tired. The reason? His naturally small eyes. Every few minutes, the system would flash a warning, convinced he was falling asleep behind the wheel. But Mr. Li wasn't drowsy—he was simply born with smaller eyes than the system expected.
The Xiaomi SU7 Max is one of China’s newest electric vehicles, boasting advanced AI-driven safety functions. Among them, fatigue monitoring relies on facial recognition and eye tracking to determine whether the driver appears sleepy. In theory, it’s a brilliant step toward safer roads. In reality, it’s now raising concerns about how AI interprets diverse human features.
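Xiaomi has not published how the SU7 Max's monitor works, but a common approach in driver-monitoring systems is the Eye Aspect Ratio (EAR): the camera tracks a few landmarks around each eye, and the ratio of the eye's height to its width falls toward zero as the eye closes. The sketch below is a hypothetical illustration of that technique, not Xiaomi's actual code; the threshold and frame-count values are assumptions.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p):
    # p: six (x, y) landmarks around one eye, in order p1..p6.
    # p[0] and p[3] are the horizontal corners; (p[1], p[5]) and
    # (p[2], p[4]) are the two vertical pairs. EAR shrinks as the
    # eye closes.
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

EAR_THRESHOLD = 0.2   # fixed cutoff -- a plausible source of false alarms
CONSEC_FRAMES = 15    # frames below the cutoff before warning the driver

def looks_drowsy(ear_stream):
    """True if EAR stays below the cutoff for CONSEC_FRAMES frames in a row."""
    below = 0
    for ear in ear_stream:
        below = below + 1 if ear < EAR_THRESHOLD else 0
        if below >= CONSEC_FRAMES:
            return True
    return False
```

The catch is visible in the fixed `EAR_THRESHOLD`: a driver whose eyes at rest sit near the cutoff will trip the alarm constantly, which is exactly the failure mode Mr. Li described.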
This incident, while light-hearted, brings up a serious question: Are these "smart" systems smart enough to handle real-world diversity? Mr. Li’s experience is far from unique. Critics argue that machine learning models, especially those trained on limited datasets, often fail when exposed to the wide range of human traits.
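One commonly proposed mitigation for exactly this kind of failure is calibrating a per-driver baseline instead of using one fixed cutoff. The sketch below assumes the EAR-style metric described above and an invented `drop_ratio` parameter; it is a hypothetical fix, not a description of any shipping system.

```python
from statistics import mean

def calibrate_threshold(baseline_ears, drop_ratio=0.6):
    """Personalized cutoff: flag only a large *relative* drop
    from this driver's own resting eye-openness, rather than
    comparing everyone to the same absolute number."""
    return drop_ratio * mean(baseline_ears)

# A driver with naturally small eyes might rest around EAR 0.18 --
# below a fixed 0.2 cutoff, but comfortably above their own baseline.
small_eyes_baseline = [0.18, 0.17, 0.19, 0.18]
personal_cutoff = calibrate_threshold(small_eyes_baseline)  # ~0.108
```

With a baseline captured during the first alert seconds of a drive, the same relative drop means the same thing for every face, which is the kind of adaptation critics say these models currently lack.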
In the end, Mr. Li turned the system off. It was the only way to enjoy a drive without being falsely accused of nodding off. But the moment went viral online, sparking both laughter and debate about how far smart technology has come—and how far it still needs to go to truly understand us.
Whether you're a tech enthusiast or just a tired driver (or not!), one thing is clear: sometimes, even machines need to open their eyes a little wider.