The AI of Radar Gesture-Sensing Technology
- Colin Bay, Chief Research Officer at Concrete
- Jun 3
- 2 min read

For a couple of years now, you've noticed that artificial intelligence is everywhere and that companies are being told, “Quick! Claim we’re doing something with AI so it doesn’t look like we’re behind!” I’m reminded of a recent project with Google that, on the contrary, did something real and useful with AI. It began with a hypothesis about human behavior and not, as at so many companies today, with a sense of desperation about how to make it look like they “get” AI.
Concrete worked with a smart, forward-looking team at Google’s ATAP (Advanced Technology and Projects) group to test an intriguing new technology. The ATAP team took the miniature radar gesture-sensing capabilities of the Project Soli sensor and thought deeply about how to infer intention from people’s movements in space.
This wasn’t millimeter-level space, like detecting a hand gesture (say, tapping your thumb and forefinger together), but room-level space. The idea was that a person’s level of interest or engagement with a device varied as their distance and orientation changed. Think of walking past a tablet or TV parallel to the screen, versus approaching it and pausing 15 feet away, versus moving close to it and standing there. Each pattern implies a different intention.
We set up an experiment in a rented home to test a theoretical framework with practical tasks that took advantage of these sensing capabilities: for example, displaying different levels of detail in an email notification on a family-room TV, based on the person’s inferred level of engagement. (This Wired article illustrates some uses, and a video from Google shows others.)
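To make the idea concrete, here is a minimal sketch of the kind of mapping the study explored. Everything in it is an illustrative assumption on my part, not the Soli API: the class names, the track fields, and the thresholds are all made up, and the real system inferred engagement from far richer radar signals than distance and speed.

```python
from dataclasses import dataclass
from enum import Enum

class Engagement(Enum):
    PASSING = "passing"    # moving parallel to the screen
    NOTICING = "noticing"  # paused at a distance, facing the device
    ENGAGED = "engaged"    # close to the device and standing still

@dataclass
class Track:
    """A hypothetical room-level radar track (not a real Soli data type)."""
    distance_m: float          # radial distance from the device
    approach_speed_mps: float  # positive = moving toward the device
    facing: bool               # roughly oriented toward the screen

def infer_engagement(t: Track) -> Engagement:
    """Map a coarse track to an inferred engagement level (illustrative thresholds)."""
    if t.distance_m < 1.5 and abs(t.approach_speed_mps) < 0.2 and t.facing:
        return Engagement.ENGAGED
    if t.facing and (t.approach_speed_mps > 0.3 or t.distance_m < 5.0):
        return Engagement.NOTICING
    return Engagement.PASSING

# Vary how much of an email notification the TV shows at each level.
NOTIFICATION_DETAIL = {
    Engagement.PASSING: "*",                                # an unobtrusive glyph
    Engagement.NOTICING: "New email from Ana",              # sender only
    Engagement.ENGAGED: "Ana: 'Lunch tomorrow?' (tap to read)",
}

# Example: someone pauses about 15 feet (~4.6 m) from the TV, facing it.
print(NOTIFICATION_DETAIL[infer_engagement(Track(4.6, 0.0, True))])
```

The thresholds aren't the point; the point is that a few coarse, room-level signals can be enough for a device to decide how much to show.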

The testing sessions with consumer participants showed consistent results: people were enthusiastic about the technology’s potential for making their lives easier, while also wanting to smooth out the rough spots in the interaction. It was a prototype, after all. But even the desire to fix the blemishes was good news, because we’ve often seen such trials end in a rejection of a technology rather than a desire to refine it.
I collaborated with Google on an academic paper about the technology and the study, which curious characters can find right here. Besides the promise of a cool product, here’s what made the work worthwhile: in a sense, the technology was being considerate of a person’s implied wishes. It was polite, in fact, in a way that went much deeper than “please” and “thank you.”
So what the Soli team has experimented with is exactly what we at Concrete have pushed for years for AI to do, limitations and all: enable technology to adapt to us rather than vice versa. That’s a sophisticated achievement worth celebrating.