When my children were very young I noticed that they would spend long periods of time studying the details of intricate pictures. This was their "dinosaur period," so the pictures were mostly illustrations of Jurassic flora and fauna. I had the notion of a game set on a large rug-sized illustration they could scamper around on. The rug would be pressure-sensitive, so the children's locations would be known. The rug would speak and listen. The children would respond to its directions either alone or in small groups. They could be the hunters, the hunted, the treasure seekers, the jungle veterinarians, etc.
The game remained speculative, but the ideas of location-aware game boards, audio interaction, and physical game pieces have continued to interest me. I explored using old-school pen digitizers, old-school touch screens that used infrared interference for locating, magnets and mechanical switches, RFID, image pattern recognition with and without QR codes or colored dot markers, etc.
Gaming in the wild also came under scrutiny. How would LARPing or scavenger hunts change with augmented reality? What about audio-only games? What would a naval or starship strategy game require from a driver stuck in commuter traffic? How much of the map, or even simple orientation, could the player keep in their head? Clearly, these would not be real-time games, or there would be a high likelihood of distracting the driver into actual vehicular combat.
When Amazon's Alexa was introduced I read the SDK documentation with excitement. Amazon had done the hard work of creating a conversational model for audio interaction. I think a small jet-fighter combat game, fought over the driver's car roof, is well within the skills of even a moderately skilled programmer. Now to make the time.
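To make the idea concrete, here is a minimal sketch of such a turn-based, audio-only dogfight. Everything in it is invented for illustration (the `Fighter` state, the `handle_utterance` dispatcher, the clock-position phrasing); a real skill would register intent handlers through the Alexa Skills Kit SDK rather than matching strings, but the shape of the interaction is the same: the player speaks a command, the game updates state, and the response is worded so the driver can keep the whole picture in their head.

```python
# Illustrative sketch only: a toy model of an audio-only dogfight over the
# driver's roof. All names here are hypothetical, not from any real SDK.
from dataclasses import dataclass

@dataclass
class Fighter:
    bearing: float  # degrees clockwise from the car's nose
    range_m: float  # distance from the player, in meters

def clock_position(bearing: float) -> str:
    """Phrase a bearing as 'N o'clock' -- the kind of orientation a
    driver can hold in their head without ever seeing a screen."""
    hour = round(bearing / 30) % 12
    return f"{12 if hour == 0 else hour} o'clock"

def handle_utterance(text: str, bandit: Fighter) -> str:
    """Crude stand-in for intent recognition: map a spoken command to a
    spoken response, one turn at a time (deliberately not real time)."""
    text = text.lower()
    if "where" in text:
        return (f"Bandit at {clock_position(bandit.bearing)}, "
                f"{int(bandit.range_m)} meters.")
    if "break left" in text:
        bandit.bearing = (bandit.bearing + 90) % 360
        return f"Breaking left. Bandit now at {clock_position(bandit.bearing)}."
    if "fire" in text:
        return "Fox two!" if bandit.range_m < 1000 else \
               "Out of range. Close the distance."
    return "Say again?"

bandit = Fighter(bearing=60.0, range_m=1500.0)
print(handle_utterance("where is he", bandit))  # Bandit at 2 o'clock, 1500 meters.
print(handle_utterance("break left", bandit))   # bandit swings to 5 o'clock
print(handle_utterance("fire fire", bandit))    # Out of range. Close the distance.
```

The deliberate turn-by-turn pacing, and the absence of any visual state, are exactly the constraints the commuter scenario imposes.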