AI boffins teach office supplies to predict your next move

"The system combines ceiling-mounted cameras with computer vision and large language models (LLMs) to monitor what humans are doing and deciding when to lend a hand. When a person begins an action - say, reaching for papers or chopping vegetables - the AI generates a short text description of the scene and uses it to infer what might happen next. Movement commands are then sent to small, wheeled robotic bases under the objects, which the researchers call "proactive assistants.""
"That idea may sound outlandish, but there's thoughtful engineering behind it. Project lead Alexandra Ion, from CMU's Human-Computer Interaction Institute, said the goal is to study what happens when AI and motion merge into familiar, everyday objects. "We classify this work as unobtrusive because the user does not ask the objects to perform any tasks," said Ion. "Instead, the objects sense what the user needs and perform the tasks themselves.""
Researchers at Carnegie Mellon University have developed a computer vision system that lets everyday objects predict human actions and move themselves into helpful positions. Ceiling-mounted cameras feed visual data to computer vision and large language models, which generate short text descriptions of the ongoing scene and infer likely next actions. Movement commands are then sent to small wheeled robotic bases under the objects, termed "proactive assistants," allowing items like staplers or kitchen knives to reposition themselves autonomously. Project lead Alexandra Ion frames the work as unobtrusive physical AI: the objects sense what the user needs without being asked. The team aims to study what happens when AI and motion merge into familiar objects, and acknowledges the effect can be unsettling while working to make such systems genuinely useful in homes and offices.
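To make that pipeline concrete, here is a minimal Python sketch of the loop as the article describes it: a camera frame becomes a short scene description, a language-model step predicts the likely next action, and the prediction is mapped to movement commands for the wheeled bases. Every class and function name below is a hypothetical stand-in chosen for illustration; none of it is the researchers' actual code.

```python
# Minimal sketch of the sense-infer-act loop the article describes.
# All components here are hypothetical stand-ins, not the CMU system.

from dataclasses import dataclass


@dataclass
class MoveCommand:
    object_id: str  # which wheeled robotic base to move
    target: str     # where it should go


class DummyCamera:
    """Stand-in for a ceiling-mounted camera feed."""

    def capture(self):
        return None  # a real system would return an image frame


def describe_scene(frame) -> str:
    """Stand-in for the vision step: turn a camera frame into a
    short text description of what the person is doing."""
    return "person reaches toward a stack of papers on the desk"


def predict_next_action(description: str) -> str:
    """Stand-in for the LLM step: infer the likely next action
    from the scene description."""
    if "papers" in description:
        return "staple the papers"
    return "no assistance needed"


def plan_moves(action: str) -> list[MoveCommand]:
    """Map the predicted action to movement commands for the
    robotic bases under nearby objects."""
    if action == "staple the papers":
        return [MoveCommand("stapler", "within reach of the papers")]
    return []


def assist_once(camera: DummyCamera) -> None:
    """One pass of the pipeline: frame -> scene text -> predicted
    action -> movement commands."""
    description = describe_scene(camera.capture())
    action = predict_next_action(description)
    for cmd in plan_moves(action):
        print(f"move {cmd.object_id} -> {cmd.target}")


if __name__ == "__main__":
    assist_once(DummyCamera())
```

The notable design choice the article hints at is using plain text as the interface between perception and planning: the vision step produces a sentence, and the language model reasons over that sentence rather than raw pixels.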
Read at The Register