Despite significant investment and technological advances, no vehicle currently operating on public roads can be classified as fully autonomous. The complexity of real-world driving conditions continues to present challenges that remain unsolved.
Body agency is the sense of control over one's own physical form, something an accident or incident can take away, and restoring it is the explicit goal of some wearable devices and technologies.
The new Immersive Navigation mode introduces a detailed 3D map that includes buildings, overpasses, crosswalks, traffic lanes, traffic lights, and stop signs. Google bills this new mode as the most significant update to the app's driving experience in over a decade. According to the American IT giant, the changes should help drivers stay focused and informed on the road, with Maps providing fresh, real-world information and more natural directions.
On February 28, ships navigating the Strait of Hormuz started appearing on tracking screens in places they couldn't possibly be. They appeared to be sitting on airport runways, parked on Iranian land, and clustered at nuclear power plants. More than 1,100 commercial vessels had their navigation systems scrambled in a single day following US-Israeli airstrikes on Iran, bringing a waterway that handles a fifth of the world's oil exports to a halt.
Signals from Global Navigation Satellite Systems are quite vulnerable. They are exceptionally weak, meaning that any radio noise near their frequency, accidental or malicious, can interfere with reception. I am confident that there are people in every government who understand the problem. The challenge is getting leadership to both understand and act to reduce the risk.
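A rough link-budget calculation makes "exceptionally weak" concrete. The sketch below uses approximate public figures for GPS L1 (orbital altitude, carrier frequency, and an assumed ~27 dBW EIRP), not numbers from this article, and shows that the signal arrives well below the receiver's thermal noise floor:

```python
import math

# Rough GNSS link budget, illustrating why GPS signals are so weak.
# All figures are approximate public values (assumptions, not from the text).

C = 299_792_458.0    # speed of light, m/s
d = 20_200e3         # GPS orbital altitude, ~20,200 km
f = 1575.42e6        # L1 carrier frequency, Hz
eirp_dbw = 27.0      # assumed effective radiated power (~500 W EIRP)

# Free-space path loss: 20 * log10(4 * pi * d * f / c)
fspl_db = 20 * math.log10(4 * math.pi * d * f / C)

rx_dbm = eirp_dbw - fspl_db + 30     # received power, converted dBW -> dBm

# Thermal noise floor over the ~2 MHz C/A-code bandwidth:
noise_dbm = -174 + 10 * math.log10(2e6)

print(f"path loss:   {fspl_db:.1f} dB")    # roughly 182-183 dB
print(f"received:    {rx_dbm:.1f} dBm")
print(f"noise floor: {noise_dbm:.1f} dBm")
# The carrier arrives well below the thermal noise floor; only the
# spreading-code correlation gain makes it recoverable, which is why
# even a low-power transmitter nearby can jam or spoof reception.
```

The received power landing below the noise floor is the crux: a jammer does not need to overpower a satellite, only to out-shout a signal that is already buried in noise.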
Earlier we did episode one of this with Grady Booch, where we discussed the principled view of what's changing and what remains unchanged, what is hype and what is genuinely coming with AI. We also spoke about the difference between design and architecture, what teams are focusing on, and what they might be missing.
Looking Glass has been doggedly committed to making holographic displays the next big thing since 2019, and with its new Musubi digital photo frame, it might finally be offering its tech at a price that's hard to resist. Musubi is scheduled to start shipping in June, and unlike the company's previous, more developer-focused kits, the new display costs just $149.
Laboratory safety goggles have finally joined the ranks of smart devices. That's the promise behind LabOS, an AI operating system for scientific laboratories built by the Stanford-Princeton AI Coscientist Team, a group led by Stanford University bioengineer Le Cong and Princeton University computer scientist Mengdi Wang, with founding partners that include NVIDIA. Powered by NVIDIA's vision-language models to process visual data, the system is designed to provide AI with real-time knowledge of lab work so it can determine what causes experiments to fail or succeed and rapidly train new scientists to expert levels by guiding them through experimental protocols.
Covert recording is, at its core, about power. So I was worried from the very beginning when Meta announced it was going to revive the Google Glass idea. That worry may well be shaped by my own research subject, but it is just as much shaped by every report and story on digital abuse and hate speech I have read over the last twenty to thirty years.
"It's not an overstatement to declare another VR winter," said J.P. Gownder, vice president and principal analyst at Forrester. "I think we might even go as far as to say there's only a handful of successful scenarios where people are using VR." This assessment reflects the industry's struggle to find practical applications beyond niche markets.
Unveiled at CES 2026, the design responds to automated riding, especially Level 4 driving, where the vehicle can manage all driving tasks within certain conditions without human input. The retractable steering wheel is co-developed with Tensor's Robocar autonomous driving system. When the vehicle switches into Level 4 autonomous mode, the steering wheel retracts, clearing the driver's area. This creates more space in the cabin and allows the front seat area to function more like a living or working space rather than a traditional cockpit.
Project Genie, which is currently only available for Google's AI Ultra subscribers, uses AI to build virtual worlds. That sounds interesting, if not necessarily revolutionary. Videogame developers already model and build virtual worlds all the time. Project Genie's simple concept, though, belies the tech's potential impact. The new system, and the Genie 3 model behind it, have the potential to forever change how videogames are built and played.
If this sounds crazy, remember that last month, WatchGuard's director of security strategy Corey Nachreiner warned SecurityWatch that Google Glass represented an "information goldmine" for both attackers and advertisers. He described a sci-fi scenario where Glass could recognize objects in view. "In the future, we're going to have algorithms that will pinpoint things in video automatically," said Nachreiner. This is, more or less, exactly what Google's gaze-tracking patent covers.
The Motoko's dual first-person-view cameras are positioned at eye level to see essentially what you see, enabling real-time object and text recognition: translating street signs, tracking gym reps, summarizing documents on the fly. There are also dual far- and near-field mics that work together to capture voice commands and pick up dialogue within view.
To capture the biological impact of this extreme environment, I used a comprehensive suite of sensors and biomarker analyses. I wore a wireless electroencephalograph (EEG) system to monitor brain activity, sleep stages and neural signatures of stress and adaptation; an Oura Ring to continuously track sleep patterns, heart-rate variability and circadian-rhythm shifts; and a glucose monitor to follow metabolic responses in real time.
This past summer, Google DeepMind debuted Genie 3. It's what's known as a world model, an AI system capable of generating images and reacting as the user moves through the environment the software is simulating. At the time, DeepMind positioned Genie 3 as a tool for training AI agents. Now, it's making the model available to people outside of Google to try with Project Genie.