Google's AI model is getting really good at spoofing phone photos
Briefly

"I'm starting to understand where Google's visual AI model gets its name, because after playing around with it for a couple of days, that's how I'd sum it up: bananas. The images it generates are so realistic it's bananas. I feel like I'm going bananas after staring at them for too long. And if I had to pinpoint one reason why Nano Banana Pro's images look so much more realistic than the AI slop that came before them,"
"Sure, the tells are there if you look for them. Take the image at the top of this article of the (not real!) couple on the city sidewalk. The streetlight in the background doesn't look quite right to me, and some of the building facades - especially farther into the background - look a little strange and blocky. But if I was just scrolling past this photo on social media? No way I'd clock it as AI."
Nano Banana Pro produces images that look more realistic by replicating phone-camera qualities and subtle imperfections. The images often feature bright, flat exposure, generous depth of field, and slightly crunchy details that mimic smartphone sharpening and make subjects "pop." Minor anomalies remain in the backgrounds (strange or blocky building facades, odd lighting), but those small flaws can actually make the images feel more authentic to casual viewers. Because the pictures avoid hyper-perfection, photos of people and scenes often pass as genuine when glimpsed in a social feed. Observers read the model's aggressive sharpening as a deliberate visual trick to heighten perceived realism.
Read at The Verge