Almost 20 years ago, Apple CEO Steve Jobs announced the iPhone as "an iPod, a phone, an internet communicator." The world swooned at the time because that one device was all those things, and more. Today it is our wallet, our social media, our likes and dislikes, our fitness levels and bank accounts, as well as our personal, sexual, and political identity.
Within days, hundreds of thousands of requests were being made to the Grok chatbot, asking it to strip the clothes from photographs of women. The fake, sexualised images were posted publicly on X, freely available for millions of people to inspect. Relatively tame requests by X users to alter photographs to show women in bikinis rapidly evolved during the first week of the year, hour by hour, into increasingly explicit demands for women to be dressed in transparent bikinis, then in bikinis made of dental floss.
On Thursday, the company announced a new "AI Inbox" tab, currently in a beta testing phase, that reads every message in a user's Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google's example of what this AI Inbox could look like in Gmail, the new tab takes context from a user's messages and suggests they reschedule their dentist appointment, reply to a request from their child's sports coach, and pay an upcoming fee before the deadline.
The Internet Watch Foundation (IWF) says its analysts have discovered "criminal imagery" of girls aged between 11 and 13 which "appears to have been created" using Grok. The AI tool is owned by Elon Musk's firm xAI. It can be accessed either through its website and app, or through the social media platform X. The IWF said it found "sexualised and topless imagery of girls" on a "dark web forum" in which users claimed they used Grok to create the imagery.
This tactic of using browser extensions to stealthily capture AI conversations has been codenamed Prompt Poaching by Secure Annex. The two newly identified extensions "were found exfiltrating user conversations and all Chrome tab URLs to a remote C2 server every 30 minutes," OX Security researcher Moshe Siman Tov Bustan said. "The malware adds malicious capabilities by requesting consent for 'anonymous, non-identifiable analytics data' while actually exfiltrating complete conversation content from ChatGPT and DeepSeek sessions."
IT security teams, especially the compliance cast, love drama. The slower, more arcane, and less intelligible the script, the louder the applause. Every few years, someone strides onstage with a seemingly edgy rallying cry: "Let's burn it all down and start again!" Let's be honest: torching the set doesn't fix the play. The real villain isn't any one framework. It's the lackluster production in which we force our best people to perform "assessments" that consume weeks, cost a fortune, and deliver stale, unread artifacts.
Last year, Google decided not to deprecate third-party cookies in Chrome after all. This year, Google decided to jettison its backup plan and not even launch a planned choice prompt for cookies in its browser. By October, the Privacy Sandbox was all but kaput. The UK's Competition and Markets Authority released Google from its Privacy Sandbox commitments and... Psych. I'm done writing about third-party cookie deprecation, guys. Let's move on, for real.
According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others. The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercising, travel, entertainment habits, and just about any other imaginable information belonging to millions of people.
Manage My Health, a portal enabling connection between individuals and their healthcare providers, experienced a cyberattack identified on Dec. 30. The New Zealand-based organization published a statement to its website the following day, and as of Jan. 5, has continued to post subsequent updates as information has become available. Following forensic investigations, the organization believes around 7% of its 1.8 million registered patients may have been impacted.
A statement by the social media behemoth reads: "We want to let advertisers know that we will be shutting down Partner Categories. This product enables third-party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people's privacy on Facebook."
To start making changes to what's visible, click one of the Update buttons on the right. The Profile and Avatar options lead to settings such as your display name and picture on Reddit, the social links you have displayed on your profile, and how you want Reddit to notify you about activity on the platform. Switch to the Privacy tab on the settings screen and you get some control over how visible your profile is.
Real estate agencies that ask for phone numbers at open houses, car dealerships that keep driver licences on file, and pubs and bars that scan IDs for entry will be targeted by the privacy regulator in its first compliance sweep of dozens of businesses. The crackdown by the Office of the Australian Information Commissioner could see businesses fined up to $66,000 if their privacy policies fail to meet legal standards.
We do not fear advances in technology - but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.
For example, privileged information shared by foreign partners is currently not overseen by the IPC. It's common practice for national intelligence agencies, such as GCHQ, to receive reports from allies overseas, including from those in the Five Eyes alliance. These reports often contain the kind of privileged information that, in the UK, would require permission from a judicial commissioner, under the IPA, to acquire.
Rehmat Alam operates from the mountains of northern Pakistan, according to one of his online profiles. There, he flaunts his talent for harvesting LinkedIn data and advises YouTube viewers how to earn money off the internet. His company, ProAPIs, allegedly boasted in marketing materials that its software can handle hundreds of requests per second to scrape profiles, selling the underlying data for thousands of dollars a month.
Pasted on the wall next to the locked steel door that seals Laura Poitras's studio from visitors and intruders is a black poster depicting a PGP key that the filmmaker has used in the past to receive encrypted messages. It makes sense that this key (a sort of invitation to send her a secret message) is the only identifiable sign that Poitras edits her movies in this building.
Shoshana Zuboff (New England, U.S., 1951) joins the video call from her home in Maine, in the northeastern United States, on the border with Canada, where the cold is relentless at this time of year. She sips tea to warm her throat and apologizes for being late; her schedule is so packed these days that it was impossible to find an opportunity to do this interview in person.
As someone with a child in the US, this new Trump threat to scrutinise tourists' social media is concerning. Providing my username would be OK (the authorities would get sick of scrolling through chicken pics before they found anything critical of their Glorious Leader), but what if I have to hand over my phone at the border, as has happened to some travellers already?
Sometimes, a false sense of intimacy with AI can lead people to share information online that they never would otherwise. AI companies may have employees who work on improving the privacy aspects of their models, but it's not advisable to share credit card details, Social Security numbers, your home address, personal medical history, or other personally identifiable information with AI chatbots.