One official reportedly described Palantir as 'ethically bankrupt' in justifying his refusal to use the software, and noted that he knows of coworkers who deliberately slow their work pace when forced to use the system.
"This bill has traded on a story of health equality and health access. But the real purpose of this bill continues to be targeting trans and gender-diverse patients and their providers throughout the state of Tennessee."
The same industry also sells that data, including bulk cell phone location data, to police departments and federal government agencies in ways that can reveal intimate details about Americans without a warrant.
"We do purchase commercially available information that's consistent with the Constitution and the laws under the Electronic Communications Privacy Act, and it has led to some valuable intelligence for us," Patel said at a hearing before the Senate Intelligence Committee on Wednesday.
This morning we incorrectly showed transaction information from some accounts to other customers in Internet Banking and the mobile app. We're sorry this happened. The issue was quickly identified and resolved. We can assure you that nobody had access to your accounts. We're currently reviewing what happened to ensure this cannot occur again.
A briefing by the health justice charity Medact said the highly interoperable nature of Palantir's software could enable data-driven state abuses of power, including US-style ICE raids. The report, released on Thursday and backed by doctors, lawyers, patients and human rights groups from the No Palantir in the NHS campaign and sent to hospital trusts and integrated care boards nationwide, was shared with the Guardian and BMJ.
At this stage of development, it is not possible to definitively estimate the cost to government of developing and running the digital ID system; yet-to-be-taken policy decisions will materially affect the costs involved.
Bias risks: AI can amplify inequalities, like mislabeling non-native English writing as AI-generated.
Privacy concerns: Schools face rising cyberattacks, and data misuse risks are high.
Accountability: Human oversight is crucial to prevent over-reliance on AI.
Samsung uses automated content recognition (ACR) technology, which can capture hundreds of images of what's on your TV screen each minute, without first obtaining Texans' express, informed consent. As mentioned earlier, the concern is that Samsung would use this information for targeted advertising. Although Samsung has disclosures in place and TV owners can opt out of ACR, Paxton finds that the disclosures are inadequate and vague, and that they run afoul of state law.
Asked by investors about his biggest worries, CEO Rick Smith said: "A misstep around privacy and data handling." Without elaborating on specific examples, he said: "We are seeing that those are concerns right now out in the public. I think that would be one where we could make a mistake that would have outsized negative consequences."
As the Milano Cortina 2026 Winter Olympics enter their final weekend, we would like to hear about the moment that will stay with you. Wherever you are, what was your favourite moment and why? You can tell us your highlight from the Winter Olympics 2026 using this form. Your responses, which can be anonymous, are secure, as the form is encrypted.
The change stems from the 2025 amendment to the Federal Tax Code, which has sparked controversy in the sector following the reform of Article 30-B. This provision stipulates that taxpayers providing digital services must grant tax authorities permanent, real-time online access only to the information necessary to verify compliance with tax obligations, as recorded in their systems or records. When asked about the reform, the Digital Transformation and Telecommunications Agency referred EL PAÍS to the SAT.
AI's next target? Helping you kick your phone addiction. AI devices are a top priority for Big Tech companies, which view them as the future of how humans and AI interact, writes BI's Amanda Hoover. You've likely heard of this hardware before, which acts as a sort of AI sidekick for your life. From the Rabbit R1 and Humane to Friend, the names are different, but the stories are the same: big expectations, difficult execution.
Trust has fast become one of the central questions in every serious conversation about AI. Not capabilities. Not efficiency. Trust. If customers don't trust how companies deploy AI, they'll walk away. If employees don't trust it, they'll disengage. If enterprises don't trust their AI providers, they won't adopt. A recent global KPMG study found that while two-thirds of people now use AI regularly, fewer than half say they're willing to trust it.
The European Parliament has taken a rare and telling step: it has disabled built-in artificial intelligence features on work devices used by lawmakers and staff, citing unresolved concerns about data security, privacy, and the opaque nature of cloud-based AI processing. The decision, communicated to Members of the European Parliament (MEPs) in an internal memo this week, reflects a deepening unease at the heart of European institutions about how AI systems handle sensitive data.
TCL is known for its budget TVs, but in recent years, it has delivered some jaw-dropping models and consistently rolled out excellent hardware that often rivals far more expensive sets. My issue, however, is that most TCL TVs ship with default settings that do not give you the best picture, performance, or privacy protections at home. Luckily, the fix is simple.