Transparency is supposed to build trust. But as companies rush to open the black box of artificial intelligence and explain how it works to customers, many are discovering a surprising truth: You can say too much and too little at the same time. The balance is hard to get right: Too little transparency breeds suspicion; too much overwhelms, blurring the very clarity it's meant to provide.
Congress will dig into a new question this week: Do influencers need special labor protections? An April report from the Interactive Advertising Bureau estimated there were about 1.5 million full-time digital creators in the US. It's a growing job category, and Democratic Rep. Ro Khanna of California told Business Insider he wants to help make it feel more stable.
HHS released a new proposed rule this week aimed at rolling back several Biden-era health IT policies and slimming down the federal health IT certification program. The proposal, called HTI-5, would remove or revise nearly 70% of the criteria against which health tech products are currently certified. The program now comprises 60 certification criteria; the plan seeks to remove 34 of them and revise seven.
The AI transparency law mandates that advertisements clearly identify when they feature synthetic performers, meaning digitally created media designed to appear as real people. The law aims to prevent consumers from being misled by content that blurs the line between reality and artificial creation. The second law updates New York's right of publicity by requiring companies to obtain consent from heirs or executors before using a deceased individual's name, image, or likeness for commercial purposes.
As today's technology and political divisions continue to evolve, "fake news" and AI-generated deepfakes continue to blur the line between fact and fiction. Companies face growing pressure to take a stand on truth and transparency, especially when their own brand is being misrepresented. However, determining how proactive to be, and when to step back, can be a delicate balancing act. To help, Forbes Communications Council members explain how companies can responsibly engage in the fight against misinformation while protecting credibility, trust and brand integrity.
What would you call an assistant who invented answers when they didn't know something? Most people would call them "fired." Yet we don't mind when AI does the same. We expect it to always have an answer, but what we need is AI that says, "I don't know." That honesty helps you trust the results, use the tool more effectively and avoid wasting time on hallucinations or overconfident guesses.
Johnston is the founder and sole employee of The Midas Project, a nonprofit that monitors the practices of "leading AI companies to ensure transparency, privacy, and ethical standards are maintained." The Midas Project is behind The OpenAI Files, a 50-page report about OpenAI's evolution from under-the-radar nonprofit to moneymaking household name. It also organized an open letter asking OpenAI for transparency about its transition to a for-profit company, garnering more than 10,000 signatures. Now, apparently, OpenAI was striking back.
Its impact is most visible in media buying and optimisation, particularly on social channels. While the way AI makes decisions is game-changing, it is also where things start to get unclear: we are handing over a great deal of control to the algorithms, yet there is still a major lack of transparency. She believes understanding what signals AI prioritises, and what biases it carries, will become one of the industry's most urgent conversations.
There will be lots of questions about how AI is actually changing the labor market. The lesson, over and over again, has been that AI arrives, becomes a very big deal, and ends up changing every industry it touches.
"If we let Google get away with breaking their word, it sends a signal to all other labs that safety promises aren't important and commitments to the public don't need to be kept."
Meta, Google, and OpenAI allegedly exploited undisclosed private testing on Chatbot Arena to secure top rankings, raising concerns about fairness and transparency in AI model benchmarking.