
"Still, researchers at Stanford University's Institute for Human-Centered AI say that achieving "true political neutrality" in systems is "theoretically and practically impossible," despite the real risks stemming from AI models' influence on people's opinions and actions. "Neutrality is inherently subjective when what seems neutral to one person might seem biased to someone else," they write, proposing that policymakers need to recognize this as they consider potential safeguards like third-party evaluations of AI systems for political bias."
"The executive order "basically says that if you want access to taxpayer money - if you want the government to buy your model - you can't inject an ideology in it, and we don't care which ideology, you just can't have political ideology in it," Sriram Krishnan, a senior White House policy advisor on AI, said at a POLITICO event earlier this month."
The Office of Management and Budget is hosting listening sessions with industry to craft "anti-woke" guidance for artificial intelligence. An executive order signed in July requires federal agencies to purchase only large language models deemed "truth-seeking" and demonstrating "ideological neutrality." The order identifies diversity, equity and inclusion as its primary targets, though how agencies would screen AI models for DEI remains unclear. OMB is requesting industry feedback on AI transparency, auditable risk management, and how models handle politically sensitive topics or instructional biases. The order conditions access to government procurement on the absence of political ideology in models, and OMB must issue its guidance by late November.
Read at Nextgov.com