UK 'exposed to serious harm' by failure to tackle AI risks, MPs warn
Briefly

"In a new report, MPs on the Treasury committee criticise ministers and City regulators, including the Financial Conduct Authority (FCA), for taking a wait-and-see approach to AI use across the financial sector. That is despite looming concerns over how the burgeoning technology could disadvantage already vulnerable consumers, or even trigger a financial crisis, if AI-led firms end up making similar financial decisions in response to economic shocks."
"More than 75% of City firms now use AI, with insurers and international banks among the biggest adopters. It is being used to automate administrative tasks or even help with core operations, including processing insurance claims and assessing customers' credit-worthiness. But the UK has failed to develop any specific laws or regulations to govern their use of AI, with the FCA and Bank of England claiming general rules are sufficient to ensure positive outcomes for consumers."
"It is the responsibility of the Bank of England, the FCA and the government to ensure the safety mechanisms within the system keeps pace, said Meg Hillier, chair of the Treasury committee. Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying."
The government and the Bank of England have not implemented targeted regulations for AI in the financial sector, leaving firms to apply general rules on their own. More than 75% of City firms use AI for administrative tasks and core functions such as processing insurance claims and assessing credit-worthiness. The regulatory gap creates uncertainty over transparency, accountability and liability among data providers, tech developers and financial firms. AI-driven correlated decision-making risks disadvantaging vulnerable consumers and could amplify economic shocks, threatening financial stability. Current safety mechanisms may not be adequate to prevent or manage a major AI-related incident.
Read at www.theguardian.com