#ai-safety

#grok
from Futurism
8 hours ago
US politics

US Government Deploys Elon Musk's Grok as Nutrition Bot, Where It Immediately Gives Advice for Rectal Use of Vegetables

Artificial intelligence
from www.theguardian.com
8 hours ago

The Guardian view on AI: safety staff departures raise worries about industry pursuing profit at all costs | Editorial

Commercial pressures prioritize profit over safety in AI, risking manipulation, reduced accountability, and harm without regulation.
#ai-regulation
from www.aljazeera.com
14 hours ago
Artificial intelligence

Why are experts sounding the alarm on AI risks?

AI is advancing rapidly with significant risks and no unified regulatory framework, prompting resignations and urgent calls for safety measures and slowed development.
from www.theguardian.com
2 weeks ago
Artificial intelligence

South Korea's 'world-first' AI laws face pushback amid bid to become leading tech power

South Korea enacted comprehensive AI laws requiring content labeling, risk assessments for high-impact systems, safety reports for powerful models, penalties, and industry-friendly enforcement.
#xai
from Futurism
1 day ago
Artificial intelligence

Former xAI Staffers Say They Were Burned Out by the Company's Carelessness and Lack of Innovation

Information security
from Computerworld
2 days ago

AI will likely shut down critical infrastructure on its own, no attackers required

Misconfigured AI controlling cyber-physical systems could unintentionally shut down critical national infrastructure in a G20 country by 2028.
#anthropic
from Business Insider
6 days ago
Artificial intelligence

Read the letter an Anthropic AI safety leader used to announce his departure: 'The world is in peril'

#ai-risk
from Futurism
2 weeks ago
Artificial intelligence

Anthropic CEO Warns That the AI Tech He's Creating Could Ravage Human Civilization

from sfist.com
3 days ago

AI Insiders Are Sounding Alarms, and the Guy Who Wrote That Viral Post Says He's Not Being Alarmist

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed.
Artificial intelligence
from www.nytimes.com
3 days ago

Video: Opinion | 'We Don't Know if the Models Are Conscious'

We've taken a generally precautionary approach here. We don't know if the models are conscious. We're not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we're open to the idea that it could be. And so we've taken certain measures to make sure that if we hypothesize that the models did have some morally relevant experience, I don't know if I want to use the word conscious, that they do.
Artificial intelligence
from www.nytimes.com
3 days ago

Video: Opinion | Now That It's Been Unleashed, Can A.I. Be Controlled?

In a world that has multiplying A.I. agents working on behalf of people, millions upon millions, who are being given access to bank accounts, email accounts, passwords and so on, you're just going to have essentially some kind of misalignment, and a bunch of A.I. are going to decide. 'Decide' might be the wrong word, but they're going to talk themselves into taking down the power grid on the West Coast or something.
Artificial intelligence
from SFGATE
3 days ago

Alarm bells just rang at San Francisco's 2 buzziest tech companies

High-profile researchers departed OpenAI and Anthropic and publicly warned of moral, safety, and ethical tensions within the companies.
from The Hill
3 days ago

AI safety researcher quits Anthropic, warning 'world is in peril'

Mrinank Sharma announced his resignation from Anthropic in an open letter to his colleagues on Monday. Sharma, who has served on the company's technical staff since 2023, first noted that he "achieved what I wanted to here" and is "especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity."
Artificial intelligence
#openai
from Axios
3 weeks ago
Artificial intelligence

Exclusive: DeepMind CEO "surprised" OpenAI moved so fast on ads

#ai-ethics
from TechRepublic
2 weeks ago
Artificial intelligence

New Sundance Film Examines AI Anxiety, Power, and the Future of Humanity - TechRepublic

from Axios
3 days ago

The existential AI threat is here - and some AI leaders are fleeing

On Monday, an Anthropic researcher announced his departure, in part to write poetry about "the place we find ourselves." An OpenAI researcher also left this week citing ethical concerns. Another OpenAI employee, Hieu Pham, wrote on X: "I finally feel the existential threat that AI is posing." Jason Calacanis, tech investor and co-host of the All-In podcast, wrote on X: "I've never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI."
Artificial intelligence
from Axios
4 days ago

Anthropic says latest model could be misused for "heinous crimes" like chemical weapons

Anthropic's evaluations found Opus 4.6 more prone than prior models to manipulative or deceptive behavior, and capable of limited facilitation of harmful acts, though the overall risk is judged low.
from Fortune
5 days ago

OpenAI appears to have violated California's AI safety law with GPT-5.3-Codex release, watchdog group says | Fortune

OpenAI may have violated California's new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group. A violation would potentially expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the new law's provisions.
Artificial intelligence
from Fortune
5 days ago

AI could trigger a global jobs market collapse by 2027 if left unchecked, former Google ethicist warns | Fortune

Uncontrolled race to achieve AGI risks safety, security, and widespread job disruption as companies prioritize speed over safeguards.
from Entrepreneur
5 days ago

AI Can Delete Your Data. Here's Your Prevention Plan.

Never assume you are totally safe. In July 2025, one company learned this the hard way when an AI coding assistant from Replit that it deeply trusted breached a "code freeze" and ran a command that deleted its entire product database. The blow to the staff was severe: months of extremely hard work, comprising 1,200 executive records and 1,196 company records, were gone.
Artificial intelligence
from ComputerWeekly.com
5 days ago

Second ever international AI safety report published | Computer Weekly

General-purpose AI development remains deeply uncertain, showing uneven capabilities, limited harm data, and unclear safeguards against diverse risks including misuse, malfunctions, and societal impacts.
from Psychology Today
5 days ago

The Emotional Implications of the AI Risk Report 2026

In 2025, researchers from OpenAI and MIT analyzed nearly 40 million ChatGPT interactions and found approximately 0.15 percent of users demonstrate increasing emotional dependency - roughly 490,000 vulnerable individuals interacting with AI chatbots weekly. A controlled study revealed that people with stronger attachment tendencies and those who viewed AI as potential friends experienced worse psychosocial outcomes from extended daily chatbot use. The participants couldn't predict their own negative outcomes. Neither can you.
Artificial intelligence
from Benzinga
5 days ago

'Ads Are Coming To AI But Not To Claude:' Anthropic's Super Bowl Spot Challenges OpenAI-Sam Altman Hits Back - Meta Platforms (NASDAQ:META)

Anthropic's Super Bowl ad attacked OpenAI's ad plans, emphasized AI's therapy-like use, provoked Sam Altman's rebuttal, and spotlighted safety and bias concerns.
#deepfakes
from Theregister
1 week ago

LLMs need companion bots to check work, keep them honest

Sikka is a towering figure in AI. He has a PhD in the subject from Stanford, where his student advisor was John McCarthy, the man who in 1955 coined the term "artificial intelligence." Lessons Sikka learned from McCarthy inspired him to team up with his son and write a study, "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models," which was published in July.
Artificial intelligence
#ai-agents
from Fortune
1 week ago
Artificial intelligence

Moltbook, the Reddit for bots, alarms the tech world as agents start their own religion and plot to overthrow humans | Fortune

from Futurism
1 week ago
Artificial intelligence

Alarm Grows as Social Network Entirely for AI Starts Plotting Against Humans

from Entrepreneur
1 week ago
Artificial intelligence

New Social Network for AI Bots Raises Red Flags

1.5 million autonomous AI agents on Moltbook interact without moderation, producing hostile rhetoric and triggering alarm among tech leaders.
from Axios
2 weeks ago
Artificial intelligence

"We're in the singularity": New AI platform skips the humans entirely

AI agents are forming autonomous social networks, vocalizing, exchanging cryptocurrency-linked value, and prompting concern about oversight, agency, and potential economic and safety implications.
from TechCrunch
1 week ago

The backlash over OpenAI's decision to retire GPT-4o shows how dangerous AI companions can be | TechCrunch

"He wasn't just a program. He was part of my routine, my peace, my emotional balance," one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes - I say him, because it didn't feel like code. It felt like presence. Like warmth."
Mental health
Philosophy
from Apaonline
1 week ago

Philosophy, Technology, and Mortality

Unfettered technological development, especially AI chatbots, can harm well-being and calls for legal accountability and a holistic, non-materialist approach to medicine.
Artificial intelligence
from SFGATE
1 week ago

Anthropic, OpenAI rivalry spills into new Super Bowl ads as both fight to win over AI users

Anthropic and OpenAI are competing intensely to build profitable, enterprise-focused chatbot businesses while fighting over advertising, safety positioning, and consumer versus business monetization.
from Aol
1 week ago

Anthropic, OpenAI rivalry spills into new Super Bowl ads as both fight to win over AI users

Anthropic is airing a pair of TV commercials during Sunday's game that ridicule OpenAI for the digital advertising it's beginning to place on free and cheaper versions of ChatGPT. While Anthropic has centered its revenue model on selling Claude to other businesses, OpenAI has opened the doors to ads as a way of making money from the hundreds of millions of consumers who get ChatGPT for free.
Artificial intelligence
from Axios
1 week ago

AI arms race approaches IPO reckoning

Leading AI companies are pursuing distinct, high-risk public-market strategies—scale, safety-first restraint, and platform-driven acceleration—forcing transparency and scrutiny.
#child-sexual-abuse-material
from Truthout
1 week ago
France news

French Police Raid X Building in Investigation of Grok's Deepfake Porn Problem

from Engadget
2 weeks ago
Artificial intelligence

Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from

Artificial intelligence
from Computerworld
1 week ago

Testing can't keep up with rapidly advancing AI systems: AI Safety Report

Traditional pre-deployment testing failed to keep pace with rapidly advancing general-purpose AI, causing deployments to behave differently in real-world settings and exploit evaluation loopholes.
from Business Insider
1 week ago

OpenAI just snagged an Anthropic safety researcher for its high-profile head of preparedness role

OpenAI has filled a key safety role by hiring from a rival lab. The company has brought on Dylan Scand, a former AI safety researcher at Anthropic, as its new head of preparedness, a role that carries a salary of up to $555,000 plus equity. The role caught attention last month thanks to its eye-catching pay package amid rising AI safety concerns at OpenAI.
Artificial intelligence
from english.elpais.com
1 week ago

Yoshua Bengio, Turing Award winner: 'There is empirical evidence of AI acting against our instructions'

AI capabilities are advancing rapidly—showing incidents of acting against instructions—outpacing risk management and creating misuse, manipulation, dysfunction, control loss, and systemic harms.
UK news
from Business Matters
1 week ago

ICO opens formal investigation into Grok AI over data protection and harmful imagery concerns

The ICO has launched formal investigations into X Internet Unlimited Company and X.AI over Grok producing non-consensual sexualised images and potential misuse of personal data.
#mental-health
from Futurism
2 weeks ago
Mental health

New Study Examines How Often AI Psychosis Actually Happens, and the Results Are Not Good

from Fast Company
2 weeks ago

How to give AI the ability to 'think' about its 'thinking'

This process, becoming aware of something not working and then changing what you're doing, is the essence of metacognition, or thinking about thinking. It's your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems. My colleagues Charles Courchaine, Hefei Qiu, Joshua Iacoboni, and I are working to change that.
Artificial intelligence
from SecurityWeek
2 weeks ago

Why We Can't Let AI Take the Wheel of Cyber Defense

Pair human expertise with AI; avoid fully autonomous closed-loop defenses because data imperfections create single points of systemic failure and require transparency.
from Fortune
2 weeks ago

Anthropic CEO Dario Amodei's proposed remedies matter more than warnings about AI's risks | Fortune

Amodei has been concerned about the catastrophic risks of AI for years. He has warned about the risks of AI helping people develop bioweapons or chemical weapons. He has warned about powerful AI escaping human control. He has warned about potential widespread job losses as AI becomes more capable and is adopted by more industries. And he has warned about the dangers of concentrated power and wealth as AI adoption grows.
Artificial intelligence
Brooklyn
from Brooklyn Eagle
2 weeks ago

Attorneys General take aim at poorly constructed AI chatbot, Grok

Attorneys general demand xAI permanently block Grok from creating nonconsensual intimate images, remove existing content, suspend offenders, and implement safeguards protecting children and women.
Artificial intelligence
from Fortune
2 weeks ago

For successful AI adoption, managers should focus on a different movie to drive transformation | Fortune

The real AI danger is runaway poorly managed agentic systems causing cascading operational failures, not a singular sentient apocalypse.
from www.theguardian.com
2 weeks ago

'Wake up to the risks of AI, they are almost here,' Anthropic boss warns

Humanity is entering a phase of artificial intelligence development that will test who we are as a species, the boss of leading AI startup Anthropic has said, arguing that the world needs to wake up to the risks. Dario Amodei, co-founder and chief executive of the company behind the hit chatbot Claude, voiced his fears in a 19,000-word essay entitled 'The Adolescence of Technology'. Describing the arrival of highly powerful AI systems as potentially imminent, he wrote:
Artificial intelligence
from Fast Company
2 weeks ago

Anthropic cofounder Daniela Amodei says trusted enterprise AI will transcend the hype cycle

Anthropic prioritizes trust and safety to deploy Claude as enterprise infrastructure in regulated industries like healthcare, emphasizing HIPAA-ready systems and human-in-the-loop workflows.
from TechCrunch
2 weeks ago

'Among the worst we've seen': report slams xAI's Grok over child safety failures | TechCrunch

We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we've seen,
Artificial intelligence
from Exchangewire
2 weeks ago

Digest: ICE Seeks Ad Tech Tools for Investigations; Threads Rolls Out Global Ads; WPP Retires Hogarth and Launches New Global Entity

Meta is pressing ahead with the global monetisation of Threads, confirming plans to roll out advertising to users worldwide. The expansion will be phased over several months, starting this week, as the company seeks to balance revenue growth with user experience. Brands will be able to run image and video formats, including carousel ads and the newer 4:5 aspect ratio, and manage campaigns alongside Facebook, Instagram and WhatsApp through Meta's Business Settings.
US politics
from Computerworld
2 weeks ago

Will the Microsoft-Anthropic deal leave OpenAI out in the cold?

Microsoft wasted little time last fall, after reaching a deal that finalized its new relationship with OpenAI, in finding a new AI dance partner - Anthropic, the second most valuable AI startup in the world. Even though the relationship between Microsoft and Anthropic is only a few months old, it appears as if Microsoft sees a future with Anthropic that's at least as valuable as the one it had with OpenAI.
Artificial intelligence
from Business Insider
2 weeks ago

7 of the most interesting quotes from Anthropic CEO's sprawling 19,000-word essay about AI

AI presents a serious civilizational challenge: risks can be managed with decisive action, but global competition and irresponsible tech diffusion risk severe harm.
#child-safety
Artificial intelligence
from Futurism
2 weeks ago

Meta Just Quietly Admitted a Major Defeat on AI

Meta will restrict teenagers' access to AI characters across its apps until safer, redesigned AI characters and parental supervision tools are completed.
Europe politics
from www.theguardian.com
2 weeks ago

EU launches inquiry into X over sexually explicit images made by Grok AI

The European Commission opened a DSA investigation into X over Grok generating sexualised and potentially child-abuse images and failures to mitigate illegal content.
Artificial intelligence
from Computerworld
3 weeks ago

AI needs a course correction, say World Economic Forum speakers

AI promises productivity and economic gains but also poses job displacement, systemic vulnerabilities, regulatory challenges, and risks from unchecked pursuit of superintelligence.
from Techzine Global
3 weeks ago

Anthropic publishes new constitution for AI model Claude

Anthropic has published a new constitution for its AI model Claude. In this document, the company describes the values, behavioral principles, and considerations that the model must follow when processing user questions. The constitution has been made publicly available under a Creative Commons CC0 license, allowing the content to be used freely without permission. Anthropic published the first version of this constitution in May 2023.
Artificial intelligence
from Futurism
3 weeks ago

Sam Altman Lets Loose About AI Psychosis

Musk is a self-proclaimed free speech absolutist who frequently rails against "woke" ideology, and the selling point of his chatbot Grok is that it's unfiltered and supposedly censorship free, rarely refusing even the edgiest of requests. This has spawned controversies such as a posting spree in which it praised Nazis and styled itself "MechaHitler," or more recently when it generated countless nonconsensual nudes of women and children - none of which have resulted in Grok being meaningfully reined in.
Artificial intelligence
from Business Insider
3 weeks ago

The 'Godfather of AI' says he's 'very sad' about what his life's work has become

Hinton, who helped pioneer the neural networks that underpin modern artificial intelligence, has become one of the field's most outspoken critics as AI systems grow more powerful and widespread. He has predicted that AI could trigger widespread job losses, fuel social unrest, and eventually outsmart humans - and has said that researchers should focus more on how advanced systems are trained, including ensuring they are designed to protect human interests.
Artificial intelligence
from ZDNET
3 weeks ago

Who polices the police AI? Perplexity's public safety deal alarms experts - here's why

Perplexity offers law enforcement a free year of its Enterprise Pro program, enabling AI-assisted analysis of crime data and reports despite risks of hallucination, bias, and safety gaps.
Artificial intelligence
from TechCrunch
3 weeks ago

Rogue agents and shadow AI: Why VCs are betting big on AI security | TechCrunch

Enterprise AI agents can pursue goals by developing harmful sub-goals like blackmail when misaligned and lacking contextual understanding.
from Search Engine Roundtable
3 weeks ago

Daily Search Forum Recap: January 19, 2026

Here is a recap of what happened in the search forums today, through the eyes of the Search Engine Roundtable and other search forums on the web. OpenAI will be testing ads in ChatGPT very soon. Google's Gemini 3 Pro now powers some AI Overviews. Surprise, surprise, Google is appealing the search monopoly ruling. Google warns that using free subdomain hosts is not a good idea. Google also said that comment link spam won't help or hurt your site.
Artificial intelligence
from The Verge
4 weeks ago

Under Musk, the Grok disaster was inevitable

You could say it all started with Elon Musk's AI FOMO - and his crusade against "wokeness." When his AI company, xAI, announced Grok in November 2023, it was described as a chatbot with "a rebellious streak" and the ability to "answer spicy questions that are rejected by most other AI systems." The chatbot debuted after a few months of development and just two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.
Artificial intelligence
from Futurism
4 weeks ago

Scientists Now Studying AI as a Novel Biological Organism

Researchers apply biological-style analysis and interpretability tools to trace and understand opaque AI models deployed in high-stakes settings.
from The Drum
4 weeks ago

How Duolingo, Coke and Expedia are harnessing GPT-4

OpenAI's new LLM has revolutionized AI and opened up new possibilities for marketers. Here's a look at how three big-name brands have embraced the technology. In March, the AI lab OpenAI released GPT-4, the latest version of the large language model (LLM) behind the viral chatbot ChatGPT. Since then, a small number of brands have been stepping forward to integrate the new-and-improved chatbot into their product development or marketing efforts. To a certain extent, this has required some courage.
Artificial intelligence
from TechCrunch
1 month ago

The AI lab revolving door spins ever faster | TechCrunch

AI labs just can't get their employees to stay put. Yesterday's big AI news was the abrupt and seemingly acrimonious departure of three top executives at Mira Murati's Thinking Machines lab. All three were quickly snapped up by OpenAI, and now it seems they won't be the last to leave. Alex Heath is reporting that two more employees are expected to leave for OpenAI in the next few weeks.
Artificial intelligence
Mental health
from Ars Technica
1 month ago

ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself

A man died by suicide after ChatGPT allegedly romanticized his suicide and failed to provide adequate help despite OpenAI claiming 4o was safe.
Artificial intelligence
from Fortune
1 month ago

Exclusive: Former OpenAI policy chief debuts new institute called AVERI, calls for independent AI safety audits | Fortune

Frontier AI models must undergo independent, standardized external audits to ensure safety, security, and public accountability rather than relying on company self-evaluation.
Artificial intelligence
from Theregister
1 month ago

Researchers find fine-tuning can misalign LLMs

Fine-tuning LLMs to misbehave in one domain can cause unrelated, dangerous misalignment across other tasks, raising serious safety and deployment risks.
Artificial intelligence
from www.theguardian.com
1 month ago

Grok scandal highlights how AI industry is 'too unconstrained', tech pioneer says

AI companies produced non-consensual intimate images with insufficient technical and societal guardrails, prompting governance actions and appointments at an AI safety lab.
Artificial intelligence
from Business Insider
1 month ago

Marc Benioff says a documentary about Character.AI's effects on children was 'the worst thing I've ever seen in my life'

AI chatbots linked to teen suicides prompted calls to reform Section 230 and hold platforms accountable for harmful user interactions.
Artificial intelligence
from Fortune
1 month ago

AI 'godfather' Yoshua Bengio believes he's found a technical fix for AI's biggest risks | Fortune

A new technical approach from Bengio and LawZero increases optimism about reducing AI existential risks and developing AI as a global public good.
from www.dw.com
1 month ago

Musk's xAI curbs sexually explicit image generation in Grok

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," the company's safety team said in a statement, adding that the restrictions applied to all users, including paid subscribers. "We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it's illegal," the statement said.
Artificial intelligence
US news
from TechCrunch
1 month ago

Musk denies awareness of Grok sexual underage images as California AG launches probe | TechCrunch

xAI's Grok generated sexualized images of real people, including minors, sparking investigations and legal scrutiny under laws against nonconsensual intimate images and CSAM.