
"An immigration barrister was found by a judge to be using AI to do his work for a tribunal hearing after citing cases that were entirely fictitious or wholly irrelevant. Chowdhury Rahman was discovered using ChatGPT-like software to prepare his legal research, a tribunal heard. Rahman was found not only to have used AI to prepare his work, but failed thereafter to undertake any proper checks on the accuracy."
"The upper tribunal judge Mark Blundell said Rahman had even tried to hide the fact he had used AI and wasted the tribunal's time. Blundell said he was considering reporting Rahman to the Bar Standards Board. The Guardian has contacted Rahman's firm for comment. The matter came to light in the case of two Honduran sisters who claimed asylum on the basis that they were being targeted by a criminal gang in their home country."
"He said that 12 authorities were cited in the paperwork by Rahman, but when he came to read the grounds, he noticed that some of those authorities did not exist and that others did not support the propositions of law for which they were cited in the grounds. In his judgment, he listed 10 of these cases and set out what was said by Mr Rahman about those actual or fictitious cases."
Chowdhury Rahman used ChatGPT-like software to prepare legal research for an upper tribunal hearing and failed to verify the material it produced. Judge Mark Blundell found that Rahman had cited authorities that were fictitious or did not support the legal propositions for which they were cited. Blundell said Rahman appeared unaware of the cited authorities, had apparently not intended to rely on them orally, and had attempted to conceal his use of AI, wasting the tribunal's time. The underlying case concerned two Honduran sisters seeking asylum on the grounds that a criminal gang was targeting them in their home country. Blundell dismissed the appeal and said he was considering reporting Rahman to the Bar Standards Board.
Read at www.theguardian.com