
"The use of large language models (LLMs) as an alternative to search engines and recommendation algorithms is increasing, but early research suggests there is still a high degree of inconsistency and bias in the results these models produce. This has real-world consequences, as LLMs play a greater role in our decision-making choices. Making sense of algorithmic recommendations is tough."
"Scientists, governments and civil society are scrambling to make sense of what these models are spitting out. A group of researchers at the Complexity Science Hub in Vienna has been looking at one area in particular where these models are being used: identifying scholarly experts. Specifically, these researchers were interested in which scientists are being recommended by these models - and which were not. Lisette Espín-Noboa, a computer scientist working on the project, had been looking into this before major LLMs had hit the market: "In 2021, I was organising a workshop, and I wanted to come up with a list of keynote speakers." First, she went to Google Scholar, an open-access database of scientists and their publications. "[Google Scholar] rank them by citations - but for several reasons, citations are biased." This meant trawling through pages and pages of male scientists. Some fields of science are simply more popular than others, with researchers having more influence purely due to the size of their discipline. Another issue is that older scientists - and older pieces of research - will naturally have more citations simply for being around longer, rather than the novelty of their findings."
Large language models are increasingly used instead of search engines and recommendation algorithms, but their outputs show inconsistency and bias that affect real-world decision-making. As algorithmic recommendations grow more complex, audits of diverse LLM applications become more important for detecting bias and inaccuracies. Researchers at the Complexity Science Hub examined LLM-based identification of scholarly experts, focusing on who gets recommended and who is omitted. Existing citation-based systems like Google Scholar favor certain groups because citations correlate with field popularity and career length, producing male-dominated lists and elevating older work over newer contributions.
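To make the citation bias concrete, here is a minimal sketch, not from the article, in which all names and figures are hypothetical: ranking by raw citation counts favours a long-serving scientist in a large, citation-heavy field, while normalising by the field's average citations and by career length can reverse the order in favour of an early-career scientist cited well above the norm for a smaller field.

```python
# Illustrative sketch only: hypothetical scientists and numbers, not real data.
scientists = [
    # (name, total_citations, field_average_citations, years_active)
    ("A", 9000, 12000, 30),  # long career in a large, citation-heavy field
    ("B", 1500, 1000, 8),    # early career in a smaller field
]

# Ranking by raw citation counts puts A first.
by_raw = sorted(scientists, key=lambda s: s[1], reverse=True)

def normalised_score(s):
    """Citations relative to the field's average, per year active."""
    _, citations, field_avg, years = s
    return (citations / field_avg) / years

# Normalising for field size and career length puts B first instead.
by_normalised = sorted(scientists, key=normalised_score, reverse=True)

print([name for name, *_ in by_raw])         # ['A', 'B']
print([name for name, *_ in by_normalised])  # ['B', 'A']
```

This is only one of many possible corrections; the point is that who comes out on top depends heavily on which biases the ranking does or does not adjust for.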
Read at ComputerWeekly.com