Summary: Audit AI search tools now, before they skew research

Search tools assisted by large language models (LLMs) are changing how researchers find scholarly information.

Dedicated academic LLM-assisted search systems are less likely to hallucinate because they query a fixed set of scientific databases, but the extent of their limitations is still unclear. Researchers can also use general artificial intelligence (AI)-assisted search systems, such as Bing, with queries that target only academic databases such as CORE, PubMed and Crossref.

These AI-assisted search systems must be tested before researchers inadvertently introduce biased results on a large scale. Because such systems, even open-source ones, are ‘black boxes’ whose mechanisms for matching terms, ranking results and answering queries aren’t transparent, methodical analysis is needed to learn whether they miss important results or systematically favour specific types of papers, for example.
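One way such an audit could begin is by measuring how much of a hand-curated set of relevant papers each tool actually retrieves, and how much two tools agree with each other. The sketch below is a minimal illustration of that idea; the tool names, DOIs and rankings are all invented for the example.

```python
# Hypothetical audit sketch: compare the ranked papers returned by two
# AI-assisted search tools against a hand-curated "gold" set of relevant
# papers (identified here by DOI). All identifiers are invented.

def recall_at_k(results, gold, k):
    """Fraction of gold-set papers that appear in a tool's top-k results."""
    return len(set(results[:k]) & gold) / len(gold)

def overlap_at_k(results_a, results_b, k):
    """Jaccard overlap between two tools' top-k result sets."""
    a, b = set(results_a[:k]), set(results_b[:k])
    return len(a & b) / len(a | b)

# Invented example data: DOIs a domain expert judged relevant to one query.
gold = {"10.1/aaa", "10.1/bbb", "10.1/ccc", "10.1/ddd"}

tool_x = ["10.1/aaa", "10.1/zzz", "10.1/bbb", "10.1/yyy"]  # ranked output
tool_y = ["10.1/ccc", "10.1/aaa", "10.1/xxx", "10.1/www"]

print(f"tool X recall@4: {recall_at_k(tool_x, gold, 4):.2f}")
print(f"tool Y recall@4: {recall_at_k(tool_y, gold, 4):.2f}")
print(f"X/Y overlap@4:   {overlap_at_k(tool_x, tool_y, 4):.2f}")
```

Low recall would flag missed results; consistently low overlap between tools on the same query would flag that at least one of them is systematically favouring a different slice of the literature.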


Generative AI could be a boon for literature search, but only if independent groups scrutinize its biases and limitations.

Read the complete article at: www.nature.com
