
AI Tools for Research and Study

This guide provides an overview of artificial intelligence (AI) tools useful for research and academic work.

AI Tools for Literature Reviews

Below are some suggested AI tools that use retrieval-augmented generation (RAG) techniques: the generative AI ingests information from reliable sources and uses it to produce contextually relevant, timely, and more accurate responses.
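In a RAG pipeline, the system first retrieves relevant passages from a trusted index and only then prompts the LLM to answer using those passages. The minimal Python sketch below illustrates the idea with a toy three-document corpus and a bag-of-words retriever; the corpus, function names, and prompt format are illustrative only (real tools use large indexes, embedding models, and an actual LLM call in place of the returned prompt string).

```python
from collections import Counter
import math

# Toy corpus standing in for an academic index (e.g. Semantic Scholar abstracts).
CORPUS = {
    "paper-1": "retrieval augmented generation grounds answers in retrieved documents",
    "paper-2": "large language models can hallucinate facts without grounding",
    "paper-3": "citation networks reveal influential papers in a field",
}

def _bow(text):
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    """Rank corpus documents by similarity to the query; return the top-k ids."""
    q = _bow(query)
    ranked = sorted(CORPUS, key=lambda d: _cosine(q, _bow(CORPUS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query, doc_ids):
    """Assemble a grounded prompt; a real tool would send this to its LLM."""
    context = "\n".join(f"[{d}] {CORPUS[d]}" for d in doc_ids)
    return f"Answer using only these sources, citing them:\n{context}\n\nQuestion: {query}"

docs = retrieve("retrieval grounding of answers in documents")
print(build_prompt("retrieval grounding of answers in documents", docs))
```

Grounding the prompt in retrieved text is what lets these tools cite their sources rather than answer from the model's memory alone.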

The list is adapted from Tay, A. (2024, Aug.). List of academic search engines that use large language models for generative answers (updated to Aug 2024). Aaron Tay's Musings about Librarianship. https://musingsaboutlibrarianship.blogspot.com/p/list-of-academic-search-engines-that.html

For each tool, the entries below note: access requirements, sources, the LLM used, cost, whether documents can be uploaded, whether a literature review matrix is produced, whether results can be exported, and other features.
Consensus
  • Access: account set-up is encouraged
  • Sources: Semantic Scholar
  • LLM used: GPT-4-powered scientific summaries
  • Cost: free, with some limitations
  • Uploading function: no
  • Literature review matrix: no, but has the Consensus Meter
  • Exporting function: yes
  • Other features: also exists as a free Custom GPT if you have ChatGPT Plus
SciSpace
  • Access: account set-up is encouraged
  • Sources: OpenAlex, Semantic Scholar, and their own crawlers, which index websites and repositories such as arXiv and bioRxiv (around 200 million documents)
  • LLM used: ChatGPT
  • Cost: free, with some limitations
  • Uploading function: yes
  • Literature review matrix: yes
  • Exporting function: yes
  • Other features: also exists as a free Custom GPT (if you have ChatGPT Plus)
Undermind.ai
  • Access: account set-up is encouraged
  • Sources: Semantic Scholar
  • LLM used: GPT-4
  • Cost: free, with some limitations
  • Uploading function: no
  • Literature review matrix: no, but can provide a summary, categories of papers, a citation network, results with relevancy scores, and a coverage estimate
  • Exporting function: yes
  • Other features:
    • Deep search: does iterative searching and citation searching, and uses GPT-4 to "read" papers and judge them for inclusion
    • Models and estimates the percentage of relevant papers found
    • Classifies papers into categories and provides a timeline and citation graph
    • High-relevancy results, but at the cost of speed
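The deep-search behaviour described above (iterative rounds of searching plus citation-following, with an LLM judging each paper for inclusion) can be sketched as a loop. In the Python sketch below, the citation graph, abstracts, and keyword-based `judge` function are toy stand-ins for a real index and an LLM relevance judgment:

```python
# Toy citation graph and abstracts standing in for a real academic index.
CITATIONS = {  # paper id -> papers it cites
    "A": ["B", "C"],
    "B": ["D"],
    "C": [],
    "D": [],
}
ABSTRACTS = {
    "A": "deep learning for protein folding",
    "B": "protein structure prediction with neural networks",
    "C": "weather forecasting with transformers",
    "D": "neural network basics",
}

def judge(query, paper):
    """Stand-in for the LLM 'reading' a paper and judging inclusion."""
    return any(word in ABSTRACTS[paper] for word in query.split())

def iterative_search(query, seeds, max_rounds=3):
    """Start from seed hits; each round, follow citations of relevant papers."""
    relevant, frontier, seen = [], list(seeds), set()
    for _ in range(max_rounds):
        next_frontier = []
        for paper in frontier:
            if paper in seen:
                continue
            seen.add(paper)
            if judge(query, paper):
                relevant.append(paper)
                next_frontier.extend(CITATIONS.get(paper, []))  # citation search
        if not next_frontier:
            break
        frontier = next_frontier
    return relevant

print(iterative_search("protein folding", ["A"]))
```

Each round widens the search through the citations of papers already judged relevant, which is why this style of search is thorough but slower than a single-pass query.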

Scite
  • Access: account set-up is encouraged
  • Sources: Scite.ai's own index of open scholarly metadata and citation statements from selected partners
  • LLM used: "We use a variety of language models depending on situation." GPT-3.5 (generally), GPT-4 (enterprise clients), Claude Instant (fallback)
  • Cost: 7-day free trial
  • Uploading function: yes
  • Literature review matrix: yes
  • Exporting function: yes
  • Other features:
    • Summaries include text from citation statements
    • The search used can be edited
    • Many options to control what is cited, including the number of references, journals cited, type of publication cited, citing from a list of publications, etc.

Elicit
  • Access: account set-up is encouraged
  • Sources: Semantic Scholar
  • LLM used: OpenAI GPT models and other open-source LLMs
  • Cost: free, with some limitations
  • Uploading function: yes
  • Literature review matrix: yes
  • Exporting function: yes, for paid accounts
  • Other features: list-of-concepts search
The Literature
  • Access: no account needed
  • Sources: PubMed
  • LLM used: GPT-4
  • Cost: free
  • Uploading function: no
  • Literature review matrix: yes
  • Exporting function: yes (via PubMed)
  • Other features: able to control the literature search (PubMed Boolean query) used in the analysis
Answerthis.io
  • Access: account set-up is encouraged
  • Sources: no sources named, beyond: "Comprehensive Source Coverage: Includes academic journals, research databases, internet sources, preprints, books, technical reports, and conference proceedings."
  • LLM used: unknown
  • Cost: free, with some limitations
  • Uploading function: yes (maximum of 8 document uploads in the free version)
  • Literature review matrix: yes (maximum of 6 literature reviews)
  • Exporting function: yes
  • Other features: billed as "The World's Best Research Assistant": intelligent paper search, smart library organization, and rapid literature reviews

Lumina
  • Access: account set-up is encouraged
  • Sources: OpenAlex
  • LLM used: Claude 3 Haiku / OpenAI Sonnet
  • Cost: free
  • Uploading function: no
  • Literature review matrix: no, but provides an AI summary
  • Exporting function: no
  • Other features: filter by journal, source type, SJR quartile, and citations

System Pro
  • Access: account set-up is encouraged
  • Sources: PubMed (medical only; highly validated system)
  • LLM used: not stated
  • Cost: free
  • Uploading function: no
  • Literature review matrix: no, but offers research synthesis (use synthesis mode to generate a summary of studies)
  • Exporting function: no
  • Other features: 4 modes; besides synthesis mode there is …

OpenScholar
  • Access: "To access our Services, we may ask you to create an account."
  • Sources: OpenScholar Datastore (OSDS), which includes 45 million papers from Semantic Scholar plus 237 million passages/embeddings formulated by experts
  • LLM used: OpenScholar-8B (based on Llama 3.1 8B); "Outperforms GPT-4 and Llama 3.1 70B."
  • Cost: free (demo)
  • Uploading function: no
  • Literature review matrix: no, but can provide a summary and a list of references
  • Exporting function: no

From the OpenScholar article (2024):

  • Developed ScholarQABench, a specialized benchmark designed to assess LLMs on open-ended scientific questions that require synthesizing information from multiple papers
  • Answers scientific queries by identifying relevant passages from 45 million open-access papers and synthesizing citation-backed responses
  • Achieves citation accuracy on par with human experts
Ai2 ScholarQA (Beta)
  • Access: free access
  • Sources: Semantic Scholar / S2ORC (Semantic Scholar Open Research Corpus, with 8 million academic papers), mostly arXiv papers
  • LLM used: Claude 3.5 Sonnet
  • Cost: free (beta version)
  • Uploading function: no
  • Literature review matrix: yes (literature comparison table)
  • Exporting function: no
  • Other features:
    • "the model focuses on writing an answer built around evidence, rather than writing an answer and then trying to find evidence"
    • Iterative, deep searching, at a cost of speed
    • Quote extraction
    • Answer outline and clustering
    • Report generation

Perplexity
  • Access: account set-up is encouraged
  • Sources: the internet, in real time; search results include academic journals, research databases, internet sources, preprints, books, technical reports, and conference proceedings
  • LLM used: Claude 3.5, GPT-4o, and Sonar
  • Cost: free, with some limitations
  • Uploading function: yes
  • Literature review matrix: yes
  • Exporting function: yes
  • Other features:
    • Knowledge base integration
    • Real-time search
    • Fact-checking
    • Image processing
    • Speech recognition and synthesis
    • Video and audio understanding

Further Reading

Gu, J. (2024, Mar. 20). Five AI research tools that reference genuine sources. Research Bridge. https://library.hkust.edu.hk/sc/ai-tools-with-genuine-sources/

Tay, A. (2024, Aug.). List of academic search engines that use large language models for generative answers (updated to Aug 2024). Aaron Tay's Musings about Librarianship. https://musingsaboutlibrarianship.blogspot.com/p/list-of-academic-search-engines-that.html

Zeichick, A. (2023, Sept. 19). What is retrieval-augmented generation (RAG)? OCI. https://www.oracle.com/artificial-intelligence/generative-ai/retrieval-augmented-generation-rag/

Zhao, A. (2024, Mar. 20). Trust in AI: Evaluating Scite, Elicit, Consensus, and Scopus AI for generating literature reviews. Research Bridge. https://library.hkust.edu.hk/sc/trust-ai-lit-rev/
