As every scholarly researcher knows, the creation of knowledge depends on scientists’ ability to find, understand, and synthesize the work that has come before. Significant (and career-building) research progress is made only by building on the work of peers, whether their results are groundbreaking, negative, or simply launch points for further iteration.
Technology is often described as the application of science, so it is fitting that technology should come full circle to help advance science and accelerate discovery. But scientific progress does not develop in a vacuum. Much like machine learning itself, it learns and grows by examining past research, successes and failures alike.
Have you ever had tasks that you wish someone else—or something else—could do for you or make easier? Like the seemingly endless activity of performing keyword searches? Or skimming articles to discern what’s most relevant in their content? Or even determining if a paper is worth purchasing at all, let alone reading and prioritizing over all the other literature?
Researchers in cognitive and computational science have a deep interest in such questions and have already begun creating tools that promise to simplify the literature exploration process. But even the most sophisticated among these tools—whether intended to aid researchers in discovering relevant publications, validating existing hypotheses, or uncovering relationships between seemingly unrelated findings—are still in their infancy. At this point, they are best used as a supplement to existing search engines, databases, and research methodologies.
Natural language processing and machine learning have transformative potential
Inputting search terms into scholarly literature search engines or citation databases can be time-consuming and tedious. Searches typically return enormous numbers of results, and many of the papers retrieved turn out to be less relevant or less replicable than what researchers need.
New algorithms rely on innovations in data mining, computer vision, and natural language processing to do more than merely scan abstracts for keywords. AI-powered tools like Semantic Scholar and Iris.ai can rank findings by relevance, report how frequently a paper has been cited, and display a visual map of the relationships between studies.
These projects rely on semantic search techniques, which try to understand a searcher’s intent and the meaning of the terms used in order to return the most relevant results. For researchers working in fields whose journals these tools have indexed, they promise to speed and simplify the literature search process. They also provide deeper insight into the papers themselves than a list of results can, illuminating relationships among research methodologies, data sets, and citations, even across disciplines.
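The core idea behind semantic search can be illustrated with a toy sketch: documents and queries are mapped to numeric vectors that encode meaning, and results are ranked by vector similarity rather than keyword overlap. The paper titles and embedding values below are invented for illustration; real systems derive such vectors from language models trained on large text corpora.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way (similar meaning).
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 3-dimensional "meaning" vectors for three papers.
papers = {
    "CRISPR gene editing in crops": [0.9, 0.1, 0.3],
    "Deep learning for protein folding": [0.2, 0.9, 0.5],
    "Survey of transformer architectures": [0.1, 0.8, 0.7],
}

# Hypothetical embedding of the query "neural networks in biology".
# Note it shares no keywords with the top-ranked title; only its vector matters.
query_vec = [0.2, 0.9, 0.5]

ranked = sorted(papers, key=lambda title: cosine(query_vec, papers[title]), reverse=True)
print(ranked)  # most semantically similar paper first
```

A keyword search for "neural networks in biology" might miss all three papers; the vector comparison surfaces the protein-folding paper because its meaning, not its wording, matches the query.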
Semantic Scholar, developed by the Allen Institute for Artificial Intelligence (AI2), is probably the most widely used of these new search tools. It is said to handle over a million search queries per month. But most researchers continue to use it in conjunction with traditional literature discovery tools like Google Scholar and PubMed. The reason? Semantic Scholar indexes around 40 million documents, mostly in computer science and biomedical research. This is a fraction of the size of Google Scholar’s article database, which includes more than 160 million records.
Neural networks may help scientists keep abreast of research in adjacent fields
Researchers at the Massachusetts Institute of Technology are at work on AI-powered tools that could ultimately make it easier for scientists, and members of the public, to keep track of the latest findings published in scientific papers. Their team is training neural networks to produce concise, readable summaries of academic papers, and hopes the results will help scientists, and the librarians who support them, keep up with more research in less time. Although such tools are growing in accuracy, they are not yet reliable enough for regular use in the literature review process.
Our take: we expect advances in the next few years that will make it easier for researchers to find, synthesize, understand, and measure the impact of developments in their field. When, where, and how that progress arrives is, in some ways, up to us. What we know for sure is that keeping up with the cutting edge of scientific discovery is not a nice-to-have but a necessity, and discovery alone is no guarantee of access.
To see some of today’s most efficient tools for discovery paired with on-demand access to scientific literature in action, sign up for a 14-day free trial* of the Article Galaxy enterprise solution.