Machine learning is a fast-growing field, and Google has been leading the charge for years. The company uses AI to predict flight delays, improve virtual keyboards, give names to streets, create charts from spreadsheets, recommend online articles, and much more.

However, machine learning is not the precise technology that many assume it is. Ali Rahimi, a researcher at the company, received a 40-second ovation at an AI conference for calling machine learning "a form of alchemy." He said researchers often don't know why some algorithms work while others don't. "Many of us feel like we're operating on an alien technology," Rahimi explained.

He collaborated with other researchers on a paper, presented on April 30 at the International Conference on Learning Representations in Vancouver, that details AI's 'reproducibility problem.' In short, it argues that many researchers don't understand how their algorithms reach conclusions: they simply keep tweaking the program until it behaves the way they want. As a result, these algorithms can't be easily recreated by others, because the researchers themselves can't fully explain them.
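The "keep tweaking until it behaves" pattern the paper describes can be sketched as blind trial and error over a hyperparameter. This is a minimal, hypothetical illustration, not code from the paper: the toy `train` function and the search range are invented here, and a real model simply stands in for the quadratic being minimized.

```python
import random

# Toy stand-in for "training a model": gradient descent on f(w) = (w - 3)^2.
# Whether it converges depends entirely on the learning rate we happen to pick.
def train(learning_rate, steps=50):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)          # derivative of (w - 3)^2
        w -= learning_rate * grad
    return (w - 3) ** 2             # final loss; lower is better

# The trial-and-error loop: sample hyperparameters until the result
# looks good, with no theory explaining why the winning value works.
random.seed(0)
best_rate, best_loss = None, float("inf")
for _ in range(20):
    rate = random.uniform(0.001, 1.2)   # arbitrary search range
    loss = train(rate)
    if loss < best_loss:
        best_rate, best_loss = rate, loss

print(f"best learning rate: {best_rate:.3f}, loss: {best_loss:.9f}")
```

The search does land on a rate that drives the loss to nearly zero, but nothing in the loop records *why* that value works, which is exactly the reproducibility gap Rahimi's team describes.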

The team also claimed that most papers on AI focus on algorithms beating benchmarks, not on how those algorithms actually work. "The purpose of science is to generate knowledge," Rahimi said. "You want to produce something that other people can take and build on."