Saturday, December 12, 2015

Machines learn like humans

http://stgist.com/2015/12/scientists-develop-algorithm-that-teach-machines-to-learn-like-humans-6039

Scientists say they’ve developed an algorithm that can teach machines how to learn like humans.

By: Matt Dayo

Artificial intelligence researchers at New York University, the University of Toronto in Canada, and the Massachusetts Institute of Technology reported Thursday that they have developed an algorithm that captures human-level learning abilities, allowing machines to surpass humans on a narrow set of vision-related tasks.

The New York Times reported Thursday that the article, published in the journal Science, is "noteworthy" because machine-vision technology is becoming more common, from video games to self-driving cars such as Google's, which can detect pedestrians.

Brenden Lake, the lead author of the research, explains in a press release that by "reverse engineering" how humans think about a problem, the team can develop a better algorithm. Lake, a Moore-Sloan Data Science Fellow at New York University, added that their research could also help develop methods to narrow the gap on other machine learning tasks.

When humans are exposed to new abstract ideas, such as a letter in an unfamiliar alphabet, they often need only a few examples to understand its form and recognize new instances. Machines, by contrast, typically need hundreds to thousands of examples before they can perform with similar accuracy.

Ruslan Salakhutdinov, a co-author of the study, explains that it has been difficult to build machines that replicate human-level learning.

Salakhutdinov, a University of Toronto assistant professor of Computer Science, says replicating these human-level abilities is an exciting area of research that connects machine learning with statistics, computer vision and even cognitive science.

Lake, Salakhutdinov, and Professor Joshua Tenenbaum of the MIT Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines have developed a framework called "Bayesian Program Learning," or BPL, in which concepts are represented as simple computer programs. For example, the letter A is represented by code that resembles the work of a computer programmer. No human programmer is required during the learning process, the study says, because the algorithm constructs its own code to produce the letter it sees.

In addition, the algorithm can produce a different output each time it is executed, allowing machines to capture the way instances of a concept vary, such as the difference between the letter 'A' as drawn by a 15-year-old and by a 30-year-old.
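To make those two ideas concrete, here is a minimal Python sketch, not the authors' actual BPL model: a "concept" is a tiny program built from stroke primitives, and because each execution re-samples noise, every run yields a slightly different rendition of the same letter. The primitive names, coordinates, and noise level are invented for illustration.

```python
import random

# Highly simplified sketch (not the published BPL model): a "concept" is a small
# program -- a fixed set of stroke primitives -- and every execution re-samples
# noise, so each run draws a slightly different 'A'.

STROKE_PRIMITIVES = {
    "left_diagonal":  [(0.0, 0.0), (0.5, 1.0)],
    "right_diagonal": [(1.0, 0.0), (0.5, 1.0)],
    "crossbar":       [(0.25, 0.5), (0.75, 0.5)],
}

def jitter(point, noise=0.03):
    """Perturb a control point to model motor variability between writers."""
    x, y = point
    return (x + random.gauss(0, noise), y + random.gauss(0, noise))

def letter_A_program(noise=0.03):
    """One execution of the 'letter A' program: returns three noisy strokes."""
    return [[jitter(p, noise) for p in STROKE_PRIMITIVES[name]]
            for name in ("left_diagonal", "right_diagonal", "crossbar")]

# Two executions of the same program yield two different but recognizable A's,
# loosely analogous to the same letter written by two different people.
print(letter_A_program())
print(letter_A_program())
```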

Researchers say BPL is designed to capture both the causal and compositional properties of real-world processes, allowing machines to use data more efficiently. Their model also "learns to learn" by using knowledge from previous concepts to speed learning on new concepts. For example, using knowledge gained from learning the Greek alphabet, it can learn the Latin alphabet much faster.
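That "learning to learn" idea can be sketched the same way, again only as an assumption-laden illustration rather than the study's actual method: stroke statistics gathered from one alphabet form a prior that biases which decompositions are tried first for characters in a new alphabet.

```python
from collections import Counter

# Hypothetical sketch of "learning to learn": stroke-level knowledge gathered
# from one alphabet becomes a prior that speeds up learning in another.
# The primitive names and example characters below are invented for illustration.

def build_stroke_prior(learned_characters):
    """Estimate how often each stroke primitive appeared in already-learned characters."""
    counts = Counter(stroke for char in learned_characters for stroke in char)
    total = sum(counts.values())
    return {stroke: n / total for stroke, n in counts.items()}

# Strokes "learned" from a first alphabet (say, Greek) ...
greek_characters = [
    ["arc", "vertical"],          # a rho-like shape
    ["horizontal", "vertical"],   # a gamma-like shape
    ["arc", "arc"],               # an omicron-like shape
]
prior = build_stroke_prior(greek_characters)

# ... bias which decompositions are tried first for a new (say, Latin) character,
# so fewer examples are needed to settle on a plausible program.
candidate_decompositions = [["arc", "vertical"], ["zigzag", "dot"]]
best = max(candidate_decompositions,
           key=lambda strokes: sum(prior.get(s, 0.0) for s in strokes))
print("Most plausible decomposition under the transferred prior:", best)
```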
Lake, Salakhutdinov, and Tenenbaum applied their algorithm to more than 1,600 types of handwritten characters drawn from fifty of the world's writing systems, including Sanskrit and even the characters invented for Futurama, a U.S. television series.

In the study, the authors pitted their algorithm against humans in a series of "visual Turing tests."

The authors asked both humans and computers running the algorithm to reproduce a series of handwritten characters after being shown a single example of each character, and then compared the outputs from the machines and the humans.

In the visual Turing tests, human judges were given paired examples of the human and the machine output, along with the original prompt, and were asked to identify which of the symbols had been produced by the computer.
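A hypothetical sketch of how such a judging session might be structured follows; the drawings are placeholder strings and the procedure is simplified from the paper's setup.

```python
import random

# Hypothetical sketch of one "visual Turing test" session: for each prompt,
# a judge sees one human-drawn and one machine-drawn rendition in random order
# and must say which one came from the machine.

def run_judge_session(trials, judge):
    """Return the fraction of trials on which the judge correctly spots the machine."""
    correct = 0
    for human_drawing, machine_drawing in trials:
        pair = [("human", human_drawing), ("machine", machine_drawing)]
        random.shuffle(pair)                       # hide which slot is which
        guess_index = judge([d for _, d in pair])  # judge picks the slot they think is the machine's
        if pair[guess_index][0] == "machine":
            correct += 1
    return correct / len(trials)

# A judge who cannot tell the difference is effectively guessing between the two slots,
# which is roughly what the study reports for most judges.
guessing_judge = lambda drawings: random.randrange(2)
trials = [(f"human-A-{i}", f"machine-A-{i}") for i in range(50)]
print("Accuracy of a guessing judge:", run_judge_session(trials, guessing_judge))
```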

While the judges' accuracy varied across characters, the researchers say that for each visual Turing test fewer than 25 percent of judges performed significantly better than chance in assessing whether a computer or a human had produced a given set of symbols.
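What "significantly better than chance" can mean in practice is illustrated below with a simple one-sided binomial test; the trial count and significance threshold are assumptions for illustration, not the study's actual analysis settings.

```python
from math import comb

# With a 50% chance level, a judge's accuracy over n trials can be compared to
# pure guessing with a one-sided binomial test.

def binomial_p_value(correct, n, p=0.5):
    """P(getting >= `correct` right out of `n` trials if the judge is guessing)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(correct, n + 1))

n_trials = 40  # assumed number of judgments per judge, for illustration only
for correct in (20, 24, 28, 32):
    p = binomial_p_value(correct, n_trials)
    verdict = "better than chance" if p < 0.05 else "consistent with guessing"
    print(f"{correct}/{n_trials} correct: p = {p:.3f} -> {verdict}")
```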
Professor Tenenbaum says they are "still far" from building machines as smart as a human child, but this is the first time a machine has been able to learn and use a large class of concepts.

Credit: The research, titled "Human-level concept learning through probabilistic program induction," is now accessible via Sciencemag.org.