When left to its own devices, what does one of the largest neural networks for machine learning in the world do with its 16,000 computer processors and one billion connections? Solve complex environmental problems? Crunch scientific data to illuminate the mysteries of deep space? No and no. It turns out the huge, powerful neural network taught itself to recognize cats (and humans) in only three days.
Researchers from Google and Stanford working at Google’s secretive X Lab connected 16,000 processors (still far short of the human brain’s estimated 80 billion neurons) and fed the neural network digital images extracted at random from 10 million YouTube videos. The machine was then left alone to learn what it could, with no instruction on how to proceed. For three days, the network pored through the images, making connections and finding commonalities between objects. Next, the researchers tested what the computer could identify from a list of 20,000 items.
By learning from the most commonly occurring images, the computer achieved 81.7% accuracy in identifying human faces, 76.7% in identifying human body parts, and 74.8% in identifying cats. These figures represent a 70% improvement in accuracy over previous studies.
The network constructed a rough image of what a cat would look like by extracting general features as it was exposed to the 10 million images, in much the same way that a human brain uses repeated firing of specific neurons in the visual cortex to train itself to recognize a particular face. What the experiment demonstrated is that a computer can learn what something is without it being labeled; the neural network formed the concepts of both humans and cats without being prompted.
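The idea of extracting general features from unlabeled data can be sketched in miniature. The toy below is a tiny linear autoencoder: it is never told what the data "is," yet it learns a compressed representation simply by trying to reconstruct its inputs. All of the data and dimensions here are invented for illustration; the actual Google/Stanford network was a far larger, multi-layer sparse autoencoder trained on video frames.

```python
import numpy as np

# Minimal sketch of unsupervised feature learning (illustrative only --
# not the architecture used in the Google/Stanford experiment).

rng = np.random.default_rng(0)

# Synthetic stand-in for the images: 200 samples of 16-dimensional data
# that actually lie near a 4-dimensional subspace, mimicking shared
# structure (edges, eyes, ears) repeated across many pictures.
latent = rng.normal(size=(200, 4))
basis = rng.normal(size=(4, 16))
X = latent @ basis + 0.01 * rng.normal(size=(200, 16))
X /= X.std()  # keep gradient steps well-scaled

# Encoder/decoder weights: 16 -> 4 -> 16, small random start.
W_enc = 0.1 * rng.normal(size=(16, 4))
W_dec = 0.1 * rng.normal(size=(4, 16))

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

initial = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(500):
    H = X @ W_enc                     # hidden "features"
    err = H @ W_dec - X               # reconstruction error drives learning
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

No label ever enters the loop; the falling reconstruction error shows the hidden layer discovering the structure the samples share, which is the same principle, at vastly larger scale, behind the network's self-taught "cat" feature.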
Machine learning uses algorithms that allow computers to evolve their behavior based on data, recognizing complex problems and making intelligent, data-driven decisions. The problem is that with complex problems and large data sets, it is next to impossible to cover every possible variable, so the system must generalize from the data it has. In the case of the cats, Google’s neural network was able to generalize what humans and cats look like with relatively high accuracy.
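Generalizing from unlabeled data can be illustrated with a deliberately tiny example: plain k-means clustering finds two groups in a point cloud with no labels, then assigns never-before-seen points to the nearest learned group. The clusters, coordinates, and the `cluster` helper below are all hypothetical, a miniature analogue of the network forming "human" and "cat" concepts from unlabeled frames.

```python
import numpy as np

# Toy sketch of generalization from unlabeled data via k-means
# (hypothetical example -- not the method used in the experiment).

rng = np.random.default_rng(1)

# Two unlabeled "concepts" as 2-D clusters around (0, 0) and (3, 3).
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2)),
])
rng.shuffle(data)

# Initialize one centroid at an arbitrary point and the other at the
# point farthest from it, so the starting guesses are well separated.
c0 = data[0].copy()
c1 = data[np.argmax(np.linalg.norm(data - c0, axis=1))].copy()

for _ in range(10):  # standard k-means updates, no labels involved
    near_c0 = np.linalg.norm(data - c0, axis=1) < np.linalg.norm(data - c1, axis=1)
    c0 = data[near_c0].mean(axis=0)
    c1 = data[~near_c0].mean(axis=0)

def cluster(point):
    # Generalization step: label an unseen point by the nearest centroid.
    return 0 if np.linalg.norm(point - c0) < np.linalg.norm(point - c1) else 1

# Points the model never saw still fall into the two learned concepts.
print(cluster(np.array([0.2, -0.1])), cluster(np.array([2.8, 3.1])))
```

The model cannot memorize every possible point, so it summarizes each group by a centroid and uses that summary to classify new inputs, which is generalization in its simplest form.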
The potential applications of this kind of neural network are broad. Speech recognition, facial recognition, and translation software would all benefit from machine learning that requires only vast amounts of data, with no hints or guidance from human operators. The current Big Data movement is providing huge quantities of data, and it is encouraging to know that there may be actual applications for it.
Reflecting the confidence that Google has in the project, the company is moving the neural network research out of X Lab and into its division charged with search and related services. Expect to see larger neural networks with even higher accuracy rates constructed in the near future. Hopefully we can move on to using them for something more important… perhaps even recognizing dogs?
Cody Burke is a senior analyst at Basex. He can be reached at email@example.com