Google, IBM look to mimic the human brain


11 July 2017


Several years ago, there were reports that an IBM artificial intelligence (AI) project had mimicked the brain of a cat. This, of course, prompted the question of whether it spent 18 hours a day in sleep mode.

Joking aside, the claim was later debunked, but efforts to simulate the brain continue, using new types of processors that are far faster and more brain-like than standard x86 chips. IBM and the US Air Force have announced one such project, while Google has its own.

Researchers from Google and the University of Toronto have quietly released an academic paper titled “One Model To Learn Them All”. In it, Google proposes a template for creating a single machine learning model that can address multiple tasks.

MultiModel
Google calls this MultiModel. The model was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition and object detection. Google found that the machine incrementally learned to perform the tasks better: machine translation, for example, improved with each pass.

More significantly, Google’s MultiModel improved its accuracy while needing less training data. That is important because you might not always have enough data available to train the computer. One of the problems with deep learning is that you have to prime the pump, so to speak, with a lot of information before learning can begin. Here, the model managed with less.

The challenge, the researchers note, is to create a single, unified deep learning model that can solve tasks across multiple domains, because right now each task requires its own significant data preparation before learning can begin.
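To make the idea concrete, here is a minimal sketch in PyTorch of a single shared trunk feeding one small output head per task. It illustrates only the shared-model principle, not Google's actual MultiModel architecture (which combines convolutional blocks, attention and mixture-of-experts layers), and every name and dimension in it is hypothetical.

```python
# Minimal sketch of the "one model, many tasks" idea: one shared trunk
# whose parameters are updated by every task, plus a small head per task.
# Illustrative only; not Google's MultiModel architecture.
import torch
import torch.nn as nn

class SharedTrunkModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, task_dims):
        super().__init__()
        # The trunk is shared: gradients from every task flow into it,
        # which is where the cross-task transfer comes from.
        self.trunk = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One lightweight output head per task (translation, parsing, ...).
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, dim) for task, dim in task_dims.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

# Training alternates batches from different tasks through the same trunk.
model = SharedTrunkModel(128, 256, {"translate": 32000, "parse": 50})
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for task, n_classes in (("translate", 32000), ("parse", 50)):
    x, y = torch.randn(8, 128), torch.randint(0, n_classes, (8,))
    loss = nn.functional.cross_entropy(model(x, task), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a toy like this, features learned for one task are available to the others for free, which is the same mechanism behind MultiModel needing less task-specific data.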

IBM and USAF
In the case of IBM and the US Air Force Research Lab, the two have announced plans to build a supercomputer based on IBM’s TrueNorth neuromorphic architecture. Neuromorphic architectures are very-large-scale integration (VLSI) systems containing electronic analogue circuits designed to mimic the neural architectures of the nervous system. The chips mix analogue and digital elements, so they can do more than the usual binary on/off switching of digital processors, again to mimic the complexity of nerve cells.
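At the level of a single unit, the event-driven, spiking style of computation these chips implement can be sketched in a few lines. Below is a toy leaky integrate-and-fire neuron in plain Python; it is a software caricature of what TrueNorth bakes into silicon at scale, and the parameter values are arbitrary.

```python
# Toy leaky integrate-and-fire (LIF) neuron, the basic unit that
# neuromorphic hardware implements in silicon. Illustrative sketch only;
# real chips run millions of such neurons in parallel and communicate
# purely via spike events. All parameter values here are arbitrary.

def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Return a spike train (0 or 1 per timestep) for a current trace."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:              # fire once the threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A neuron that receives input only occasionally spikes only occasionally,
# which is why event-driven designs can idle at near-zero power.
print(simulate_lif([0.0, 0.6, 0.6, 0.0, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 0, 1]
```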

IBM’s TrueNorth chips first came out in 2014 after several years of research under the DARPA SyNAPSE programme. Interest has picked up as people realise that x86 processors, and even FPGAs, are simply not up to the task of mimicking brain cells.

There are quite a few organisations working on neuromorphic design, including Stanford University, the University of Manchester, Intel, Qualcomm, Fujitsu, NEC and IBM.

Low power
The new supercomputer will consist of 64 million neurons and 16 billion synapses while drawing just 10W of wall power, less than a lightbulb. The system will fit in a 4U space in a standard server rack, allowing up to 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organised into 4,096 neural cores, creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses.
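Those figures are mutually consistent: at 1 million neurons and 256 million synapses per chip, a 64-million-neuron system implies 64 chips, whose synapses sum to roughly the quoted 16 billion. A quick check:

```python
# Back-of-the-envelope check of the quoted TrueNorth system specs.
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000
system_neurons = 64_000_000

chips = system_neurons // neurons_per_chip
print(chips)                      # 64 chips in the system
print(chips * synapses_per_chip)  # 16,384,000,000, i.e. the quoted ~16 billion synapses
```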

So, what will they do with it? The Air Force operates military systems that must recognise and categorise data from multiple sources (images, video, audio and text) in real time. Some of those systems are ground-based, while others are installed in aircraft, so the Air Force would like deep neural learning both on the ground and in the air.

The real advance in these neural processors is that they stay off until they are actually needed, which is how they achieve such a remarkably low power draw, as with the IBM chip. That would be welcome in the supercomputing world, where today's machines draw power in the megawatts.


IDG News Service
