
AI to power next stage of human evolution

Murata Manufacturing's cheerleading robot balances on a ball and can do synchronised dance routines. Image: IDGNS

13 January 2015

Billy MacInnes

You don’t have to think of the Terminator films as documentaries to be wary of the dangers posed by artificial intelligence (AI). Almost as soon as we dreamed up the notion of robots, we started to fear they might one day replace us. Ever since, humans have been torn between two contrasting visions of the future. In one, robots are willing minions created to make the world a better place for their human overlords; in the other, they are super-intelligent beings that will one day overthrow or destroy the human race that created them. In a culture where the Frankenstein myth has become deeply engrained to the point of folk memory, it’s perhaps hardly surprising we should choose to reinvent it for the 20th and 21st centuries with added metal, electronics, flashing lights and lasers.

In this light, it was interesting to read a story on CNET reporting that the Future of Life Institute has issued an open letter, entitled “Research priorities for robust and beneficial artificial intelligence”. Signed by AI experts around the world, the letter says that in addition to making AI more capable, research should focus on “maximising the societal benefit of AI”. The letter adds: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

An accompanying research priorities document sketches the short-term and long-term research priorities to achieve the goal of robust and beneficial AI systems. The CNET story also highlighted comments from Prof Stephen Hawking and Elon Musk expressing their concern over the dangers posed by AI.

In May last year, Hawking wrote in the UK’s The Independent newspaper: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”. And Musk tweeted in August: “we need to be super careful with AI. Potentially more dangerous than nukes”. In October, he suggested regulatory oversight could be required “just to make sure we don’t do something very foolish”.

While it would be madness to disagree with distinguished intellects such as Hawking and Musk, it’s intriguing that we can be so certain AI will evolve to the point where it poses a danger to us, yet seem far less convinced of our own ability to evolve to the point where we don’t allow that to happen. I wonder if this feeds into an underlying, unconscious fear that, whereas machines are visibly evolving at an incredible rate, we humans appear to have hit our evolutionary ceiling.
