The institute will award grants to 37 research teams, chosen from roughly 300 applicants, who will study a range of current problems in AI – among them, understanding basic human wants and needs, and establishing common ground between the interests of robots and humans.
The types of disasters Elon Musk imagines are not the typical Hollywood fare of machines gaining consciousness and turning on humanity, but rather more practical in nature.
Today the Future of Life Institute announced that it has awarded $7 million in funding to 37 research teams around the world, so that society can reap the benefits of AI while avoiding an apocalypse and the end of life on Earth.
The money comes from a $10 million donation that Musk, founder of Tesla and SpaceX, made to the institute in January.
“With artificial intelligence, we are summoning the demon,” Musk, also CEO of Space Exploration Technologies Corp., said in October. The roughly $7 million to be divided among the research teams comes from Musk and the Open Philanthropy Project. Winners included groups from Carnegie Mellon University, Stanford University and a new AI centre jointly run by Oxford and Cambridge universities in the United Kingdom, the institute said. As the projects produce research, Tegmark hopes to attract other sources of funding to create a follow-up programme.
Under the programme, these teams will undertake research spanning economics, law and computer science.
Tesla’s Musk had previously invested in Vicarious, an artificial intelligence company that aims to build a “unified algorithmic architecture” intended to attain human-level intelligence in language, vision and motor control. “It’s not from the standpoint of actually trying to make any investment return,” he said of that investment. “So we need to be very careful.” In one of the funded projects, AI systems will explain their decisions to humans. Before awarding the grants, the FLI first had to decide which specific projects were the most likely to bear fruit.
Musk himself has invoked the Terminator series in the past when discussing the evolution of AI, saying it could be more dangerous than nuclear weapons. Prof. Hawking has said that the future of humanity is a race between the growing power of technology and the wisdom with which it is used.