Tesla’s Musk had previously also invested in Vicarious, an artificial intelligence company that aims to build a “unified algorithmic architecture” to achieve human-level intelligence in language, vision and motor control. Many others, including Stephen Hawking, Bill Gates and Apple co-founder Steve Wozniak, have expressed serious concerns over the rise of AI.
Musk is not the only prominent personality cautioning humanity against the development of artificially intelligent machines.
Another $4 million is to be distributed later, awarded once the institute has determined the most promising fields of research to focus on.
Musk said last August that AI could be “potentially more dangerous than nukes” and followed that in October by saying that AI may require “regulatory oversight” so the world doesn’t “do something very foolish”.
Elon Musk is not worried, however, that machines will be sent from the future to assassinate people or take over the world, a plot popular in Hollywood movies.
In January Elon Musk donated $10 million to the Future of Life Institute, and now a large portion of that money has been assigned to help research teams develop ways of keeping AI technologies safe. The teams will undertake research in economics, law and computer science, but all work toward that common goal.
The grant winners, however, are not necessarily expected to be critical of AI.
For more than 50 years, academics and companies have struggled to push AI toward the level of human intelligence. A Duke University research project that netted $200,000 will study ethics and AI, while another from Rice University will spend its $69,000 on how AI will affect the future of work.
Nick Bostrom, the Oxford University philosopher and author of the book “Superintelligence: Paths, Dangers, Strategies”, wants to create a joint Oxford-Cambridge research center to develop policies, to be enforced by governments, industry leaders and others, that would minimize AI’s long-term risks and maximize its benefits. That research will help mitigate possible catastrophes from the technology. The research groups, which will receive their respective grants over the span of three years, aim to find ways to “manage the liability for the harms they [AI] might cause to individuals and property”, Fortune reported.
The majority of these special AI-related projects will begin in earnest sometime this September. The Future of Life Institute has not said when final reports will be given.