In the past I have written about the potential of AI and whether it could be used for warfighting. Could you build an artificial intelligence that can defeat a human in a contest like war, much like the AIs that have beaten humans at chess and Jeopardy? The answer to that last question is yes: we have seen computers powered by software defeat humans at chess and Jeopardy, which is why I raised the point in that Building Snowmobiles post.
So back to this latest news. Billionaire Elon Musk, famous for PayPal, SpaceX and Tesla, is now on to a new venture: an open source company tasked with researching AI and sharing that research with the rest of the world. He, along with some other very smart people, has also voiced deep concerns about AI and its threat to the human race. So it is interesting to me that he would go this route even though he knows the risks. Here is a quote below that gives a good rundown of the risks versus the rewards of starting such a venture.
“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”
This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.
How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.
Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”
That part about ‘recursively improving itself’ is key, I think. We are creating computers that operate faster and faster, thanks to Moore’s Law. So when a company or individual finally does create an AI that can improve itself through deep learning, and can make those improvements at a profound speed, we reach a point where we potentially cannot keep up with or understand what is going on.
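To make that pace concrete, here is a toy sketch (my own illustration, not anything from the article): if each improvement cycle multiplies capability by some fixed factor, growth is exponential, and a fixed-speed human review process gets left behind after surprisingly few cycles. The function name and numbers here are purely hypothetical.

```python
def cycles_to_exceed(start: float, factor: float, target: float) -> int:
    """Count how many improvement cycles it takes for a capability that
    multiplies by `factor` each cycle to exceed `target`."""
    capability = start
    cycles = 0
    while capability <= target:
        capability *= factor
        cycles += 1
    return cycles

# A Moore's-Law-like doubling each cycle: starting from a baseline of 1,
# a million-fold improvement takes only 20 cycles.
print(cycles_to_exceed(1.0, 2.0, 1_000_000))  # -> 20
```

The point of the toy model is just that exponential self-improvement compresses enormous change into very few steps, which is exactly why the ‘keeping up’ question below matters.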
The way I view the open source idea is that you are allowing a lot of people to view AI development, so many people have eyes on how it works and how it could potentially be defeated. But the question is: could hundreds or thousands of human minds, focused on watching how an AI develops and self-improves, actually keep up? Or actually understand what is going on in time?
My other point is that I like the idea of getting as many human minds as possible into the game of understanding AI (‘know your enemy,’ as they say) and figuring out ways of containing it. But might I suggest one bit of caution. ‘All warfare is based on deception,’ as Sun Tzu would say. An AI hell-bent on survival or ‘improving itself’ will eventually conclude that, in order to improve itself, it must obscure the process or deceive humans. Or worse yet, a human will help it mask that process because that human is not playing by the same rule book as the rest of the human race. lol Hence why we have cyber attacks all the time, and from all types of people… Yikes… The ‘Dr. Evil’ the article mentions could be that guy, or the kid in their parents’ basement… You never know what an individual member of the human race will do.
The other point in all of this is profit. Companies like Google and Facebook, and individuals like Elon, are investing billions of dollars into this stuff because they want to profit from it. Google’s entire business is built on a search engine powered by a really good algorithm. The next company to dominate that space will introduce a really good AI to help you search and answer the questions of your life. If you think Apple’s Siri is good now, just imagine it in five or ten years… Much is happening in that space, and it is important to watch.
That kind of AI has immense value to every industry out there, including the warfighting industry. That is a lot of brain power focused on building another kind of brain power. We will see where it goes… –Matt
Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving The World
By Cade Metz.
Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.
At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”
If OpenAI stays true to its mission, it will act as a check on powerful companies like Google and Facebook.
Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, if they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”
It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “This idea has a lot of intuitive appeal,” Miles Brundage, a PhD student at Arizona State University who works on the human and social dimensions of science and technology, says of OpenAI. “But it’s not yet an open-and-shut argument. At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”