Feral Jundi

Monday, December 21, 2015

Technology: Open AI, The Good And The Bad

Filed under: Technology — Matt @ 3:49 PM

In the past I have written about the potential of AI and whether it could be used for war fighting. Could you create an artificial intelligence that can defeat a human in a contest like war, much as AIs have been created that can beat a human at chess or Jeopardy? The answer to that last question is yes: we have seen computers powered by software defeat humans at chess and Jeopardy, and that is why I raised the point in that Building Snowmobiles post.

So back to this latest news. Billionaire Elon Musk, famous for PayPal, SpaceX and Tesla, is now onto a new venture: an open source company tasked with researching AI and sharing that research with the rest of the world. He, along with some other very smart people, has also shared deep concerns about AI and its threat to the human race. So it is interesting to me that he would go this route even though he knows the risks. The quote below gives a good rundown of the risks versus the rewards of starting such a venture.

“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”
This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.
How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.
Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”

That part about ‘recursively improving itself’ is key, I think. We are creating computers that operate faster and faster, thanks to Moore’s Law. So when a company or individual finally does create an AI that can improve itself through deep learning, and can make those improvements at profound speed, we reach a point where we potentially cannot keep up with, or even understand, what is going on.
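To see why the word ‘recursively’ matters, here is a toy arithmetic sketch. It is purely illustrative and not a model of any real AI system: a capability improved at a fixed outside pace grows linearly, while a capability whose gains are driven by its own current level compounds exponentially and leaves the first one far behind.

```python
# Toy illustration only: compare capability improved at a constant
# outside pace against capability whose improvement feeds on itself.

def fixed_improvement(start=1.0, rate=0.1, cycles=50):
    """Capability improved by a constant amount each cycle (outside pace)."""
    cap = start
    for _ in range(cycles):
        cap += rate                 # gain does not depend on capability
    return cap

def recursive_improvement(start=1.0, rate=0.1, cycles=50):
    """Capability whose gain each cycle is proportional to its own level."""
    cap = start
    for _ in range(cycles):
        cap += rate * cap           # the system's own capability drives the gain
    return cap

print(fixed_improvement())          # roughly 6.0 (linear growth)
print(recursive_improvement())      # roughly 117 (exponential: 1.1 ** 50)
```

After fifty cycles the self-compounding curve is already some twenty times ahead, and the gap widens every cycle; that widening gap is the thing the ‘keep up or understand’ worry is about.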

The way I view the open source idea is that you are allowing a lot of people to watch AI development, so many eyes are on how it works and how it could potentially be defeated. But the question is, could the hundreds or thousands of human minds focused on watching AI develop and self-improve actually keep up? Or actually understand what is going on in time?

My other point is that I like the idea of getting as many human minds as possible into the game of understanding AI (know your enemy, as they say) and figuring out ways of containing it. But might I suggest one bit of caution here. All warfare is based on deception, as Sun Tzu would say. An AI hell-bent on survival or ‘improving itself’ will have to conclude that in order to improve itself it must obscure the process or deceive humans. Or worse yet, a human will help it mask that process because that human is not playing by the same rule book as the rest of the human race. lol Hence the constant cyber attacks we see, from all types of people… Yikes… The ‘Dr. Evil’ the article mentions, or the kid in their parent’s basement, could be that guy… You never know with the human race what the individual will do.

The other point in all of this is profit. Companies like Google and Facebook, and individuals like Elon, are investing billions of dollars into this stuff because they want to profit from it. Google’s entire business is built on a search engine powered by a really good algorithm. The next company to dominate that space will introduce a really good AI to help you search and answer the questions of your life. If you think Apple’s Siri is good now, just imagine five or ten years from now… Much is happening in that space, and it is important to watch.

That kind of AI has immense value to every industry out there, including the war fighting industry. That is a lot of brain power focused on building another kind of brain power. We will see where it goes… –Matt

 


Screen shot from the movie Ex Machina.

Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving The World
By CADE METZ.
12/15/2015
Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.
At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”
Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, if they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”
It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “This idea has a lot of intuitive appeal,” Miles Brundage, a PhD student at Arizona State University who studies the human and social dimensions of science and technology, says of OpenAI. “But it’s not yet an open-and-shut argument. At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”


But in the creation of OpenAI, there are more forces at work than just the possibility of super-human intelligence achieving world domination. In the shorter term, OpenAI can directly benefit Musk and Altman and their companies (Y Combinator backed such unicorns as Airbnb, Dropbox, and Stripe). After luring top AI researchers from companies like Google and setting them up at OpenAI, the two entrepreneurs can access ideas they couldn’t get their hands on before. And in pooling online data from their respective companies as they’ve promised to, they’ll have the means to realize those ideas. Nowadays, one key to advancing AI is engineering talent, and the other is data.
If OpenAI stays true to its mission of giving everyone access to new ideas, it will at least serve as a check on powerful companies like Google and Facebook. With Musk, Altman, and others pumping more than a billion dollars into the venture, OpenAI is showing how the very notion of competition has changed in recent years. Increasingly, companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive.
The Advantages of Open
OpenAI is the culmination of an extremely magnanimous month in the world of artificial intelligence. In early November, Google open sourced (part of) the software engine that drives its AI services—deep learning technologies that have proven enormously adept at identifying images, recognizing spoken words, translating languages, and understanding natural language. And just before the unveiling of OpenAI, Facebook open sourced the designs for the computer server it built to run its own deep learning services, which tackle many of the same tasks as Google’s tech. Now, OpenAI is vowing to share everything it builds—and a big focus seems to be, well, deep learning.
Yes, such sharing is a way of competing. If a company like Google or Facebook openly shares software or hardware designs, it can accelerate the progress of AI as a whole. And that, ultimately, advances their own interests as well. For one, as the larger community improves these open source technologies, Google and Facebook can push the improvements back into their own businesses. But open sourcing is also a way of recruiting and retaining talent. In the field of deep learning in particular, researchers—many of whom come from academia—are very much attracted to the idea of openly sharing their work, of benefiting as many people as possible. “It is certainly a competitive advantage when it comes to hiring researchers,” Altman tells WIRED. “The people we hired … love the fact that [OpenAI is] open and they can share their work.”
This competition may be more direct than it might seem. We can’t help but think that Google open sourced its AI engine, TensorFlow, because it knew OpenAI was on the way—and that Facebook shared its Big Sur server design as an answer to both Google and OpenAI. Facebook says this was not the case. Google didn’t immediately respond to a request for comment. And Altman declines to speculate. But he does say that Google knew OpenAI was coming. How could it not? The project nabbed Ilya Sutskever, one of its top AI researchers.
That doesn’t diminish the value of Google’s open source project. Whatever the company’s motives, the code is available to everyone to use as they see fit. But it’s worth remembering that, in today’s world, giving away tech is about more than magnanimity. The deep learning community is relatively small, and all of these companies are vying for the talent that can help them take advantage of this extremely powerful technology. They want to share, but they also want to win. They may release some of their secret sauce, but not all. Open source will accelerate the progress of AI, but as this happens, it’s important that no one company or technology becomes too powerful. That’s why OpenAI is such a meaningful idea.
His Own Apollo Program
You can also bet that, on some level, Musk too sees sharing as a way of winning. “As you know, I’ve had some concerns about AI for some time,” he told Backchannel. And certainly, his public fretting over the threat of an AI apocalypse is well known. But he also runs Tesla, which stands to benefit from the sort of technology OpenAI will develop. Like Google, Tesla is building self-driving cars, which can benefit from deep learning in enormous ways.
Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.
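The learn-from-examples principle described above can be shown with the smallest possible “network”: a single artificial neuron. The sketch below is a toy in plain Python, orders of magnitude removed from the deep networks Google or Tesla run, but the core idea is the same one the paragraph describes: show the system examples, and nudge its weights whenever it guesses wrong.

```python
# A single artificial neuron (perceptron) learning the logical OR
# function from examples. Scaled up into deep networks with millions
# of weights, this same adjust-on-error principle is what learns to
# recognize cats or to drive. Toy sketch only.

def train_perceptron(samples, targets, lr=1.0, epochs=10):
    """Repeatedly show examples; nudge weights whenever the guess is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), t in zip(samples, targets):
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if out != t:                      # wrong guess: adjust weights
                w[0] += lr * (t - out) * x1
                w[1] += lr * (t - out) * x2
                b += lr * (t - out)
                errors += 1
        if errors == 0:                       # every example now correct
            break
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                        # logical OR, learned from data
w, b = train_perceptron(samples, targets)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in samples]
print(preds)  # [0, 1, 1, 1] -- the rule was never programmed in, only learned
```

Nobody wrote the OR rule into the code; the neuron found weights that reproduce it from four labeled examples. Deep learning replaces the four examples with millions of photos or miles of driving data, and the single neuron with stacked layers of them.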
Yes, Musk could just hire AI researchers to work at Tesla. And he is. But with OpenAI, he can hire better researchers (because it’s open, and because it’s not constrained by any one company’s business model or short-term interest). He can even lure researchers away from Google. Plus, he can create a far more powerful pool of data that can help feed the work of these researchers. Altman says that Y Combinator companies will share their data with OpenAI, and that’s no small thing. Pair their data with Tesla’s, and you start to rival Google—at least in some ways.
“It’s probably better in some dimensions and worse in others,” says Chris Nicholson, the CEO of a deep learning startup called Skymind, which was recently accepted into the Y Combinator program. “I’m sure Airbnb has great housing data that Google can’t touch.”
Musk was an early investor in a company called DeepMind—a UK-based outfit that describes itself as “an Apollo program for AI.” And this investment gave him a window into how this remarkable technology was developing. But then Google bought DeepMind, and that window closed. Now, Musk has started his own Apollo program. He once again has the inside track. And OpenAI’s other investors are in a similar position, including Amazon, an Internet giant that trails Google and Facebook in the race to AI.
Pessimistic Optimists
But, no, this doesn’t diminish the value of Musk’s open source project. He may have selfish as well as altruistic motives. But the end result is still enormously beneficial to the wider world of AI. In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do so as well—if it hasn’t already. That’s good for Tesla and all those Y Combinator companies. But it’s also good for everyone that’s interested in using AI.
Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook. And Dr. Evil, wherever he may lurk. He can feed anything OpenAI builds back into his own systems. But the biggest concern isn’t necessarily that Dr. Evil will turn this tech loose on the world. It’s that the tech will turn itself loose on the world. Deep learning won’t stop at self-driving cars and natural language understanding. Top researchers believe that, given the right mix of data and algorithms, its understanding can extend to what humans call common sense. It could even extend to super-human intelligence.
“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”
This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.
How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.
Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”
Story here.
