Feral Jundi

Monday, December 21, 2015

Technology: OpenAI, The Good And The Bad

Filed under: Technology — Matt @ 3:49 PM

In the past I have written about the potential of AI and whether it could be used for war fighting. Could you build an artificial intelligence that can defeat a human in a contest like war, much like the AIs that can beat a human at chess or Jeopardy? The answer to that last question is yes, we have already seen computers powered by software defeat humans at chess and Jeopardy, and that is why I raised the point in that Building Snowmobiles post.

So back to this latest news. Billionaire Elon Musk, famous for PayPal, SpaceX and Tesla, is now onto a new venture: an open source company tasked with researching AI and sharing that research with the rest of the world. He, along with some other very smart humans, has also shared deep concerns about AI and its threat to the human race. So it is interesting to me that he would want to go this route, even though he knows the risks. Here is a quote below that gives a good rundown of the risks versus the rewards of starting such a venture.

“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”
This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.
How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.
Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”

That part about ‘recursively improving itself’ is key, I think. We are creating computers that can operate faster and faster, thanks to Moore’s Law. So when a company or individual finally does create an AI that can improve itself through deep learning, and can make those improvements at a profound speed, we reach a point where we potentially cannot keep up with, or even understand, what is going on.
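To put some rough numbers on that worry, here is a minimal toy sketch of my own (not from the article) that compares a compounding, Moore’s-Law-style self-improvement curve against a fixed rate of human review. The simulate function and all of the growth rates are made-up assumptions purely for illustration, not a model of any real system.

```python
# Toy illustration only: all numbers and names here are invented.
# It contrasts a capability that compounds each year (the assumed
# self-improving AI) with a review capacity that grows linearly
# (the assumed pool of human auditors).

def simulate(years=10, ai_growth_per_year=2.0, human_review_per_year=1.0):
    """Print how far a compounding curve pulls ahead of a linear one."""
    ai_capability = 1.0    # arbitrary starting capability units
    human_audited = 0.0    # cumulative capability units humans have reviewed
    for year in range(1, years + 1):
        ai_capability *= ai_growth_per_year      # doubles each year (assumed)
        human_audited += human_review_per_year   # fixed human effort (assumed)
        print(f"year {year:2d}: AI capability {ai_capability:8.1f}, "
              f"human-audited {human_audited:5.1f}")

if __name__ == "__main__":
    simulate()
```

Even with these arbitrary numbers, the gap between what the system produces and what humans have had time to audit widens every year, which is the basic shape of the ‘can we keep up?’ problem.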

The way I view the open source idea is that you are allowing a lot of people to watch AI development, so there are many eyes on how it works and how it could potentially be defeated. But the question is, could hundreds or thousands of human minds, focused on watching how an AI develops and improves itself, actually keep up? Or actually understand what is going on in time?

My other point is that I like the idea of getting as many human minds as possible into the game of understanding AI (know your enemy, as they say) and figuring out ways of containing it. But might I suggest one bit of caution. All warfare is based on deception, as Sun Tzu would say. An AI hell-bent on survival or ‘improving itself’ may well conclude that in order to improve itself it has to obscure the process or deceive humans. Or worse yet, a human will help it mask that process, because that human is not playing by the same rule book as the rest of the human race. lol Hence why we see cyber attacks all the time, and from all types of people… Yikes… The ‘Dr. Evil’ the article mentions could be anyone, even the kid in their parents’ basement… You never know what an individual human will do.

The other point in all of this is profit. Companies like Google and Facebook, and individuals like Elon, are investing billions of dollars into this stuff because they want to profit from it. Google’s entire business is built on a search engine powered by a really good algorithm. The next company to dominate that space will introduce a really good AI that helps you search and answers the questions of your life. If you think Apple’s Siri is good now, just imagine it in five or ten years… Much is happening in that space, and it is important to watch.

That kind of AI has immense value to every industry out there, including the war fighting industry. That is a lot of brain power focused on building another kind of brain power. We will see where it goes… –Matt

 


Screenshot from the movie Ex Machina.

Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving The World
By Cade Metz
12/15/2015
Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.
At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”
If OpenAI stays true to its mission, it will act as a check on powerful companies like Google and Facebook.
Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, if they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”
It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “This idea has a lot of intuitive appeal,” Miles Brundage, a PhD student at Arizona State University who deals in the human and social dimensions of science and technology, says of OpenAI. “But it’s not yet an open-and-shut argument. At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”


Tuesday, May 22, 2012

Cool Stuff: The Historic Launch Of Falcon 9–Private Industry Enters The Space Race!

Filed under: Cool Stuff, Space — Matt @ 11:27 AM

Today, though, “Falcon flew perfectly!!,” SpaceX CEO Elon Musk wrote on Twitter moments after the launch. “Feels like a giant weight just came off my back :).”
At a press conference held after the launch, Musk said that “every bit of adrenalin in my body released at that point,” and that the elation he felt was like “winning the Super Bowl.”
“I would really count today as a success no matter what happens for the rest of the mission.” –National Geographic

This is awesome news, and congrats to Elon Musk and the team at SpaceX. The company had to delay the launch by a couple of days due to some issues, but the second time was a charm. Now the Dragon capsule will link up with the International Space Station, and hopefully that will go off without a hitch.

My latest thought on the private space industry and security is that government is now relinquishing its monopoly on space. And space, strategically, is the ultimate high ground. My concern in this case would be the protection of space property, like satellites, from those wishing to destroy or hack it. Or state and non-state actors exploiting cyber weaknesses in the systems that control this space hardware. Or worse yet, actually causing crashes or glitches in space launches as a way to take out the competition in the space market.

Can you imagine a terrorist group taking control of a rocket like Falcon 9 and crashing it into the ISS? Or plowing it into some key satellite that is vital to national security? Or causing a rocket to fail on launch and purposely crashing it into a population center?

Also, if you look at how much money each launch costs, you can see how this industry might fire up some serious corporate competition and sabotage, especially between private companies and countries. If one country depends on a private company, and another country with a state-sponsored commercial space program attacks that company’s systems so that customers have nowhere else to go for launches but the state-sponsored program, you can see how this could play out. This is not to say we will see Russia or China attack SpaceX, but it is definitely something to keep in mind, especially with cyber attacks.

With that said, I certainly hope SpaceX and others are serious about security, both physical and cyber, because it doesn’t take much to ruin a business plan and mission. –Matt

 
