In The Art of War, Sun Tzu advised that one of the most important rules of warfare is to “know thine enemy.” At the chess table, the more you know about your opponent — how he has played in the past, his favorite moves, his strengths and weaknesses — the more likely you are to defeat him. Knowing these things requires sorting through a wealth of information — and that’s where computers come in….
Kris says today’s chess players have absolutely benefited from the technology: “They are better players because of it, and they’re achieving more at a younger age. Bobby Fischer was considered an anomaly when he earned the grandmaster title at 15. Today, if you aren’t a GM by the age of 14 or 15, you probably won’t go far in chess. Talent will always matter, but technology is helping talented players learn faster and better.”
What would a war game look like between a Boyd Machine (or just Boyd) and General Petraeus? Or better yet, a general or political strategist with a Boyd Machine assisting in strategy and planning? These are some interesting concepts to ponder as militaries, companies, and politicians continue to seek the edge that will help them defeat their opponents.
If you look at the progression of machine development for chess playing, Deep Blue was the end result of continuous improvements (kaizen) to the software. Deep Blue ended up beating the human race’s top chess player, and that is significant. It is a key point to remember when conceptualizing the Boyd Machine.
Furthermore, I believe that Watson will at some point dominate Jeopardy. It is doing very well now, and the four years of work on the machine is telling. Even if it doesn’t do it now, it will certainly do it in the near future because of kaizen and because of Moore’s Law.
So with these two examples of machine evolution, is it a stretch to envision a computer defeating a top general or a team of generals in a war game? After all, war is the ultimate game of chess.
I will take this a step further. If not man versus machine in the endeavor of war, how about cyborg versus cyborg? The way the human race interfaces with machines today could easily classify us as ‘cyborgs’. We carry around smartphones or cellphones, we check our computers daily, and we depend heavily on both of these devices. Most humans have a hard time being away from their computer or phone, because these devices have become so important to their lives. This is reality.
So with that said, imagine a general with a Boyd Machine versus another general with his own machine. Or a CEO hybrid versus a CEO hybrid. You get the idea, and this is exactly the point of the various articles below.
In the world of chess, this reality has already presented itself. Will we see a similar future where strategists in political campaigns or military campaigns will be assisted by a machine for planning? I think so, because that is the natural progression, and the computing power is there thanks to Moore’s Law and kaizen.
Remember the rule of mimicry strategy? Folks will copy the most successful strategies for winning and add one little thing to give them the edge. If everyone knows all the strategies and thought processes of all of mankind’s strategists, along with their opponent’s history, then what would give one side an edge over the other? Could a Boyd Machine be that edge? Something that can analyze and synthesize faster than an opponent, or help its human counterpart’s decision-making cycle and come up with the winning strategies necessary to win that war, campaign, or competition in a marketplace? Interesting stuff.
It would also be cool to see how such a Boyd Machine would be constructed. Take all of his theories and papers, as well as all of the material ever created in regard to strategy, and construct a machine that would think like Boyd. There are plenty of individuals out there who could contribute as advisors to such a project. Best of all, it would be really cool to build a Sun Tzu machine or a Clausewitz machine, and have cyborg teams war game against one another. Al Qaeda or Taliban machines could be constructed as well, and I think war gaming in the future will greatly benefit from such efforts. –Matt
Edit: 02/17/2011 – Watson wins at Jeopardy, which to me is incredible because this was its first attempt! Watson won $77,147 to Mr. Jennings’s $24,000 and Mr. Rutter’s $21,600. Good job to the crew at IBM for building such an amazing machine.
Watson, the ‘Jeopardy!’ computer, has grander plans
IBM’s Watson Just Latest Edition of Man Vs. Machine Battle
The role of computers in planning chess strategy
The website for IBM’s Watson here.
TED: We Are All Cyborgs Now, Amber Case
Watson, the ‘Jeopardy!’ computer, has grander plans
02/16/2011
By Hayley Tsukayama
Watson, the computer that’s winning hearts and cash on “Jeopardy!” this week, is more than just a pretty interface.
David F. McQueeney, vice president of IBM Research, said that Watson’s real applications are far more practical. The computer is actually intended to help users get a handle on unstructured data such as text, e-mails and in-company mail messages.
“We’ve been working for a long time on helping humans navigate a large amount of data,” McQueeney told Post Tech in an interview. “There’s all kinds of incredibly valuable information about the way an agency runs in unstructured data, and we’ve been working for decades on extracting meaning and structure from it.”
What McQueeney hopes IBM can do by showing Watson off on television is let people know machines have evolved to the point where they can help humans grapple with problems without having to modify all the data for a computer.
“I’m so pleased that the ‘Jeopardy!’ producers agreed to work with us,” he said, “and I’m as pleased as they are that the result was good science and good entertainment.”
McQueeney has been working for decades on developing computers such as Watson, which can process unstructured queries in a snap. The barrier that has been the hardest to overcome, McQueeney said, is the complexity of human language. “It’s hard for a computer to understand the way you and I speak to each other,” he said. “It requires context.”
For example, he said, if the computer is given the words “St. Paul,” it needs some context to determine whether that phrase refers to a city in Minnesota, a person or a college.
To imagine how Watson could be used in the public and private sectors, McQueeney said, you have to imagine a different sort of data set than the one programmers used to prepare the computer for the show.
“Imagine taking Watson, and instead of feeding it song lyrics and Shakespeare, imagine feeding it medical papers and theses,” he said. “And put that machine next to an expert human clinician.”
With the computer intelligently processing a vast amount of data and the doctor using his or her own professional knowledge to guide and refine a search, McQueeney said, IBM thinks the quality of that diagnosis could be better than what the doctor comes up with alone.
“It’s still the human doing the diagnosis; we’re not going to license the machine to practice medicine,” McQueeney joked. “But it provides an incredible amount of differentiation to make the human better.”
Answering critics who say that Watson is little more than a search engine, McQueeney said that the computer runs much more sophisticated, targeted queries. “When people say you could use a search engine, I have to smile and say this is not the same problem. If we stood there with an Internet search engine and started paging through, well, you can’t hand Alex the top 10 documents and say you think the answer’s in there somewhere, can you? You have to hand him the right answer … or really, I guess, the right question.”
Search engines are wonderful tools, he said, but not in the same time zone as Watson’s results.
McQueeney said the government applications of a machine like Watson are endless.
The scale of government data is daunting, he said, adding that every government agency he’s shown the computer to has thought of applications for it within seconds.
“They say, Dave, I know what I have to do and the questions I have to ask, but I don’t have tools to let me manipulate unstructured amounts of data and search for patterns. Here’s a machine doing that over a broader body of knowledge.”
Being on television does expose some of Watson’s limitations, however. For example, programmers had to work hard to help the computer understand the trademark puns and double meanings of the show’s writing, and Watson has also given the same wrong answers as other contestants because it cannot hear. (McQueeney said they considered giving Watson a microphone to pick up sound, but couldn’t finish it in time.)
The programmers took pains to make sure that Watson and his fellow contestants were on fairly equal footing. They made sure that Watson was sent a text file of each clue at the same time his competitors saw it, and programmed the computer to look for Daily Doubles in the same manner that seasoned human contestants do.
And how does Watson do in the end? McQueeney’s not telling. Like all “Jeopardy!” contestants, those who work with Watson were asked to keep their mouths shut.
On Tuesday, Watson ($35,734) was leading “Jeopardy!” champions Ken Jennings ($4,800) and Brad Rutter ($10,400) after the first round. The three-day match concludes today.
Story here.
—————————————————————–
IBM’s Watson Just Latest Edition of Man Vs. Machine Battle
Jason Gallagher
Feb 15, 2011
An IBM supercomputer named Watson is playing “Jeopardy!” with two past champions. The human players are not exactly slouches either. Brad Rutter, who won $3.3 million playing the popular TV trivia contest, and Ken Jennings, who was victorious in 74 games of “Jeopardy!” represent the man portion of this man vs. machine challenge. Watson is made up of 90 IBM servers and has access to hundreds of millions of documents in its memory banks.
While humans competing against machines can be traced all the way back to the legend of John Henry versus the steam engine, computers and other high tech entities have been raising the bar for a few decades with varying degrees of success.
In 1989, Deep Thought was a chess-playing computer like no other, having won the Sixth World Computer Chess Championship. Deep Thought was also a creation of the folks at IBM and was pitted against chess champion Garry Kasparov later that year. Kasparov, considered by many to be the greatest chess mind ever, had little trouble putting the computer in its place, besting it easily. However, IBM would do for Deep Thought what inevitably happens to computers everywhere — it upgraded it.
In 1996, IBM’s newest chess-playing computer, Deep Blue, played Kasparov again. While the machine put up a better fight, man won again. However, in May 1997, Deep Blue and Kasparov would meet once more. This time, Deep Blue would lose the first game of the match, win the second, draw the next three, and finally beat Kasparov in the sixth game, giving the chess-playing computer an unheard-of victory over a legendary chess champion.
In 2010, more than a decade after Deep Blue proved extraordinary in the realm of beating humans at the subtleties of chess, a robot would step onto the football field. Ziggy, a 340-pound combat robot, was set to face off with San Francisco 49ers kicker Joe Nedney in a kickoff frenzy. While Nedney easily bested the machine at kicking field goals, he was a gracious competitor, helping the designers of Ziggy with ball placement and other football-kicking nuances.
So while man has a solid track record in man vs. machine battles, especially when it comes to physically active tasks, machines can be upgraded and redesigned. So whether Watson beats Jennings and Rutter at “Jeopardy!” or not, the advances in the fields of computers and robotics will likely lead to more competitions and challenges. Since the limits of technology get pushed every year, who knows what the coming decades will bring.
Story here.
—————————————————————–
The role of computers in planning chess strategy
By Debra Littlejohn Shinder
February 4, 2010
Takeaway: Many of today’s chess champions rely on computers to help them prepare for matches. Learn about some of the behind-the-scenes preparation that goes into Hikaru Nakamura’s winning moves.
In the chess world of the 1990s, humans and computers were rivals. IBM’s Deep Blue, a supercomputer designed specifically to play chess, defeated reigning world champion Garry Kasparov. (Check out Geek Trivia: Deeper than deep.) Today, men and machines work together. There is a form of chess called advanced chess or cyborg chess, introduced by Kasparov, in which a human and a computer play as a team. A variation, called freestyle chess, allows consultation teams.
Electronic assistance isn’t allowed during a conventional match, but that doesn’t mean chess champs can’t use digital devices when assessing the opponents they may encounter in a tournament and when plotting their strategies. Many chess players at the international tournament level now utilize the processing power of today’s powerful — but much less costly — computers to help them prepare for matches.
Preparing for battle
The game of chess has been around much longer than computing. The current form of the game came from Europe, where it was played as early as the 15th century. However, according to some sources, the origins of the terminology can be traced back to the Egyptian dynasties, and chess boards and pieces have been discovered in tombs dating to 4,000 B.C. For a fascinating discussion of the origins of chess, read this article from the Western Chess Chronicle.
The names given to chess pieces — king, queen, bishop, knight, castle, and pawn — make it clear that the game was intended to simulate war in the safe environment of the gaming table. As in a real battle, you’re more likely to come out the winner if you plan and prepare carefully.
In The Art of War, Sun Tzu advised that one of the most important rules of warfare is to “know thine enemy.” At the chess table, the more you know about your opponent — how he has played in the past, his favorite moves, his strengths and weaknesses — the more likely you are to defeat him. Knowing these things requires sorting through a wealth of information — and that’s where computers come in.
It’s all about the data
One of the earliest functions of computers was the compilation of databases — large collections of information organized for quick search and retrieval. The first database management systems (DBMS) were developed in the 1960s. Database software has become much more sophisticated and specialized over the years.
Chess databases are key to preparing for tournament play. The databases contain all of the moves made in a large number of games. Games can be classified according to the Encyclopedia of Chess Openings (ECO). There are also endgame databases that contain analyses of endgame positions and optimal moves in each possible position.
One database program commonly used by chess players is ChessBase, made by a German company that also operates a server for playing chess online at www.playchess.com and hosts a chess news Web site. The company maintains an online database that contains more than four million games and can be accessed via its software, which is also called ChessBase. It runs on Windows and allows you to store and search the records of games that are stored in a proprietary format (it also allows you to convert game files to PGN, Portable Game Notation, a format that can be accessed by ordinary ASCII editors and is recognized by many different chess programs).
The software isn’t cheap; ChessBase 10 Premium costs 349 euros, which is almost $500 USD. There is, however, a Light version that’s free.
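To make the data format concrete, here is a minimal sketch (in Python, with a hypothetical file name) of pulling the tag pairs out of a PGN file. Because PGN is plain ASCII, a few lines of code are enough to survey thousands of games; a real tool would also parse the movetext itself.

```python
import re

# A PGN game starts with tag pairs like [White "Nakamura, Hikaru"], followed
# by a blank line and then the movetext (1. e4 c5 2. Nf3 ...).
TAG_PATTERN = re.compile(r'\[(\w+)\s+"([^"]*)"\]')

def read_pgn_headers(path):
    """Yield one dict of tag pairs (Event, White, Black, Result...) per game."""
    headers = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            match = TAG_PATTERN.match(line.strip())
            if match:
                headers[match.group(1)] = match.group(2)
            elif headers and line.strip():
                # First non-tag, non-blank line is movetext: headers are done.
                yield headers
                headers = {}
    if headers:
        yield headers

# Example: survey an opponent's results with the black pieces.
# for game in read_pgn_headers("games.pgn"):
#     if game.get("Black") == "Nakamura, Hikaru":
#         print(game.get("White"), game.get("Result"))
```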
Revving up the engines
Another factor in preparing for tournament competition is practicing your strategies and techniques. Ideally, you grow stronger in your game by playing against those who are equal or higher in rating than you. For top players, it may be hard to find such partners back home between tournaments, because many of them are the best in their regions.
Another type of software that can help here is the chess engine, which is a computer program that actually plays chess. The engine communicates with a GUI client such as WinBoard (or XBoard for Linux) via the Universal Chess Interface (UCI) protocol or, in some cases, the Chess Engine Communication Protocol (CECP). The client software is usually free. Chess tools are often packaged as a suite. For example, in addition to the database itself, the ChessBase software includes an engine named Fritz, and a GUI interface into which you can plug any UCI engine.
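To show how simple the UCI protocol itself is, here is a minimal sketch of driving an engine from Python over standard input and output. The engine path is hypothetical (any UCI engine binary should work), and a robust client would also wait for the engine’s “uciok” and “readyok” acknowledgments instead of streaming commands blindly.

```python
import subprocess

def best_move(engine_path, moves, depth=12):
    """Ask a UCI engine for its best move in the position reached by 'moves'
    (long algebraic notation, e.g. "e2e4"), searching to a fixed depth."""
    engine = subprocess.Popen(
        [engine_path],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        text=True, bufsize=1,
    )

    def send(command):
        engine.stdin.write(command + "\n")
        engine.stdin.flush()

    send("uci")  # handshake; the engine answers with id lines and "uciok"
    send("position startpos moves " + " ".join(moves))
    send("go depth %d" % depth)
    for line in engine.stdout:
        if line.startswith("bestmove"):  # e.g. "bestmove g1f3 ponder b8c6"
            send("quit")
            return line.split()[1]

# Example, with a hypothetical engine binary in the current directory:
# print(best_move("./stockfish", ["e2e4", "e7e5"]))
```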
The top-rated chess engine is Rybka, which was created by International Master Vasik Rajlich. Other popular engines include Deep Fritz (published by ChessBase), Shredder/Deep Shredder, Naum, Stockfish, and Thinker.
José is a GUI tool that lets you plug in a chess engine and analyze games; it also operates as a front-end client for a MySQL database in which you can store chess games.
Like players, chess engines have assigned ratings that indicate their performance relative to one another. There are tournaments for chess engines, where the computer programs play one another. The ratings use the Elo rating system but are not necessarily equivalent to Elo ratings of human players. For more information about how Elo is used to rate the skills of human players, see http://chesselo.com/.
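For reference, the arithmetic behind an Elo update is compact. A minimal sketch, using the standard expected-score formula and a hypothetical K-factor of 20:

```python
def elo_update(rating_a, rating_b, score_a, k=20):
    """One-game Elo update; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    change = k * (score_a - expected_a)
    return rating_a + change, rating_b - change

# A 2700 beating a 2600 gains only about 7 points, because the win was expected:
# elo_update(2700, 2600, 1)  ->  roughly (2707.2, 2592.8)
```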
Heavy-duty hardware
Deep Blue was a special-purpose computer first conceived at Carnegie Mellon University and built by IBM Research with the goal of defeating the world chess champion. It was originally named Deep Thought. It won its first game against reigning champion Kasparov in 1996, but Kasparov won the match. IBM upgraded the system, and it won the six-game rematch the next year.
Deep Blue was a parallel processing system of 32 nodes, each with a 120 MHz RISC Power2 (RS/6000) microprocessor and 8 Very Large Scale Integrated (VLSI) special-purpose chess processors. It ran a special chess-playing program written in C on the AIX operating system. The upgraded version that beat Kasparov (sometimes called Deeper Blue) was about twice as fast as its predecessor, capable of evaluating some 200 million chess positions per second. It weighed almost a ton and a half, and IBM spent millions of dollars building it.
Hydra was another dedicated chess computer (on the order of Deep Blue) owned by Sheikh Tahnoon Bin Zayed Al Nahyan of Abu Dhabi. It ran 32 Xeon processors and 64 GB of RAM. Developed several years after Deep Blue, it was more powerful and, in 2005, it defeated Michael Adams, who was the 7th ranked player in the world.
Computer hardware has come a long way in the last few years. Today’s Nehalem-based multi-core processors can provide power and reliability comparable to RISC at a fraction of the cost. Most modern chess engines are designed to run on multiple processors. You can get even more power by using a distributed computing model or multiple clustered computers.
But it’s not necessary to spend a fortune to get a good system for chess analysis today. For example, the Deep Shredder 12 engine, which has won a number of computer championship titles and can be installed on Windows XP SP3, Windows Vista, or Windows 7, will run on a Pentium III 1 GHz single-processor machine with 1.5 GB of RAM, although at least a Core 2 Duo 2.4 GHz with 3 GB of RAM is recommended. Still, that’s not a powerhouse by today’s standards. Although not essential, it also helps to have a decent graphics card that supports DirectX 9 or 10 and has 512 MB of video RAM.
One piece of premium hardware from which a chess system can benefit is a fast solid state disk (SSD) in place of a conventional hard drive. Chess databases may contain many gigabytes of stored positions, and the increased performance of the best SSDs (such as those made by Intel) allows the program to use disk-based tables more efficiently because of the high random read speed. The benefits of SSDs are greater if you use an operating system that is optimized for them, such as Windows 7 or Windows Server 2008 R2. SSDs are still slower than RAM, but you can’t load 50 GB of data into RAM on any but the highest end (and most expensive) PCs.
A day in the life
Some chess players are deeply into technology; others, not so much. Many of today’s young champions are in their teens and twenties. They are “digital natives” — part of the generation that grew up with computers. They tend to be comfortable with using high-tech aids to help them prepare for games and hone their tactics and techniques. Many of the players at the top levels hire someone else to handle the data analysis and assist them in planning strategies — after all, two heads are always better than one, and it helps to have different perspectives.
My son Kris Littlejohn works many tournaments as “second” to U.S. Chess Champion Hikaru Nakamura. He handles much of the data gathering and analysis and works closely with Hikaru in planning for tournaments. Kris built a computer for that purpose — a Nehalem i920 3.2 GHz processor-based tower with 6 GB of RAM and a fast Intel x25 M SSD. It runs Windows Server 2008 R2, a 64-bit operating system that, like Windows 7, takes advantage of TRIM, a technology that allows the operating system to pass information to the SSD controller about data blocks that are no longer in use. This helps the SSD maintain its high speed over its lifespan, instead of slowing down after too many cells have been written to.
Kris performs some of his work weeks or even months before a tournament, as soon as he knows which players are entered. He starts gathering information from the databases about the moves those players like to use. Once he knows which players Hikaru will be going up against and finds out the “colors” (who will play white and who will play black in each game), he analyzes the openings commonly used by Hikaru’s opponents. Then he tries to find a “novelty” — a responsive move that has never been played before. He uses branching to predict all the possible moves that a given opponent could play, and which moves that player would be comfortable with, given his historical games. Branching each move, he eventually comes up with a report in the format of game notation, but with all the branching possibilities included.
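The branching idea can be sketched in a few lines of code. The following is a simplified illustration, not Kris’s actual toolchain: build a tree of move frequencies from an opponent’s historical games, then read off the continuations he has played most often from any given line.

```python
from collections import defaultdict

def build_opening_tree(games):
    """games is a list of move lists from one opponent's historical record,
    e.g. [["e4", "c5", "Nf3"], ["e4", "e5"], ...]. Returns a mapping from
    each line-so-far to the counts of the moves actually played from it."""
    tree = defaultdict(lambda: defaultdict(int))
    for moves in games:
        for ply, move in enumerate(moves):
            tree[tuple(moves[:ply])][move] += 1
    return tree

def likely_replies(tree, line, top=3):
    """The continuations this opponent has played most often from 'line'."""
    counts = tree.get(tuple(line), {})
    return sorted(counts.items(), key=lambda item: -item[1])[:top]

# games = [["e4", "c5"], ["e4", "e5"], ["d4", "d5"], ["e4", "c5"]]
# tree = build_opening_tree(games)
# likely_replies(tree, ["e4"])  ->  [("c5", 2), ("e5", 1)]
```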
He works right up to (and often through) the night before a game, taking into consideration changes based on an opponent’s performance in previous games during the current tournament or tournament conditions (such as the position/ranking Hikaru is in at a given time and whether he needs a win or just a draw at that point). Since the tournaments are played all over the world, Kris uses his laptop and the Remote Desktop Protocol (RDP) to connect to his Nehalem computer back home and perform all these tasks. He also has a backup laptop available that runs the chess engine and database, albeit more slowly, in case of Internet outages.
Kris and Hikaru go over the report together, and Hikaru memorizes the 500-1000 moves that it includes, reciting it back to Kris without looking at the board to ensure that he has all the information in his head when he goes into a game. And that’s when computational power ends and human skill and talent take over.
Kris puts it this way: “Some people were disheartened, when Kasparov lost to Deep Blue, that a computer could beat a man at chess, but I don’t see it that way. If I were a star runner, just because my car can go faster than I can, I don’t believe that takes away anything from my skill. Computers can process more information, faster than the human brain — but there are things computers can’t do. Much of chess is intuitive, and machines will always miss those nuances. That’s the reason we use the computer’s output as a starting point — but we are the ones who make the final strategic decisions.”
Summary
Chess may not have the mass-market appeal of more “active” sports like football and basketball, but the stakes can be high in the top tournaments. The 2010 World Chess Championship, which will be played in Bulgaria this spring, boasts a prize fund of 2 million euros (that’s more than $2.7 million USD). Perhaps more important than the money is the prestige that goes with taking a national or an international title. Top players work hard to get there, and they appreciate technology that can give them an edge against tough opponents.
There has been some controversy over whether the use of computers constitutes “cheating” or makes players lazy or somehow detracts from the “pureness” of the game. But like it or not, the technology is here to stay, and all of those who aspire to play in the rarified air of the national, international, and world championship tournaments are using it. Kris says today’s chess players have absolutely benefited from the technology: “They are better players because of it, and they’re achieving more at a younger age. Bobby Fischer was considered an anomaly when he earned the grandmaster title at 15. Today, if you aren’t a GM by the age of 14 or 15, you probably won’t go far in chess. Talent will always matter, but technology is helping talented players learn faster and better.”
Story here.
Had to add this one about the concept of the coming singularity. Interesting stuff.
————-
2045: The Year Man Becomes Immortal
By Lev Grossman
Thursday, Feb. 10, 2011
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
True? True.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.
The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."
By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting — you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
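The arithmetic behind these curves is easy to reproduce. Here is a minimal sketch of the doubling rule of thumb (the two-year period is the Moore's law figure cited above; any other doubling period gives the same shape):

```python
def growth_factor(years, doubling_period=2.0):
    """How much a quantity multiplies if it doubles every doubling_period years."""
    return 2 ** (years / doubling_period)

# With the two-year doubling cited above:
# growth_factor(10)  ->  32x in a decade
# growth_factor(40)  ->  about 1,050,000x in 40 years, roughly the million-fold
#                        jump Kurzweil describes between his MIT computer and
#                        an average cell phone later in this article
```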
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."
Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he's not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
The Singularity isn't just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.
For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.
Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
From TIME's archives: "The Immortality Enzyme."
See Healthland's 5 rules for good health in 2011.
But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.
Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."
This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."
In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.
But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
http://www.time.com/time/health/article/0,8599,20…
Comment by headjundi — Thursday, February 17, 2011 @ 11:09 AM