Smarter Than You Think

I finished reading Smarter Than You Think: How Technology Is Changing Our Minds for the Better by Clive Thompson. It wasn’t at the top of my reading list, but by the time I finished it, I realized I had really enjoyed it. Below is an excerpt from the book.

The Rise of the Centaurs

Who’s better at chess—computers or humans?

The question has long fascinated observers, perhaps because chess seems like the ultimate display of human thought: the players sit like Rodin’s Thinker, silent, brows furrowed, making lightning-fast calculations. It’s the quintessential cognitive activity, logic as an extreme sport.

So the idea of a machine outplaying a human has always provoked both excitement and dread. In the eighteenth century, Wolfgang von Kempelen caused a stir with his clockwork Mechanical Turk—an automaton that played an eerily good game of chess, even beating Napoleon Bonaparte. The spectacle was so unsettling that onlookers cried out in astonishment when the Turk’s gears first clicked into motion. But the gears, and the machine, were fake; in reality, the automaton was controlled by a chess savant cunningly tucked inside the wooden cabinet. In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could actually play chess—a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”

Eighty years later, in 1997, this intellectual standoff clanked to a dismal conclusion when world champion Garry Kasparov was defeated by IBM’s Deep Blue supercomputer in a tournament of six games. Faced with a machine that could calculate two hundred million positions a second, even Kasparov’s notoriously aggressive and nimble style broke down. In its final game, Deep Blue used such a clever ploy—tricking Kasparov into letting the computer sacrifice a knight—that it trounced him in nineteen moves. “I lost my fighting spirit,” Kasparov said afterward, pronouncing himself “emptied completely.” Riveted, the journalists announced a winner. The cover of Newsweek proclaimed the event “The Brain’s Last Stand.” Doomsayers predicted that chess itself was over. If machines could out-think even Kasparov, why would the game remain interesting? Why would anyone bother playing? What’s the challenge?

Then Kasparov did something unexpected.

The truth is, Kasparov wasn’t completely surprised by Deep Blue’s victory. Chess grand masters had predicted for years that computers would eventually beat humans, because they understood the different ways humans and computers play. Human chess players learn by spending years studying the world’s best opening moves and endgames; they play thousands of games, slowly amassing a capacious, in-brain library of which strategies triumphed and which flopped. They analyze their opponents’ strengths and weaknesses, as well as their moods. When they look at the board, that knowledge manifests as intuition—a eureka moment when they suddenly spy the best possible move.

In contrast, a chess-playing computer has no intuition at all. It analyzes the game using brute force; it inspects the pieces currently on the board, then calculates all options. It prunes away moves that lead to losing positions, then takes the promising ones and runs the calculations again. After doing this a few times—and looking five or seven moves out—it arrives at a few powerful plays. The machine’s way of “thinking” is fundamentally unhuman. Humans don’t sit around crunching every possible move, because our brains can’t hold that much information at once. If you go eight moves out in a game of chess, there are more possible games than there are stars in our galaxy. If you total up every game possible? It outnumbers the atoms in the known universe. Ask chess grand masters, “How many moves can you see out?” and they’ll likely deliver the answer attributed to the Cuban grand master Jose Raul Capablanca: “One, the best one.”
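Thompson’s description maps onto the classic depth-limited game-tree search. Here is a minimal sketch in Python, assuming a hypothetical `Board` interface with `legal_moves`, `apply`, and `evaluate` methods; this is an illustration of the general technique, not Deep Blue’s actual code:

```python
# Minimal sketch of the brute-force search described above: expand every
# legal move, score the resulting positions, and recurse a few plies deep.
# The Board interface (legal_moves, apply, evaluate) is a hypothetical
# stand-in; evaluate() scores the position for the side to move.

def negamax(board, depth):
    """Return (score, best_move) from the side-to-move's point of view."""
    if depth == 0 or not board.legal_moves():
        return board.evaluate(), None          # static evaluation at the horizon
    best_score, best_move = float("-inf"), None
    for move in board.legal_moves():
        child = board.apply(move)              # position after this move
        score, _ = negamax(child, depth - 1)
        score = -score                         # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, move  # keep the most promising line
    return best_score, best_move

# Why humans can't do this: with roughly 35 legal moves per position,
# looking eight plies out means about 35**8, or roughly 2.3 trillion,
# lines -- more than the few hundred billion stars in the Milky Way.
```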

The fight between computers and humans in chess was, as Kasparov knew, ultimately about speed. Once computers could see all games roughly seven moves out, they would wear humans down. A person might make a mistake; the computer wouldn’t. Brute force wins. As he pondered Deep Blue, Kasparov mused on these different cognitive approaches.

It gave him an audacious idea. What would happen if, instead of competing against one another, humans and computers collaborated? What if they played on teams together—one computer and a human facing off against another human and a computer? That way, he theorized, each might benefit from the other’s peculiar powers. The computer would bring the lightning-fast—if uncreative—ability to analyze zillions of moves, while the human would bring intuition and insight, the ability to read opponents and psych them out. Together, they would form what chess players later called a centaur: a hybrid beast endowed with the strengths of each.

In June 1998, Kasparov played the first public game of human-computer collaborative chess, which he dubbed “advanced chess,” against Veselin Topalov, a top-rated grand master. Each used a regular computer with off-the-shelf chess software and databases of hundreds of thousands of chess games, including some of the best ever played. They considered what moves the computer recommended; they examined historical databases to see if anyone had ever been in a situation like theirs before. Then they used that information to help plan. Each game was limited to sixty minutes, so they didn’t have infinite time to consult the machines; they had to work swiftly.

Kasparov found the experience “as disturbing as it was exciting.” Freed from the need to rely exclusively on his memory, he was able to focus more on the creative texture of his play. It was, he realized, like learning to be a race-car driver: He had to learn how to drive the computer, as it were—developing a split-second sense of which strategy to enter into the computer for assessment, when to stop an unpromising line of inquiry, and when to accept or ignore the computer’s advice. “Just as a good Formula One driver really knows his own car, so did we have to learn the way the computer program worked,” he later wrote. Topalov, as it turns out, appeared to be an even better Formula One “thinker” than Kasparov. On purely human terms, Kasparov was a stronger player; a month before, he’d trounced Topalov 4-0. But the centaur play evened the odds. This time, Topalov fought Kasparov to a 3-3 draw.

In 2005, there was a “freestyle” chess tournament in which a team could consist of any number of humans or computers, in any combination. Many teams consisted of chess grand masters who’d won plenty of regular, human-only tournaments, achieving chess scores of 2,500 (out of 3,000). But the winning team didn’t include any grand masters at all. It consisted of two young New England men, Steven Cramton and Zackary Stephen (who were comparative amateurs, with chess rankings down around 1,400 to 1,700), and their computers.

Why could these relative amateurs beat chess players with far more experience and raw talent? Because Cramton and Stephen were expert at collaborating with computers. They knew when to rely on human smarts and when to rely on the machine’s advice. Working at rapid speed—these games, too, were limited to sixty minutes—they would brainstorm moves, then check to see what the computer thought, while also scouring databases to see if the strategy had occurred in previous games. They used three different computers simultaneously, running five different pieces of software; that way they could cross-check whether different programs agreed on the same move. But they wouldn’t blindly accept the machines’ recommendations, nor would they merely mimic old games. They selected moves the computer rated low if they thought those moves would rattle their opponents psychologically.
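For flavor, here is how that cross-checking habit might look today using the python-chess library and a couple of locally installed UCI engines. This is a modern stand-in under stated assumptions, not the 2005 setup Cramton and Stephen used, and the engine paths below are placeholders:

```python
# Sketch of consulting several chess engines and comparing their preferred
# moves. Assumes `pip install python-chess` and locally installed UCI engine
# binaries; the paths below are hypothetical placeholders.
import chess
import chess.engine

ENGINE_PATHS = ["/usr/bin/stockfish", "/usr/local/bin/komodo"]  # placeholders

def cross_check(fen, seconds=5.0):
    """Ask each engine for its best move in the given position."""
    board = chess.Board(fen)
    votes = {}
    for path in ENGINE_PATHS:
        engine = chess.engine.SimpleEngine.popen_uci(path)
        try:
            result = engine.play(board, chess.engine.Limit(time=seconds))
            votes[path] = result.move.uci()   # record this engine's choice
        finally:
            engine.quit()
    return votes

# When the engines agree, the move is probably sound; when they split,
# that is exactly where the human half of the centaur earns its keep.
```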

In essence, a new form of chess intelligence was emerging. You could rank the teams like this: (1) a chess grand master was good; (2) a chess grand master playing with a laptop was better. But even that laptop-equipped grand master could be beaten by (3) relative newbies, if the amateurs were extremely skilled at integrating machine assistance. “Human strategic guidance combined with the tactical acuity of a computer,” Kasparov concluded, “was overwhelming.”

Better yet, it turned out these smart amateurs could even outplay a supercomputer on the level of Deep Blue. One of the entrants that Cramton and Stephen trounced in the freestyle chess tournament was a version of Hydra, the most powerful chess computer in existence at the time; indeed, it was probably faster and stronger than Deep Blue itself. Hydra’s owners let it play entirely by itself, using raw logic and speed to fight its opponents. A few days after the advanced chess event, Hydra destroyed the world’s seventh-ranked grand master in a man-versus-machine chess tournament.

But Cramton and Stephen beat Hydra. They did it using their own talents and regular Dell and Hewlett-Packard computers, of the type you probably had sitting on your desk in 2005, with software you could buy for sixty dollars. All of which brings us back to our original question here: Which is smarter at chess—humans or computers?

Neither. It’s the two together, working side by side.

 

TMN: 10 Wrong Ways Your Company Is Measuring Productivity

Is your company measuring productivity? Below is a blog post from Time Management Ninja.

10 Wrong Ways Your Company Is Measuring Productivity

Does your company know who gets the work done?

I mean, really does the work.

Not talks about it. Brags about it. Or in some cases, even lies about it.

I am talking about actually delivering tangible results.

How does your company measure productivity?

Productivity: You’re Measuring It Wrong

In the hasty pursuit of goals, most companies don’t take time to look at how they are measuring results.

Instead, they rely on stories, anecdotes, and often plain lies about what is getting done and by whom.

“Most companies are ignorant of who is actually getting the work done.”

This leads to assumptions, guesses, or even silly ways to measure productivity.

Doubt this?

Here are 10 Wrong Ways That Your Company Is Measuring Productivity:

  1. The Most Hours – The most common mistake companies make is to reward the people who put in the most hours. The assumption is that these people must be doing the most work. Yet they are often the biggest procrastinators, the biggest wasters of their own and other people’s time, and the ones most responsible for rework.
  2. The Loudest – Many companies make the mistake of rewarding the loudest employees. By this, I mean the ones who are always telling others what they are doing and how important they are. I worked with one employee who closed every email by announcing the “important thing” he was doing that day. He had to make sure that people knew he was doing important work.
  3. No Measuring Scale – Does your company even know what success looks like? What is the measuring scale? Not knowing how to keep score not only keeps companies blind, but leads to employee dismay. One company rewards employees with an annual success trip. Yet, with no published criteria, every year employees are left scratching their heads as to why certain employees are rewarded.
  4. Sending the Most Emails – Somewhere along the line, the paper-pusher made the leap to the 21st century as the “email spammer.” I have seen individuals make entire careers out of mass emailing their co-workers. Of course, this only works if companies mistake email barrages as work, which unfortunately many do.
  5. Not Taking Status Updates Seriously – Status updates are only as good as the information in them. If you send out worthless updates, no one will read them. (True in most companies.) Bogus status updates are just as dangerous. I worked with one company whose weekly updates always showed all projects as Green. (As opposed to Yellow or Red.) They had one project that was 182 days past its deadline. When I inquired, they stated it was Green because, “People are still working on it.”
  6. The Most Things Done – This is the Busy Bee Syndrome. It is closely related to #1. However, in this case, the company measures productivity by the sheer quantity of tasks completed. Never mind if they were the right tasks, or if any positive results were achieved. But, wow, they sure got a lot of tasks done.
  7. Over-dedicated – In our fast-paced, job-hopping world, it is easy for companies to mistake over-dedication for effective work. Consider the company that gives a leadership award based on the employee working nights and weekends and canceling their vacation. Does this show dedication, or an unhealthy work-life balance?
  8. Not Keeping Score – Many companies don’t have the discipline to track results. (For others, it’s a conscious choice.) When it comes to recognizing employee effectiveness, they fall back on gut feelings or on who the manager likes. Not a great measuring stick for potential success.
  9. No Priority or Direction – Speed without direction is just a quick way to get off course. If your company hasn’t set the direction or goals, then it doesn’t matter how productive employees are. They are just working for work’s sake.
  10. Not Meeting Deadlines or Goals – Many companies set deadlines or goals only to “explain them away” later: “Well, the deadline wasn’t reached, but that’s OK.” Or, “We didn’t reach our sales goal, but we tried really hard.” Results aren’t necessarily about tough love, but rather hard truth.

Measure Productivity by Measuring Results

Don’t let your company fall for these productivity measuring mistakes.

Productivity is seldom about getting the most things done, but rather getting the right things done.

It’s about results. Positive impact.

Good companies measure busyness.

Great companies measure results.

Question: How does your company measure productivity?