Saturday, August 30, 2014

Tech trends for 2014: what's in store this year?
IN DEPTH The gadgets are yet to come, but some of the trends in tech for 2014 are becoming clear
By Jamie Carter  January 2nd


Technology never stops moving, and 2014 is set to be no different. We've picked 10 industry areas that are sure to see innovation the likes of which just might revolutionise the way we live and work in the years ahead.
Read on to learn more about everything from online supercomputers to new compression technologies and the possible dawning of a new industrial age...

1. Tablets will get bigger

The iPad mini was just a phase. The 12-inch Samsung Galaxy Note has already had its details leaked and could make an appearance at CES 2014 or soon after. Meanwhile January also marks the appearance of the unique A3-sized Panasonic Toughpad 4K UT-MB5, a 20-inch tablet with a 4K resolution aimed at architects, photographers, and – laughably – sales people.




Tablets will also get bigger in reputation as they spread beyond our homes and offices to … everywhere. "We can expect to see tablets infiltrating many public spaces such as cafes, airports, buses and taxis," says Kevin Curran, senior member at the Institute of Electrical and Electronics Engineers (IEEE) and Reader in Computer Science at the University of Ulster. "They require little maintenance and thus are suited to public spaces … it will be much more common to order food and drink from tablets in 2014." It's about time, too. They've been doing that in Japan for yonks.

2. Ultra HD 4K will spread to TV and phones

Both the 2014 FIFA World Cup in Brazil and Sochi Winter Olympics will be filmed in 4K, and by the end of 2014 both Netflix and Sony's Video Unlimited services will be stuffed with 4K content. Meanwhile, the South Korean government has mandated a 4K rollout in 2014 (there are already five channels of Ultra HD content being broadcast in South Korea as part of a trial).
"Unlike 3D, 4K has the legs to become an industry norm," says Sam Rosen, Practice Director at analysts ABI Research, "but it will take time for the necessary infrastructure, installed base of devices, and content to come together."

Towards the end of 2014, expect to see a plethora of 3,840 x 2,160 pixel resolution mobile devices from the usual brands, all armed with a Qualcomm Snapdragon 800 processor (the only one so far that's capable of dealing in 4K video). This will likely usher in the 4K revolution on mobiles, with both 4K video support and 4K-capable cameras.

3. 3D printing kickstarts a new industrial age

Worldwide shipments of 3D printers are expected to grow by 75% in 2014, followed by a near doubling of unit shipments in 2015. "The consumer market hype has made organisations aware of the fact that 3D printing is a real, viable and cost-effective means to reduce costs through improved designs and streamlined prototyping," says Curran. "We can expect to see more virtual world merging such as 3D-printing software, which is letting fans of the construction computer game Minecraft bring their creations into the real world."

2014 should see the re-entry of Hewlett Packard into the 3D printing industry, which is a big deal for making it a mainstream movement, while the technology also cements itself as a game-changing industrial process. The aviation and space industries are gearing up to use 3D printing to produce lightweight components for jet engines, satellites and more. CAD can be used to insert gaps and vents into ever more complex one-piece objects, while the lack of waste material means that, for example, the pricier but stronger titanium can be used instead of aluminium to make lighter parts.

4. Internet of Things gets its own space

The internet is expanding beyond computers and smartphones. Not only are more gadgets getting Wi-Fi, Bluetooth or data connectivity (think wearable fitness devices, Google Glass and smart home devices like the Nest thermostat), but platforms are beginning to appear that will integrate them together.

However, the so-called Internet of Things needs its own space. "Ofcom is currently investigating the possibility of using the old analogue TV channels, known as 'white space', to trial a new 'weightless standard' which could allow small, low-power connected devices to talk to each other," says John-Paul Rooney, partner and patent attorney at Withers & Rogers.
"The weightless standard will be a cornerstone of the future Internet of Things, bringing vastly improved connectivity and data sharing, leading to new possibilities in the functioning of home devices." Rooney thinks that we'll soon see sophisticated cooking and heating systems that can switch themselves on based on the movement or proximity of a vehicle – so when you arrive home the house is warm and the slow-cooked casserole is ready. But only, presumably, if you peeled the carrots and chopped up the meat before breakfast. Can't wait for that.

5. Video dominates the web

There's a transition going on. What was over the air is increasingly on fibre, and what was on the wires is now delivered over the air. "Data downloads this year are about 17GB a month and by 2017 that will be about 70GB per month because of the increase in video content, while more people are working either at home or on the move," says technology commentator Peter Cochrane, former CTO and head of research at British Telecom.
"In broadband hotspots like Hong Kong, where they have 100Mbps services even in hotel rooms, people are no longer watching TV or listening to radio over the air as everything is being put down fibre."
In the UK we're not blessed with a reliable, fast broadband network, so all hail the rise of the powerful video compression technology HEVC, which will soon make even 4K video streamable. "HEVC will enable service providers to extend their reach and expand the footprint of TV everywhere outside the home," says Tim Gropp, senior vice president of Asia-Pacific sales at video technology company ARRIS.

6. Smartphones toughen up

We've already seen the first efforts, but 2014 will extend the trend for smartphones that claim to be unbreakable. LG's six-inch elastic-coated G Flex and the 7.9mm-slim Samsung Galaxy Round will be developed further, but there's something for mainstream handsets, too.

"I think we'll see almost all high-end smartphones become waterproof in 2014," says Cochrane, though he believes that the real change will be in the materials used to make smartphones. "Soon these things aren't going to be made from chunks of glass and pieces of metal – they're going to use printed circuit boards. You can bond them together as a plastic block in any profile you like. They'll be thinner, lighter, flexible and connector-less."
Curran thinks that the curved phone and the smart watch could, in fact, be the same thing: "The killer smart watch may be more of a wraparound device or extendable foldable screen, as the main downside to a smart watch is the restrictive screen size," he says.

7. Ask Watson apps will make Siri look like an idiot

2014 will also witness the dawn of the online supercomputer. In November IBM quietly put Watson – a 2,880-core cluster of 90 servers with 16TB of RAM – in the cloud for app developers to tinker with.


It's big news because Watson has DeepQA, IBM's smart learning software that means Watson can both understand and interpret written or spoken questions – and can learn from its mistakes.
Although it's bound to super-charge the likes of Siri and Google Voice, it's in 'knowledge' industries such as medicine and science that Ask Watson apps are destined to appear first, likely before the end of 2014. "There will be services where professionals can call up and ask a question … but I can't imagine a profession that isn't going to use this," says Cochrane, who thinks that the number-crunching, pattern-spotting skills of Watson will put some workers out of a job. First for the chop? Investment bankers. Bonus!

8. Wearables start swapping data

We're destined to see dozens of wearable devices throughout 2014. The upcoming HAPIfork will slow down those who eat too fast, the Narrative Clip pendant-camera will take constant snapshots, and Sony's recently patented SmartWig concept could monitor vital statistics, navigate, and even control other gadgets with a blink of an eye.


However, it's how wearables integrate with other devices that will improve most in 2014. "Most wearables pair with an app that shows your activity over time, letting you spot patterns and change what you do," says Curran, "but the Jawbone UP takes this a step further with its UP platform, importing data from other services and letting those services access the data from the UP bracelet." UP currently swaps data with apps like RunKeeper, Strava, Withings (WiFi scales) and the IFTTT (If This Then That) app, which integrates with connected gadgets like the Philips Hue lights and Belkin WeMo switch.

9. Advertisers cotton on to 'movement data'

"Wearable technology, particularly health-related devices, have finally become affordable, accurate and accessible – 2013 was just the beginning," says Norm Johnston, Chief Digital Officer for global media network Mindshare Worldwide.


"Brands will truly begin to explore their role in this new space, whether by co-developing new products and applications, or inserting relevant advertising into the experiences." Johnston suggests that data from devices like the Jawbone UP could be used by brands to customise advertising; Nytol could target you if you're not sleeping well, or life insurance companies could develop tailor-made walking routes to improve your health.
"Brands will need to redefine boundaries and walk a fine line between opt-in relevance versus annoying people," says Johnston. "Consumers will also have to gain tighter control of their data, and self-select which brands they will allow into this new universe of IP-enabled devices."

10. Smartphones will retain their crown

Google Glass, Galaxy Gear and wearables galore will get a lot of publicity in 2014, but they won't eat away at the dominance of smartphones. "The potential benefits of wearable technology to businesses and consumers alike are obvious," says Gary Calcott, Technical Marketing Manager at Progress Software.


"They could allow surgeons to access information they need as they operate on patients, perhaps helping to lower mortality rates considerably in the process," he says, adding that 'smart glass' could also enable forklift truck drivers to access real-time updates on stock in a warehouse. "However, if you delve deeper and look into the back-end that allows applications on wearable devices to run, you'll find that it will almost certainly be running on either a smartphone or a tablet device. Almost all of the heavy lifting will be done by the smartphone, not the wearable device."
In short, the success or otherwise of wearables will depend totally on apps, not the devices themselves.
DEFINITION

file format

Part of the Computing fundamentals glossary:
In a computer, a file format is the layout of a file in terms of how the data within the file is organized. A program that uses the data in a file must be able to recognize and possibly access data within the file. For example, the program that we call a Web browser is able to process and display a file in the HTML file format so that it appears as a Web page, but it cannot display a file in a format designed for Microsoft's Excel program. A particular file format is often indicated as part of a file's name by a file name extension (suffix). Conventionally, the extension is separated by a period from the name and contains three or four letters that identify the format. A program that uses or recognizes a particular file format may or may not care whether the file has the appropriate extension name since it can actually examine the bits in the file to see whether the format (layout) is one it recognizes.
There are as many different file formats as there are different programs to process the files. A few of the more common file formats are:
  • Word documents (.doc)
  • Web text pages (.htm or .html)
  • Web page images (.gif and .jpg)
  • Adobe PostScript files (.ps)
  • Adobe Acrobat files (.pdf)
  • Executable programs (.exe)
  • Multimedia files (.mp3 and others)
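
The glossary's point that a program can examine the bits in a file rather than trusting its extension can be sketched in a few lines of Python. This is an illustrative toy, not any particular library's API; the signature table below covers only a handful of well-known formats, and the function name `detect_format` is invented for this example.

```python
# Minimal sketch: identify a file's format from its leading bytes
# ("magic numbers") instead of trusting the file name extension.
# The signature table is illustrative, not exhaustive.
MAGIC_SIGNATURES = {
    b"%PDF": "pdf",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"\xff\xd8\xff": "jpg",
    b"PK\x03\x04": "zip-based (e.g. .docx)",
}

def detect_format(path):
    """Return a format name based on the file's first bytes, or None."""
    with open(path, "rb") as f:
        header = f.read(16)  # longest signature above is 8 bytes
    for magic, name in MAGIC_SIGNATURES.items():
        if header.startswith(magic):
            return name
    return None
```

Given a file named `image.jpg` that actually begins with `%PDF`, this function would report `pdf` regardless of the misleading extension, which is exactly the behaviour the glossary describes.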
Sample Business Letter 


August 30, 2014

The Manager
Taal Vista Hotel
Kilometer 60, Aguinaldo Highway
Tagaytay City 4120, Philippines

Telephone No: +63 (2) 917-8225 | +63 (46) 413-1000
Mobile No: +63 (917) 809-1254
Fax No: +63 (46) 413-1225

Dear Sir / Madam

I am a highly successful and experienced sales executive and am writing to inquire if you have any openings at your company for which I might apply. 

I am currently working for Summit Ridge Hotel as a sales executive. My duties include cold calling, chasing up leads, meeting potential clients and closing sales. I have a very successful track record in all of these fields.

I have enclosed my CV with this enquiry letter. If, after reviewing it, you feel there may be a position in your company for me, please do not hesitate to contact me.

I look forward very much to an opportunity to discuss my related work experience and explain in more detail how I can contribute to the continued success of your company.

Yours faithfully,


ADRIANA RONI C. BOMBASE
Marketing Manager

Word 2013 New Features

The following is a brief snapshot of the more prominent new features of Word 2013. To get a fuller picture of what’s new, check out What’s New In Word 2013.
It barely seems like two minutes ago that Word 2010 was released to the public, and yet here now is Word 2013. Many potential users may be confused by the presence of another shadowy figure though; that of Office 365. So what is the difference between Office 2013 and Office 365?
Simply put, Office 365 is a subscription-based service that lets you use all the Office applications in conjunction with the cloud. Web versions of Word, Excel, PowerPoint etc allow you to store your documents, spreadsheets and presentations in "the cloud". This is merely a jargon-laden way of saying that Microsoft stores your data on its servers. And, in fact, you can use the cloud with Office 2013, too; it's just that the web-based programs (Office 365) are more geared to using the cloud because, well, they're web based.
On the other hand, Office 2013 requires a one-off payment to install the Office applications on your computer. So, to recap the major differences:
  • Office 365 – you pay a monthly subscription to use the programs
  • Office 2013 – you pay a one-off price to buy the programs
A quick visit to Microsoft's site will show you that they are really pushing the subscription-based service (Office 365). They probably see more revenue being earned via this model. Call me old-fashioned, but I would rather pay the one-off fee and have the software installed on my machine. It's horses for courses, though, and different users will prefer different options.






“Robo Brain” to teach robots about the human world
By AGATA BLASZCZAK-BOXE, CBS NEWS, August 26, 2014, 11:52 AM

A new system being developed by computer scientists at Cornell University can both "learn" new information from the Internet and serve as a resource for increasingly intelligent robots.
The computational "Robo Brain" system absorbs data from public Internet sites and computer simulations so that robots can apply that knowledge in their future interactions. The Robo Brain is now "studying" about 1 billion photographs, 120,000 YouTube videos and 100 million how-to documents and appliance manuals. All this information is then translated and stored in a format that robots can later access.
According to the project's website, the system has potential uses in robotics research, household robots and self-driving cars.
To become effective helpers for people in homes, offices and factories, robots need to understand how our world works and how people behave. Researchers have been trying to teach robots how to perform basic actions such as finding a person's keys or pouring a drink, and they say the new system could help.
For instance, if a robot sees a coffee mug, Robo Brain will quickly recognize from its base of knowledge that liquids can be poured into or out of it, and that the robot can grasp it by the handle. It will also understand that while the mug must be carried upright while it is full, it's ok to turn it sideways when it's being carried from the dishwasher to the cupboard.
And just like a human learner, Robo Brain will have human teachers. The learning process will be facilitated by crowdsourcing. The Robo Brain website will display what the robot's "brain" has learned, and visitors to the site will be able to contribute to the existing data and correct it if needed.
"Our laptops and cell phones have access to all the information we want," Ashutosh Saxena, an assistant professor of computer science at Cornell University and lead author on the project, said in a statement. "If a robot encounters a situation it hasn't seen before it can query Robo Brain in the cloud."
The researchers say that Robo Brain will be able to process images to select and recognize the objects in them. It will also be able to connect images and video with text, learning to recognize objects and understand how they are used, along with human language and behavior.

The researchers presented the project at the 2014 Robotics: Science and Systems Conference in Berkeley in July.


What Is I.B.M.’s Watson?
Danielle Levitt for The New York Times
A part of Watson’s ‘‘brain,’’ located in a room near the mock ‘‘Jeopardy!’’ set.
By CLIVE THOMPSON

Published: June 16, 2010

 “Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?
Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.
For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.
Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.
This time, though, the computer was doing the right thing. Watson won $1,000 (in pretend money, anyway), pulled ahead and eventually defeated Gilmartin and Kolani soundly, winning $18,400 to their $12,000 each.
“Watson,” Crain shouted, “is our new champion!”
It was just the beginning. Over the rest of the day, Watson went on a tear, winning four of six games. It displayed remarkable facility with cultural trivia (“This action flick starring Roy Scheider in a high-tech police helicopter was also briefly a TV series” — “What is ‘Blue Thunder’?”), science (“The greyhound originated more than 5,000 years ago in this African country, where it was used to hunt gazelles” — “What is Egypt?”) and sophisticated wordplay (“Classic candy bar that’s a female Supreme Court justice” — “What is Baby Ruth Ginsburg?”).
By the end of the day, the seven human contestants were impressed, and even slightly unnerved, by Watson. Several made references to Skynet, the computer system in the “Terminator” movies that achieves consciousness and decides humanity should be destroyed. “My husband and I talked about what my role in this was,” Samantha Boardman, a graduate student, told me jokingly. “Was I the thing that was going to help the A.I. become aware of itself?” She had distinguished herself with her swift responses to the “Rhyme Time” puzzles in one of her games, winning nearly all of them before Watson could figure out the clues, but it didn’t help. The computer still beat her three times. In one game, she finished with no money.
“He plays to win,” Boardman said, shaking her head. “He’s really not messing around!” Like most of the contestants, she had started calling Watson “he.”
WE LIVE IN AN AGE of increasingly smart machines. In recent years, engineers have pushed into areas, from voice recognition to robotics to search engines, that once seemed to be the preserve of humans. But I.B.M. has a particular knack for pitting man against machine. In 1997, the company’s supercomputer Deep Blue famously beat the grandmaster Garry Kasparov at chess, a feat that generated enormous publicity for I.B.M. It did not, however, produce a marketable product; the technical accomplishment — playing chess really well — didn’t translate to real-world business problems and so produced little direct profit for I.B.M. In the mid ’00s, the company’s top executives were looking for another high-profile project that would provide a similar flood of global publicity. But this time, they wanted a “grand challenge” (as they call it internally) that would meet a real-world need.
Question-answering seemed to be a good fit. In the last decade, question-answering systems have become increasingly important for firms dealing with mountains of documents. Legal firms, for example, need to quickly sift through case law to find a useful precedent or citation; help-desk workers often have to negotiate enormous databases of product information to find an answer for an agitated customer on the line. In situations like these, speed can often be of the essence; in the case of help desks, labor is billed by the minute, so high-tech firms with slender margins often lose their profits providing telephone support. How could I.B.M. push question-answering technology further?
When one I.B.M. executive suggested taking on “Jeopardy!” he was immediately pooh-poohed. Deep Blue was able to play chess well because the game is perfectly logical, with fairly simple rules; it can be reduced easily to math, which computers handle superbly. But the rules of language are much trickier. At the time, the very best question-answering systems — some created by software firms, some by university researchers — could sort through news articles on their own and answer questions about the content, but they understood only questions stated in very simple language (“What is the capital of Russia?”); in government-run competitions, the top systems answered correctly only about 70 percent of the time, and many were far worse. “Jeopardy!” with its witty, punning questions, seemed beyond their capabilities. What’s more, winning on “Jeopardy!” requires finding an answer in a few seconds. The top question-answering machines often spent longer, even entire minutes, doing the same thing.
“The reaction was basically, ‘No, it’s too hard, forget it, no way can you do it,’ ” David Ferrucci told me not long ago. Ferrucci, I.B.M.’s senior manager for its Semantic Analysis and Integration department, heads the Watson project, and I met him for the first time last November at I.B.M.’s lab. An artificial-intelligence researcher who has long specialized in question-answering systems, Ferrucci chafed at the slow progress in the field. A fixture in the office in the evenings and on weekends, he is witty, voluble and intense. While dining out recently, his wife asked the waiter if Ferrucci’s meal included any dairy. “Is he lactose intolerant?” the waiter inquired. “Yes,” his wife replied, “and just generally intolerable.” Ferrucci told me he was recently prescribed a mouth guard because the stress of watching Watson play had him clenching his teeth excessively.
Ferrucci was never an aficionado of “Jeopardy!” (“I’ve certainly seen it,” he said with a shrug. “I’m not a big fan.”) But he craved an ambitious goal that would impel him to break new ground, that would verge on science fiction, and this fit the bill. “The computer on ‘Star Trek’ is a question-answering machine,” he says. “It understands what you’re asking and provides just the right chunk of response that you needed. When is the computer going to get to a point where the computer knows how to talk to you? That’s my question.”
What makes language so hard for computers, Ferrucci explained, is that it’s full of “intended meaning.” When people decode what someone else is saying, we can easily unpack the many nuanced allusions and connotations in every sentence. He gave me an example in the form of a “Jeopardy!” clue: “The name of this hat is elementary, my dear contestant.” People readily detect the wordplay here — the echo of “elementary, my dear Watson,” the famous phrase associated with Sherlock Holmes — and immediately recall that the Hollywood version of Holmes sports a deerstalker hat. But for a computer, there is no simple way to identify “elementary, my dear contestant” as wordplay. Cleverly matching different keywords, and even different fragments of the sentence — which in part is how most search engines work these days — isn’t enough, either. (Type that clue into Google, and you’ll get first-page referrals to “elementary, my dear watson” but none to deerstalker hats.)
What’s more, even if a computer determines that the actual underlying question is “What sort of hat does Sherlock Holmes wear?” its data may not be stored in such a way that enables it to extract a precise answer. For years, computer scientists built question-answering systems by creating specialized databases, in which certain facts about the world were recorded and linked together. You could do this with Sherlock Holmes by building a database that includes connections between catchphrases and his hat and his violin-playing. But that database would be pretty narrow; it wouldn’t be able to answer questions about nuclear power, or fish species, or the history of France. Those would require their own hand-made databases. Pretty soon you’d face the impossible task of organizing all the information known to man — of “boiling the ocean,” as Ferrucci put it. In computer science, this is known as a “bottleneck” problem. And even if you could get past it, you might then face the issue of “brittleness”: if your database contains only facts you input manually, it breaks any time you ask it a question about something beyond that material. There’s no way to hand-write a database that would include the answer to every “Jeopardy!” clue, because the subject matter is potentially all human knowledge.
The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it. Using this method, you could put hundreds of articles and books and movie reviews discussing Sherlock Holmes into the computer, and it would calculate that the words “deerstalker hat” and “Professor Moriarty” and “opium” are frequently correlated with one another, but not with, say, the Super Bowl. So at that point you could present the computer with a question that didn’t mention Sherlock Holmes by name, but if the machine detected certain associated words, it could conclude that Holmes was the probable subject — and it could also identify hundreds of other concepts and words that weren’t present but that were likely to be related to Holmes, like “Baker Street” and “chemistry.”
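
The statistical association idea described above can be illustrated with a toy co-occurrence counter. The corpus, the word lists and the function names here are invented for illustration; real systems work over millions of documents with far more sophisticated scoring, but the core intuition is the same: words that frequently appear in the same documents as a subject become statistically tied to it.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: three "documents" about Sherlock Holmes and one about
# American football. All text is invented for this sketch.
docs = [
    "holmes wore a deerstalker hat and played the violin",
    "professor moriarty was the nemesis of holmes",
    "holmes solved the case on baker street",
    "the quarterback threw a touchdown in the super bowl",
]

# Count how often each unordered pair of words appears in the same document.
cooccur = Counter()
for doc in docs:
    words = set(doc.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def associated(word, top=3):
    """Return the words most often co-occurring with `word`, best first."""
    scores = Counter()
    for (a, b), n in cooccur.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return [w for w, _ in scores.most_common(top)]
```

On this tiny corpus, `associated("holmes")` surfaces words like "moriarty" and "deerstalker", while "touchdown" never co-occurs with "holmes" and so is never suggested, mirroring the Super Bowl example in the text.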
In theory, this sort of statistical computation has been possible for decades, but it was impractical. Computers weren’t fast enough, memory wasn’t expansive enough and in any case there was no easy way to put millions of documents into a computer. All that changed in the early ’00s. Computer power became drastically cheaper, and the amount of online text exploded as millions of people wrote blogs and wikis about anything and everything; news organizations and academic journals also began putting all their works in digital format. What’s more, question-answering experts spent the previous couple of decades creating several linguistic tools that helped computers puzzle through language — like rhyming dictionaries, bulky synonym finders and “classifiers” that recognized the parts of speech.
Still, the era’s best question-answering systems remained nowhere near being able to take on “Jeopardy!” In 2006, Ferrucci tested I.B.M.’s most advanced system — it wasn’t the best in its field but near the top — by giving it 500 questions from previous shows. The results were dismal. He showed me a chart, prepared by I.B.M., of how real-life “Jeopardy!” champions perform on the TV show. They are clustered at the top in what Ferrucci calls “the winner’s cloud,” which consists of individuals who are the first to hit the buzzer about 50 percent of the time and, after having “won” the buzz, solve on average 85 to 95 percent of the clues. In contrast, the I.B.M. system languished at the bottom of the chart. It was rarely confident enough to answer a question, and when it was, it got the right answer only 15 percent of the time. Humans were fast and smart; I.B.M.’s machine was slow and dumb.
“Humans are just — boom! — they’re just plowing through this in just seconds,” Ferrucci said excitedly. “They’re getting the questions, they’re breaking them down, they’re interpreting them, they’re getting the right interpretation, they’re looking this up in their memory, they’re scoring, they’re doing all this just instantly.”
But Ferrucci argued that I.B.M. could be the one to finally play “Jeopardy!” If the firm focused its computer firepower — including its new “BlueGene” servers — on the challenge, Ferrucci could conduct experiments dozens of times faster than anyone had before, allowing him to feed more information into Watson and test new algorithms more quickly. Ferrucci was ambitious for personal reasons too: if he didn’t try this, another computer scientist might — “and then bang, you are irrelevant,” he told me.
“I had no interest in spending the next five years of my life pursuing things in the small,” he said. “I wanted to push the limits.” If they could succeed at “Jeopardy!,” soon afterward they could bring the underlying technology to market as customizable question-answering systems. In 2007, his bosses gave him three to five years and increased his team to 15 people.
FERRUCCI’S MAIN breakthrough was not the design of any single, brilliant new technique for analyzing language. Indeed, many of the statistical techniques Watson employs were already well known by computer scientists. One important thing that makes Watson so different is its enormous speed and memory. Taking advantage of I.B.M.’s supercomputing heft, Ferrucci’s team input millions of documents into Watson to build up its knowledge base — including, he says, “books, reference material, any sort of dictionary, thesauri, folksonomies, taxonomies, encyclopedias, any kind of reference material you can imagine getting your hands on or licensing. Novels, bibles, plays.”
Watson’s speed allows it to try thousands of ways of simultaneously tackling a “Jeopardy!” clue. Most question-answering systems rely on a handful of algorithms, but Ferrucci decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one. In essence, Watson thinks in probabilities. It produces not one single “right” answer, but an enormous number of possibilities, then ranks them by assessing how likely each one is to answer the question.
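That ensemble-and-rank idea can be sketched as follows. The "algorithms" here are invented stand-ins that return fixed guesses; a real system would run sophisticated, independent analyzers, and its confidence estimate would be far richer than this simple vote count:

```python
from collections import Counter

# Each "algorithm" is a stand-in that returns candidate answers for a clue.
def keyword_matcher(clue):   return ["Nixon", "Ford"]
def date_lookup(clue):       return ["Nixon"]
def entity_extractor(clue):  return ["Nixon", "Agnew"]

ALGORITHMS = [keyword_matcher, date_lookup, entity_extractor]

def answer_with_confidence(clue):
    """Pool candidates from every algorithm and rank them by agreement.

    Confidence is the fraction of algorithms proposing each answer -- a
    crude proxy for the probabilistic ranking described in the text.
    """
    votes = Counter()
    for algo in ALGORITHMS:
        for candidate in set(algo(clue)):
            votes[candidate] += 1
    return [(ans, n / len(ALGORITHMS)) for ans, n in votes.most_common()]

ranked = answer_with_confidence("He was presidentially pardoned on Sept. 8, 1974")
print(ranked[0])  # ('Nixon', 1.0) -- all three analyzers converge on it
```

The point of the sketch is the shape of the output: not one answer but a ranked list, with agreement among independent approaches standing in for probability.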
Ferrucci showed me how Watson handled this sample “Jeopardy!” clue: “He was presidentially pardoned on Sept. 8, 1974.” In the first pass, the algorithms came up with “Nixon.” To evaluate whether “Nixon” was the best response, Watson performed a clever trick: it inserted the answer into the original phrase — “Nixon was presidentially pardoned on Sept. 8, 1974” — and then ran it as a new search, to see if it also produced results that supported “Nixon” as the right answer. (It did. The new search returned the result “Ford pardoned Nixon on Sept. 8, 1974,” a phrasing so similar to the original clue that it helped make “Nixon” the top-ranked solution.)
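The substitution trick can be mimicked with simple word overlap. The three-document "search corpus" below is invented, and counting shared words is a stand-in for running a real search, but it shows why re-inserting the candidate into the clue helps confirm it:

```python
# A stand-in corpus; a real system would query millions of documents.
corpus = [
    "Ford pardoned Nixon on Sept. 8, 1974",
    "Nixon resigned the presidency in August 1974",
    "The Super Bowl was played in January",
]

def support_score(candidate, clue_template):
    """Insert the candidate into the clue, then count how many words the
    resulting hypothesis shares with the best-matching document -- a crude
    proxy for running the hypothesis as a new search."""
    hypothesis = set(clue_template.format(candidate).lower().split())
    return max(len(hypothesis & set(doc.lower().split())) for doc in corpus)

template = "{} was presidentially pardoned on Sept. 8, 1974"
print(support_score("Nixon", template))  # higher: the Ford document supports it
print(support_score("Agnew", template))  # lower: 'agnew' appears nowhere
```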
Other times, Watson uses algorithms that can perform basic cross-checks against time or space to help detect which answer seems better. When the computer analyzed the clue “In 1594 he took a job as a tax collector in Andalusia,” the two most likely answers generated were “Thoreau” and “Cervantes.” Watson assessed “Thoreau” and discovered his birth year was 1817, at which point the computer ruled him out, because he wasn’t alive in 1594. “Cervantes” became the top-ranked choice.
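A temporal cross-check like the one just described is easy to sketch. The lifespan data below is looked up by hand for the example; a real system would have to extract such dates from its corpus:

```python
# Birth and death years for the two candidates in the clue above.
LIFESPANS = {
    "Thoreau":   (1817, 1862),
    "Cervantes": (1547, 1616),
}

def alive_in(candidate, year):
    """Temporal sanity check: a candidate is ruled out if the clue's
    date falls outside the span of their life."""
    born, died = LIFESPANS[candidate]
    return born <= year <= died

candidates = ["Thoreau", "Cervantes"]
survivors = [c for c in candidates if alive_in(c, 1594)]
print(survivors)  # ['Cervantes'] -- Thoreau, born in 1817, is ruled out
```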
When Watson is playing a game, Ferrucci lets the audience peek into the computer’s analysis. A monitor shows Watson’s top five answers to a question, with a bar graph beside each indicating its confidence. During one of my visits, the host read the clue “Thousands of prisoners in the Philippines re-enacted the moves of the video of this Michael Jackson hit.” On the monitor, I could see that Watson’s top pick was “Thriller,” with a confidence level of roughly 80 percent. This answer was correct, and Watson buzzed first, so it won $800. Watson’s next four choices — “Music video,” “Billie Jean,” “Smooth Criminal” and “MTV” — had only slivers for their bar graphs. It was a fascinating glimpse into the machine’s workings, because you could spy the connective thread running between the possibilities, even the wrong ones. “Billie Jean” and “Smooth Criminal” were also major hits by Michael Jackson, and “MTV” was the main venue for his videos. But it’s very likely that none of those correlated well with “Philippines.”
After a year, Watson’s performance had moved halfway up to the “winner’s cloud.” By 2008, it had edged into the cloud; on paper, anyway, it could beat some of the lesser “Jeopardy!” champions. Confident they could actually compete on TV, I.B.M. executives called up Harry Friedman, the executive producer of “Jeopardy!” and raised the possibility of putting Watson on the air.
Friedman told me he and his fellow executives were surprised: nobody had ever suggested anything like this. But they quickly accepted the challenge. “Because it’s I.B.M., we took it seriously,” Friedman said. “They had the experience with Deep Blue and the chess match that became legendary.”
WHEN THEY FIRST showed up to play Watson, many of the contestants worried that they didn’t stand a chance. Human memory is frail. In a high-stakes game like “Jeopardy!” players can panic, becoming unable to recall facts they would otherwise remember without difficulty. Watson doesn’t have this problem. It might have trouble with its analysis or be unable to logically connect a relevant piece of text to a question. But it doesn’t forget things. Plus, it has lightning-fast reactions — wouldn’t it simply beat the humans to the buzzer every time?
“We’re relying on nerves — old nerves,” Dorothy Gilmartin complained, halfway through her first game, when it seemed that Watson was winning almost every buzz.
Yet the truth is, in more than 20 games I witnessed between Watson and former “Jeopardy!” players, humans frequently beat Watson to the buzzer. Their advantage lay in the way the game is set up. On “Jeopardy!” when a new clue is given, it pops up on screen visible to all. (Watson gets the text electronically at the same moment.) But contestants are not allowed to hit the buzzer until the host is finished reading the question aloud; on average, it takes the host about six or seven seconds to read the clue.
Players use this precious interval to figure out whether or not they have enough confidence in their answers to hazard hitting the buzzer. After all, buzzing carries a risk: someone who wins the buzz on a $1,000 question but answers it incorrectly loses $1,000.
Often those six or seven seconds weren’t enough time for Watson. The humans reacted more quickly. For example, in one game an $800 clue was “In Poland, pick up some kalafjor if you crave this broccoli relative.” A human contestant jumped on the buzzer as soon as he could. Watson, meanwhile, was still processing. Its top five answers hadn’t appeared on the screen yet. When these finally came up, I could see why it took so long. Something about the question had confused the computer, and its answers came with mere slivers of confidence. The top two were “vegetable” and “cabbage”; the correct answer — “cauliflower” — was the third guess.
To avoid losing money — Watson doesn’t care about the money, obviously; winnings are simply a way for I.B.M. to see how fast and accurately its system is performing — Ferrucci’s team has programmed Watson generally not to buzz until it arrives at an answer with a high confidence level. In this regard, Watson is actually at a disadvantage, because the best “Jeopardy!” players regularly hit the buzzer as soon as it’s possible to do so, even if it’s before they’ve figured out the clue. “Jeopardy!” rules give them five seconds to answer after winning the buzz. So long as they have a good feeling in their gut, they’ll pounce on the buzzer, trusting that in those few extra seconds the answer will pop into their heads. Ferrucci told me that the best human contestants he had brought in to play against Watson were amazingly fast. “They can buzz in 10 milliseconds,” he said, sounding astonished. “Zero milliseconds!”
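The buzz policy described here, wait for high confidence because a wrong answer costs the clue's full value, amounts to a simple expected-value rule. The sketch below is an illustration, not Watson's actual logic, and the 0.5 threshold is an invented parameter:

```python
def should_buzz(confidence, clue_value, threshold=0.5):
    """Buzz only when the confidence clears a fixed bar and the
    expected winnings are positive.

    A right answer wins clue_value and a wrong one loses clue_value, so
    the expected value of buzzing is clue_value * (2 * confidence - 1).
    """
    expected = clue_value * (2 * confidence - 1)
    return confidence >= threshold and expected > 0

print(should_buzz(0.80, 1000))  # True: confident enough to risk it
print(should_buzz(0.30, 1000))  # False: expected loss, so stay silent
```

A human champion, by contrast, effectively buzzes on a gut estimate of confidence and gambles that the answer will surface during the five-second grace period, which is exactly the edge Ferrucci describes.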
On the third day I watched Watson play, it did quite poorly, losing four of seven games, in one case without any winnings at all. Often Watson appeared to misunderstand the clue and offered answers so inexplicable that the audience erupted in laughter. Faced with the clue “This ‘insect’ of a gangster was a real-life hit man for Murder Incorporated in the 1930s & ’40s,” Watson responded with “James Cagney.” Up on the screen, I could see that none of its lesser choices were the correct one, “Bugsy Siegel.” Later, when asked to complete the phrase “Toto, I’ve a feeling we’re not in Ka—,” Watson offered “not in Kansas anymore,” which was incorrect, since the precise phrasing was simply “Kansas anymore,” and “Jeopardy!” is strict about phrasings. When I looked at the screen, I noticed that the answers Watson had ranked lower were pretty odd, including “Steve Porcaro,” the keyboardist for the band Toto (which made a vague sort of sense), and “Jackie Chan” (which really didn’t). In another game, Watson’s logic appeared to fall down some odd semantic rabbit hole, repeatedly giving the answer “Tommy Lee Jones” — the name of the Hollywood actor — to several clues that had nothing to do with him.
In the corner of the conference room, Ferrucci sat typing into a laptop. Whenever Watson got a question wrong, Ferrucci winced and stamped his feet in frustration, like a college-football coach watching dropped passes. “This is torture,” he said, laughing.
Seeing Watson’s errors, you can sometimes get a sense of its cognitive shortcomings. For example, in “Jeopardy!” the category heading often includes a bit of wordplay that explains how the clues are to be addressed. Watson sometimes appeared to mistakenly analyze the entire category and thus botch every clue in it. One game included the category “Stately Botanical Gardens,” which indicated that every clue would list several gardens, and the answer was the relevant state. Watson clearly didn’t grasp this; it answered “botanic garden” repeatedly. I also noticed that when Watson was faced with very short clues — ones with only a word or two — it often seemed to lose the race to the buzzer, possibly because the host read the clues so quickly that Watson didn’t have enough time to do its full calculations. The humans, in contrast, simply trusted their guts and jumped.
Ferrucci refused to talk on the record about Watson’s blind spots. He’s aware of them; indeed, his team does “error analysis” after each game, tracing how and why Watson messed up. But he is terrified that if competitors knew what types of questions Watson was bad at, they could prepare by boning up in specific areas. I.B.M. required all its sparring-match contestants to sign nondisclosure agreements prohibiting them from discussing their own observations on what, precisely, Watson was good and bad at. I signed no such agreement, so I was free to describe what I saw; but Ferrucci wasn’t about to make it easier for me by cataloguing Watson’s vulnerabilities.
Computer scientists I spoke to agreed that witty, allusive clues will probably be Watson’s weak point. “Retrieval of obscure Italian poets is easy — [Watson] will never forget that one,” Peter Norvig, the director of research at Google, told me. “But ‘Jeopardy!’ tends to have a lot of wordplay, and that’s going to be a challenge.” Certainly on many occasions this seemed to be true. Still, at other times I was startled by Watson’s eerily humanlike ability to untangle astonishingly coy clues. During one game, a category was “All-Eddie Before & After,” indicating that the clue would hint at two different things that need to be blended together, one of which included the name “Eddie.” The $2,000 clue was “A ‘Green Acres’ star goes existential (& French) as the author of ‘The Fall.’ ” Watson nailed it perfectly: “Who is Eddie Albert Camus?”
Ultimately, Watson’s greatest edge at “Jeopardy!” probably isn’t its perfect memory or lightning speed. It is the computer’s lack of emotion. “Managing your emotions is an enormous part of doing well” on “Jeopardy!” Bob Harris, a five-time champion, told me. “Every single time I’ve ever missed a Daily Double, I always miss the next clue, because I’m still kicking myself.” Because there is only a short period before the next clue comes along, the stress can carry over. Similarly, humans can become much more intimidated by a $2,000 clue than a $200 one, because the more expensive clues are presumably written to be much harder.
Whether Watson will win when it goes on TV in a real “Jeopardy!” match depends on whom the show pits against the computer. Watson will not appear as a contestant on the regular show; instead, “Jeopardy!” will hold a special match pitting Watson against one or more famous winners from the past. If the contest includes Ken Jennings — the best player in “Jeopardy!” history, who won 74 games in a row in 2004 — Watson will lose unless its performance improves. It is pretty far up in the winner’s cloud, but it is not yet at Jennings’s level; in the sparring matches, Watson was beaten several times by opponents who had done nowhere near as well as Jennings. (Indeed, it sometimes lost to people who hadn’t placed first in their own appearances on the show.) Friedman will not say whom the show is picking to play against Watson, but he refused to let Jennings be interviewed for this story, which is suggestive.
Ferrucci says his team will continue to fine-tune Watson, but improving its performance is getting harder. “When we first started, we’d add a new algorithm and it would improve the performance by 10 percent, 15 percent,” he says. “Now it’ll be like half a percent is a good improvement.”
Ferrucci’s attitude toward winning is conflicted. I could see that he hungers to win. And losing badly on national TV might mean negative publicity for I.B.M. But Ferrucci also argued that Watson might lose merely because of bad luck. Should one of Watson’s opponents land on both Daily Doubles, for example, that player might double his or her money and vault beyond Watson’s ability to catch up, even if the computer never flubs another question.
Ultimately, Ferrucci claimed not to worry about winning or losing. He told me he’s happy that I.B.M. has simply pushed this far and produced a system that performs so well at answering questions. Even a televised flameout, he said, won’t diminish the street cred Watson will give I.B.M. in the computer-science field. “I don’t really care about ‘Jeopardy!’ ” he told me, shrugging.
I.B.M. PLANS TO begin selling versions of Watson to companies in the next year or two. John Kelly, the head of I.B.M.’s research labs, says that Watson could help decision-makers sift through enormous piles of written material in seconds. Kelly says that its speed and quality could make it part of rapid-fire decision-making, with users talking to Watson to guide their thinking process.
“I want to create a medical version of this,” he adds. “A Watson M.D., if you will.” He imagines a hospital feeding Watson every new medical paper in existence, then having it answer questions during split-second emergency-room crises. “The problem right now is the procedures, the new procedures, the new medicines, the new capability is being generated faster than physicians can absorb on the front lines and it can be deployed.” He also envisions using Watson to produce virtual call centers, where the computer would talk directly to the customer and generally be the first line of defense, because, “as you’ve seen, this thing can answer a question faster and more accurately than most human beings.”
“I want to create something that I can take into every other retail industry, in the transportation industry, you name it, the banking industry,” Kelly goes on to say. “Any place where time is critical and you need to get advanced state-of-the-art information to the front of decision-makers. Computers need to go from just being back-office calculating machines to improving the intelligence of people making decisions.” At first, a Watson system could cost several million dollars, because it needs to run on at least one $1 million I.B.M. server. But Kelly predicts that within 10 years an artificial brain like Watson could run on a much cheaper server, affordable by any small firm, and a few years after that, on a laptop.
Ted Senator, a vice president of SAIC — a high-tech firm that frequently helps design government systems — is a former “Jeopardy!” champion and has followed Watson’s development closely; in October he visited I.B.M. and played against Watson himself. (He lost.) He says that Watson-level artificial intelligence could make it significantly easier for citizens to get answers quickly from massive, ponderous bureaucracies. He points to the recent “cash for clunkers” program. He tried to participate, but when he went to the government site to see if his car qualified, he couldn’t figure it out: his model, a 1995 Saab 9000, was listed twice, each time with different mileage-per-gallon statistics. What he needed was probably buried deep inside some government database, but the bureaucrats hadn’t presented the information clearly enough. “So I gave up,” he says. This is precisely the sort of task a Watson-like artificial intelligence can assist in, he says. “You can imagine if I’m applying for health insurance, having to explain the details of my personal situation, or if I’m trying to figure out if I’m eligible for a particular tax deduction. Any place there’s massive data that surpasses the human’s ability to sort through it, and there’s a time constraint on getting an answer.”
Many experts imagine even quirkier ways that everyday life might be transformed as question-answering technology becomes more powerful and widespread. Andrew Hickl, the C.E.O. of Language Computer Corporation, which makes question-answering systems, among other things, for businesses, was recently asked by a client to make a “contradiction engine”: if you tell it a statement, it tries to find evidence on the Web that contradicts it. “It’s like, ‘I believe that Dallas is the most beautiful city in the United States,’ and I want to find all the evidence on the Web that contradicts that.” (It produced results that were only 70 percent relevant, which satisfied his client.) Hickl imagines people using this sort of tool to read through the daily news. “We could take something that Harry Reid says and immediately figure out what contradicts it. Or somebody tweets something that’s wrong, and we could automatically post a tweet saying, ‘No, actually, that’s wrong, and here’s proof.’ ”
CULTURALLY, OF COURSE, advances like Watson are bound to provoke nervous concerns too. High-tech critics have begun to wonder about the wisdom of relying on artificial-intelligence systems in the face of complex reality. Many Wall Street firms, for example, now rely on “millisecond trading” computers, which detect deviations in prices and order trades far faster than humans ever could; but these are now regarded as a possible culprit in the seemingly irrational hourlong stock-market plunge of the spring. Would doctors in an E.R. feel comfortable taking action based on a split-second factual answer from a Watson M.D.? And while service companies can clearly save money by relying more on question-answering systems, they are precisely the sort of labor-saving advance deplored by unions — and customers who crave the ability to talk to a real, intelligent human on the phone.
Some scientists, moreover, argue that Watson has serious limitations that could hamper its ability to grapple with the real world. It can analyze texts and draw basic conclusions from the facts it finds, like figuring out if one event happened later than another. But many questions we want answered require more complex forms of analysis. Last year, the computer scientist Stephen Wolfram released “Wolfram Alpha,” a question-answering engine that can do mathematical calculations about the real world. Ask it to “compare the populations of New York City and Cincinnati,” for example, and it will not only give you their populations — 8.4 million versus 333,336 — it will also create a bar graph comparing them visually and calculate their ratio (25.09 to 1) and the percentage relationship between them (New York is 2,409 percent larger). But this sort of automated calculation is only possible because Wolfram and his team spent years painstakingly hand-crafting databases in a fashion that enables a computer to perform this sort of analysis — by typing in the populations of New York and Cincinnati, for example, and tagging them both as “cities” so that the engine can compare them. This, Wolfram says, is the deep challenge of artificial intelligence: a lot of human knowledge isn’t represented in words alone, and a computer won’t learn that stuff just by encoding English language texts, as Watson does. The only way to program a computer to do this type of mathematical reasoning might be to do precisely what Ferrucci doesn’t want to do — sit down and slowly teach it about the world, one fact at a time.
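The kind of comparison Wolfram describes only works because the data has been curated and typed by hand, which a few lines of code make concrete. The exact populations below are assumptions chosen so the arithmetic reproduces the figures quoted in the passage:

```python
# Hand-curated data, tagged by type -- the manual curation Wolfram describes.
# Exact figures are assumptions chosen to match the numbers in the text.
populations = {
    "New York City": 8_363_710,
    "Cincinnati": 333_336,
}

def compare_cities(a, b):
    """Return the ratio of the two populations and how much larger,
    in percent, the first city is than the second."""
    pa, pb = populations[a], populations[b]
    ratio = pa / pb
    percent_larger = (ratio - 1) * 100
    return round(ratio, 2), round(percent_larger)

print(compare_cities("New York City", "Cincinnati"))  # (25.09, 2409)
```

The arithmetic is trivial; the hard part, as Wolfram argues, is that someone first had to type in the populations and tag both entries as cities so the comparison makes sense at all.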
“Not to take anything away from this ‘Jeopardy!’ thing, but I don’t think Watson really is answering questions — it’s not like the ‘Star Trek’ computer,” Wolfram says. (Of course, Wolfram Alpha cannot answer the sort of broad-ranging trivia questions that Watson can, either, because Wolfram didn’t design it for that purpose.) What’s more, Watson can answer only questions asking for an objectively knowable fact. It cannot produce an answer that requires judgment. It cannot offer a new, unique answer to questions like “What’s the best high-tech company to invest in?” or “When will there be peace in the Middle East?” All it will do is look for source material in its database that appears to have addressed those issues and then collate and compose a string of text that seems to be a statistically likely answer. Neither Watson nor Wolfram Alpha, in other words, comes close to replicating human wisdom.
At best, Ferrucci suspects that Watson might be simulating, in a stripped-down fashion, some of the ways that our human brains process language. Modern neuroscience has found that our brain is highly “parallel”: it uses many different parts simultaneously, harnessing billions of neurons whenever we talk or listen to words. “I’m no cognitive scientist, so this is just speculation,” Ferrucci says, but Watson’s approach — tackling a question in thousands of different ways — may succeed precisely because it mimics the same approach. Watson doesn’t come up with an answer to a question so much as make an educated guess, based on similarities to things it has been exposed to. “I have young children, you can see them guessing at the meaning of words, you can see them guessing at grammatical structure,” he notes.
This is why Watson often seemed most human not when it was performing flawlessly but when it wasn’t. Many of the human opponents found the computer most endearing when it was clearly misfiring — misinterpreting the clue, making weird mistakes, rather as we do when we’re put on the spot.
During one game, the category was, coincidentally, “I.B.M.” The questions seemed like no-brainers for the computer (for example, “Though it’s gone beyond the corporate world, I.B.M. stands for this” — “International Business Machines”). But for some reason, Watson performed poorly. It came up with answers that were wrong or in which it had little confidence. The audience, composed mostly of I.B.M. employees who had come to watch the action, seemed mesmerized by the spectacle.
Then came the final, $2,000 clue in the category: “It’s the last name of father and son Thomas Sr. and Jr., who led I.B.M. for more than 50 years.” This time the computer pounced. “Who is Watson?” it declared in its synthesized voice, and the crowd erupted in cheers. At least it knew its own name.
Clive Thompson, a contributing writer for the magazine, writes frequently about technology and science.

Source: http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all&_r=0

A version of this article appeared in print on June 20, 2010, on page MM30 of the Sunday Magazine.