October 05, 2010
It's one of the biggest challenges in computing -- getting a machine to think like a human. This long-standing computational problem is one that captivates public interest, as evidenced by the much-hyped 1997 chess match between IBM's Deep Blue supercomputer and world champion Garry Kasparov. The machine won the six-game match by two wins to one with three draws, but more importantly, the contest sparked an international love affair with supercomputing. It's been a long time since supercomputing has so moved the world. Arguably, not even the 2008 accomplishment of breaking the petaflop barrier created such an intense international stir. There's something about a machine being able to do something so seemingly human, like playing a centuries-old game of strategy, that touches hearts and minds more than achieving some remote-sounding number of computations per second.
But it turns out that playing chess is actually not such a great predictor of "human-ness" for a machine. It's relatively easy for computers to beat humans at well-defined tasks, such as playing rule-oriented games or predicting weather changes. What's not so easy is for machines to understand language -- indeed, semantics is one area where humans still have the clear edge.
In a recent New York Times article, author Steve Lohr covers current advances in the field of computational semantics being undertaken by a group of researchers at Carnegie Mellon University.
Team leader Tom M. Mitchell, a computer scientist and chairman of the machine learning department, outlines the nature of the challenge: "For all the advances in computer science, we still don't have a computer that can learn as humans do, cumulatively, over the long term."
The researchers are working on a project called the Never-Ending Language Learning system, or NELL. NELL is fed facts, which are grouped into semantic categories, such as cities, companies, sports teams, actors, universities, plants and 274 others. Examples of category facts are "San Francisco is a city" and "sunflower is a plant." NELL has been able to glean 390,000 facts by scanning hundreds of millions of Web pages. The larger the pool of facts grows, the more refined the system becomes.
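To make the idea concrete, here is a minimal sketch -- purely illustrative, not NELL's actual schema or code -- of how facts like "San Francisco is a city" can be stored as entity-category pairs and queried:

```python
# Illustrative sketch of category facts (the class and method names here
# are assumptions for illustration, not NELL's real implementation).
from collections import defaultdict

class FactBase:
    """Stores facts of the form 'entity X belongs to category Y'."""

    def __init__(self):
        self._by_category = defaultdict(set)

    def add_fact(self, entity, category):
        # Record that `entity` belongs to `category`.
        self._by_category[category].add(entity)

    def is_a(self, entity, category):
        # Answer questions like "Is San Francisco a city?"
        return entity in self._by_category[category]

kb = FactBase()
kb.add_fact("San Francisco", "city")  # example fact from the article
kb.add_fact("sunflower", "plant")     # example fact from the article

print(kb.is_a("San Francisco", "city"))  # -> True
print(kb.is_a("sunflower", "city"))      # -> False
```

In the real system, of course, such facts number in the hundreds of thousands and are extracted automatically from Web pages rather than entered by hand.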
So much of language understanding is predicated on an underlying knowledge base, and that's what NELL is developing. In the sentence "The girl caught the butterfly with the spots," a human reader innately understands that "spots" refers to the butterfly, because the human knows that butterflies are likely to be spotted whereas girls are not. Such "basic" knowledge that we take for granted confounds the computer. This general knowledge can only be learned, and that's why NELL was programmed to learn so many facts.
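The butterfly sentence can be sketched in code as well. Below is a toy illustration -- with hypothetical function names, and not how NELL itself works -- of how a small fact base could resolve which noun an ambiguous modifier attaches to:

```python
# Toy illustration: resolving "the girl caught the butterfly with the spots"
# by consulting a tiny fact base of (subject, relation, object) triples.
# The facts and function names are assumptions for illustration only.

FACTS = {
    ("butterfly", "can_have", "spots"),
    ("butterfly", "is_a", "insect"),
    ("girl", "is_a", "person"),
}

def can_have(noun, attribute):
    """True if the fact base supports the noun having this attribute."""
    return (noun, "can_have", attribute) in FACTS

def attach_modifier(candidate_nouns, attribute):
    """Pick the noun the modifier most plausibly describes."""
    for noun in candidate_nouns:
        if can_have(noun, attribute):
            return noun
    return None  # no supporting fact -- attachment stays ambiguous

# "The girl caught the butterfly with the spots."
print(attach_modifier(["girl", "butterfly"], "spots"))  # -> butterfly
```

Without the fact that butterflies can have spots, the program has no basis for choosing -- which is exactly the gap that a large, cumulatively learned knowledge base is meant to fill.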
There have been similar attempts at artificial learning programs, but NELL is different in that the system is being taught to learn on its own with little assistance from researchers. If the researchers notice that NELL has gotten something blatantly wrong -- like classifying an Internet cookie as a baked good -- they will correct the error.
Full story at The New York Times