May 18, 2009
Welcome to the era of the computational Web. Wolfram Alpha, Stephen Wolfram's online "computational knowledge engine," debuted on Friday evening and was officially launched on its Web site on Monday. Wolfram is calling it the first killer app for the universal computation paradigm he developed seven years ago.
Unlike Google or other Internet search engines, Wolfram Alpha focuses on what comes most naturally to computers: crunching numbers. Using Mathematica as the software foundation, the engine applies over 50,000 types of algorithms across more than 1,000 knowledge domains. The query syntax is fairly straightforward and forgiving, and suggestions are provided if you manage to stump the input parser. The results are displayed in a number of useful ways -- graphically whenever possible.
Users can ask for operations as diverse as comparing two publicly traded companies, figuring out the mileage between two cities, blending colors, finding interesting facts about a birthday, or playing a major scale. Not surprisingly, it can also handle straight math problems like multiplying matrices, computing a derivative, or factoring a polynomial expression.
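For readers who want to check those math results against a local tool, here is a minimal sketch using Python's open-source SymPy library -- an analogue for illustration only, not the Mathematica engine that actually powers Wolfram Alpha:

```python
# Local analogues of the "straight math" queries mentioned above,
# using SymPy rather than Wolfram Alpha's Mathematica back end.
import sympy as sp

x = sp.symbols('x')

# Multiply two matrices
A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[0, 1], [1, 0]])
print(A * B)                               # Matrix([[2, 1], [4, 3]])

# Compute a derivative
print(sp.diff(sp.sin(x) * x**2, x))        # x**2*cos(x) + 2*x*sin(x)

# Factor a polynomial expression
print(sp.factor(x**3 - 3*x**2 + 3*x - 1))  # (x - 1)**3
```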
Unlike vanilla database Web applications, Wolfram Alpha employs HPC clusters for its computational hardware. At launch time, the application had access to about 10,000 x86 CPUs spread across five datacenters. The largest cluster is R Smarr, a 40 teraflop (Linpack) Dell machine owned and operated by R Systems Inc. The system consists of 576 dual-socket quad-core Harpertown servers, hooked together by DDR InfiniBand. R Smarr currently sits at number 66 on the TOP500 list.
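As a rough sanity check on the 40-teraflop figure, the peak arithmetic works out as follows. The 3.0 GHz clock and 4 flops per cycle per core are assumptions on my part; only the server, socket, and core counts come from the article:

```python
# Back-of-the-envelope peak for R Smarr. The 3.0 GHz clock and
# 4 double-precision flops/cycle/core (SSE on Harpertown) are
# assumptions; the server and core counts are from the article.
servers = 576
cores = servers * 2 * 4              # dual-socket, quad-core -> 4,608 cores
peak_tflops = cores * 3.0e9 * 4 / 1e12
print(cores, round(peak_tflops, 1))  # 4608 cores, ~55.3 TF peak
# 40 TF Linpack against ~55 TF peak would be roughly 72 percent
# efficiency, typical for an InfiniBand cluster of this era.
```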
During its weekend debut, Wolfram Alpha was processing queries at a rate of between 80 and 120 per second, although not flawlessly. According to Wolfram Research co-founder Theodore Gray, about 70 percent of the queries were successful. The rest prompted the user to frame the question somewhat differently, usually with some suggestions. "I think that that's a pretty good ratio considering that these are just people from the wild coming in without coaching on what ought to work," said Gray.
A smaller number of queries delivered incomplete results or a "computation timed out" message. Gray claimed that this was due to some faulty servers, rather than a load issue. Supposedly, the current setup is able to handle thousands of queries per second.
On the other hand, not all queries are created equal. For example, computing a Haferman fractal is no problem at 5 iterations (the default), but specify the same fractal at 7 iterations or greater and the engine just chokes. That's not entirely surprising: the carpet is 3^n cells on a side at n iterations, so 7 iterations means 81 times as many cells as 5.
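To see why the work blows up, here is a minimal sketch of one common formulation of the Haferman carpet substitution rule; the article doesn't document Wolfram Alpha's exact construction, so treat the rule itself as an assumption:

```python
# One common formulation of the Haferman carpet: each 0 becomes a
# 3x3 block of 1s, each 1 becomes a 3x3 checkerboard. Whether
# Wolfram Alpha uses exactly this rule is an assumption.
import numpy as np

RULES = {
    0: np.ones((3, 3), dtype=np.uint8),
    1: np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]], dtype=np.uint8),
}

def haferman(iterations):
    grid = np.array([[1]], dtype=np.uint8)
    for _ in range(iterations):
        n = grid.shape[0]
        new = np.empty((3 * n, 3 * n), dtype=np.uint8)
        for i in range(n):
            for j in range(n):
                new[3*i:3*i+3, 3*j:3*j+3] = RULES[grid[i, j]]
        grid = new
    return grid

for n in (5, 7):
    print(n, haferman(n).shape)  # 5 -> (243, 243), 7 -> (2187, 2187)
```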
The initial Wolfram Alpha database is said to contain over 10 trillion items of curated data, with live feeds being used to provide continuous updates. Despite that effort, it wasn't too difficult to find some holes. For example, while Wolfram Alpha correctly computes that red + yellow produces orange, it's also under the impression that blue + yellow produces gray. Finally, some data is just missing; e.g., it can answer why the sky is blue, but not why grass is green.
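The gray answer is less mysterious than it first appears: averaging channel by channel in additive RGB space really does turn blue plus yellow into gray, whereas the intuitive "green" answer comes from subtractive pigment mixing. The sketch below illustrates that arithmetic; whether Wolfram Alpha actually mixes colors this way is an assumption:

```python
# Channel-by-channel averaging in additive RGB space -- a plausible
# explanation for the blue + yellow = gray result, not Wolfram
# Alpha's documented method.
def mix(c1, c2):
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

red, yellow, blue = (255, 0, 0), (255, 255, 0), (0, 0, 255)
print(mix(red, yellow))   # (255, 127, 0)   -- orange
print(mix(blue, yellow))  # (127, 127, 127) -- gray, not green
```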
Obviously, one of the biggest challenges for Wolfram Alpha will be to fill in missing data, while keeping it coherent and accurate -- no small task (just ask Wikipedia). Given the exponential growth rates of information, manual curation can only do so much, so presumably they will have to find ways to automate the process or rely on third parties to deliver domain-specific data.
Despite these limitations, the computational model is quite powerful. A lot of the functionality in Wolfram Alpha previously existed in other online computational applications (currency converters, calculators, geomapping services, gene mapping, etc.), but up until now there was no common framework that brought them together. If successful, an online tool such as this for general-purpose computation could change the Web landscape.
The business model remains a question. The Web site is free to the public, and there are currently no advertisers on the site. According to the Wolfram Alpha FAQ, the company initially intends to go after corporate sponsorships and customized business deployments. In the latter case, it is looking for companies that can apply the technology to internal databases via a Wolfram Alpha API. Longer term, it may look to targeted advertising. If Wolfram's dream of universal computation comes true, he'll have more business than he knows what to do with.
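No such API had been published at the time of writing, so the sketch below is purely illustrative; the endpoint, parameter names, and response format are all assumptions about what a corporate integration might look like:

```python
# Hypothetical sketch of an HTTP query against a Wolfram Alpha API.
# The URL, parameters, and XML response are illustrative assumptions,
# not a published specification.
import urllib.parse
import urllib.request

def query_alpha(question, app_id):
    params = urllib.parse.urlencode({
        "input": question,
        "appid": app_id,        # hypothetical per-customer credential
        "format": "plaintext",  # assumed output selector
    })
    url = f"https://api.wolframalpha.com/v2/query?{params}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")  # assumed XML result document

# Usage (with a hypothetical credential):
# print(query_alpha("integrate x^2 dx", "YOUR-APP-ID"))
```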