June 25, 2012
A novel use of supercomputing has produced a new approach to researching the history of the world. SGI and Kalev Leetaru, Assistant Director for Text and Digital Media Analytics at the University of Illinois, set out to map the full contents of Wikipedia’s English-language edition using a history analytics application.
To implement the application, Leetaru took advantage of the UV 2000’s global memory architecture and high-performance capabilities to perform in-memory data mining. According to the press release, the project can now visually represent historical events using dates, locations, and sentiment data gleaned from the text.
Leetaru recently published Culturomics 2.0, which analyzed 100 million global news articles spanning 25 years and built a network of 10 billion people connected by 100 trillion relationships. The resulting 2.4-petabyte dataset visualized changes in society, including the lead-up to the Arab Spring and the location of Osama bin Laden.
That led to the idea of building a historical map based on Wikipedia entries. The project encompassed a wide range of analysis, generating videos, graphs, and charts detailing any number of relationships. Examples include connectivity structures that plot and cross-reference people mentioned in the same article, and graphs charting the sentiment of the online encyclopedia’s entries across a millennium of history.
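The article does not detail how those connectivity structures were built, but the general idea of linking people who appear in the same article can be sketched as a co-occurrence graph. The following is a minimal, hypothetical Python example; the article titles, names, and use of the networkx library are illustrative assumptions, not details from the project.

    # Hypothetical sketch: build a co-occurrence graph of people named in the same article.
    # The article titles, people, and use of networkx are illustrative assumptions.
    from itertools import combinations
    import networkx as nx

    articles = {
        "Battle of Hastings": ["William the Conqueror", "Harold Godwinson"],
        "Norman Conquest": ["William the Conqueror", "Edward the Confessor", "Harold Godwinson"],
    }

    graph = nx.Graph()
    for title, people in articles.items():
        # Every pair of people mentioned in the same article gets an edge (or a heavier one).
        for a, b in combinations(sorted(set(people)), 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)

    # People ranked by how many others they co-occur with across articles.
    print(sorted(dict(graph.degree()).items(), key=lambda kv: kv[1], reverse=True))

At Wikipedia scale the same idea would run over millions of articles held entirely in memory, which is where the UV 2000’s shared-memory design comes into play.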
This does not mark the first time a project has attempted to map Wikipedia entries. Previous attempts involved manual metadata entry, which resulted in a narrower scope of location data. In this case, SGI and Leetaru were able to identify and build connections based on every location and date found in Wikipedia’s four million pages.
To achieve these results, the entire English-language Wikipedia dataset was loaded into the UV 2000’s memory, although no specifics were provided about how much RAM that involved or how many processors were used. The UV 2000 architecture can scale up to 4,096 threads and 64 TB of memory, using Intel Xeon E5-4600 processors.
Once in memory, the Wikipedia data was geo- and date-coded using algorithms that tracked locations and dates in text. An average article included 19 locations and 11 dates. The resulting connections were then placed in a large network structure representing the history of the world. With all tags and connections established, visual analysis of the entire dataset could be generated in “near real-time.”
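The press materials do not spell out the tagging algorithms themselves, but the basic pattern of scanning article text for dates and locations and folding them into a network can be illustrated with a small, hypothetical Python sketch; the regular expression, toy gazetteer, and data structures below are assumptions for illustration, not the project’s actual code.

    # Hypothetical sketch of geo- and date-coding article text; not the project's actual algorithm.
    import re
    from collections import defaultdict

    GAZETTEER = {"Hastings", "Normandy", "London"}     # toy stand-in for a real place-name lookup
    YEAR_PATTERN = re.compile(r"\b([1-9][0-9]{3})\b")  # naive four-digit year matcher

    def tag_article(text):
        """Return the years and known place names mentioned in one article's text."""
        years = {int(y) for y in YEAR_PATTERN.findall(text)}
        places = {p for p in GAZETTEER if p in text}
        return years, places

    # Network keyed by (year, place): which articles connect a date to a location.
    history_network = defaultdict(set)

    def add_to_network(title, text):
        years, places = tag_article(text)
        for year in years:
            for place in places:
                history_network[(year, place)].add(title)

    add_to_network("Battle of Hastings",
                   "The battle was fought near Hastings in 1066, after the fleet left Normandy.")
    print(dict(history_network))

With every article tagged this way and the whole structure held in RAM, queries and visualizations over the complete network can be answered without touching disk, which is what makes the “near real-time” analysis possible.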
The in-memory application model gave Leetaru the ability to test theories and research historical data in a way that has never been done before. “It’s very similar to using a word processor instead of using a typewriter,” he said. “I can conduct my research in a completely different way, focusing on the outcomes, not the algorithms.”