A novel use of supercomputing has resulted in a unique approach to researching the history of the world. SGI and Kalev Leetaru, Assistant Director for Text and Digital Media Analytics at the University of Illinois, set out to map the full contents of Wikipedia’s English-language edition using a history analytics application.
To implement the application, Leetaru took advantage of the UV 2000’s global shared-memory architecture and high-performance capabilities to perform in-memory data mining. According to the press release, the project can now visually represent historical events using dates, locations and sentiment data gleaned from the text.
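The press release does not describe how that sentiment data was actually computed. As a rough illustration only, a simple lexicon-based scorer over article text might look like the sketch below; the word lists are hypothetical placeholders, not the project's vocabulary.

```python
# Illustrative lexicon-based sentiment scoring. The project's actual method is
# not described in the press release; these word lists are hypothetical.
POSITIVE = {"peace", "victory", "prosperity", "celebrated"}
NEGATIVE = {"war", "famine", "defeat", "crisis"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word hits, normalized."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The war brought famine, but peace was celebrated at last."))
```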
Leetaru recently published Culturomics 2.0, which analyzed 100 million global news articles spanning 25 years and built a network of 10 billion people connected by 100 trillion relationships. The resulting 2.4-petabyte dataset was used to visualize changes in society, including the lead-up to the Arab Spring and the likely location of Osama bin Laden.
That led to the idea of building a historical map based on Wikipedia entries. The project encompassed a wide range of analysis, generating videos, graphs and charts detailing any number of relationships. Examples include connectivity structures linking people who were mentioned and cross-referenced in the same article, and graphs depicting the online encyclopedia’s sentiment across a millennium of history.
This does not mark the first time a project has attempted to map Wikipedia entries. Previous attempts involved manual metadata entry, which resulted in a narrower scope of location data. In this case, SGI and Leetaru were able to identify and build connections based on every location and date found in Wikipedia’s four million pages.
To achieve these results, the entire English-language Wikipedia dataset was loaded into the UV 2000’s memory, although no specifics were provided on how much RAM or how many processors that required. The UV 2000 architecture can scale up to 4,096 threads on Intel Xeon E5-4600 processors and up to 64 TB of shared memory.
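SGI did not detail the ingestion pipeline, but a common way to get a Wikipedia dump into memory is to stream the published XML export and keep the parsed articles resident. The sketch below assumes a locally downloaded dump file (the filename is a placeholder) and is not the project's actual code.

```python
# Minimal sketch of streaming an English Wikipedia XML dump into memory.
# Assumes a local dump such as enwiki-latest-pages-articles.xml.bz2; the
# project's real ingestion process was not described.
import bz2
import xml.etree.ElementTree as ET

def load_articles(dump_path):
    """Yield (title, wikitext) pairs while parsing the dump incrementally."""
    with bz2.open(dump_path, "rb") as f:
        title, text = None, None
        for event, elem in ET.iterparse(f, events=("end",)):
            tag = elem.tag.rsplit("}", 1)[-1]  # strip the MediaWiki XML namespace
            if tag == "title":
                title = elem.text
            elif tag == "text":
                text = elem.text or ""
            elif tag == "page":
                yield title, text
                elem.clear()  # release parsed XML so the parser's footprint stays small

# On a large shared-memory system, the full corpus can simply be kept resident:
# articles = dict(load_articles("enwiki-latest-pages-articles.xml.bz2"))
```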
Once in memory, the Wikipedia data was geo- and date-coded using algorithms that tracked locations and dates in text. An average article included 19 locations and 11 dates. The resulting connections were then placed in a large network structure representing the history of the world. With all tags and connections established, visual analysis of the entire dataset could be generated in “near real-time.”
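The exact geocoding and date-parsing algorithms have not been published, so the following is only a toy illustration of the tagging-and-linking step: a crude regular expression for years, a hypothetical gazetteer of place names, and the networkx library standing in for the project's network structure.

```python
# Rough illustration of tagging articles with dates and locations, then linking
# co-occurring tags into a graph. The gazetteer and year pattern are toy stand-ins.
import re
import itertools
import networkx as nx

GAZETTEER = {"Rome", "Cairo", "Abbottabad", "Chicago"}  # placeholder place list
YEAR_RE = re.compile(r"\b(1[0-9]{3}|20[0-2][0-9])\b")   # crude match for years 1000-2029

def tag_article(text):
    """Return the set of location and date tags found in one article's text."""
    places = {w.strip(".,") for w in text.split() if w.strip(".,") in GAZETTEER}
    years = set(YEAR_RE.findall(text))
    return places | years

def build_history_graph(articles):
    """Link every pair of tags that co-occur in the same article, weighting repeats."""
    graph = nx.Graph()
    for title, text in articles:
        for a, b in itertools.combinations(sorted(tag_article(text)), 2):
            weight = graph.get_edge_data(a, b, {}).get("weight", 0)
            graph.add_edge(a, b, weight=weight + 1)
    return graph

g = build_history_graph([("Example", "In 1945 the news reached Rome and Cairo.")])
print(g.edges(data=True))
```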
The in-memory application model gave Leetaru the ability to test theories and research historical data in ways that had not been possible before. “It’s very similar to using a word processor instead of using a typewriter,” he said. “I can conduct my research in a completely different way, focusing on the outcomes, not the algorithms.”