November 06, 2008
Challenge participants include OptIPlanet Collaboratory partners in the U.S., Korea, Japan, Australia, Russia and the Czech Republic
CHICAGO, Nov. 6 -- Among this year's Bandwidth Challenge finalists at Supercomputing 2008 (SC08) is a globally distributed, multi-site collaborative experiment to stream high-resolution content over high-speed networks.
The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), along with partner Sharp Laboratories of America, is streaming 4K and Full High-Definition (HD) video, audio and visualizations among three booths on the SC08 show floor in Austin, two Midwestern universities, and several research institutes in Korea, Japan, Australia, Russia and the Czech Republic, to create a sustained global teleconference.
The participating partners already have persistent collaboration spaces in place, consisting of network-connected tiled display walls and common middleware. This cyberinfrastructure is enabling researchers to tackle 21st-century problems--ranging from the origin of the universe to global climate change--by accessing and sharing one or more high-resolution images and animations as well as conducting HD video teleconferences with collaborators worldwide.
The enabling technology is EVL's Scalable Adaptive Graphics Environment (SAGE), middleware for streaming ultra-resolution visualizations, multi-channel audio, laptop content, and HD video camera feeds, from one or more sources, over multi-gigabit networks to a tiled display. SAGE Visualcasting is an application-driven technique that enables distance collaboration by simultaneously replicating and sending multiple visual and audio streams to multiple, variable-sized tiled displays.
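The core Visualcasting idea, replicating each incoming stream once per registered display site so that late joiners can attach without disturbing the sender, can be illustrated with a minimal sketch. This is a toy in-memory model with hypothetical names (`VisualcastBridge`, `push_frame`); the real SAGE middleware streams pixels over multi-gigabit networks and is not shown here.

```python
# Toy sketch of Visualcasting: a bridge receives each frame once
# from the sender and replicates it to every registered display site.

class VisualcastBridge:
    def __init__(self):
        self.receivers = {}  # site name -> frames delivered so far

    def register(self, site):
        """A display site joins the collaborative session."""
        self.receivers[site] = []

    def unregister(self, site):
        """A site may leave at will without disturbing the others."""
        self.receivers.pop(site, None)

    def push_frame(self, frame):
        """Replicate one incoming frame to all current receivers."""
        for buffered in self.receivers.values():
            buffered.append(frame)

bridge = VisualcastBridge()
for site in ("SDSC booth", "SARA booth", "KISTI booth"):
    bridge.register(site)

bridge.push_frame("frame-0")   # one send, three deliveries
bridge.register("Masaryk")     # a late joiner attaches mid-session
bridge.push_frame("frame-1")   # now four deliveries per frame

print(len(bridge.receivers["SDSC booth"]))  # 2 frames delivered
print(len(bridge.receivers["Masaryk"]))     # 1 frame (joined late)
```

The point of the pattern is that the sender transmits each frame once, while the bridge fans it out; sites connect and disconnect dynamically, which is what distinguishes Visualcasting from statically engineered network multicast.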
The SC Bandwidth Challenge, a major annual forum for showcasing leading-edge, international, networked applications, is a friendly yet spirited competition. Finalists compete from the SC show floor during a scheduled hour-long slot, and the winning application gets bragging rights for a year.
As one of this year's finalists, EVL and its collaborators will initiate the Global Visualcasting experiment in the San Diego Supercomputer Center booth, where EVL is again participating with its partners from the University of California, San Diego. EVL will send gigabit streams of data to SARA in the Dutch Research booth and to KISTI in the KISTI Supercomputing Center booth on the show floor, as well as to the University of Michigan and UIC. Up to five additional participants will join at will, including Masaryk University in the Czech Republic, KISTI/GIST in Korea, University of Queensland in Australia, Osaka University in Japan, and the Space Research Institute in Russia. All are members of the OptIPlanet Collaboratory, a group of 36 sites around the world committed to building a persistent networked visualization and collaboration environment based on tools and techniques developed during the National Science Foundation-funded OptIPuter project.
The Bandwidth Challenge session will be visualized on Sharp's new 4K (4096x2160) 64-inch LCD display prototype, a single panel with four times the resolution of HDTV. Given that researchers are already building ultra-high-resolution tiled displays for the office to deal with the scale and complexity of today's data, Sharp sees a market for 4K and higher-resolution displays and display systems in the research laboratory of the future.
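A quick back-of-envelope calculation (our own assumptions, not figures from the release: 24-bit color, 30 frames per second, no compression) shows why streams at these resolutions call for multi-gigabit links, and how 4K compares with Full HD:

```python
# Raw (uncompressed) video bandwidth estimate in gigabits per second.

def stream_gbps(width, height, bits_per_pixel=24, fps=30):
    """Bits per frame times frames per second, in Gbps."""
    return width * height * bits_per_pixel * fps / 1e9

four_k = stream_gbps(4096, 2160)   # Sharp's 4K prototype resolution
full_hd = stream_gbps(1920, 1080)  # Full HD for comparison

print(f"4K:      {four_k:.2f} Gbps")    # roughly 6.4 Gbps raw
print(f"Full HD: {full_hd:.2f} Gbps")   # roughly 1.5 Gbps raw
print(f"ratio:   {four_k / full_hd:.1f}x pixels")
```

The pixel-count ratio comes out near 4.3x, consistent with the "four times the resolution of HDTV" description; in practice compression reduces the raw rates, but even compressed 4K streams sit comfortably in gigabit territory.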
The networks supporting this challenge include CiscoWave, Pacific Wave, National LambdaRail, Michigan LambdaRail, CESNET, TransLight/StarLight, GLORIAD-Russia, KREONet2, AARNet and JGN2plus.
About Electronic Visualization Laboratory, University of Illinois at Chicago
The Electronic Visualization Laboratory (EVL) at University of Illinois at Chicago (UIC) is a graduate research laboratory specializing in the research and development of networked, high-resolution visualization, collaboration and virtual-reality display hardware and software systems, and the design and implementation of international networking infrastructure. It is a joint effort of UIC's College of Engineering and School of Art and Design, and represents the oldest formal collaboration between engineering and art in the country offering graduate MS, PhD and MFA degrees. EVL has received worldwide recognition for developing the original CAVE and ImmersaDesk virtual reality systems, and, more recently, the GeoWall, the 105-Megapixel LambdaVision tiled display and the Varrier autostereoscopic display. EVL is a founding member of StarLight and the Global Lambda Integrated Facility (GLIF), and was a lead institution of the NSF-funded OptIPuter project. A list of OptIPlanet Collaboratory institutions can be found at www.evl.uic.edu/cavern/optiplanet/. For more information, visit www.evl.uic.edu.
About SAGE: Scalable Adaptive Graphics Environment
SAGE is middleware for streaming and managing ultra-high-resolution visualizations and high-definition (HD) video on scalable tiled displays. SAGE uses distributed rendering clusters connected by gigabit networks to support on-demand, real-time collaborative work sessions. SAGE's Visualcasting service enables multi-point collaboration, whereby the visualizations and HD video streams are replicated and sent to multiple sites to enable researchers to simultaneously communicate with each other and share high-resolution visualizations. Visualcasting is application-centric, enabling researchers to dynamically connect to one or more sites for collaborative sessions; this is an important advancement over traditional network multicasting, which is not automatically supported by today's networking infrastructure and requires network engineering to implement. For more information, visit www.evl.uic.edu/cavern/sage.
Source: Electronic Visualization Laboratory, University of Illinois at Chicago