December 02, 2005
While the first digital, filmless hospital was created just seven years ago, PACS is now an accepted technology, offering radiologists and other clinicians the ability to retrieve, share, and remotely access complex, two-dimensional scan data. The amount of information that must be processed, however, has grown exponentially, taxing the ability of standard desktop workstations to process and display the data in a timely fashion.
Radiologists today are experiencing "slice overload": it is simply not possible to view thousands of individual images efficiently in a reasonable amount of time. As a result, more clinicians are now looking toward volume reconstructions, rather than a multitude of static scans, as an efficient and optimal use of the entire scanned data set. With volume exploration capabilities, researchers can examine scans more accurately and more quickly, discovering anomalies that would simply not be apparent from a multitude of two-dimensional views.
"We're undergoing a revolution in CT scanning as a digital input modality," said Robert Cooke, Fuji's executive director of marketing, network systems. "Volume exploration will make much more accurate diagnoses possible, creating great benefits for science and ultimately the patient."
Fuji and SGI are creating a system that eliminates the digital bottlenecks of expanding data, making rapid, economical volume exploration a reality. To accomplish this, data will no longer be rendered on a radiologist's or doctor's desktop but rather on a centralized 3D graphics server. Using SGI's shared-memory, single-system-image architecture integrated with multiple ATI FireGL graphics processing units (GPUs), data processing tasks can be divided among GPUs to minimize rendering time and maximize image quality.
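The divide-and-composite idea behind multi-GPU volume rendering can be sketched in a few lines. The brick partitioning, the stand-in render pass (a maximum-intensity projection), and the GPU count below are illustrative assumptions, not details of the Fuji/SGI system:

```python
# Hypothetical sketch: split a CT volume into bricks along the z-axis,
# render each brick independently (as a GPU would), then composite.
import numpy as np

def partition_volume(volume, n_gpus):
    """Split a 3D volume into roughly equal z-slabs, one per GPU."""
    return np.array_split(volume, n_gpus, axis=0)

def render_brick(brick):
    """Stand-in for a GPU render pass: maximum-intensity projection."""
    return brick.max(axis=0)

def composite(partials):
    """Combine per-GPU partial renders (for MIP, composite with max)."""
    return np.maximum.reduce(partials)

volume = np.random.rand(512, 256, 256)          # synthetic scan data
bricks = partition_volume(volume, n_gpus=4)     # one brick per GPU
image = composite([render_brick(b) for b in bricks])
print(image.shape)  # (256, 256)
```

Because each brick is rendered independently, the per-brick work can run in parallel across GPUs, which is what lets rendering time shrink as GPUs are added.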
"We are proud of the impact that ATI FireGL workstation graphics accelerators are having today in powering the next generation of medical imaging solutions," said Dinesh Sharma, director of Workstation products, ATI Technologies. "Together with Fuji and SGI, ATI is pleased to be playing a key role in bringing to the medical community new capabilities that will dramatically improve the diagnostic process for medical professionals and their patients."
The Silicon Graphics Prism visualization system brings the following benefits to the new Fuji volume exploration PACS solution:
-- Short response time. Radiologists and clinicians can start interacting
with the data within seconds because rendering is local to where these
large data sets are stored. SGI Visual Area Networking (VAN) technology
sends just the reconstructed voxels to the radiologist's desktop while
data still resides on the server.
-- Scalability in data handling capability. Large data sets can be loaded
in main memory due to the 64-bit architecture. System resources such as
CPUs, I/O, memory, storage, and graphics, can be independently expanded
as the hospital's needs grow.
-- Dynamic load balancing. The radiologist is not limited to the
   texture capacity of a single GPU and can use the scalability of the
   Silicon Graphics Prism architecture to load more studies for the best
   diagnosis.
-- Scalability in number of users. Multiple users can share the same
system due to scalable architecture.
-- Scalability in rendering quality. Modern GPUs coupled with scalability
   allow high-quality rendering algorithms to be deployed. A volume can
   be interactively rendered and "tumbled" by a user without the
   resolution sacrifice typical of other 3D systems. This is a key
   enabling technology: diagnoses can be made from volumes even when
   there is no a priori knowledge of where the radiologist needs to look
   for potential disease processes.
-- Maintains existing workflow. The FUJIFILM and SGI solution maintains
the standard diagnostic Synapse workflow. Scans are accessed through
the standard Internet Explorer web browser interface, and data is
transferred through the IP network. Physicians use their existing
monitors and drives; no equipment upgrade or system re-education is
necessary. And because the Silicon Graphics Prism system will reside
alongside Synapse, the two will integrate seamlessly. Researchers can
access volume data using the same workflow techniques with which they
are already familiar.
The Silicon Graphics Prism system brings Visual Area Networking (VAN) technology to diagnostic scan analysis, making it possible to rapidly render volume exploration data and transmit the results to virtually any desktop workstation without sending the raw scan data across the network.
VAN through SGI OpenGL Vizserver software enables the transfer of rich data between the Silicon Graphics Prism and a thin client. To keep the files small, VAN technology transmits only the pixels of the rendered graphic, rather than the raw data itself. As a result, VAN technology can operate on virtually any type of client, including laptops, workstations and, eventually, even PDAs.
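The pixels-not-data idea can be illustrated with a minimal sketch. The render pass and the compression step below are illustrative stand-ins, not the actual Vizserver protocol:

```python
# Hedged sketch of the VAN idea: the server keeps the raw volume and
# ships only a compressed image of each rendered view to the thin client.
import zlib
import numpy as np

def render_view(volume):
    """Stand-in server render: average-intensity projection to 8-bit."""
    img = volume.mean(axis=0)
    return (255 * img / img.max()).astype(np.uint8)

def frame_for_client(volume):
    """Render on the server, then compress just the pixels for transit."""
    pixels = render_view(volume)
    return zlib.compress(pixels.tobytes())

volume = np.random.rand(256, 256, 256)   # ~134 MB of float64 scan data
payload = frame_for_client(volume)       # a few tens of KB per frame
print(len(payload), "bytes sent vs", volume.nbytes, "bytes kept on server")
```

The client only ever decompresses and displays the pixel buffer, which is why even a laptop-class machine can serve as the viewing endpoint.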
"The combination of Silicon Graphics Prism visualization system and Fuji's Synapse PACS solution provides hospitals with a very cost-effective, powerful and flexible centralized system. As technology grows customers will be able to leverage the latest innovations in compute and visualization without changing the entire PACS infrastructure," said Afshad Mistri, senior manager of Advanced Visualization, SGI.