High Performance 3D Image Reconstruction Platforms

By Nicole Hemsoth

October 26, 2007

by Prof. Dr. Marc Kachelrieß, Professor of Medical Imaging, Institute of Medical Physics (IMP), University of Erlangen-Nuremberg, Germany, and Olivier Bockenbach, Systems Engineer, Mercury Computer Systems, Berlin, Germany

High-resolution tomographic scanners and other 3D technologies provide a number of compelling advantages for diagnostic medical imaging. However, 3D modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are creating ever larger volumes of data, increasing the need for faster and bigger servers, higher network bandwidth, workstations with large memory and fast graphics, as well as advanced diagnostic software.

Advanced 3D multi-slice CT scanners generate more than 2000 projections per second, increasing the need for high-performance platforms that can reconstruct and process medical imaging data in near real time. High-performance systems, such as those based on the Cell Broadband Engine processor technology, allow for the implementation of advanced analytical and statistical CT reconstruction algorithms (and more specifically backprojection algorithms), enhancing image quality while keeping the patient's X-ray exposure as low as possible.

[Figure: backprojection]

Tomographic image reconstruction is computationally very demanding. In all cases the backprojection is the performance bottleneck, owing to its high operation count and the heavy demands it places on the memory subsystem. In the past, solving this problem has led to the use of digital signal processors and of dedicated architectures connecting Application-Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, attempts have also been made to use Graphics Processing Units (GPUs) and the Cell processor.

However much these architectures differ, they share a common property that makes them attractive for implementing backprojection algorithms: a balance of high memory bandwidth and high processing capability. On the other hand, harnessing the power of such devices also involves decisions about computational precision, the handling of the signal dynamics, and the approximations one is willing to accept in order to make efficient use of the available processing power.

Once a valid solution for the implementation of a 3D backprojection algorithm has been found for the device under consideration, there are major tasks to be addressed before the algorithm-device pair can be successfully deployed in a medical Image Reconstruction System (IRS).

The backprojection is the most demanding processing step, but a 3D image reconstruction algorithm also includes pre-processing, filtering and post-processing steps. The devices considered for accelerating the backprojection prove more or less suitable for these other tasks, and accordingly require more or less assistance from other co-processing units. Implementing the complete reconstruction pipeline may therefore require combining several different devices, which influences the design, implementation and maintenance costs. However broad the spectrum of possible solutions may be, clinical and hospital operating conditions place further constraints on the IRS. These requirements are mainly directed towards processing speed, which must match the hospital workflow, but the operating environment also imposes constraints on processing density, power and cooling. The expected level of availability places yet another constraint on the IRS.

Finally, more and more attention is being paid to the cost of the IRS, not only during the design phase but over its complete life cycle. Multiple aspects therefore need to be taken into consideration, such as the life cycle of the underlying technology, the rate at which new devices are introduced and reach end of life, and the level of compatibility offered among devices from the same vendor or product family.

Hardware platforms

The aim of this investigation is to implement a 3D cone-beam perspective backprojection algorithm for the Cell processor and to benchmark its performance against alternatives such as PC-, FPGA- or GPU-based implementations. Four different platforms were selected:

  • the reference platform for image quality: a standard PC with a single Xeon processor clocked at 3.06 GHz and a 533 MHz front-side bus
  • the PCI Express Cell Accelerator Board from Mercury Computer Systems
  • the PCI VantageRT-FCN board from Mercury Computer Systems
  • the G70 GPU from NVIDIA.

Implementation principles

The reconstruction of the volume can be implemented in many different ways. For instance, one could choose x, y or z as the primary processing axis for traversing the voxels. The z direction offers the most interesting properties in terms of processing optimizations and was therefore chosen as the primary processing axis. Similarly, processing the projections to reconstruct the global volume makes the best use of the hardware resources (e.g., registers for processors, BlockRAM for FPGAs) when it is carried out on sub-volumes of regular shape, such as cubes. For these reasons, the overall reconstruction method is based on dividing the volume to be reconstructed into slabs, each slab being processed as a small cube, as sketched below.
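As an illustration of this slab-based organization, the following sketch (in C++) shows a voxel-driven cone-beam backprojection of a single projection into one cubic slab, with z as the innermost loop. The 3x4 projection matrix, the distance weighting and all names are simplifying assumptions made for this illustration; it is not the code of any of the implementations discussed below.

    // Minimal sketch of a voxel-driven cone-beam backprojection over one cubic
    // slab, with z as the innermost loop. The 3x4 matrix P (mapping a voxel
    // (x, y, z, 1) to homogeneous detector coordinates) and all names are
    // illustrative assumptions, not the implementation described in the article.
    #include <cstddef>
    #include <cmath>
    #include <vector>

    struct Projection {
        std::vector<float> pixels;   // detector data, row-major
        int width, height;
        float P[3][4];               // voxel -> detector projection matrix
    };

    // Accumulate one projection into a cubic slab of side N whose first voxel
    // sits at world coordinates (x0, y0, z0) on a unit voxel grid.
    void backprojectSlab(std::vector<float>& slab, int N,
                         float x0, float y0, float z0,
                         const Projection& proj)
    {
        const auto& P = proj.P;
        for (int ix = 0; ix < N; ++ix) {
            for (int iy = 0; iy < N; ++iy) {
                float x = x0 + ix, y = y0 + iy;
                // Along z only the P[r][2]*z terms change, so hoist the rest.
                float u0 = P[0][0]*x + P[0][1]*y + P[0][3];
                float v0 = P[1][0]*x + P[1][1]*y + P[1][3];
                float w0 = P[2][0]*x + P[2][1]*y + P[2][3];
                for (int iz = 0; iz < N; ++iz) {
                    float z = z0 + iz;
                    float w = w0 + P[2][2]*z;            // assumed > 0 here
                    float u = (u0 + P[0][2]*z) / w;      // detector column
                    float v = (v0 + P[1][2]*z) / w;      // detector row
                    int ui = (int)std::lround(u);        // nearest-neighbor lookup
                    int vi = (int)std::lround(v);
                    if (ui < 0 || vi < 0 || ui >= proj.width || vi >= proj.height)
                        continue;
                    // Feldkamp-type distance weighting 1/w^2, assuming P is
                    // normalized so that w grows with the source-voxel distance.
                    slab[(std::size_t)((ix*N + iy)*N + iz)] +=
                        proj.pixels[(std::size_t)(vi*proj.width + ui)] / (w*w);
                }
            }
        }
    }

In the complete reconstruction this routine would be invoked for every slab and every projection; keeping z innermost allows the x- and y-dependent terms to be hoisted out of the inner loop, which is one reason why z is attractive as the primary processing axis.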

The complete reconstruction of a slab requires all projections. However, no slab requires the complete, full-sized projections: the detector area needed to reconstruct a slab is contained in a rectangular region. When a side of the reconstruction volume is parallel to the detector plane, the projection of the slab is a rectangle. At other projection angles, the projection of the slab is a hexagon. The height of the hexagon is greatest when the diagonal of the slices is perpendicular to the detector plane, and it is also greatest for slabs at the top and bottom of the reconstruction volume. The software has been designed to exploit these properties for the reconstruction of the complete volume; the sketch below illustrates how such a bounding region can be obtained.
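To make the bounding-region argument concrete, this hedged sketch projects the eight corners of a slab onto the detector and takes the bounding rectangle of the result. It reuses the Projection structure from the previous sketch; the idealized projection model and all names are again assumptions made purely for illustration.

    // Illustrative only: the detector region needed for one slab can be bounded
    // by projecting the slab's eight corners and taking the min/max of their
    // detector coordinates. Reuses the Projection struct from the sketch above.
    #include <algorithm>
    #include <cmath>

    struct DetectorROI { int uMin, uMax, vMin, vMax; };

    DetectorROI slabFootprint(float x0, float y0, float z0, float side,
                              const Projection& proj)
    {
        float uMin = 1e30f, uMax = -1e30f, vMin = 1e30f, vMax = -1e30f;
        for (int c = 0; c < 8; ++c) {
            float x = x0 + ((c & 1) ? side : 0.0f);      // enumerate the 8 corners
            float y = y0 + ((c & 2) ? side : 0.0f);
            float z = z0 + ((c & 4) ? side : 0.0f);
            const auto& P = proj.P;
            float w = P[2][0]*x + P[2][1]*y + P[2][2]*z + P[2][3];
            float u = (P[0][0]*x + P[0][1]*y + P[0][2]*z + P[0][3]) / w;
            float v = (P[1][0]*x + P[1][1]*y + P[1][2]*z + P[1][3]) / w;
            uMin = std::min(uMin, u); uMax = std::max(uMax, u);
            vMin = std::min(vMin, v); vMax = std::max(vMax, v);
        }
        // Clamp to the physical detector and round outwards.
        return { std::max(0, (int)std::floor(uMin)),
                 std::min(proj.width  - 1, (int)std::ceil(uMax)),
                 std::max(0, (int)std::floor(vMin)),
                 std::min(proj.height - 1, (int)std::ceil(vMax)) };
    }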

Rectification-based or hybrid methods are used to speed up the backprojection [1] and have demonstrated significant performance gains over direct methods. The idea is to re-sample the projection data with bilinear interpolation onto an ideal detector geometry, so that only a nearest-neighbor lookup is needed during the backprojection itself, saving a significant number of cycles per voxel.
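A minimal sketch of the rectification step is given below: each measured projection is resampled onto an up-sampled, ideal detector grid with bilinear interpolation, so that the backprojection can later address it with nearest-neighbor lookups. The mapping from rectified to original detector coordinates depends on the scanner geometry and is passed in as a function here; all names are illustrative assumptions rather than the method of [1].

    // Minimal sketch of the rectification idea: resample a measured projection
    // onto an up-sampled, ideal detector grid with bilinear interpolation, so
    // the subsequent backprojection only needs nearest-neighbor lookups.
    #include <cstddef>
    #include <functional>
    #include <vector>

    std::vector<float> rectifyProjection(
        const std::vector<float>& src, int srcW, int srcH,
        int dstW, int dstH,
        const std::function<void(int, int, float&, float&)>& rectifiedToOriginal)
    {
        std::vector<float> dst((std::size_t)dstW * dstH, 0.0f);
        for (int v = 0; v < dstH; ++v) {
            for (int u = 0; u < dstW; ++u) {
                float uo, vo;                     // position on the original detector
                rectifiedToOriginal(u, v, uo, vo);
                int u0 = (int)uo, v0 = (int)vo;   // bilinear interpolation
                if (uo < 0 || vo < 0 || u0 + 1 >= srcW || v0 + 1 >= srcH)
                    continue;
                float fu = uo - u0, fv = vo - v0;
                const float* p = &src[(std::size_t)v0 * srcW + u0];
                dst[(std::size_t)v * dstW + u] =
                    (1 - fv) * ((1 - fu) * p[0]    + fu * p[1]) +
                         fv  * ((1 - fu) * p[srcW] + fu * p[srcW + 1]);
            }
        }
        return dst;
    }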

PC reference platform

The PC-based implementation is of the hybrid type: it first performs a detector alignment based on up-sampling and bilinear interpolation, followed by a voxel-driven backprojection using nearest-neighbor interpolation. This platform backprojects 512 projections onto a 512^3 volume in 3.21 minutes.

FPGA platform

A complete description of the first implementation can be found in [3]. This platform performs the reconstruction in fixed-point arithmetic and backprojects 512 projections onto a 512^3 volume in about 25 seconds.

GPU platform

Graphical image processing requires extraordinary data re-sampling capabilities, so most modern GPUs assist the processing elements with specialized circuitry for performing interpolations. It therefore makes little sense to try to accelerate the backprojection with hybrid methods on a GPU. This platform performs the reconstruction in floating-point arithmetic and backprojects 512 projections onto a 512^3 volume in about 37 seconds [4].

Cell platform

Implementing a Feldkamp backprojection on the Cell processor consists of distributing the tasks among the processing elements of the processor [2]. The approach selected for this investigation uses the PPE as the manager of the reconstruction process, while the SPEs perform the actual reconstruction work, as sketched below. This platform backprojects 512 projections onto a 512^3 volume in about 17 seconds.
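The division of labor can be pictured with the conceptual sketch below. On the Cell processor the manager is the PPE and the workers are SPE programs that stream data in and out of their local stores via DMA; here ordinary C++ threads stand in for the SPEs purely to illustrate how the manager hands out slabs, so this is not the Cell-specific code used in the investigation.

    // Conceptual sketch of the manager/worker split described above. On the
    // Cell processor the manager role is played by the PPE and the workers are
    // SPE programs fed by DMA; ordinary std::thread workers stand in here.
    #include <atomic>
    #include <thread>
    #include <vector>

    struct SlabJob { int slabIndex; /* origin, projection range, buffers, ... */ };

    void reconstructSlab(const SlabJob& job)
    {
        // On the Cell: DMA the relevant detector region into the SPE's local
        // store, run the backprojection kernel, DMA the slab back to memory.
        (void)job;
    }

    void runReconstruction(int numSlabs, int numWorkers)
    {
        std::atomic<int> next{0};                    // manager's work counter
        std::vector<std::thread> workers;
        for (int w = 0; w < numWorkers; ++w) {
            workers.emplace_back([&next, numSlabs] {
                for (int i = next.fetch_add(1); i < numSlabs; i = next.fetch_add(1))
                    reconstructSlab(SlabJob{i});     // each worker pulls the next slab
            });
        }
        for (auto& t : workers) t.join();            // manager waits for completion
    }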

Image quality

The different implementations were tested with the same data set, and clinical-quality volumes were reconstructed with each of them. For this investigation, a mouse scanned with a TomoScope 30s micro-CT scanner (VAMP GmbH, Erlangen, Germany) was used.

All implementations give images of clinical quality. However, slight differences were observed between the different reconstructed volumes. The deviations from the reference results have different origins.

FPGA implementations suffer from the lack of floating-point support. In traditional implementations on earlier FPGAs, all of the computation has to use a fixed-point representation, with the inherent limit imposed by the width of the hardware multipliers: 18 bits on the Virtex-II Pro. Attempts to use floating point on FPGAs [5] have given interesting results, but the required accuracy has yet to be demonstrated for the backprojection.
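The consequence of the 18-bit multipliers can be illustrated with a small fixed-point sketch: a weight in [0, 1) stored in a Q1.17 format is quantized in steps of 2^-17 (about 7.6e-6), and every multiply in the backprojection inherits that granularity. The format choice and the numbers below are assumptions for illustration only, not the design of [3].

    // Minimal sketch of the precision constraint imposed by 18-bit hardware
    // multipliers: an interpolation or distance weight in [0, 1) is stored as
    // an 18-bit Q1.17 fixed-point value, while the detector sample stays an
    // integer. Purely illustrative; not the FPGA design described in [3].
    #include <cstdint>
    #include <cstdio>
    #include <cmath>

    constexpr int   FRAC_BITS = 17;                 // Q1.17: 1 sign + 17 fraction bits
    constexpr float SCALE     = float(1 << FRAC_BITS);

    int32_t quantizeWeight(float w) { return (int32_t)std::lround(w * SCALE); }

    int main()
    {
        float   weight = 0.333333f;                 // ideal floating-point weight
        int32_t sample = 1234;                      // raw integer detector value

        int32_t wq  = quantizeWeight(weight);       // fits an 18-bit multiplier input
        int64_t acc = (int64_t)wq * sample;         // 18x16-bit product, wide accumulator
        float   fixedResult = (float)acc / SCALE;   // rescale for comparison
        float   exactResult = weight * (float)sample;

        std::printf("exact %.6f  fixed-point %.6f  error %.2e\n",
                    exactResult, fixedResult, std::fabs(exactResult - fixedResult));
        return 0;
    }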

However close it may come, the floating-point arithmetic of an NVIDIA GPU still deviates from the IEEE standard in some respects. The effect on the reconstruction results takes the form of incorrect handling of exceptional cases, such as Not a Number (NaN).

The Cell implementation suffers to a certain extent from inaccuracies related to the computation of estimates instead of exact values for operators such as divide, square root and exponential. The estimates turn out to be accurate to roughly the sixth decimal digit. However small the difference from the exact results may be, the end effect is visible in the reconstructed volume.
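The flavor of such estimate-based arithmetic can be illustrated with a reciprocal computed as an initial estimate refined by Newton-Raphson iterations. The bit-level initial guess below is a portable stand-in for a hardware estimate instruction and is an assumption of this sketch; the point is only that one refinement step yields roughly three correct digits and a second roughly six, the order of accuracy mentioned above.

    // Illustration of "estimate plus refinement" arithmetic: a crude reciprocal
    // estimate is improved with Newton-Raphson iterations, x <- x * (2 - d * x).
    // The bit-trick initial guess is a portable stand-in for a hardware estimate
    // instruction; this is not the Cell code used in the article.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    float reciprocalEstimate(float d)
    {
        // Approximate reciprocal via exponent manipulation (stand-in for the
        // hardware estimate); accurate to only a few bits on its own.
        uint32_t bits;
        std::memcpy(&bits, &d, sizeof bits);
        bits = 0x7EF00000u - bits;
        float x;
        std::memcpy(&x, &bits, sizeof x);
        return x;
    }

    float reciprocalRefined(float d, int iterations)
    {
        float x = reciprocalEstimate(d);
        for (int i = 0; i < iterations; ++i)
            x = x * (2.0f - d * x);                 // Newton-Raphson step
        return x;
    }

    int main()
    {
        float d = 3.7f;
        std::printf("exact       %.9f\n", 1.0 / d);
        std::printf("estimate    %.9f\n", reciprocalEstimate(d));
        std::printf("1 refinement %.9f\n", reciprocalRefined(d, 1));
        std::printf("2 refinements %.9f\n", reciprocalRefined(d, 2));
        return 0;
    }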

However, image quality takes on a different meaning when it is evaluated in connection with reconstruction speed. For example, it takes approximately the same time for a Cell processor to reconstruct a 1024^3 volume as for a PC to reconstruct a 512^3 volume: a 1024^3 volume contains eight times as many voxels, and scaling the measured 17-second Cell time by eight gives roughly 2.3 minutes, comparable to the 3.2 minutes measured on the PC for the smaller volume. Provided that the detector allows for this increased resolution, the higher performance can thus be traded for better volume resolution and, in any case, better image quality.

Performance

The Cell processor offers the best performance of all the architectures investigated. The Cell processor and the GPU we selected are among the most recent technologies available on the market, whereas the Virtex-II is not among the most recent FPGA families, and the PC reference platform has more powerful successors in the form of dual-core and quad-core processors.

The newest Virtex-4 and Virtex-5 parts can run at clock speeds of around 500 MHz, almost five times faster than the part we investigated. Furthermore, Xilinx offers designs that implement DDR2 interfaces on the Virtex-4 and Virtex-5 chips, giving a similar 5x increase in performance for the memory subsystem. Even without considering the other improvements in the newest FPGAs, a 5x increase in performance is the minimum to be expected.

The improvement in performance obtained with the newest dual-core and quad-core architectures is more difficult to estimate, because the way I/O and memory-access resources are shared and distributed depends on the design of the processor. Nevertheless, even in the best case of a 4x performance improvement, a quad-core system does not come close to the performance of a Cell processor or a GPU, not to mention the newest FPGAs: under that assumption, reconstruction on a standard quad-core system is still at least three times slower than on a Cell processor and six times slower than on a modern GPU.

Software complexity

As pointed out in the implementation sections of this article, the most straightforward and stable implementation was realized on a PC. The Cell processor offers a multi-computing platform comparable to multi-computers such as those developed by Mercury Computer Systems with RACEway, RACE++ and RapidIO. Even though all GPU boards support OpenGL and DirectX, the level of efficiency varies considerably from one GPU board to another, even when they come from the same manufacturer; as a result, performance is not predictable across GPU board generations. The implementation sections also show that coding reconstruction algorithms is much more difficult on FPGAs, mainly because they do not natively offer floating-point operators such as multiply, divide, sine and cosine; these functions must be coded as application-dependent Look-up Tables (LUTs).

System integration

GPUs are intended as the graphics companion processor in every PC, so most modern PCs can accommodate a modern GPU with respect to power supply and cooling. Consequently, any reconstruction hardware that fits within the same power and cooling envelope can be hosted in the same PC. The Cell Accelerator Board, a high-performance accelerator card based on the Cell BE processor, has been designed to fit into this envelope and can be hosted in any modern PC. FPGA-based boards depend on the choices of the board designer; however, FPGAs traditionally draw less power than highly clocked devices such as a GPU or the Cell processor, and are in any case easier to cool.

Conclusion

All basic building blocks, i.e., the GPU, the FPGA and the Cell processor, are available and can deliver the appropriate image quality, albeit with varying degrees of effort. Depending on the evaluation criteria, the optimal choice among FPGAs, GPUs, multi-core PCs and the Cell processor may differ. However, the Cell processor offers a fully programmable architecture, accessible from high-level programming languages such as C. Its use in the gaming industry also suggests long-term availability of parts, which matters for realistic field deployment and maintenance in hospitals.

References

[1] C. Riddell and Y. Trousset. Rectification for cone-beam projection and backprojection. IEEE Transactions on Medical Imaging, 25(7):950-962, July 2006.

[2] H.P. Hofstee. Power efficient processor architecture and the Cell processor. Proceedings of the 11th International Symposium on High-Performance Computer Architecture, Feb. 2005.

[3] I. Goddard and M. Trepanier. High-speed cone-beam reconstruction: an embedded systems approach. SPIE Medical Imaging Proceedings, 4681:483-491, 2002.

[4] K. Müller and F. Xu. Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware. IEEE Transactions on Nuclear Science, 52(3):654-663, 2005.

[5] R. Andraka and F. Xu. Hybrid Floating Point Technique Yields 1.2 Gigasample Per Second 32 to 2048 Point Floating Point FFT in a Single FPGA. HPEC Proceedings, 2006.
