For decades, musicians have been using sound synthesizers to generate audio that replaces or complements acoustic instruments. However, some types of complex sound synthesis have not been possible on traditional CPUs. Now, sound researchers are turning to GPUs to get the processing power needed to take on tougher audio challenges.
Bill Hsu and Marc Sosnick-Pérez explore some of the newer GPU techniques being used for synthesizing sounds in an article titled “Finite difference-based sound synthesis using graphics processors,” which was recently published in the Association for Computing Machinery’s online publication, acmqueue.
Due to a lack of computing power, sound synthesizers were long forced to use fairly rudimentary techniques to create sounds in real time, the authors write. These include computing simple waveforms, using sampling and playback, and applying spectral modeling techniques to model waveforms. A common thread among these techniques is that “they work primarily with a model of the abstract sound produced by an instrument or object, not a model of the instrument or object itself,” Hsu and Sosnick-Pérez write.
As computing power increased, researchers discovered they could create audio waveforms in an entirely new way: by simulating the physical properties of the objects and instruments themselves. After a detailed numerical model of an object or instrument is created, it can then be “played” as it would be in the real world.
“By simulating the physical object and parameterizing the physical properties of how it produces sound,” the authors write, “the same model can capture the realistic sonic variations that result from changes in the object’s geometry, construction materials, and modes of excitation.”
Several techniques exist for creating these numerical models of objects and instruments, including the finite difference approximation method, which is said to generate very good sound. However, this approach is too computationally intensive to run in real time on CPUs, hence the interest in GPUs, with their multithreaded architectures and high degree of data parallelism.
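To make the computational load concrete, here is a minimal sketch (not the authors' FDS code) of one explicit finite-difference update step for the 2D wave equation on a small grid; grid size, the Courant number, and the listening point are illustrative assumptions:

```python
# Illustrative sketch of the finite difference approximation method:
# one explicit update step of the 2D wave equation on an N x N grid.
# A real simulation uses a far larger grid and tuned physical constants.

N = 16            # grid points per side (assumed, for illustration)
c2 = 0.25         # Courant number squared (c*dt/dx)^2; <= 0.5 for stability

# Plate displacement at the previous and current time steps.
u_prev = [[0.0] * N for _ in range(N)]
u_curr = [[0.0] * N for _ in range(N)]
u_curr[N // 2][N // 2] = 1.0   # "strike" the plate at its center

def step(u_prev, u_curr):
    """Advance the simulation by one time step; returns the new grid."""
    u_next = [[0.0] * N for _ in range(N)]
    for i in range(1, N - 1):          # boundary points stay clamped at 0
        for j in range(1, N - 1):
            laplacian = (u_curr[i + 1][j] + u_curr[i - 1][j] +
                         u_curr[i][j + 1] + u_curr[i][j - 1] -
                         4.0 * u_curr[i][j])
            u_next[i][j] = 2.0 * u_curr[i][j] - u_prev[i][j] + c2 * laplacian
    return u_next

# "Listen" at one grid point: each time step yields one audio sample.
samples = []
for _ in range(8):
    u_prev, u_curr = u_curr, step(u_prev, u_curr)
    samples.append(u_curr[N // 2][N // 2 + 1])
```

Note that every interior grid point is updated independently from its neighbors' previous values; it is exactly this structure that maps well onto a GPU's data-parallel threads.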
In their paper, Hsu and Sosnick-Pérez compare how well CPU- and GPU-based systems perform sound synthesis using the finite difference approximation method. The pair used their own software package, the Finite Difference Synthesizer (FDS), in the tests. FDS simulates a vibrating plate (think drum) and runs in a CUDA environment on Mac OS and Linux.
While the results varied, the GPU-based systems consistently outperformed the CPU-only systems. In some cases, a GPU-based system was able to deliver acceptable CD-quality sound on a two-dimensional grid (think cymbal) that was nearly 50 percent bigger than what a CPU-based system could handle.
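Some back-of-envelope arithmetic shows why grid size matters so much for real-time synthesis. CD-quality output means 44,100 audio samples per second, and the method requires one full grid update per sample; the grid sizes and per-point operation count below are illustrative assumptions, not figures from the article:

```python
# Rough cost model (illustrative): grid-point updates and arithmetic
# operations per second needed for real-time, CD-quality synthesis.

sample_rate = 44_100     # CD-quality audio samples per second
ops_per_point = 8        # assumed adds/multiplies per stencil update

def updates_per_second(n):
    """Grid-point updates per second for an n x n grid in real time."""
    return n * n * sample_rate

def ops_per_second(n):
    return updates_per_second(n) * ops_per_point

# Hypothetical grids: 121 is ~50% bigger per side than 81.
cpu_side, gpu_side = 81, 121
print(ops_per_second(cpu_side))   # 2,314,720,800 (~2.3 billion ops/s)
print(ops_per_second(gpu_side))   # 5,165,344,800 (~5.2 billion ops/s)
```

Because the cost grows with the square of the grid's side length, even a modest increase in grid size pushes the workload into billions of operations per second, which is where a GPU's parallelism pays off.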
There are several caveats to using GPUs with this method, according to the researchers. The first is kernel launch overhead, which manifests as a potentially significant delay. The second is that limits on the number of threads may restrict how the simulation is mapped to the GPU. The third is a potential inability to synchronize the execution of threads. Some of these problems are more apparent on older GPU architectures and are less of a concern on newer ones, such as NVIDIA's Kepler.
Despite the challenges, the future of using finite difference approximation methods on GPUs to model physical objects and instruments for real-time audio synthesis appears bright.