Today AMD’s John Fruehe reminisced about last year, when his company announced the “What Would You Do With 48 Cores” challenge to highlight the possibilities that the Opteron 6100 Series processors might bring into the world. He noted that there were, not surprisingly, some rather strange suggestions, but one responding company did catch his eye due to its unique slant on opening 3D rendering to an entirely new market.
When high definition volume rendering company Fovia landed on Fruehe’s radar, it led him to explore the computational processes behind taking a bunch of ones and zeros and “magically” rendering them into a feature-length animated film or television show. Looking beyond those purposes, he discovered that high definition 3D rendering has just as much to do with data analysis as it does with entertainment.
Fovia has developed its own 3D volume renderer that has moved beyond the confines of television and film. Companies like GE, Pfizer, and NASA are among the organizations that have found better ways to analyze data—in high definition.
As Fruehe noted today: “Real-time decisions like medicine and security can be greatly boosted by delivering larger and more detailed data. Doctors often have to make decisions about treatment in a short period of time, sometimes because of costs and sometimes because a life is hanging in the balance. Clarity, as well as speed, matter.”
He claims that Fovia is on the bleeding edge of refining data analysis with its high definition volume rendering software, in part because that software is an ideal platform for more cores. He says that “with every additional core, you are able to display and interact with more data in the same amount of time.” In other words, if a user wants to dive deeper for more detail and clarity (as well as performance), throwing cores at the problem actually works quite well.
If this interests you, the original piece has the results of a test run AMD asked of Fovia: running the same rendering job at a range of core counts to gauge performance at each level. If you’re feeling too lazy to click, let’s just say that the parallel tasks scaled nearly linearly.
The results indicated that even with five highly demanding (usage-wise) clients accessing the server simultaneously, the server was able to spread the load evenly across all of the cores.
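To make “spreading the load evenly” concrete, here is a toy sketch of the simplest scheduling policy that produces an even spread: round-robin assignment of jobs to cores. This is purely illustrative and not Fovia’s actual scheduler; the client, job, and core counts are hypothetical.

```python
from collections import Counter
from itertools import cycle

def assign_jobs(num_jobs, num_cores):
    """Round-robin job-to-core assignment: keeps per-core load
    within one job of perfectly even."""
    cores = cycle(range(num_cores))
    return [next(cores) for _ in range(num_jobs)]

# Five hypothetical clients, each submitting 96 render jobs, on 48 cores:
load = Counter(assign_jobs(5 * 96, 48))
print(min(load.values()), max(load.values()))  # 10 10 -- perfectly even
```

Real schedulers must also cope with jobs of uneven duration, which is where work-stealing or dynamic queues come in, but the principle of keeping every core busy is the same.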
As Fruehe concluded:
When you compare the difference in performance per core, you see that at 48 threads running on 48 cores that you are at 93.77% of the per-core performance of a core when only 8 threads are running. Truly amazing scalability – with every core you throw at the problem you are receiving almost pure linear scalability, truly a feat both for the platform and for the software.
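Fruehe’s 93.77% figure is a per-core efficiency ratio: throughput per core at 48 threads divided by throughput per core at 8 threads. A minimal sketch of that arithmetic, using hypothetical throughput numbers chosen only to match the quoted percentage:

```python
def per_core_efficiency(tp_small, cores_small, tp_large, cores_large):
    """Per-core performance at the large core count, relative to
    per-core performance at the small core count."""
    return (tp_large / cores_large) / (tp_small / cores_small)

# Hypothetical numbers: if 8 cores deliver 800 units of throughput,
# near-linear scaling at 93.77% efficiency gives 48 cores about 4501.
print(round(per_core_efficiency(800, 8, 4501, 48), 4))  # 0.9377
```

At perfect linear scaling the ratio would be exactly 1.0, so anything above roughly 0.9 at a 6x increase in core count is strong scaling for a parallel workload.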
While this was all in the context of AMD Opteron 6100 Series processors, the concept of high definition data rendering that scales well is worth a mention—no matter what the processor in question might be.