NICE DCV, our high-performance, low-latency remote-display protocol, was originally created for scientists and engineers who ran large workloads on far-away supercomputers but needed to visualize data without moving it. When NICE DCV was born in 2007, gigabit Ethernet was what the cool kids had running between buildings (really cool kids had gigabit to the desktop). Off campus, pricey long-haul commercial links formed the connective tissue of the internet, but were still measured in tens of megabits per second. Domestic broadband connections in large cities were around 300 kilobits/s, and Netflix delivered movies on DVD through the US Postal Service.
This made visualization at a distance hard. Yet we pursued it, because around half of the human brain is dedicated to interpreting visual information, making your eyeballs easily the highest-bandwidth, lowest-latency input device your brain has. What you do with the information after that is for the psychologists (and poets) to ponder, but the existence of microchips, spacecraft, and vaccines is evidence enough that the pursuit was worth it. And from necessity came invention.
DCV made frugal use of very scarce bandwidth because it was lean, used data-compression techniques, and quickly adopted the cutting-edge GPU technologies of the time (this is HPC, after all: we left nothing on the table when it came to exploiting new gadgets). This allowed the team to create a lightweight visualization package that could stream pixels over almost any network. So lean, in fact, that with reasonable bandwidth most users couldn't tell that the data and the supercomputer were hundreds, or sometimes thousands, of miles away. Nonetheless, physics limits how far apart the two can be before the speed of light becomes a factor.
Fast forward to the 2020s, and a generation of gamers, artists, and film-makers all want to do the same thing. Only this time there are way more pixels, because we now have HD and 4K displays (and some people have several), and for most of them, it's 60 frames per second or it's not worth having. Today we have around 12x the number of pixels and around 3x the frame rate compared to the TV of circa 2007. Fortunately, networking improved a lot in that time: a high-end user's broadband connection grew around 60x in bandwidth, but the 120x growth in computing power really tipped the balance in favor of bringing remote streaming to the masses. Still, physics remains: the latency forced on us by the curvature of the earth and the speed of light is still a challenge.
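To see why distance still matters, a back-of-envelope calculation helps. The sketch below assumes light in optical fiber travels at roughly two-thirds of its vacuum speed (about 200,000 km/s, a commonly used approximation) and that the fiber follows the straight-line distance, which real routes never do, so actual latencies are higher.

```python
# Minimum round-trip latency imposed by the speed of light in fiber.
# Assumption: signals in fiber propagate at ~200,000 km/s (about 2/3 c).
C_FIBER_KM_S = 200_000


def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a fiber path."""
    return 2 * distance_km / C_FIBER_KM_S * 1000


# A server ~1,600 km (about 1,000 miles) away: ~16 ms round trip
# at the absolute best, before any routing, queuing, or rendering delay.
print(f"{round_trip_ms(1600):.0f} ms")
```

At 60 frames per second a new frame arrives every ~16.7 ms, so even this idealized round trip consumes a full frame time, which is why no amount of bandwidth growth makes distance free.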
The final element for this also started to fall into place in 2007: the new Amazon Web Services offered a compute service (Amazon EC2) and internet-based storage (Amazon S3). They were new and still finding their way, and today have over two hundred siblings in the AWS services catalog, spanning the globe in 80 Availability Zones (with even more to come). We still haven't beaten physics, but we're making up for it by building our own global fiber network and adding more machinery (including in Local Zones and Wavelength Zones) to get closer to more customers as soon as we can.
Read the full blog here to learn more about how you can deliver your 3-D HPC applications by streaming pixels, not transferring data, with NICE DCV.
Reminder: You can learn a lot from AWS HPC engineers by subscribing to the HPC Tech Short YouTube channel and following the AWS HPC Blog.