Recently, we talked about the advances NICE DCV has made to push pixels from cloud-hosted desktops or applications over the internet even more efficiently than before. Since we published that post on this blog channel, several customers have asked whether all this efficient pixel-pushing could drive up the outbound data charges on their AWS bill. These are the “data-out” fees you see each month: they’re metered on data flowing out of the cloud across the internet, and they’re typically quite small, often falling within the free tier.
Usually, the best answer to any question about the cloud is “just give it a try”, since the cost of experimentation is small. But since we heard this question from several customers in a short space of time, we decided to try it on your behalf and share the details in this post. The bottom line? The charges are unlikely to be significant unless you’re doing the kind of intensive streaming that gamers do, and there are easier optimizations (like EC2 Instance Savings Plans) that will have more impact.
Background
You might recall that – using a new transport called QUIC (RFC 9000) – DCV is able to mask even more of the effects of distance, so end users running complex, graphically intensive applications in the cloud feel like they’re just across campus from the data center. They don’t see “buffering” messages, and the video stream doesn’t stall when there’s transient congestion somewhere on the internet.
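If you want to try QUIC yourself, it’s an opt-in setting on the DCV server. Here’s a minimal sketch of the relevant dcv.conf excerpt – check the NICE DCV Administrator Guide for your version, since defaults and parameter names can change:

```ini
# /etc/dcv/dcv.conf (Linux) -- opt in to the QUIC frontend on the DCV server.
# QUIC runs over UDP (port 8443 by default), so that port also needs to be
# reachable, for example in your security group rules.
[connectivity]
enable-quic-frontend=true
```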
Many things affect streaming performance. Latency, bandwidth, packet rates, and reliability all factor into whether a user will notice that the connection between their desktop and the server is anything less than perfect. But these are network supply-side factors. The demand side is about how many pixels we try to push down a connection of varying (and probably unpredictable) quality. DCV works to optimize this equation by moving pixels only from the parts of the screen that have changed, and by retransmitting the fragments of frames lost to dropped packets only when necessary.
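To make that “only move what changed” idea concrete, here’s a minimal Python sketch of tile-based change detection. This is illustrative only – not DCV’s actual implementation – and the tile size and flat-list frame representation are assumptions for the example:

```python
TILE = 64  # tile edge length in pixels (an assumption for illustration)

def changed_tiles(prev, curr, width, height):
    """Return the (x, y) origins of tiles whose pixels differ between frames."""
    dirty = []
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            x_end = min(tx + TILE, width)
            for y in range(ty, min(ty + TILE, height)):
                row = y * width
                if prev[row + tx:row + x_end] != curr[row + tx:row + x_end]:
                    dirty.append((tx, ty))
                    break  # tile is dirty; no need to scan its remaining rows
    return dirty

# Example: a mostly static desktop where a single pixel changed.
w, h = 256, 128
frame_a = [0] * (w * h)
frame_b = list(frame_a)
frame_b[10 * w + 10] = 255                     # one pixel in the first tile
print(changed_tiles(frame_a, frame_b, w, h))   # -> [(0, 0)]
```

On a typical desktop session most tiles are unchanged from frame to frame, so an approach like this means only a small fraction of the screen ever needs encoding and sending.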
The combination of these optimizations is what lets us continuously innovate ahead of voracious pixel-generating industries, like gaming or live streaming. You can reasonably expect to stream 4K gaming content at up to 60 frames per second (FPS) over a decent domestic-grade internet connection. In the broader scheme of things, this is amazing, and it relies on technology advances over the last 20 years – chiefly in video compression – that far outstrip the growth in network bandwidth, which you might have assumed was the primary factor.
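Some back-of-envelope arithmetic shows why compression, not raw bandwidth, does the heavy lifting. The compressed bitrate below is an assumed ballpark, not a measured DCV figure:

```python
# Back-of-envelope arithmetic behind the 4K/60 FPS claim.
width, height, fps, bits_per_pixel = 3840, 2160, 60, 24

raw_bps = width * height * fps * bits_per_pixel
print(f"Raw (uncompressed) 4K/60 stream: {raw_bps / 1e9:.1f} Gbit/s")  # ~11.9

# Typical compressed bitrates for 4K/60 video sit in the tens of Mbit/s;
# 40 Mbit/s is an assumption here (it varies by codec and content).
compressed_bps = 40e6
print(f"Compression ratio: ~{raw_bps / compressed_bps:.0f}:1")         # ~299:1
```

An uncompressed 4K/60 stream would need roughly 12 Gbit/s – far beyond any domestic connection – so it’s a compression ratio in the hundreds that makes the experience possible.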
Predicting data-transfer charges in advance can feel like guesswork for customers using the cloud for the first time. That’s because in a traditional, on-premises environment you pay a single, up-front (and often quite large) fee for an always-on internet connection for your whole data center. It costs you money whether you’re using it or not, and you must know many months (or years) in advance how fat that pipe needs to be to satisfy all your users (and you probably never will). The cloud was built to reinvent all of that, and in doing so, ‘pay only for what you use’ became the operating principle. If you’re wondering why there’s no data-in charge: data movement over the internet is an incredibly lop-sided equation – far more data flows out of the cloud than into it – so just measuring data-out pretty much covers it.
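As a worked example of ‘pay only for what you use’, here’s a rough data-out cost model for a month of remote desktop sessions. All three inputs are assumptions – substitute your own numbers, and check current EC2 data transfer pricing for your Region (a monthly free allowance may also apply):

```python
# A rough data-out cost model for a month of remote desktop use.
stream_mbps = 2        # assumed average outbound bitrate of a desktop session
hours = 160            # a month of 8-hour working days
rate_per_gb = 0.09     # assumed first-tier internet data-out rate, in USD

gb_out = stream_mbps / 8 * 3600 * hours / 1000   # Mbit/s -> GB for the month
print(f"~{gb_out:,.0f} GB out -> ~${gb_out * rate_per_gb:,.2f} for the month")
```

Under those assumptions, a light document-editing workload comes to around 144 GB and roughly $13 for the month – which is why a sustained, gaming-style stream at ten times that bitrate is where the charges start to matter.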
Our setup
Given that DCV is used by a diverse set of customers, we needed to simulate several environments to make sure we weren’t misrepresenting anyone’s usage pattern. We settled on testing three screen resolutions: 1024×768 (Standard Definition, or SD), 1920×1080 (High Definition, or HD), and 3840×2160 (4K). Across that range, the pixel count per frame grows more than tenfold, from roughly 0.8 million to 8.3 million. As you scale through that range, you’ll also find that applications and GPU boards capable of pushing 4K are likely running at higher frame rates – including 60 FPS for the most intensive scenarios.
Those scenarios had to vary, too. Our starting point was a simple document-editing or slide-preparation session using Microsoft Office. Next, we simulated a CAD/CAE environment, using ParaView to manipulate a complex 3D structure undergoing fluid dynamics analysis. Finally, we stressed everything (including our home broadband connections) by streaming highly animated 4K content from YouTube, along with some game benchmarks that are widely used in the industry to punish GPUs (we used Unigine’s Heaven and Superposition in our tests).
The advantage of the game benchmarks is that they simulate the kind of action frequently seen in a game environment – tens or hundreds of objects moving at the same time, in all directions, during some thrilling moment of the adventure.
As a baseline, we ran all our tests on an Amazon EC2 g4dn.xlarge instance, which has a single NVIDIA T4 GPU, 4 vCPUs, and 16 GB of RAM. Your choice of instance should match the intensity of the graphics performance your application needs. It’ll also change if you’re sharing the GPU between multiple users or streams, which you can do with DCV – something you might do if you’re running a video streaming service rather than an engineering design company. You can see our results below and make your own judgments about how your workloads depart from this baseline.
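If you’d rather measure than estimate, one way to watch a session’s outbound volume on your own instance is the per-instance NetworkOut metric in Amazon CloudWatch. Here’s a sketch using boto3 – the instance ID is a placeholder, and note that NetworkOut counts all traffic leaving the instance, not just the DCV stream:

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="NetworkOut",               # bytes out, summed per period
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=300,                            # 5-minute buckets
    Statistics=["Sum"],
)
total_gb = sum(p["Sum"] for p in resp["Datapoints"]) / 1e9
print(f"~{total_gb:.2f} GB left the instance in the last hour")
```

Run this during a representative session and you’ll have your own number to plug into the cost arithmetic above.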
Read the full blog to see test results across a range of scenarios, resolutions, and frame rates using NICE DCV.
Reminder: You can learn a lot from AWS HPC engineers by subscribing to the HPC Tech Short YouTube channel, and following the AWS HPC Blog channel.