Setting Up CUDA in the Cloud
Among the many informative articles on the Amazon Web Services website is a tutorial that describes how to set up CUDA for parallel programming in the AWS Cloud.
The guide, which was posted by AWS Technology Evangelist and Strategist Jinesh Varia, outlines the steps involved in setting up NVIDIA’s CUDA development environment on top of Amazon EC2 Cluster GPU instances. The process relies on an AWS CloudFormation template.
The entire tutorial is outlined in five fairly simple steps. The first is to “get an account if you don’t have one,” so current AWS account holders can check that task off their list. The remaining steps are to Create a Key Pair; Launch the Virtual Server; Configure Remote Access to the Desktop of the Virtual Server; and Clean Up.
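The key pair and launch steps above could be sketched with the AWS command-line interface as follows. This is a hypothetical sketch, not the tutorial's own procedure (the guide works through the AWS Management Console); the key name, region, stack name, and template URL are all placeholders, and the commands are echoed as a dry run rather than executed.

```shell
# Placeholders -- substitute your own values before running for real.
KEY_NAME="cuda-dev-key"          # hypothetical key-pair name
REGION="us-east-1"               # Cluster GPU instances were limited to certain regions
STACK_NAME="CudaDevStack"        # hypothetical CloudFormation stack name
TEMPLATE_URL="https://example.com/cuda-cfn-template.json"   # placeholder template URL

# Step 2: create a key pair for SSH access to the instance.
echo aws ec2 create-key-pair --key-name "$KEY_NAME" --region "$REGION"

# Step 3: launch the virtual server by creating the CloudFormation stack.
echo aws cloudformation create-stack --stack-name "$STACK_NAME" \
    --template-url "$TEMPLATE_URL" \
    --parameters ParameterKey=KeyName,ParameterValue="$KEY_NAME"
```

Dropping the leading `echo` on each line would issue the actual API calls, assuming the AWS CLI is installed and credentials are configured.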
Varia also reminds users to terminate their EC2 instances when they are finished with their GPU clusters in order to avoid further charges.
When you are done with your development work, you should stop or terminate the EC2 instance hosting your virtual server to avoid accruing additional charges. You can choose either to stop your EC2 instances or to terminate the CloudFormation stack.
Stopping the instances gives you the option of restarting them at a later time. While the instances are stopped, you will not be charged for compute time, but you may be charged for storing the instance configuration and state.
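The two cleanup options could be sketched with the AWS command-line interface like this. Again, this is a hypothetical sketch rather than the tutorial's own steps (the guide uses the Management Console); the instance ID and stack name are placeholders, and the commands are echoed as a dry run.

```shell
# Placeholders -- substitute the real IDs from your account.
INSTANCE_ID="i-0123456789abcdef0"   # hypothetical EC2 instance ID
STACK_NAME="CudaDevStack"           # hypothetical CloudFormation stack name

# Option 1: stop the instance. It can be restarted later, and compute time
# is not billed while stopped, but attached storage may still incur charges.
echo aws ec2 stop-instances --instance-ids "$INSTANCE_ID"

# Option 2: delete the whole CloudFormation stack, which terminates the
# instance and removes the associated resources entirely.
echo aws cloudformation delete-stack --stack-name "$STACK_NAME"
```

Removing the leading `echo` on either line would perform the actual operation, assuming the AWS CLI is installed and credentials are configured.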
For an introduction to CUDA 5, Ian Buck, General Manager of GPU Computing Software at NVIDIA, shares a brief overview of the key technologies in this video.