When Time Is of the Egress: Optimizing Your Transfers

By Andrew Kaczorek and Dan Harris

July 31, 2012

Running scientific workloads in AWS has traditionally offered a diverse toolkit that lets researchers easily sling data between availability zones, regions, or even continents once the data is inside the infrastructure sandbox. Getting data in and out of AWS, however, has historically been more of a challenge: the available tools are still evolving, and those pesky laws of physics tend to get in the way. Given the rise of enterprises using the cloud for larger data and compute needs, and the complexities that come with it, we thought it would be helpful to offer tips on optimizing ingress and egress transfers.

Within scientific computing there is a massive disconnect between theoretical conversations about data movement and the real world of actually doing it. We recently performed a data transfer into AWS using the Import/Export service, which allows customers to mail in data on physical media that Amazon then loads into an S3 bucket or EBS volume of their choice. As an experiment, to compare this approach against network-based mechanisms such as multi-stream upload to S3, we recorded all the time it took to prepare and ship the drive to Amazon.

There were several steps to transfer the 317 GB of DNA sequence data into AWS:

  1. Installed AWS Import/Export command line tools.

  2. Created an Import job using AWS command line tools including a manifest and signature.

  3. Realized that the drive is an ext3 file system (and mounting ext3 on OS X is non-trivial).

  4. Created an Ubuntu virtual machine.

  5. Mounted the drive on the Ubuntu VM and wrote the signature file and manifest to the drive.

  6. Physically labeled the drive with a transfer ID that was provided by the registration process.

  7. Packaged and addressed the drive with a specific address that was to be used for the shipment.

  8. Headed to the local FedEx and shipped the drive overnight.

  9. Waited…

  10. Viewed completed transfer logs.

The next step had us moving the data from S3 to an EC2 instance to use it in a computation run. Importing directly to an EBS snapshot is an option, but we decided against it due to the higher cost of storing a full image of the drive, the unknowns associated with the newness of the feature, and the constraints it places on the contents of the file system.

Table of Shipping and Transport Times:

  Prepare drive                      3 hr (concurrent with other project work)
  Drive shipped                      4:12 PM EST (FedEx log)
  Drive arrives IAD                  3:20 AM EST (FedEx log)
  Drive arrives at Amazon facility   9:45 AM EST (FedEx log)
  Drive accepted by Amazon           1:13 PM EST (I/E toolkit log)
  Data transfer begins               5:40 PM EST (I/E toolkit log)
  Data transfer completes            9:17 PM EST (I/E toolkit log)

Here is a summary of the entire activity:

  Total time to transfer 317 GB                       32 hours
  Extrapolated total time to transfer 1 TB            39.8 hours
  Throughput of active AWS transfer                   199 Mbps
  Active AWS transfer of 317 GB                       3.6 hours
  Extrapolated active AWS transfer of 1 TB            11.4 hours
  Overall throughput of 317 GB transfer               22.5 Mbps
  Extrapolated overall throughput of 1 TB transfer    57.2 Mbps
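The figures in the summary table can be sanity-checked with some quick arithmetic. A minimal sketch follows; the unit conventions (treating "GB" as GiB and "Mbps" as 2^20 bits per second) are our assumption, chosen because they most closely reproduce the published numbers:

```python
# Reproduce the summary table's throughput figures.
# Assumption (ours, not stated in the article): "GB" means GiB and
# "Mbps" means mebibits per second, since that combination matches
# the published numbers most closely.

def throughput_mbps(gb, hours):
    """Average throughput in Mbit/sec for `gb` gibibytes over `hours`."""
    megabits = gb * 1024 * 8          # GiB -> MiB -> Mibit
    return megabits / (hours * 3600)  # divide by elapsed seconds

overall_317 = throughput_mbps(317, 32.0)   # ~22.5 Mbps (matches the table)
active_317 = throughput_mbps(317, 3.6)     # ~200 Mbps (table says 199)
active_1tb = throughput_mbps(1000, 11.4)   # ~200 Mbps (table says 199)
print(round(overall_317, 1), round(active_317), round(active_1tb))
```

The small gaps against the table (199 vs. 200 Mbps) are consistent with the article rounding elapsed times to one decimal place.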

We compared this import job against the results of some recent multi-stream upload tests, performed over an envy-inducing 5 Mbps uplink (versus the 1 Mbps connections many of us live with).

  File Size              Transfer Time    Avg Speed
  250 MB – one thread      413 seconds    0.605 MB/sec (4.84 Mbit/sec)
  250 MB – 30 threads      412 seconds    0.606 MB/sec (4.84 Mbit/sec)
  1 GB – one thread      1,695 seconds    0.604 MB/sec (4.83 Mbit/sec)
  1 GB – 30 threads      1,693 seconds    0.605 MB/sec (4.84 Mbit/sec)
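The article does not name the upload tooling, but the mechanism behind multi-stream upload to S3 is multipart upload: split the object into numbered parts and push them concurrently. The sketch below shows that shape with the network call stubbed out (`upload_part` here is a stand-in, not a real AWS API). It also illustrates why 30 threads were no faster than one in the tests above: all the streams share the same saturated uplink.

```python
# Sketch of a multi-stream ("multipart") upload with the network call
# stubbed out. A real S3 multipart upload has the same shape: initiate,
# upload numbered parts (in parallel), then complete.

import io
from concurrent.futures import ThreadPoolExecutor

PART_SIZE = 5 * 1024 * 1024  # S3's minimum part size is 5 MB

def split_parts(fileobj, part_size=PART_SIZE):
    """Yield (part_number, chunk) pairs, numbered from 1 as S3 does."""
    number = 1
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            return
        yield number, chunk
        number += 1

def upload_part(part):
    number, chunk = part
    # A real implementation would PUT `chunk` to the service here and
    # keep the returned ETag for the final "complete" call.
    return number, len(chunk)

def multi_stream_upload(fileobj, threads=30):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(upload_part, split_parts(fileobj)))

data = io.BytesIO(b"x" * (12 * 1024 * 1024))  # a fake 12 MB object
parts = multi_stream_upload(data)
print(len(parts))  # 3 parts: 5 MB + 5 MB + 2 MB
```

Parallelism only pays off when the source link has headroom; once the pipe is full, adding threads just slices the same bandwidth more finely.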

In these tests we were able to saturate the upload bandwidth, and we have seen the same at customer sites with much higher outbound rates in the 50 Mbps range. In other words, if there is a bottleneck in delivering data over the wire, it is on the source end, not on the EC2 end of the line.

The results suggest that a company with a 50 Mbps uplink, throttling transfers to 70 percent of total bandwidth for an outbound rate of 35 Mbps, could move almost 500 GB over the wire in the same 32 hours it took to complete the transfer by shipping the drive. To be fair, our drive wasn't filled to capacity; extrapolating the load time to a full 1 TB drive adds about 8 more hours and raises the effective throughput of the Import/Export approach to roughly 58 Mbps. That rate would improve further if the time it takes to prep the drive were reduced.
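The "almost 500 GB" break-even figure follows directly from the numbers above. A back-of-the-envelope check, using the same GiB/Mibit conventions we assumed for the summary table:

```python
# Back-of-the-envelope check of the ~500 GB break-even claim, using the
# same GiB/Mibit unit conventions assumed for the summary table.

uplink_mbps = 50 * 0.70          # 70% of a 50 Mbps uplink = 35 Mbps
seconds = 32 * 3600              # the 32-hour door-to-door import time
megabits = uplink_mbps * seconds
gigabytes = megabits / 8 / 1024  # Mibit -> MiB -> GiB
print(round(gigabytes))          # ~492 GB, i.e. "almost 500 GB"
```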

What we found from our experiment is that the nature of your workflow should drive the choice of transfer method. If you are producing a constant flow of data at a rate that matches your allotted upload bandwidth, streaming over the network is the better option. If, on the other hand, you have a large, pre-existing data set and no time to wait for it to upload, consider Amazon's Import/Export service.

Initiating a transfer entirely in software and having the data eventually make its way into the cloud without getting up from your desk is not always practical. For example, a 317 GB payload takes approximately 30 hours to reach AWS via the Import/Export job approach, but about 30 days over the wire if a 1 Mbps uplink were saturated 24/7. Given a typical enterprise uplink of 50 Mbps, the tables would be turned. And let's not forget the non-technical factors involved in the Import/Export approach, such as the hassle of handling USB drives, cardboard, packing tape, and cranky shipping depot employees.

Lastly, if the over-the-wire transfer is projected to take longer than a business week, use an AWS Import/Export job instead. Until bandwidth becomes more ubiquitous and plentiful, Import/Export remains an extremely viable way of managing the ingress and egress of data.
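That rule of thumb can be sketched as a small helper. The "business week" threshold and unit conventions here are our assumptions; the example payloads and uplink speeds come from the article:

```python
# A rule-of-thumb helper based on the conclusion above: if the
# over-the-wire transfer would exceed a business week, ship a drive.
# The threshold and GiB/Mibit unit conventions are our assumptions.

BUSINESS_WEEK_HOURS = 5 * 24  # generous: five full days

def wire_hours(gb, uplink_mbps):
    """Hours to push `gb` GiB over a fully saturated uplink."""
    return gb * 1024 * 8 / uplink_mbps / 3600

def prefer_import_export(gb, uplink_mbps):
    return wire_hours(gb, uplink_mbps) > BUSINESS_WEEK_HOURS

# The article's 317 GB payload:
print(round(wire_hours(317, 1) / 24))   # ~30 days on a 1 Mbps uplink
print(round(wire_hours(317, 50), 1))    # ~14.4 hours at 50 Mbps
print(prefer_import_export(317, 1))     # True  -> ship the drive
print(prefer_import_export(317, 50))    # False -> stream it
```

The two example calls reproduce the "tables would be turned" observation: the same payload that argues for shipping a drive on a 1 Mbps uplink comfortably fits over the wire at 50 Mbps.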

Editor’s Note: The original byline was incorrectly attributed to Cycle Computing CEO Jason Stowe.
