One of the most fertile research areas to dig for solid cloud computing use cases lies in the life sciences.
This week an article in Scientific American looked at the ways scientists in this field are making use of the cloud. While the author points to a number of case studies highlighting the value of Amazon’s cloud in particular, he also discusses the current limitations of cloud computing resources for biosciences researchers.
The author spoke with Giles Day, the managing director of cloud computing at San Francisco-based Distributed Bio, which provides informatics consulting services to a range of life sciences companies. In his view, clouds are not an appropriate choice for every client. As he told Larry Greenemeier:
“Let’s say you’re producing terabytes of data that takes a relatively short amount of time to compute…In that case, you’re going to spend an awful lot of money and time shifting data into the cloud to gain a very small reward on the actual compute time.”
This particular problem is one that many outside of the life sciences are running up against as well, especially in research computing, where workloads often involve similarly large data volumes. To work around this issue, Day suggests that a hybrid model of cloud computing tends to work best: researchers keep their most data-intensive work in-house while shipping more manageable workloads off to the cloud, thus freeing up vital physical resources.
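The tradeoff Day describes can be made concrete with a back-of-the-envelope estimate. The sketch below uses entirely hypothetical numbers (a 10 TB dataset, a 1 Gbps link, a two-hour compute job) that are not drawn from the article; it simply illustrates how transfer time can dwarf compute time for data-heavy, compute-light workloads.

```python
def transfer_hours(dataset_tb: float, bandwidth_gbps: float) -> float:
    """Hours needed to move `dataset_tb` terabytes over a `bandwidth_gbps` link,
    assuming the link runs at full rated speed (an optimistic assumption)."""
    bits = dataset_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (bandwidth_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

# Hypothetical scenario: 10 TB of instrument data, 1 Gbps connection,
# and a job that needs only ~2 hours of actual cluster time.
upload_h = transfer_hours(10, 1.0)   # roughly 22 hours just moving the data
compute_h = 2.0                      # hours of actual processing
print(f"transfer: {upload_h:.1f} h, compute: {compute_h:.1f} h")
```

Under these assumed numbers, the researcher spends over ten times longer shipping data than computing on it, which is precisely the scenario Day says makes a pure-cloud approach uneconomical.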
As Day stated, “The perfect scenario for using the cloud in biotech is to outsource small amounts of data into the cloud that require a massively parallel computing system for processing and then have the results of that processing returned.”
As for those massive bandwidth bottlenecks, Day reminds researchers that despite modern technology, we still can’t break the laws of physics.