June 26, 2008
Forget curing cancer, solving global warming, or unraveling the origin of the universe. They've finally found the real killer app for supercomputing: advancing chocolate science. The United States Department of Agriculture, Mars Inc., and IBM have gotten together to sequence the genome of the cacao plant (Theobroma cacao) -- the origin of cocoa and chocolate.
The IBM team at the T.J. Watson Research Center in Yorktown Heights will use a Blue Gene supercomputer and its expertise in computational biology to map and analyze the cocoa genome. The whole project is expected to take approximately five years, at which point plant breeders should have a much better understanding of what makes the cacao trees tick.
Apparently the plants are subject to a variety of diseases, pests, and environmental hardships in their tropical homeland, so if breeders had access to the decoded cacao genome they could manipulate the plant's genetic traits to increase production. Global supplies of cocoa have been shrinking lately due to drought and disease outbreaks. At the same time, demand is increasing due to all the positive press about the health benefits of chocolate.
With the exception of a small amount of cacao grown in Hawaii, North America is not in the chocolate growing business. But other U.S. agricultural interests -- almonds, raisins, peanuts, and so on -- are partially dependent on chocolate confections. Mars, of course, has a huge interest in ensuring future cacao supplies. Its annual revenue in 2007 was $25 billion, and the company is said to be investing $10 million in the genome project. For its part, Mars intends to make the research results freely available through the Public Intellectual Property Resource for Agriculture, a group that supports agricultural innovation for humanitarian and small-scale commercial purposes.
Although Mars and the USDA didn't mention it, a possible hidden agenda in mapping the cacao genome is its application in genetic engineering. While the general public is currently suspicious of GMOs (genetically modified organisms), the potential value of transferring chocolate-making genes into temperate climate crops could be huge. Even with modern agricultural techniques and advanced breeding, tropical agriculture tends to be difficult to sustain for a variety of reasons. Being able to expand chocolate production into temperate agricultural regions would create a whole new business model for companies like Mars. And while plans for chocolate soybeans are probably not yet on the drawing board, that might be just the kind of GMO that the public could sink their teeth into.
Posted by Michael Feldman - June 25, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.