July 05, 2011
Private and public sector irrigation systems are getting a boost from high-performance computing as they harness custom weather data to create smart watering systems that could save billions of gallons of water across the nation.
According to a recent article in Scientific American, many companies and municipalities have relied on irrigation and sprinkler systems that turn on and off at particular times of day without human involvement. During periods of heavy rain, when such watering is unnecessary, dispatching maintenance staff to every sprinkler or irrigation site is a lengthy process and a wasted effort, since the systems must later be reset.
The article points to one example in a Silicon Valley school district where, in 2009, “the district installed new smart controllers that automatically adjust daily watering to the weather.” They describe how “each box, fitted with a microprocessor and antenna, receives local real-time weather information by satellite from the WeatherTRAK climate center supercomputer run by Petaluma, California-based HydroPoint Data Systems.” This data then regulates the watering and irrigation systems, sometimes instructing them to run once in 11 days rather than daily.
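The controllers' behavior can be illustrated with a minimal sketch. This is a hypothetical decision rule, not HydroPoint's actual algorithm: it assumes the controller receives daily rainfall and reference evapotranspiration (ET0) figures, tracks a running soil-moisture deficit, and waters only when that deficit crosses a threshold. All function names, values, and the threshold are illustrative assumptions.

```python
# Hypothetical smart-controller logic (not HydroPoint's real algorithm).
# Inputs are assumed daily weather figures: rainfall and reference
# evapotranspiration (ET0), both in millimeters.

def should_water(rain_mm, et0_mm, soil_deficit_mm, threshold_mm=20.0):
    """Return (water?, new_deficit): water only when the running
    soil-moisture deficit reaches the threshold."""
    deficit = max(soil_deficit_mm + et0_mm - rain_mm, 0.0)
    return deficit >= threshold_mm, deficit

# Simulate 11 days of cool, rainy weather as (rain_mm, et0_mm) pairs.
deficit = 0.0
days_watered = 0
for rain, et0 in [(5, 2), (0, 3), (8, 2), (0, 3), (0, 4),
                  (2, 3), (0, 4), (0, 5), (6, 2), (0, 4), (0, 5)]:
    water, deficit = should_water(rain, et0, deficit)
    if water:
        days_watered += 1
        deficit = 0.0  # watering replenishes the soil

print(days_watered)  # prints 1
```

Under these assumed conditions the sketch waters once over the 11-day stretch rather than daily, the kind of reduction the article describes, because rain and mild temperatures keep the modeled soil-moisture deficit below the trigger threshold on most days.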
The article goes on to point to how this real-time data is being used to regulate and control water output in a way that goes beyond mere timing and watering intervals:
“With most sprinkler systems, property owners set the traditional controller—basically a timer—to irrigate at specific intervals. Often, too much water is lost to evaporation during hot weather or to runoff during cool weather, which can also carry chemicals into the local watershed or ocean. Because outdoor irrigation can suck up 50 percent or more of urban water consumption, smart irrigation services have caught on in drought-prone western states like California, where water prices are relentlessly rising. (Occasional big floods don't help the long-term problem.) HydroPoint now has more than 8,000 clients using 24,000 of its smart controllers, including Walmart, Coca-Cola, Hilton, Jack in the Box and the University of Arizona as well as the cities of Charleston, S.C., Houston and Santa Barbara.”
Full story at Scientific American