Behind Iceland’s Green Cloud

By Nicole Hemsoth

August 17, 2010

We recently reported on GreenQloud's entrance into the cloud space. The company bills itself as the “provider of the world’s first truly green public compute cloud,” a claim to first place on the great green cloud frontier that raised some hackles in the community following the announcement.

According to GreenQloud’s self-description, its IaaS cloud is “powered by 100 percent renewable geothermal and hydropower energy from Icelandic providers.” These are powerful words, at least in terms of what they invoke in the environmentally conscious user’s mind, and there is no doubt about it: this is a company that chooses its marketing language very carefully.

If we grant that GreenQloud is the first to lay claim to an Icelandic datacenter powered by geothermal energy, there is something worth considering in the company’s announcement. The problem is that it is difficult to prove who was first in Iceland (or do they mean first anywhere?), even though we can be relatively certain that, barring something drastic (a major volcanic eruption, say, or something equally catastrophic), they will not be the last.

HPC in the Cloud recently conducted an email interview with GreenQloud’s CEO, Eirikur Hrafnsson, about being first in the space, energy and cost specifics, and, of course, the disturbing possibilities inherent in storing one’s precious data on a relatively small island where volcanic activity has become part of daily life.

HPCc: GreenQloud’s statement is that it is first in this niche space, but some argue that this is not a first and that other companies have pursued the same route. Do you have a proof point that what you are doing is unique and truly a first?

Hrafnsson: Although there are definitely some green web hosting companies in the world, e.g., running on solar power plus carbon offsets, we are 100 percent sure GreenQloud is the world’s first truly green public compute cloud. Truly green means we run on a 100 percent clean and renewable power grid. Currently, Iceland is the only country in the world that can say that, and we are its only cloud provider.

HPCc: How much cheaper is this model than regular electricity?

Hrafnsson: I think you mean how much cheaper our electricity is versus somewhere else, or versus coal, etc.

The price per kWh varies immensely across the world, but energy prices are getting higher every year. With no dependency on fossil fuels, we are immune to fluctuating fossil fuel prices and can get a 20-year contract with foreseeable pricing that is below most prices in Europe and the US.

That being said, ours is not the cheapest, because you can of course get temporarily cheap but non-sustainable energy in many parts of the world. So the energy prices are good, and when you combine that with the free cooling afforded by Iceland’s cool but temperate climate, we also get savings (less cooling cost) and better, more efficient datacenters.

HPCc: If it is cheaper, then is it possible for you to pass savings on to the end user? If not, why not?

Hrafnsson: We will pass the savings on to the end user. GreenQloud is not a premium service; the cheaper energy is one of the reasons why we can do that. There are other present and future savings involved in using GreenQloud as well. GreenQloud will be the first cloud to transparently show everyone its total energy usage by displaying a live counter on Greenqloud.com. We then break the energy usage down for each user across their virtual computing resources, so they can track their energy use for their carbon accounting.

They can use this information to avoid carbon taxes and save twice: once by not having to buy carbon credits, and again by avoiding the taxes. Then we top off our energy story by showing the user, in understandable terms, how much CO2 they have saved and will save that year according to their usage. In the US, it has recently been suggested that a carbon tax starting at $21 be adopted (carbontax.org). In the UK it has already started and will be implemented in full force in January 2011. And with the European Union requiring a 20 percent reduction of greenhouse gases across the board, there is a great incentive to use a truly green cloud.
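
To make the arithmetic concrete, here is a minimal Python sketch of the kind of per-user carbon accounting described above. The grid emission factor is an illustrative assumption, and the tax rate simply reuses the $21 figure cited above (read as dollars per metric ton); neither is GreenQloud’s published methodology.

```python
# Illustrative sketch only: assumed figures, not GreenQloud's methodology.
FOSSIL_GRID_KG_CO2_PER_KWH = 0.5   # assumed emission factor for a fossil-heavy grid
CARBON_TAX_USD_PER_TONNE = 21.0    # the suggested US carbon tax cited above

def carbon_savings(kwh_used: float) -> dict:
    """Estimate CO2 and tax avoided by running on a 100% renewable grid."""
    co2_avoided_kg = kwh_used * FOSSIL_GRID_KG_CO2_PER_KWH
    tax_avoided_usd = (co2_avoided_kg / 1000.0) * CARBON_TAX_USD_PER_TONNE
    return {
        "co2_avoided_kg": round(co2_avoided_kg, 1),
        "tax_avoided_usd": round(tax_avoided_usd, 2),
    }

# Example: a VM drawing ~200 W around the clock for 30 days uses 144 kWh.
print(carbon_savings(0.2 * 24 * 30))  # {'co2_avoided_kg': 72.0, 'tax_avoided_usd': 1.51}
```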

So being green saves you money. Another thing that saves you money is our cloud data storage. If you are a customer that needs to deliver content to both the North American and European markets with reasonable latency using a service like S3, you would think it would be as simple as putting a file into one of the AWS availability zones. What many don’t realize is that files in S3 don’t get copied across borders, meaning you have to pay twice if you want to reach both audiences. Iceland sits right in the middle between North America and mainland Europe, and with our redundant, multi-terabit, low-latency fiber cables to both, you only have to put your data in one place.
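
For illustration, here is a rough sketch of what “paying twice” looks like in practice, written with today’s boto3 library (which postdates this interview): reaching both audiences on S3 means keeping, and being billed for, a copy of every object in two regional buckets. The bucket names are hypothetical.

```python
# Sketch under stated assumptions: hypothetical bucket names, present-day boto3.
import boto3

s3 = boto3.client("s3")

def publish_to_both_markets(key: str, body: bytes) -> None:
    """Upload to a US bucket, then copy to an EU bucket: two copies, two storage bills."""
    s3.put_object(Bucket="my-content-us-east-1", Key=key, Body=body)
    s3.copy_object(
        Bucket="my-content-eu-west-1",
        Key=key,
        CopySource={"Bucket": "my-content-us-east-1", "Key": key},
    )
```

The Iceland pitch is that a single well-placed copy could, in principle, serve both audiences at reasonable latency.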

HPCc: What does your geothermal paradigm mean for customers in the U.S., for instance? Why is this relevant to them?

Hrafnsson: See above. But to add to that…

The ICT industry now puts out 2 percent of global CO2 emissions (Gartner), equal to the airline industry, but it is growing much faster and could be one of the biggest polluters by 2020 (McKinsey). With the current growth rate of the Internet, it is clear that we cannot solve the problem simply by using more efficient hardware. We have to attack the problem at the source: the energy source. So what can the user do? Well, seeing that public clouds are growing five times faster than any other sector of IT (IDC), isn’t that the best place to start? As a user, you can make a difference by choosing a truly green cloud, not just because of your own usage but also because the industry might then start moving toward the energy-source solution.

HPCc: How will GreenQloud compete with Amazon, Azure or Google, since you have no name branding? Is the “power of green” enough to drive significant customer adoption?

Hrafnsson: How about direct clustered storage? An InfiniBand network and storage fabric. Better data protection laws than anywhere else? More choice in VM sizes? Enterprise monitoring built in? And being the first public cloud to have Amazon AWS-compatible APIs, so you can easily scale out to GreenQloud or switch? Name branding doesn’t come overnight, but we are working on that, and we will get there fastest through strategic partnering and by getting our eco-friendly message across.
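
To see what an AWS-compatible API buys a customer, here is a minimal sketch, again in present-day Python with boto3: the same S3 client code, pointed at a different endpoint. The GreenQloud endpoint URL and credentials are hypothetical placeholders, since the company had not published its API details at the time of this interview.

```python
# Sketch only: the endpoint URL and credentials below are hypothetical.
import boto3

# Stock AWS client.
aws_s3 = boto3.client("s3")

# Same client library, different provider: only the endpoint and keys change.
greenqloud_s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.greenqloud.example",  # hypothetical endpoint
    aws_access_key_id="GQ_ACCESS_KEY",             # placeholder credentials
    aws_secret_access_key="GQ_SECRET_KEY",
)

# Scaling out or switching is then just a question of which client your
# existing S3 code is handed.
for client in (aws_s3, greenqloud_s3):
    client.put_object(Bucket="backups", Key="db.dump", Body=b"...")
```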

HPCc: For many firms, you fall into the category of an off-shore company. What about security, compliance, and disaster recovery? After all, Iceland has had some, shall we say, volcanic activity, so what happens if the island blows? What happens to customer data?

Hrafnsson: Off-shore can be a plus, e.g., for European companies that don’t want to host in the US, and vice versa. We don’t expect to get financial institutions from day one, but then again, that’s not just a problem of being off-shore but simply that they don’t trust public clouds yet. Legal matters will become clear before we launch, but in a nutshell, we have equal status with EU laws now and are about to get even better data protection laws for our clients (http://immi.is). Iceland has a lower risk index than the US and the UK.

The eruption of Eyjafjallajökull was a great 100-year test. Absolutely nothing happened to our electrical grid, inland communications, or network connections to the world. The majority of datacenters are built in the southwest part of Iceland, in the opposite direction of the prevailing wind currents, so the ash barely reached those sites, and even if it had, we wouldn’t have had any problems with it. Furthermore, we use more than one datacenter for data safety. Iceland is not a small island; it is a little smaller than England, about the same size as Kentucky. Your data is safe here.

HPCc: What about the datacenters in Arizona powered by solar, or Microsoft’s datacenters powered by hydroelectric energy? Why and how is what GreenQloud is doing essentially different?

Hrafnsson: Well, show me a datacenter 100 percent powered by renewable energy that can easily grow with us, and I would love to make an availability zone there. You might find the list is very short to empty; take a look, e.g., at the Greenpeace report on the top players. According to that, none of the datacenters in the US are 100 percent renewable. The fact is they use less than 15 percent renewables on average, and that figure is mostly skewed by the few datacenters in North Carolina that took over the bankrupt factories there and get very good pricing, for now.

There are, however, new datacenters being built, e.g., in Scotland, that claim to be 100 percent green, and hopefully they will start to pop up everywhere! We hope to grow the brand to other locations, of course, but we don’t really need to, because we can also partner with other clouds to become their green availability zone.

HPCc: Similar vendors often have specific markets and applications they target, but it’s difficult to see who your targets are, outside of those who will be swayed by the power of green. What is your market or application focus?

Hrafnsson: North America and Europe’s public cloud customers, and customers targeting those audiences, from SMBs to enterprises. We also intend to target the educational market by peering with high-speed university networks and by offering HPC-like infrastructure and high performance. Next year we will have a few surprises up our sleeves as well. But more on that later.
