Finding a Way to Test Dark Energy

By Nicole Hemsoth

September 9, 2005

What is the mysterious dark energy that's causing the expansion of the universe to accelerate? Is it some form of Einstein's famous cosmological constant or an exotic repulsive force, dubbed “quintessence,” that could make up as much as three-quarters of the cosmos? Scientists from Lawrence Berkeley National Laboratory and Dartmouth College believe there is a way to find out.

In a paper to be published in Physical Review Letters, physicists Eric Linder of Berkeley Lab and Robert Caldwell of Dartmouth show that physics models of dark energy can be separated into distinct scenarios, which could be used to rule out Einstein's cosmological constant and explain the nature of dark energy. What's more, scientists should be able to determine which of these scenarios is correct with the experiments being planned for the Joint Dark Energy Mission (JDEM), which has been proposed by NASA and the U.S. Department of Energy.

“Scientists have been arguing the question 'how precisely do we need to measure dark energy in order to know what it is?'” said Linder. “What we have done in our paper is suggest precision limits for the measurements. Fortunately, these limits should be within the range of the JDEM experiments.”

Linder and Caldwell are members of the DOE-NASA science definition team for JDEM, which has the responsibility for drawing up the mission's scientific requirements. Linder is the leader of the theory group for SNAP (SuperNova/Acceleration Probe), one of the proposed vehicles for carrying out the JDEM mission. Caldwell, a professor of physics and astronomy at Dartmouth, is one of the originators of the quintessence concept.

In their paper, Linder and Caldwell describe two scenarios, one they call “thawing” and one they call “freezing,” which point toward distinctly different fates for our presently expanding universe. Under the thawing scenario, the acceleration of the expansion will gradually decrease and eventually come to a stop, like a car when the driver eases off the gas pedal. Expansion may continue more slowly, or the universe may even recollapse. Under the freezing scenario, acceleration continues indefinitely, like a car with the gas pedal pushed to the floor. The universe would become increasingly diffuse, until eventually our galaxy would find itself alone in space.

Either of these two scenarios rules out Einstein's cosmological constant. In their paper, Linder and Caldwell show, for the first time, how to cleanly separate Einstein's idea from other possibilities. Under any scenario, however, dark energy is a force that must be reckoned with.

According to Linder, “Because dark energy makes up about 70 percent of the content of the universe, it dominates over the matter content. That means dark energy will govern expansion and, ultimately, determine the fate of the universe.”

In 1998, two research groups rocked the field of cosmology with their independent announcements that the expansion of the universe is accelerating. By measuring the redshift of light from Type Ia supernovae, deep-space stars that explode with a characteristic energy, teams from the Supernova Cosmology Project, headquartered at Berkeley Lab, and the High-Z Supernova Search Team, based in Australia, determined that the expansion of the universe is actually accelerating, not decelerating. The unknown force behind this accelerated expansion was given the name “dark energy.”

Prior to the discovery of dark energy, conventional scientific wisdom held that the Big Bang had resulted in an expansion of the universe that would gradually be slowed by gravity. If the matter content in the universe provided enough gravity, one day the expansion would stop altogether and the universe would fall back on itself in a Big Crunch. If the gravity from matter was insufficient to completely stop the expansion, the universe would continue floating apart forever.

“From the announcements in 1998 and subsequent measurements, we now know that the accelerated expansion of the universe did not start until sometime in the last 10 billion years,” Caldwell said.

Cosmologists are now scrambling to determine what exactly dark energy is. In 1917, Einstein amended his General Theory of Relativity with a cosmological constant, which, if the value was right, would allow the universe to exist in a perfectly balanced, static state. Although history's most famous physicist would later call the addition of this constant his “greatest blunder,” the discovery of dark energy has revived the idea.

“The cosmological constant was a vacuum energy (the energy of empty space) that kept gravity from pulling the universe in on itself,” said Linder. “A problem with the cosmological constant is that it is constant, with the same energy density, pressure and equation of state over time. Dark energy, however, had to be negligible in the universe's earliest stages; otherwise, the galaxies and all their stars would never have formed.”
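
For readers who want the distinction in symbols: dark energy is usually characterized by its equation-of-state parameter, the ratio of its pressure to its energy density (standard notation, not spelled out in the article),

\[
w \equiv \frac{p}{\rho}, \qquad \text{cosmological constant: } \rho_\Lambda = \mathrm{const},\ p_\Lambda = -\rho_\Lambda,\ w_\Lambda = -1 \ \text{at all times,}
\]

whereas a dynamical dark energy such as quintessence has a w that can differ from −1 and change as the universe expands. Measuring w and its rate of change over cosmic time is what ultimately separates a true constant from quintessence.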

For Einstein's cosmological constant to result in the universe we see today, the energy scale would have to be many orders of magnitude smaller than anything else in the universe. While this may be possible, Linder said, it does not seem likely. Enter the concept of “quintessence,” named after the fifth element of the ancient Greeks (alongside air, earth, fire and water), which they believed to be the force that held the moon and stars in place.

“Quintessence is a dynamic, time-evolving and spatially dependent form of energy with negative pressure sufficient to drive the accelerating expansion,” said Caldwell. “Whereas the cosmological constant is a very specific form of energy — vacuum energy — quintessence encompasses a wide class of possibilities.”

To limit the possibilities for quintessence and provide firm targets for basic tests that would also confirm its candidacy as the source of dark energy, Linder and Caldwell used a scalar field as their model. A scalar field assigns a value, but no direction, to every point in space. With this approach, the authors were able to show quintessence as a scalar field relaxing its potential energy down to a minimum value. Think of a set of springs under tension, exerting a negative pressure that counteracts the attractive pull of gravity.

“A quintessence scalar field is like a field of springs covering every point in space, with each spring stretched to a different length,” Linder said. “For Einstein's cosmological constant, each spring would be the same length and motionless.”
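
In the standard scalar-field description of quintessence (a textbook formulation, not equations quoted from the Linder-Caldwell paper), a field \(\phi\) rolling in a potential \(V(\phi)\) has

\[
\rho_\phi = \tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad
p_\phi = \tfrac{1}{2}\dot\phi^2 - V(\phi), \qquad
w_\phi = \frac{p_\phi}{\rho_\phi},
\]

and obeys the equation of motion

\[
\ddot\phi + 3H\dot\phi + \frac{dV}{d\phi} = 0 .
\]

When the Hubble friction term \(3H\dot\phi\) holds the field nearly motionless, the potential dominates and \(w_\phi\) sits near −1, mimicking a cosmological constant; once the field rolls, its kinetic energy pushes \(w_\phi\) above −1. The thawing and freezing scenarios correspond to these two directions of evolution.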

Under their thawing scenario, the potential energy of the quintessence field is “frozen” in place until the decreasing material density of an expanding universe gradually releases it. In the freezing scenario, the quintessence field has been rolling toward its minimum potential since the universe underwent inflation, but as it comes to dominate the universe it gradually settles toward a constant value.

The SNAP proposal is in research and development by physicists, astronomers, and engineers at Berkeley Lab, in collaboration with colleagues from the University of California at Berkeley and many other institutions; it calls for a three-mirror, two-meter reflecting telescope in deep-space orbit that would be used to find and measure thousands of Type Ia supernovae each year. These measurements should provide enough information to clearly point toward either the thawing or freezing scenario — or to something else entirely new and unknown.

Said Linder: “If the results from measurements such as those that could be made with SNAP lie outside the thawing or freezing scenarios, then we may have to look beyond quintessence, perhaps to even more exotic physics, such as a modification of Einstein's General Theory of Relativity to explain dark energy.”
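
To give a feel for why such measurements demand high precision, the short sketch below (an illustration of the underlying distance test, not code from the paper) compares the supernova distance modulus predicted by a pure cosmological constant with that of a slowly evolving dark energy, using the common parameterization w(a) = w0 + wa(1 − a). The cosmological parameters and the "thawing-like" values are illustrative assumptions; the point is that the two expansion histories differ by only roughly a hundredth of a magnitude at these redshifts, which is why a dedicated space mission is needed to tell them apart.

```python
# Sketch: how an evolving dark-energy equation of state shifts the supernova
# distance-redshift relation relative to a cosmological constant.
# Illustrative parameters only; not the analysis from the Linder-Caldwell paper.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.3         # matter density fraction (assumed flat universe)

def dark_energy_density(a, w0, wa):
    """Dark-energy density relative to today for w(a) = w0 + wa*(1 - a)."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

def hubble_ratio(z, w0, wa):
    """E(z) = H(z)/H0 for a flat universe of matter plus dark energy."""
    a = 1.0 / (1.0 + z)
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3
                   + (1.0 - OMEGA_M) * dark_energy_density(a, w0, wa))

def distance_modulus(z, w0, wa):
    """Distance modulus mu = 5 log10(d_L / 10 pc) at redshift z."""
    comoving, _ = quad(lambda zp: 1.0 / hubble_ratio(zp, w0, wa), 0.0, z)
    d_lum_mpc = (1.0 + z) * (C_KM_S / H0) * comoving   # luminosity distance in Mpc
    return 5.0 * np.log10(d_lum_mpc * 1.0e6 / 10.0)    # 1 Mpc = 1e6 pc

for z in (0.5, 1.0, 1.5):
    mu_lambda = distance_modulus(z, w0=-1.0, wa=0.0)   # cosmological constant
    mu_evolve = distance_modulus(z, w0=-0.9, wa=-0.1)  # illustrative thawing-like history
    print(f"z = {z:.1f}: Lambda mu = {mu_lambda:.3f}, "
          f"evolving-w mu = {mu_evolve:.3f}, diff = {mu_evolve - mu_lambda:+.3f} mag")
```

Type Ia supernovae act as standard candles, so small systematic offsets like these in the brightness-redshift relation are exactly the signal that JDEM-class surveys are designed to detect.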
