A Global Climate of Change

By Christopher Lazou

September 21, 2007

“Our civilisation is destroying itself because it is determined to disregard all limits in all areas.”
     -Dominique Bourg, philosopher of sustainable development

Global warming and its dire consequences have at long last permeated the special interest barriers and are at the centre of political debate. A recent EU Research magazine produced a special feature with the title: “Climate Change: We can’t wait any longer,” stating: “The 4th IPCC report was issued and adopted this spring amidst a blaze of publicity and debate. It summarises two decades of important multidisciplinary research and formally concludes that the symptoms of global warming due to human activity are all too real, and will inevitably progress faster than was previously thought. We must act.”

The “business as usual” model will inevitably accelerate global warming, and the destruction that follows, faster than previously predicted. The European Union (a high polluter) has taken on board the IPCC conclusions and the findings of Nicholas Stern, an economist and author of a forceful report commissioned by the UK government on the cost of global warming. The EU has already agreed to drastically reduce greenhouse gas emissions by 20 percent between now and 2020.

The members of the recent Asia-Pacific Economic Cooperation (APEC) summit, which include the USA, China, India and Australia (the biggest polluters), are however dragging their feet. This is likely to reduce the pressure on EU countries to deliver on their targets. In the meantime, the devastation of property and infrastructure and the threat to life continue unabated. Heat waves, forest fires, drought, flooding and hurricanes are becoming everyday news items.

It was in this climate that from Sept. 9-13 about eighty meteorologists and HPC experts from large-scale computing centres in eleven countries attended the biennial Computing in Atmospheric Sciences (CAS) workshop on the use of HPC in meteorology, held at the idyllic Imperial Palace Hotel, Annecy, France. The workshop was organised by the National Center for Atmospheric Research (NCAR), USA. This excellent small and friendly workshop provided a tour de force in meteorological and computing techniques by active practitioners, some of them IPCC contributors, striving to make the most of the latest HPC technology to refine and improve their climate prediction models.

Refinement of models is an urgent requirement for the development of realistic mitigation strategies to address the potential catastrophic consequences of global warming. These talks were augmented by speakers from broader scientific centres of excellence, like CERN, NERSC and ORNL, and from funding bodies such as the NSF, in the USA.

Most presenters came from sites in the USA with large Cray XT3/4, IBM Power5/6 and Blue Gene/L systems, while the European and Australian contingent included a strong representation from sites with large-scale parallel vector NEC SX-8 systems. The main HPC vendors described their upcoming products, their vision for petaflops computing and the technology advances needed for exaflops systems. What became abundantly clear during this workshop is that the “business as usual model” — be it in human activities as a whole or in developing computing technologies — is not a realistic option and radical new approaches are needed. This article highlights a few of the many climate and technology issues raised by presentations given at this workshop.

There were 38 presentations in three and a half days, some describing the grid’s potential to enable international collaboration, both at CERN and within the climate system modelling community. The talks were crammed with technical information on how to use parallel supercomputers for computation using mathematical models that describe climate/weather patterns over time. These were interspersed with weather maps and video pictures from simulations, compared against satellite pictures of actual weather events.

Why are meteorologists doing all this Earth System Modelling and what is the urgency? As stated above, dramatic flooding and other extreme events related to climate change are happening and are frequently reported in the press and on television. For example, it has just been reported that satellite images show the North West passage connecting the Atlantic and the Pacific oceans to be free of ice, making it navigable for the first time since records began. Also, seventeen central African countries are currently flooded, with millions of people’s homes and crops devastated. “It is common knowledge that it is the countries of the south that stand to be hardest hit by global warming when at present it is the countries of the north that are the biggest polluters,” Nicholas Stern was reported as saying.

Climate simulations show that greenhouse gases attributed to human activities are causing an increase in the Earth’s average temperature. Consequently, fires from intensely hot summers and flooding from heavy rainfall are becoming more common. These images of devastation and the economic aftermath are injecting a political dimension into the proceedings.

A number of talks dealt with prediction and mitigation strategies for devastating events such as flooding and hurricanes. The costs of these events are enormous: Hurricane Mitch caused the deaths of over 9,000 people in Nicaragua, mostly from flooding and landslips.

Greg Holland from NCAR gave a stimulating keynote talk titled: “Anthropogenic Influences on Intense Hurricanes,” which focused not only on observed hurricanes, but also on the scientific evidence for causal attribution.

He explained that apart from direct impacts, indirect impacts arise from forecast uncertainty, design and preparations, and imperfect knowledge of cyclone parameters. Coastal impacts from tropical cyclones include harbour damage, house and crop destruction, forest damage from wind, waves, flooding and landslips. Excluding the loss of life, the direct damage from hurricane Katrina was about $80 billion, with an additional $40 billion in indirect costs to the USA. About 95 percent of the oil and gas production in the Gulf was disrupted and about 150 oil rigs were lost. Tornadoes and flooding reached as far away as Quebec, and people were displaced from most coastal states. Government recovery costs were $10-15 billion. The Consumer Price Index (CPI) impact was around 1.4 to 2.3 percent, for a total CPI cost estimated at $16 to $26 billion. The cost per household ranged from $140 to $230. The reduction in economic growth rate was about 1 percent, but this was compensated by a subsequent overshoot in the economy. In the USA, 50 percent of the population live within 50 miles of the coast, and the cost of evacuating one mile of coast is about $1 million per day. Note that recovery from a storm impact takes years.

Greg went on to convey the IPCC position on hurricanes: “It is likely that increases have occurred in some regions since 1970; it is more likely than not a human contribution to the observed trend; it is likely that there will be future increasing trends in tropical cyclone intensity and heavy precipitation associated with ongoing increases of tropical SSTs [sea surface temperatures].” He demonstrated with detailed graphs that the bulk of the warming since 1970 is due to anthropogenic effects. He reviewed Atlantic SST and atmospheric modes of variation and demonstrated that these are not accounted for by natural variability. His conclusion was unequivocal: “Anthropogenic climate change is substantially influencing the characteristics of North Atlantic tropical cyclones through complex ocean-atmosphere connections and may be influencing other regions.”

The above view was reinforced by Dr. Warren Washington in his talk titled: “Computer Modeling of the 21st Century and Beyond.” Since the 1970s, Warren has sat on committees that advised six U.S. presidents on climate issues. He is also heavily involved at NCAR in the Community Climate System Model (CCSM), which has produced one of the largest data sets for the IPCC fourth assessment. Echoing Greg’s sentiments, he said, “As a result of this and other assessments, most of the climate research science community now believes that humankind is changing the Earth’s system and that global warming is taking place.” He also told me that every question raised by sceptics was thoroughly reviewed and rigorously refuted. “Natural events do not explain global warming. The smoking gun is human emissions, and once included, the warming can be reproduced from year to year,” he stated.

According to Paul Crutzen, another IPCC participant: “The only criticism that could be made of the IPCC report is of it being too cautious.”

For HPC vendors the good news is that there is a lot of new work to be done requiring oodles of computing power. The computing requirements are for data assimilation, modelling internal oscillations, prediction of external forces and hurricane/climate feedback. A system delivering 200 teraflops of sustained performance would be appreciated and utilised today.

In fact, most sites pursuing climate change research are well endowed with computer resources. Three years ago they were lucky to muster 0.5 teraflops of sustained performance. Current procurements in progress have minimum requirements of around 10 teraflops of sustained performance in phase one, followed by at least twice that by 2009. The climate applications, of course, could use an order of magnitude more power now, without waiting for the end of the decade. The tantalising fact is that at least one system capable of delivering 200 teraflops of sustained performance with fewer than 9,000 processors will be available from Japan early next year, but it is unlikely to be sold in the USA.

In the next few years, the CCSM will be further expanded to include reactive troposphere chemistry, detailed aerosol physics and microphysics, comprehensive biogeochemistry, ecosystem dynamics, and the effects of urbanization and land use change. These new capabilities will considerably expand the scope of earth system science that can be studied with CCSM and other climate models of similar complexity. Higher resolution is especially important for representing mountains, river flow, and coastlines. Full hydrological coupling, including ice sheets, is important for sea level changes. The model will include better vegetation and land surface treatments with ecological interactions, as well as carbon and other biogeochemical cycles.

For example, Dave Randall has a five-year NSF grant to study clouds using high-resolution models. Clouds are central to earth sciences, climate change, weather prediction, the water cycle, global chemical cycles and the biosphere. Dave stated: “We are being held back in all of these areas by an inability to simulate the global distribution of clouds and their effects on the Earth system.” The need for high resolution catapults this application into the realm of petaflops computing.

The computer requirements for the next generation of comprehensive climate models can only be satisfied by major advances in computer hardware, software, and storage. The classic problems climate models encounter on supercomputer systems are that the computers (with the exception of vector systems) are not balanced between processor speed, memory bandwidth and inter-processor communication bandwidth, including global communication needs; they are more difficult to program and optimize; it is hard to get I/O out of the computers efficiently; and computer facilities need to expand archival data capability into the petabyte range. There is also a weak relationship between peak performance and performance on actual working climate model programs.

Thus, with sustained teraflops performance now on stream, meteorologists are moving from climate modelling to Earth System Modelling (ESM). This is because the feedback loops between climate, ecology and socio-economics are significant; climate modelling is not possible without proper representation of these systems. Earth System Modelling is multi-scale (in time and space), multi-process and multi-topical (physics, chemistry, biology, geology, economy, etc.), and is both compute- and data-intensive. Some people claim it requires several orders of magnitude more computing power to tackle the problem. Petaflops and exaflops systems are therefore eagerly awaited.

Stefan Heinzel, director of the Garching computing centre of the Max Planck Society (RZG), Germany, gave a keynote titled: “Toward Petascale Computing in Europe – A Challenge for the Applications and the Hardware Vendors.” He listed the current petaflops projects and their likely hardware architectures. The RIKEN project in Japan, the Cray Cascade and the IBM PERCS in the USA were all mentioned. Doing the math, he indicated that one petaflops of sustained performance would need hundreds of thousands of cores; even for vector machines, it could be around 50 thousand. The question is how one deals with O(50,000) parallelism using local memory and MPI. And what about the CPU-memory gap, which keeps widening as CPU speeds grow much faster per year than the 7 percent annual increase in memory speed?
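
The arithmetic behind that widening gap is easy to sketch. The snippet below compounds the 7 percent annual memory-speed growth cited in the talk against an assumed 50 percent annual growth in CPU throughput; the CPU figure is an illustrative assumption, not a number from the workshop:

```python
# Sketch: how the CPU-memory speed gap compounds over time.
# cpu_growth of 50%/year is an assumed illustrative rate; mem_growth
# of 7%/year is the figure cited in Heinzel's talk.
cpu_growth, mem_growth = 1.50, 1.07

def speed_gap(years):
    """Ratio of CPU speed to memory speed after `years` of compounding."""
    return (cpu_growth ** years) / (mem_growth ** years)

for y in (1, 5, 10):
    print(f"after {y:2d} years the gap has widened {speed_gap(y):5.1f}x")
```

Even under these rough assumptions, a decade of compounding opens a gap of well over an order of magnitude, which is why latency hiding and bandwidth, not raw core counts, dominate the discussion below.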

Transistors on an ASIC are still doubling every 18 months at constant cost, but in the last two years, neither AMD nor Intel has announced significantly faster cores. Performance improvements are now coming from an increase in cores per processor. Presently four cores are standard; soon this will be eight. Intel has already announced an 80-core processor technology. IBM doubled performance from the Power5 to the Power6, reaching 4-5 GHz, but still using 2 cores. The Power6+ could increase the frequency incrementally, but doubling the frequency for the Power7 is going to be difficult. After the year 2015, one envisages about 512 cores or more per processor (“nanocore”).

Sequential applications are of O(1): there is no substantial performance increase delivered by faster cores, and it is the same on the desktop and on HPC systems. The snag is that memory speed increases only seven percent per year, with no improvement in cache latency or memory bandwidth. Current HPC applications can use O(K) MPI tasks, mapped to threads. “Classical” scaling can achieve no more than O(3,000), while the Blue Gene/L can achieve O(30,000). Higher scalability in the range of O(M) requires new technologies in the processor and in the nodes. An SMP programming model spanning hundreds of cores requires hardware support for lock mechanisms, transactional memory for atomic updates, and a new microarchitecture for latency hiding and pre-fetch hints, i.e., with “assist threads.” The memory bandwidth wall is the limiting factor for scalability of the multicore architecture.
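
A minimal roofline-style calculation makes the bandwidth wall concrete. All figures below are illustrative assumptions (not measurements of any machine named in this article): an assumed 4 GFLOPS per core, 10 GB/s of memory bandwidth per socket, and a stencil-like climate kernel performing roughly 0.5 flops per byte moved:

```python
# Roofline-style sketch of the memory bandwidth wall.
# All constants are illustrative assumptions, not vendor figures.
peak_flops_per_core = 4e9    # assumed 4 GFLOPS per core
mem_bandwidth = 10e9         # assumed 10 GB/s per socket

def attainable_gflops(cores, flops_per_byte):
    """Attainable rate: min(compute roof, bandwidth * arithmetic intensity)."""
    compute_roof = cores * peak_flops_per_core
    memory_roof = mem_bandwidth * flops_per_byte
    return min(compute_roof, memory_roof) / 1e9

# A low-intensity kernel (~0.5 flops/byte) stops scaling almost at once:
for n in (1, 4, 16):
    print(n, "cores ->", attainable_gflops(n, 0.5), "GFLOPS attainable")
```

Under these assumptions, going from 4 to 16 cores buys nothing for the low-intensity kernel: the memory roof of 5 GFLOPS is hit long before the compute roof, which is the bandwidth wall in one line of arithmetic.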

The file and I/O system needs to support hundreds of thousands of clients. Solving the scalability problem requires low-latency communication. One expects a huge number of files in a single file system: trillions of files, with terabyte-per-second transfer rates. For robustness, different techniques are also needed, since RAID6 is not an adequate mechanism at this scale.

For petaflops systems, the operating system (OS) and middleware need to be aware of massive parallelism. OS failures have to be reduced, and OS jitter can cause dramatic performance degradation in applications. Lightweight operating systems or specially reduced standard kernels should be considered. Interrupt synchronization and dynamic management of various page sizes are needed; applications should adapt their page sizes dynamically, which should result in a reduction of boot time and of time to load an application. Hierarchical concepts should be implemented to solve scalability issues; hierarchical daemon structures are required to support hundreds of thousands of clients and for queuing and monitoring systems.
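
Why jitter matters so much at scale can be shown with a toy Monte Carlo model. In a bulk-synchronous code every timestep waits for the slowest of N processes, so even a rare per-process OS interruption almost always hits someone once N is large. The probability and delay figures below are illustrative assumptions only:

```python
# Monte Carlo sketch of OS jitter in a bulk-synchronous application.
# Each step does nominal work 1.0; with (assumed) probability 1% a
# process suffers an OS interruption that stretches its step to 2.0.
# The step finishes only when the slowest process finishes.
import random

random.seed(42)

def mean_step_time(nprocs, p_jitter=0.01, delay=1.0, trials=2000):
    """Average duration of one synchronized step across many trials."""
    total = 0.0
    for _ in range(trials):
        step = 1.0                  # nominal compute time per step
        for _ in range(nprocs):
            if random.random() < p_jitter:
                step = 1.0 + delay  # one laggard delays the whole step
                break               # further laggards can't make it worse here
        total += step
    return total / trials

for n in (1, 100, 10000):
    print(f"{n:6d} processes: mean step time {mean_step_time(n):.3f}")
```

With these numbers, a single process loses about 1 percent of its time to jitter, but at ten thousand processes essentially every step is delayed, roughly doubling the runtime, hence the interest in lightweight kernels and interrupt synchronization.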

And, of course, one hits the processor power wall. Smaller cores help, providing more operations per watt. A lower frequency also helps to increase the number of cores, i.e., more operations per watt. This implies higher scalability, but the ratio of sustained performance is an open question. Memory and huge caches use significant power too, as does the interconnect. The challenge is to optimize the power consumption of each component.
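
The "more, slower cores" argument follows from how dynamic CMOS power scales, roughly as P ~ C · V² · f, with supply voltage V lowered along with frequency f. The sketch below uses the simplistic assumption that V scales linearly with f and that the workload parallelizes perfectly; both are idealizations:

```python
# Sketch of the power-wall argument for many slower cores.
# Assumes dynamic power P ~ V^2 * f per core, V scaling linearly
# with f, and perfect parallel scaling -- all idealized assumptions.
def relative_power(freq_scale, cores):
    """Power relative to one baseline core at full frequency."""
    v_scale = freq_scale            # simplistic V ~ f assumption
    return cores * (v_scale ** 2) * freq_scale

def relative_throughput(freq_scale, cores):
    """Ideal throughput, assuming the work parallelizes perfectly."""
    return cores * freq_scale

# One core at full speed versus two cores at half speed:
for label, (f, n) in {"1 fast core": (1.0, 1),
                      "2 half-speed cores": (0.5, 2)}.items():
    print(f"{label}: throughput {relative_throughput(f, n):.2f}, "
          f"power {relative_power(f, n):.2f}")
```

Under these idealizations, two half-speed cores match the throughput of one fast core at a quarter of the power: that is the "more operations per watt" case for lower frequencies, though real applications rarely scale this perfectly.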

In summary, the use of multicore, and later nanocore, architectures implies a challenge for petascale applications. There is a need to hide the memory wall. Cache architecture and memory have to improve latencies and bandwidth. New synchronization mechanisms have to be realized with SMP parallelism becoming more important. Helper threads will support pre-fetch techniques, but the applications have to improve latency tolerance. The interconnect limit implies new programming techniques are needed. Increased power consumption is the most critical problem due to budget limitations. The necessity of reinventing parallel computing is an enormous challenge caused by the massive increase of cores in future architectures.

The Cray XT series and IBM Power and Blue Gene systems, as well as other vendors in the USA and vendors such as NEC and Fujitsu from Japan, are developing petaflops systems, but how effectively these systems can be utilised is an open question and fraught with a myriad of challenges.

During the last CAS workshop in 2005, a strong emphasis was placed on data management and the challenges it entails. This time, the emphasis was more on power used by supercomputers (carbon footprint) and the facility footprint (space) requirements.

In my view, global warming is the most pressing challenge of the 21st century, and we all need to reduce our carbon footprint and become carbon neutral. Our political leaders should be judged on whether they are “fit for purpose” and then held accountable for the mitigation policies they enact. When Mr. Alan Greenspan, the former Federal Reserve (U.S. central bank) chairman, says in his newly published autobiography that “the Iraq war is largely about oil,” he should have added that it is also an irresponsible way of avoiding the economic decisions needed to start mitigating “an inconvenient truth”: global warming.

The workshop presentations are available on the NCAR Web site: http://www.cisl.ucar.edu/dir/CAS2K7/.

—–

Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. September 2007. Brands and names are the property of their respective owners.

For additional information, see the NCAR announcement on the eighth biennial session of Computing in Atmospheric Sciences (CAS2K7) at http://www.hpcwire.com/hpc/1791812.html.
