South Africa Maps HPC Future as SKA Plans Take Shape

By Elizabeth Leake, STEM-Trek -- Photography by Lawrette McFarlane

March 1, 2016

The South African Center for High Performance Computing’s (SA-CHPC) Ninth Annual National Meeting was held Nov. 30 – Dec. 4, 2015, at the Council for Scientific and Industrial Research (CSIR) International Convention Center in Pretoria, SA. The award-winning venue was the perfect location to host what has become a popular industry, regional and educational showcase.

Exascale in the 2020s

With the Square Kilometer Array (SKA) being built in the great Karoo region, implications for SA and the HPC industry have captured the attention of a broad range of stakeholders. SKA will be the world’s biggest radio telescope, and arguably the most ambitious technology project ever funded. With an expected 50-year lifespan, SKA Phase One construction is scheduled to begin in 2018, and early science and data generation will follow by 2020.

With only a few years in which to prepare for SKA, it’s not surprising that the CHPC conference has begun to feature in-depth data science and network infrastructure content, in addition to the meeting’s traditional HPC tutorials, workshops, plenaries, and student programs. Once again, the Southern African Development Community (SADC) HPC and Industry Forums were co-located with the CHPC meeting. An increased number of conference attendees from data science and high-speed network occupations added diversity in terms of gender, discipline and nationalities represented. All things considered, the annual CHPC meeting provides a wealth of learning opportunities, and brings professional networking to an emerging region of the HPC world.

Just how resource-intensive will SKA be?

Peter Braam (SKA Cambridge University, Parallel Scientific)

It’s safe to say that SKA will set the “Big Data” curve. In his plenary address, Peter Braam (SKA Cambridge University, Parallel Scientific) described the range of instrumentation that will support the project, and how a combination of dedicated and cloud-enabled resources will fulfill its varied and complex missions. As for data, by 2020 SKA’s central signal processing array—50 times more sensitive than any current radio instrument—will generate 1 exabyte per day, and 100 exabytes per day by 2028. Its imaging function will produce 400 terabytes daily for worldwide consumption by 2020, and 10,000 petabytes per day by 2028. As for computational processing power, SKA will require 300 petaflops (archiving 1 exabyte) by 2020, and 10 times more by 2028.
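Taken together, those projections imply an extremely steep growth curve. The following is an illustrative back-of-the-envelope check using only the figures quoted above (the calculation itself is not from the talk):

```python
# Implied compound annual growth of SKA signal-processing output,
# using the projections quoted above (illustrative arithmetic only).
start_eb_per_day = 1.0    # exabytes per day projected for 2020
end_eb_per_day = 100.0    # exabytes per day projected for 2028
years = 2028 - 2020       # 8-year span between the two projections

# Compound annual growth factor: end = start * growth**years
growth = (end_eb_per_day / start_eb_per_day) ** (1 / years)
print(f"Implied annual growth factor: {growth:.2f}x")  # about 1.78x per year
```

In other words, the quoted projections have SKA’s data output nearly doubling every year through the 2020s.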

Rudolph Pienaar (Boston Children’s Hospital, Harvard) noted that even before SKA data enters the infostream, general network traffic will explode by 2019, as a larger percentage of the population gains network access, networked devices per capita grow by roughly 30 percent, and applications become increasingly resource-intensive. Global IP traffic will reach 1.1 zettabytes per year in 2016, or 88.4 exabytes per month (an exabyte is one billion gigabytes). By 2019, global IP traffic will reach 2.0 zettabytes per year, or 168 exabytes per month.
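A quick unit check on those annual and monthly figures, assuming the decimal prefixes these traffic forecasts conventionally use (1 zettabyte = 1,000 exabytes); the conversion is illustrative, not from the presentation:

```python
# Convert the quoted annual global IP traffic into a monthly figure
# (decimal prefixes: 1 zettabyte = 1000 exabytes).
zb_per_year_2019 = 2.0
eb_per_month_2019 = zb_per_year_2019 * 1000 / 12
print(f"{eb_per_month_2019:.0f} EB/month")  # about 167, matching the ~168 quoted
```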

Most of the Karoo region and surrounding areas currently lack high-speed networks, and a reliable electrical supply. However, with ten wealthier countries and more than 100 organizations behind the project, SA expects to have the support it will need to host SKA. Additionally, many of SADC’s 15 member states will benefit from an improved power grid and access to high-speed networks, in addition to slipstream advantages such as workforce development opportunities and access to tools that support international research engagement.

CHPC National Meeting

Happy Sithole (CHPC/CSIR-SA)

The 2015 meeting was called to order by General Chair Janse Van Rensburg (CHPC), who introduced Director General Phil Mjwara (SA Dept. of Science & Technology). Dr. Mjwara stressed the vital role HPC will play in South Africa’s future. “A greater investment in cyberinfrastructure (CI) and supportive human capital development are essential for good governance, sustainability and commerce,” he said.

Mjwara shared 2015 highlights, including a milestone partnership between CHPC and SANReN (the South African National Research Network). The agreement, reached in April 2015, allows CHPC to engage with 155 tier-2 sites around the world that access the Large Hadron Collider at CERN in Geneva, Switzerland.

SA-CHPC New System Features

CHPC Director Happy Sithole described how the Cape Town center has grown since launching in 2007, when it supported 15 researchers with 2.5 teraflops of computational power. “That was a good start for South Africa, and CHPC was the only center on the continent at the time,” said Sithole. Since then, CHPC has expanded to meet demands. At the time of the 2015 conference, CHPC supported 700 researchers with 7,000 cores and 64 teraflops. “But, we could be fully subscribed with our three largest research projects,” he added.

Sithole was pleased to announce the addition of a new Dell system that will be operational in early 2016. The system will launch in two phases: the first is expected to achieve 700 teraflops, and the second will add 300 teraflops with GPU acceleration. “This brings a total of one petaflops of computational power to the region,” he added. Thirty percent of the system’s cycles will be available for lease by CSIR industry partners.

Thomas Sterling (Indiana University)

The student poster and cluster competition awards were presented Thursday evening following a plenary address by Thomas Sterling (Indiana University). Sterling is executive associate director and chief scientist at the Center for Research in Extreme Scale Technologies (CREST), and is best known for his pioneering work in commodity clusters as “the father of Beowulf.” He noted the accelerated progress being made by CHPC and pioneer projects in the region, and credited Sithole with blazing the trail on behalf of industry, science and education.

“The new CHPC Dell and Mellanox system will provide world-class compute capability to prepare the emerging southern African region for exascale computing in the next decade,” said Sterling.

Plenary Highlights

The opening plenary address by Merle Giles (National Center for Supercomputing Applications, U.S.) was titled “HPC-Enabled Innovation and Transformational Science & Engineering: The Role of CI.” As the leader of NCSA’s private sector program, Giles explained HPC’s macroeconomic and microeconomic impact: federal investment drives the macroeconomic side, while universities and industry drive the microeconomic side.

Giles described the “R&D Valley of Death,” or the funding gap that exists between the theoretical and basic research led by universities (or startups), and the production-commercialization phase effectuated by industry. “Somewhere in the middle, before optimization and ‘robustification,’ great ideas tend to crash and burn for lack of funding,” he said. “When this happens, it adversely affects how the public perceives science and technology spending, in general,” he added. A greater emphasis on investment and tech transfer will help bridge this gap.

Merle Giles (NCSA Private Sector Program - U.S.)

Giles said President Obama initiated an important discussion in January 2015, during his State of the Union address. By explaining how precision medicine and the curing of common diseases are enabled by fast computers and data science, he demystified the concept so that average citizens could understand why HPC is important to them. He then announced the National Strategic Computing Initiative in August. “Through this steady dialogue, Obama is laying the groundwork for future support of a greater public investment in advanced CI,” Giles said.

He added that all sectors—public, private and industry—must share the responsibility for preparing the workforce. “And, by the way, coding is no longer the new literacy; modeling and the ability to master data are more important as HPC becomes more accessible, and the long tail of science engages a broader range of practitioners,” he added. He concluded with findings from an International Data Corporation study: for every dollar invested in HPC, $515 in revenue and $43 in profit are generated. Feasibility should be addressed, but it’s often overlooked. No single company, agency or government can fund everything; a sustainable program requires collaboration and a holistic approach to spending.
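The IDC ratios quoted above can be restated as a simple implied profit margin, a back-of-the-envelope calculation using only the figures from the talk:

```python
# Express the quoted IDC figures as ratios per dollar of HPC investment.
revenue_per_dollar = 515.0  # dollars of revenue per dollar invested (as quoted)
profit_per_dollar = 43.0    # dollars of profit per dollar invested (as quoted)

margin = profit_per_dollar / revenue_per_dollar
print(f"Implied margin on HPC-enabled revenue: {margin:.1%}")  # about 8.3%
```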

Peter Braam (SKA Cambridge University, Parallel Scientific) presented “HPC Stack—Scientific Computing Meets Cloud.” Braam explained how cloud-augmented cluster services satisfy a greater spectrum of administrative and production applications than earth-bound HPC can on its own. By provisioning clusters with containers, or cloud-enabled virtual machines (VMs), it’s possible to provide core services such as identity management, storage, scheduling, and monitoring. “We discovered there are generally two groups of users: people performing operations, and developers who are experimenting in pursuit of discovery. To support the latter, we realized it’s more feasible to deploy 100 small virtual cloud-enabled environments than 100 HPC systems.” Cambridge, CHPC and Canonical Ltd are pursuing this study.

Simon Hodson (CODATA - France)

Thursday’s plenary was delivered by CODATA Executive Director Simon Hodson, and was titled “Mobilizing the data revolution: CODATA’s work on data policy, data science and capacity building.”

Hodson suggested that a 2012 report by The Royal Society titled “Science as an open enterprise” made a profound impact on the industry when it stated that data underpinning research findings must be openly available. “Transparency, credibility and reproducibility are the keys to gaining public confidence; data must therefore be accessible, assessable, intelligible, usable, and reusable,” he said.

The European Union’s Horizon 2020 project included, word-for-word, excerpts from the report that suggested principal investigators are guilty of scientific misconduct if their data aren’t accessible. Most European and U.S. agencies now require data generated by research they fund to be liberated, and many have issued policies regarding open research data. Leading industry journals and digital repositories have also adopted the practice, including Dryad, GenBank and TreeBASE.

Hodson addressed data citation best practices, and financing strategies to support sustainable data repositories. He presented the findings of a joint declaration of data citation principles by the CODATA-ICSTI Task Group on Data Citation Standards and Practices and FORCE11 (Future of eScience Communications and Scholarship). “Make data available in a usable way, so they can be recognized and properly credited,” he said, referring to a CODATA paper published in the Data Science Journal titled “Out of Cite, Out of Mind: The Current State of Practice, Policy, and Technology for the Citation of Data.”

Hodson announced a series of 2016-17 Data Science Summer Schools that are co-sponsored by CODATA and the Research Data Alliance (RDA). The two-week programs will cover software and data carpentry, data stewardship, curation, visualization, and more. CODATA-RDA earmarked $60,000 for travel funding for registrants who lack institutional support. The first will be held in Trieste, Italy on August 1-12, 2016. The International Center for Theoretical Physics (ICTP) will support up to 120 students, and 30,000 Euros are allocated for student travel. Students from low and middle-income countries (LMICs) will be given priority, and additional sponsors are welcome.

Rudolph Pienaar (Boston Children's Hospital, Harvard)

Health data concerns were addressed by Rudolph Pienaar (Boston Children’s Hospital) with his presentation titled “Web 2.0 and beyond: Leveraging Web Technologies as Middleware in Healthcare and High Performance Compute Clusters; Data, Apps, Results, Sharing and Collaboration.”

Pienaar argued that even today, most clinical applications follow 1990s-vintage interfaces, and patient medical data is housed in locked-down silos that don’t facilitate cross-comparison. From an integration perspective, medical data is dead on arrival; the only common central point for all medical data in a hospital is the billing department. It is largely impossible for physicians or researchers to make meaningful comparisons of data housed in different silos. The system was designed to facilitate billing and to protect patient privacy, but it impedes research and diagnostics. Pienaar believes the current infrastructure is ill suited to handle the amount and complexity of data coming down the pipe in the next five years.

That’s when ChRIS was born. The Boston Children’s Hospital Research Integration System (ChRIS) is a novel browser-based data storage and data-processing workflow manager. Developed using JavaScript and Web 2.0 technologies, ChRIS anonymizes personal information, and includes a plug-in architecture to accommodate clusters of collaborators that could, potentially, be located around the world. By employing a local virtual machine on the desktop, ChRIS hides the complexity of scheduling data on an HPC system by keeping everything local. The system also uses Google real-time API services to enable true real-time image sharing and collaboration: multiple parties can interact with the same medical visual scene, each in their own browsers, and all views and slices are updated in real time, using a design approach that is not screen sharing, but more akin to how multi-player 3D gaming works.

SADC HPC Forum

SADC Forum 2015

The SADC HPC Forum has been meeting since 2013, when the University of Texas, U.S., donated a decommissioned HPC system called “Ranger” to CHPC. More than 20 Ranger racks were divided into several mini-Rangers that were placed in eight centers in five SADC states. Each mini-Ranger forms a footprint for human capital development (HCD), and the initiative has launched an important public-private dialogue that SADC hopes will inspire support for future expansion.

Tshiamo Motshegwa (U-Botswana, SADC Forum Chair)

Ms. Mmampei Chaba (Dept. of Science & Technology, SA) welcomed everyone on behalf of the host country, and Ms. Anneline Morgan (SADC Secretariat) introduced Forum Chair Tshiamo Motshegwa (U-Botswana).

Forum delegates and international advisers further refined the SADC collaborative framework document that last year’s forum attendees had begun to draft; a final version will be presented to the SADC Ministry. There were several newcomers: delegates who hadn’t attended in the past, and representatives from countries that were entirely new to the forum, including Mauritius, Namibia and Seychelles. All were eager to know how their countries could contribute to and engage with the shared CI.

Everyone was eager to discuss best practices relating to data sharing across national borders; an uncomfortable concept for those who haven’t managed research data. The international advisers explained that the ability to facilitate the secure, seamless and reliable transfer of data among SADC sites, and beyond, is crucial to the project’s success. To that end, following the meeting, Motshegwa began to plan a cybersecurity and data conference scheduled for mid-April, 2016 in Gaborone, Botswana. U.S. and European cybersecurity experts are invited to participate. They will share roughly 50 years of collective experience with the facilitation of secure data across peered networks, international computational systems and interfederated CIs.

SADC HPC Forum delegate questions frame future discussions and HCD programming. While technology training has been the focus, each site is encouraged to add education, outreach, communication, and external relations skills to their teams. By building teams that include a combination of technical and soft skills, they will be better prepared to support and sustain their CI programs for the future.

“The CHPC National Meeting, co-located meetings and student programs would not have been possible without the generosity of sponsors Intel, Dell, Altair, Seagate, Eclipse Holdings, Mellanox, and Bright Computing. We appreciate their continued support,” said Conference Program Planner Happy Sithole (CHPC).

Presentations and videos from the awards evening are available on the CHPC website.

2016 CHPC National Meeting

Please join us in East London, South Africa for the 2016 CHPC National Meeting, and SADC HPC Forum Dec. 5-9, 2016.

Photo by Adel Groenewald

Plan to spend extra time while visiting this beautiful region, and be sure to pack your hiking boots! It will be summer in South Africa, and you’ll find some of the best hiking, bird watching, fishing, swimming, and horseback riding in the world. East London is South Africa’s only river port city. The nearby Wild Coast region has miles of unspoiled, white beaches framed by thick forests that give way to steep, rocky cliffs. The area is steeped in Xhosa tradition, and their farms are scattered along the coast. It’s sparsely populated, and it’s likely you’ll meet more cows than people. With a favorable international currency exchange rate, you’ll only be limited by your energy and time.

About the Author

Elizabeth Leake is the president and founder of STEM-Trek Nonprofit, a global, grassroots organization that supports scholarly travel for science, technology, engineering, and mathematics scholars from underrepresented groups and regions. Since 2011, she has worked as a communications consultant for a variety of education and research organizations, and served as correspondent for activities sponsored by the eXtreme Science and Engineering Discovery Environment (TeraGrid/XSEDE), the Partnership for Advanced Computing in Europe (DEISA/PRACE), the European Grid Infrastructure (EGEE/EGI), the South African Center for High Performance Computing (CHPC), the Southern African Development Community (SADC), and Sustainable Horizons, Inc.
