SC13 Wrapup: Supercomputing’s Top Themes

By Nicole Hemsoth

November 24, 2013

For those of us who traveled to Denver for SC13, it’s now back to “normal” as the year in high performance computing begins its slow descent into relative silence before a fresh start in 2014.

Sitting down to plow through the plethora of news items to pluck for a top announcements article seemed impossible without first discussing some of the broader trends and themes—they beg to be heard. The hard news breakdown can be found here, but context is everything during a time of flux, and each one of our newsy picks embodies at least one of these themes.

Outside of some of the vendors and organizations that had a great showing last week, there are a few topics and specific machines worth mentioning as topical “best in show” picks. Forgive the rare bit of “personal pronouning” I’m about to do, but with so many great conversations with you all last week, it’s hard to leave those experiences out.

For now, we shall begin this thematic breakdown with the topic that you expected…but with some (possibly) unexpected details about its relative weight during the show…

Exascale

Let me guess…you probably saw this topic at the top of the list and said, “well, of course”… And while that’s no surprise given what exascale means for the HPC community (in terms of research and commercially driven technology development, funding drives, and competitive appeal), in some ways this topic wasn’t the star of the show.

Let’s just be honest here. Ever since China topped the TOP500 charts with what some in the U.S. are calling an “insurmountably” high-performance system, the momentum and excitement around the “race” seems to have cooled. It’s hard to get excited about a dash to a finish line when there are thousands of yards between the runners.

But it’s just a matter of timing and technology refreshes, say many. The introduction of innovative processor, memory and interconnect technologies, especially around 2015, is set to breathe new life into the race, spawning a new set of runners and sending some major ripples through what appear, for now anyway, to be very still waters. In the meantime, it’s slow and steady toward the goal.

This topic of exascale on the U.S. front was not without its own news announcement, however. Early in the week we broke word of a new investment in exascale technologies, this time from the Department of Energy’s Office of Science and the National Nuclear Security Administration (NNSA). The two organizations awarded $25.4 million in R&D contracts to “accelerate the development of next-generation supercomputers.”

This new funding effort rests under the DoE’s “DesignForward” initiative, which is a follow-on to the wider exascale ambitions put forth by the FastForward project. As one might imagine, it involves a number of the “usual suspects” for this sort of project. AMD, Cray, IBM, Intel’s federal division, and NVIDIA are all going to “work to advance extreme-scale, on the path to exascale, computing technology that is vital to national security, scientific research, energy security and the nation’s economic competitiveness.”

The emphasis of the DesignForward contracts is on the development of interconnect technologies architected for energy efficiency, high bandwidth and strong I/O capability. According to project leaders, “Under the new contract, Intel will focus on interconnect architectures and implementation approaches, Cray on open network protocol standards, AMD on interconnect architectures and associated execution models, IBM on energy-efficient interconnect architectures and messaging models and NVIDIA on interconnect architectures for massively threaded processors.” They note that, “The vendors will collaborate with DOE’s Exascale Co-design Centers to determine how changes in the system architectures will affect how well the scientific applications perform.”

Notice the lack of urgency in the language there… “working to advance”… “on the path to exascale”… but after all, it’s the thought (and money) that counts, right? And there are many who are counting. Counting down to the reality, counting up the number of government dollars that have been pushed toward the efforts, and counting on the fact that the investments will be returned to the public following the sustained focus on supercomputing—some are even counting by twos to keep up with the continued push-back on the projected year.

Interestingly, the technical program’s emphasis on exascale shared the stage with a few other topics of more contemporary appeal, most notably Hadoop (more on that in a moment). Still, the challenges on the energy, programming, reliability and other fronts were explored in great detail by a number of key presenters and served as the topical backdrop for many of the larger conversations and innovations.

Hadoop and Big Data

Let’s all agree that these are not the same thing, even if they are generally lumped into the same conversations.

In fact, the resounding sentiment I picked up from numerous non-vendor conversations last week is that HPC has always been about data—and yes, that data has always been big.

While many seem to feel that the attention around big data is driven by the vendor and commercial user communities, there’s no doubt that the tooling—on both the systems and software fronts—is worth the attention this community is starting to pay it. And shouldn’t the big data folks be looking here too? Because after all…

If your definition of big data revolves around complex datasets (structured versus unstructured), or around data use that needs to think beyond (or even before) MPI, or if there’s just plain too much of it and managing and storing it (off to tape, in memory, in a cloud somewhere) is a challenge, there was likely a lot at SC13 for you. Again, it’s not just about the Hadoopery that so often serves as the focal point. We will hit on a few of the specific announcements around “big data” in the news edition of our SC13 wrapup, but it’s fair to say that every vendor had a story—and often a solid one—about how to manage massive, complex datasets.

With that said, aside from the larger trend of categorizing “big data” as a natural part of HPC (or the reverse, depending on who you ask), Hadoop and MapReduce were at the core of almost as many sessions as exascale, judging by session titles and descriptions. Further, many vendors saved their key announcements for the supercomputing show, even if the audience was tuned for a wider world of technology users. Intel expanded on its Hadoop distro in detail, Cray and others emphasized the role their boxes play for Hadoop workloads with customized hooks, all the storage vendors danced a strange little dance with the topic (when they weren’t busy spinning Lustre around), and Adaptive Computing and others made announcements around how their tech can play nice with the tech world’s biggest buzzword since “big data” itself hit the show floors. It is dizzying, isn’t it?
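
For those of you on the pure MPI side of the fence who have so far dodged the buzzword entirely, here is a minimal, library-free Python sketch of the MapReduce idea at the heart of all that Hadoopery. The function names and the toy word-count task are mine, purely for illustration; this shows the programming model only, not any vendor’s distribution or API.

    from collections import defaultdict

    def map_phase(document):
        # The "map" step: emit a (word, 1) pair for every word, with no
        # knowledge of any other document. Each call is fully independent.
        for word in document.split():
            yield (word.lower(), 1)

    def reduce_phase(pairs):
        # The "shuffle" and "reduce" steps: group pairs by key, sum the counts.
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    documents = ["big data is big", "HPC has always been about big data"]
    all_pairs = (pair for doc in documents for pair in map_phase(doc))
    print(reduce_phase(all_pairs))  # {'big': 3, 'data': 2, ...}

The appeal is plain enough: because every map call is independent, a framework can scatter that work across thousands of nodes without the programmer writing a line of explicit communication. That tradeoff against hand-tuned MPI is exactly what so many of those sessions circled around.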

Actually, some of the most compelling of those “big data” stories came from those you might not expect (or hear as much from). This is especially true on the “orchestration” and management front. Traditional workload management software, for example, is doing double duty (and managing to double its reach, stretching for the first time outside of “pure” HPC and into the enterprise) by being robust enough to cope with some dramatic data demands at scale. We talked at length with Univa, Adaptive Computing, and even a smaller company from France, SysFera, about what they’re doing at the orchestration level to make management of complex data more practical for both scientific and commercial environments. Again, more on that during our news recap.

HPC: It’s Not Just Academia Anymore…

To this you could probably argue that it never was, depending on your perspective and current place of employment on the academia/commercial spectrum. But this year, perhaps (far) more than ever, most of what we were hearing from those who are “traditional” HPC vendors is that there is an ever-increasing demand for their goods and services outside of the expected quarters.

The concept of “productizing HPC” is really taking off, and a few vendors seem to manage this split very well while others struggle to wrap their unique technologies in a message with much broader appeal. But let’s face it—now, more than ever, companies with large-scale infrastructure concerns (and that’s almost anyone whose business success hinges on adept data wrangling) are looking to tried and true technologies that are proven at massive scale. And who are they going to learn this from? HPC.

The largest systems on earth, the most robust software to manage all that iron, and the breed of applications, tools and support ecosystems purpose-built to run at mind-boggling core counts (with a dash of acceleration thrown in) are finally sounding a wake-up call to the rest of the world. The technologies all of you folks are developing have a broader home…look around. And let me assure you, this isn’t a shameless plug when I tell you that HPCwire spun out a new publication this year, EnterpriseTech, for exactly these reasons. HPC is growing up and out—we don’t see a need to divide the community into two pieces (scientific vs. commercial), but the expansion of supercomputing technologies into mainstream large-scale environments is happening fast and deserved a more focused outlet, one that directs its attention to the wider world of these technologies you folks are developing, refining and leading as they trickle down the enterprise ladder. It’s cool. Plain and simple.

We handed out a couple of Editor’s Choice Awards this year simply due to companies’ unique ability to expand some traditional supercomputing technologies into far wider markets. Notable winners there include Cray (which has captured some compelling enterprise customers and managed to take its messaging as a “supercomputing company” onto a bigger plane by listening to the market), SGI (which has managed to fine-tune a message and product line that balances supercomputing/HPC with a much wider commercial appeal), and Univa (which boasts massive commercial growth of a technology rooted in HPC efforts via Grid Engine). We watched as other companies, including Penguin Computing, tweaked their offerings by listening to the hyperscale/large-scale shops that are asking for Open Compute designs backed by the perceived reliability of a company that’s built large-scale systems. IBM and NVIDIA hooked up in an effort to expand GPU computing to a wider group of potential users. Even tape storage vendors, especially Spectra Logic, have found new life in catering to an expanding array of commercial needs with new tooling. It’s fun to watch, isn’t it?

This is certainly not to say that scientific computing won’t take the topical cake at SC13 and the shows ahead. But it is to say that these tools are going to see an explosion of interest, adoption and, hell, for that matter, press from the wider world of technology. HPC has arrived.

So with so much momentum, potential and exploration possible, this raises another question entirely—one that is its own “top topical pick” from the show…

Where Are All The Startups in HPC?

Seriously.

Each day, the news feed here at HPCwire HQ is flooded with announcements of “big data” vendors landing x million dollars in Series A funding for tooling whose competitive angle is often vague and difficult to pin down. More database vendors than one can shake a stick at. And why? Because “big data” is sexy. Don’t ask me why, but in a very all-encompassing, hopelessly generalized, technologically fleeting sort of way, it just is.

The real question for the many innovators out there is this: how do we bring the sexy back? To HPC, that is, because there was a day when this was all very fancy and special and, yes, sexy.

Dazzling scientific simulations? Yep, we have those. Dramatic feats of massive scale? Check. Theoretical technologies being developed in stealth mode? Ab-so-lutely! So where is the missing link? We’re going to be exploring that throughout 2014. Every hyper-hyped technology lately got its start because it scaled, because it was big, and because it powered the unfathomable. You, holy halls of scientific computing at the national lab scale, have something to learn from them, they’ll say. But they are well aware that you have a great deal to offer. MPI, Lustre, GPU computing—these are filtering in, trickling down from supercomputing mountain. Look out, world!

As the wider vendor and user world wakes up to the fact that the HPC community was doing truly awe-inspiring work before the Hadoop elephant was ever stuffed—that it’s always been about “big data” on this side of the fence—we’re going to be here to catch that news and push it out. HPC needs investment. These technologies are among the few proven at massive scale. This is our year—send me your stories, your stealth-mode progress, your ideas, your vision—and let’s share HPC with the rest of the world. I have a feeling that none of us have ever been the “cool kids” (sorry if that’s inaccurate, but I know a lot of you—ha!), but this is our chance to take over the technology lunchroom. Know what I mean?

Forward-Looking Processors/Accelerators

If you stay tuned tomorrow for the announcement-driven SC13 wrapup, we’ll shed more light on the processor and accelerator news picks, but suffice to say, there were some great “looking ahead” announcements from some surprise vendors, including Convey Computer and Micron.

We sat down for a close-knit briefing with Intel to discuss some of the specifics of the Knights Landing chip, which has the potential to shake up the HPC processing ecosystem; watched NVIDIA roll out more power with its K40; and, as noted above, drew in our breaths at some of the neat ideas coming from new processor outliers, including Micron (please do read this), which has done something really interesting in exploiting the inherent parallelism of memory, and Convey, which took a noteworthy dip in the specialty processor pool.

Although it doesn’t necessarily fit neatly into the mix, there was a lot of talk about quantum computing at the show. And of course, wild speculation about whether or not this “thing” from D-Wave can technically be called such given the entanglement questions. Again, this is an issue we’ll explore more in 2014, but suffice to say, the mainstream media has picked up on this idea in a big way, so expect a plethora of (creatively inaccurate and under-researched/informed) material about this topic. We’ll do what we can to stretch our brains in the coming year to deliver some perspective from its primary research leaders at D-Wave, Google, Lockheed Martin and others.

It’s Lustre’s Year to Shine

Lustre marks a great example of an HPC-born technology that is bound for great things in the larger enterprise world. A few of the forward-looking vendors are taking notice of this momentum and adding it to their offerings for reasons that scale past the orders they’re taking from X National Lab or university.

It seemed to make sense to mention it here because it was such an important part of many vendor offerings and, more important, of conversations with the very few potential end users who were cruising the floor shopping for solutions (that’s another topic—where are all the end users at this show, and how do we reel them in?). In the news edition of the SC13 wrapup jabber that will come out tomorrow evening, the vendor spotlight will be on these announcements.

Denver Has Awesome Beer

That is all.

And Now, Talk Amongst Yourselves…

Please send along your thoughts (for publication or fun) about a few other topics that we noticed, including:

  • The range, depth and scale of the technical sessions are something to behold. From sys admins to center directors, it was hard to find something that wouldn’t appeal to someone. Impossible, even. Kudos to the SC committees that put these sorts of things together.
  • How many storage vendors are there exactly? And how to differentiate?
  • Did you notice a difference in the show’s size or “bling” due to government shutdown?
  • Those student cluster kids are outstanding. Will you hire them?
  • Who had the best booth in terms of demonstrations?
  • Did it seem like there were more young people milling about than usual (or am I just so old now that everyone under 35 looks 25)?
  • Denver has awesome beer. New Orleans (SC14) is a better place to drink it.