Pattern Computer – Startup Claims Breakthrough in ‘Pattern Discovery’ Technology

By John Russell

May 23, 2018

If it weren’t for the heavy-hitter technology team behind start-up Pattern Computer, which emerged from stealth today in a live-streamed event from San Francisco, one would be tempted to dismiss its claims of inventing something revolutionary called “pattern discovery” in contrast to conventional pattern recognition. The HPC community is wary of black box claims in which spectacular results are presented or promised without revealing the underlying technology.

Pattern Computer, flying under the radar as Coventry Computer for the past couple of years, is the brainchild of technologist and entrepreneur Mark Anderson, who has assembled a team including some very familiar HPC names: Michael Riddle, chief systems architect at Pattern (Autodesk founder); James Reinders, systems architect (Intel); Irshad Mohammed, software development engineer (Fermilab); Ty Carlson, CTO (Amazon and Microsoft); and Eric Greenwade, technical fellow (Microsoft, LLNL, LBNL, LANL), to name just a few.

Add to that two very impressive alpha clients – Larry Smarr, Calit2 director, a man with access to rather substantial HPC resources, and Lee Hood, founder of the Institute for Systems Biology and developer of early automated DNA sequencing machines used in the Human Genome Project – who discussed in glowing terms the early results from and potential impact of Pattern Computer’s technology in bioscience.

It’s hard to dismiss such a lineup, however chary Pattern Computer is about revealing technical details. In a nutshell, Pattern Computer says it has developed an approach to exploring data that permits very high dimensionality exploration in contrast to the pairwise approach that now dominates. It has also figured out how to do the calculations more efficiently with existing hardware architecture organized specifically for this kind of data exploration. Exceedingly complicated network layers are not required. Fancy math and software, and clever hardware architecture are.
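To make the pairwise-versus-high-dimensional distinction concrete, here is a minimal toy illustration (ours, not Pattern Computer's): a target that is an exact function of two variables taken jointly yet shows essentially zero pairwise correlation with either one, which is exactly the kind of higher-order structure that pairwise screening misses.

```python
# Hypothetical toy example (not Pattern Computer's method): a higher-order
# relationship invisible to pairwise statistics. Here y = XOR(x1, x2); each
# variable alone is uncorrelated with y, yet together they determine it exactly.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 100_000)
x2 = rng.integers(0, 2, 100_000)
y = x1 ^ x2                        # exact function of (x1, x2) jointly

print(np.corrcoef(x1, y)[0, 1])    # ~0: pairwise screening sees nothing
print(np.corrcoef(x2, y)[0, 1])    # ~0: pairwise screening sees nothing
# Only a joint (higher-dimensional) view of (x1, x2, y) reveals the pattern.
```

Pairwise screening dominates because it is cheap; joint structure like the above is only visible when variables are examined together, which is the regime Pattern Computer says it targets.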

Here’s what Anderson told HPCwire in a pre-briefing:

“A simple way to put this in historic terms would be to say if you look at the entire history of human-computer interactions until now, essentially what you are seeing, I think, is we tell the computers what we want and then the computers come back with what we have asked for in a better and better means, faster and more accurate and more of it. What we are hoping now to see is a true inflection point that moves from that kind of relationship to one in which we tell the system we want something and the system brings back something we have never expected before, don’t even understand why it is there. And then of course the question becomes why is it there, and we will actually be able to tell people why it is there. So these will be true discoveries.”

“We may never expose everything we know, all of our crown jewels. We are going to trade secret more than patent to protect our most important secrets. You can assume that we have found new ways of using that hardware. And we have a lot of proprietary mathematics and software to do that – which we do. You can probably assume that over time things will get more and more complex and that there will be more and more hardware, unique hardware involved. But basically we are trying to make use of the most advanced [hardware now available]. So we really like non-von Neumann chips as an example and we think that the heterogeneous chip architecture is the only way to go.”

In the release accompanying the launch, Pattern lays out its claim thusly, “Pattern discovery is an emerging category – an extension of the machine learning field – that distinguishes itself by using both supervised and unsupervised learning. While pattern recognition solutions are widely available, pattern discovery uniquely identifies previously hidden, higher-order correlations in vast datasets without instructions as to where or what to look for.”

IBM SyNAPSE TrueNorth Array, circa 2015

There’s a lot to digest here. It’s not clear how much similarity exists between Pattern Computer as announced today and the nascent plans formulated in 2015, which called for using IBM’s TrueNorth neuromorphic chip (see GeekWire article, New startup building ‘desktop supercomputer,’ seeking big breakthroughs using chips that work like the human brain). That earlier design, also called Pattern Computer, grew out of a challenge issued and a solution sketched out during the October 2015 Future in Review (FiRe) Conference, owned by Anderson. Many of the same people are involved now.

According to a Pattern Computer spokesman, “That was merely the beginning for Coventry/Pattern Computer. What’s being announced is fully realized and ready for additional deployment, featuring more advanced computer systems, a data center, headquarters and partnerships in place — all developed over the past few years in stealth.” Coventry was founded in 2016 and is headquartered on San Juan Island, Washington. Headcount is under 50. The company declined to name its investors.

Today’s event was labelled Splash 1 and focused on the company’s basic capabilities and their application to bioscience as a demonstration use case. James (Ben) Brown, department head, molecular ecosystems biology, LBNL, and chair, environmental bioinformatics, University of Birmingham, UK, was instrumental in helping Pattern Computer develop its biomedical practice. Anderson advised bracing for further Splashes around other domains.

“This is a universal system, so it doesn’t care what arena you’re in or what silo it’s in or what type of data it looks at. As far as we can tell it is completely not religious about that,” said Anderson. “These Splash waves will have different types of companies with them. So this first wave is biomedical. Each one will be completely different from the prior one, partly because we want to show that it’s able to do that work but also because I think it establishes an important truth in design of computing where one doesn’t have to be on a highly-supervised, and then finely-tuned algorithm to that exact science, but in fact one can use a general approach and have deep success.”

The intent is to sell “discovery” as a service. “We really don’t want to be box sellers,” said Anderson. Clients simply provide the data set they believe represents the problem. “You would have an area expert of your own who we would work with, a PI of some kind. We have people who do the ingest of the data and they would work with that person. Once we have it we’ll take it from there, and come back and show you what we have discovered and help you understand what that means to you.”

It sounds a bit magical, which it isn’t, and that is not the impression Pattern Computer wants to convey. Still, the tight-lipped posture will likely spur some skepticism as well as efforts by many in the HPC community to uncover the technical details. Fundamentally, said Anderson, Pattern Computer has developed a new way to look at the problem space – a method that relies on leveraging high dimensionality rather than huge data sets, exhaustive iteration, or many-layered network training.

“We can do very high dimensional analysis, essentially n dimensional analysis where most folks are dealing with pairwise functions,” said Anderson. “We’ll be talking on the 23rd about two fields. One is cancer. The other is personalized medicine. In both cases, and in very short periods of time, we’ve been able to make discoveries and in each case it is not by doing what you might guess. It’s not by running against [a data set of] 10,000 instead of 5,000. We are not using that kind of tool kit. But we have been able to look at things which are very high dimensional.

“I think you know the usual stuff, using those tools of yesterday giving single pairwise information on genetic contribution to cancer. People struggle with getting beyond that. We can do, so far up to six, and have actually done much higher numbers. We can take 20,000 variables and reduce them to the six that matter and then actually understand the dynamic relationships between those six. No one as far as I know has ever done that before. We are working with teams who are oncology teams now, academic and institutional.”
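For readers who want something runnable to anchor the idea of narrowing 20,000 variables down to a handful, the sketch below uses entirely conventional, publicly available tooling (scikit-learn univariate selection on synthetic data) as a stand-in. It is not Pattern Computer’s method, and the feature count is scaled down for speed.

```python
# A conventional stand-in (not Pattern Computer's proprietary approach): shrink a
# wide feature matrix to a handful of informative columns, then examine how those
# few interact. Dataset, feature counts, and selector are all assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in: 500 samples, 2,000 features (in place of 20,000), 6 informative.
X, y = make_classification(n_samples=500, n_features=2_000,
                           n_informative=6, random_state=0)

# Univariate screening: keep the six features with the highest mutual information.
selector = SelectKBest(mutual_info_classif, k=6).fit(X, y)
top_six = np.flatnonzero(selector.get_support())
print("candidate features:", top_six)
```

Notably, univariate screening like this can miss variables that matter only through their interactions with others, which is precisely the gap Pattern Computer claims its high-dimensional analysis closes.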

Working with a well-known and heavily investigated breast cancer database, Anderson said the Pattern Computer team did a first run and “found a druggable discovery in about 24 hours.”

The proof points offered today are impressive. Smarr, of course, is a longtime HPC pioneer who in recent years has been investigating the human microbiome, including developing novel computational tools. Anderson said, “Larry had been using other HPC tools. We were able in a very short time, about a week, to do runs against the data that had already been exposed to others and find new things for him to help him create a new hypothesis and research angle, and find out literally new dynamics of disease description.”

Hood has explored virtually every aspect of life sciences technology. His centerpiece concept is what he calls P4 Medicine (Predictive, Preventive, Personalized and Participatory), which in broad terms would use blood biomarkers and digital technology to characterize a person’s health, including genetic and environmental factors. Done in a timely way, the hope is that P4 can inform research, clinical care, and health maintenance.

Compressing the details of the work Hood (P4), Smarr (IBD and microbiome), and Brown (breast cancer) discussed is challenging. A description of Smarr’s work is available on the Pattern Computer website. It was clear that high-dimensional analysis allowed each of them to gain new, sometimes unexpected insight, and that doing so requires a special platform. For example, it was noted that the time required to identify all the interactions of six genes in a 20,000-gene set would take at least 25 years on very high-end HPC resources. Pattern Computer reported completing the task in one day, and the results led to new actionable insight in the cancer work.
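As a rough sanity check on the scale being described, the back-of-envelope arithmetic below (our numbers, not the company’s; the per-subset evaluation rate is an assumption) counts the six-gene subsets of a 20,000-gene set and converts the count into wall-clock time.

```python
# Back-of-envelope arithmetic only; the evaluation rate is an assumed figure.
from math import comb

subsets = comb(20_000, 6)          # ~8.9e22 distinct six-gene combinations
rate = 1e14                        # assumed subset evaluations per second
seconds_per_year = 3.156e7
years = subsets / rate / seconds_per_year
print(f"{subsets:.2e} subsets -> ~{years:.0f} years at an assumed {rate:.0e} evals/s")
```

Roughly 9×10^22 subsets at an assumed 10^14 evaluations per second works out to about 28 years of brute-force search, in line with the “at least 25 years” figure cited; the raw count also falls inside the 10^20 to 10^40 range Anderson gives below.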

Pattern Computer’s strategy is to use project results and respected third-party testimonials such as these – rather than a detailed explanation of its technology – to attract users. Time will tell how viable that approach is. In any case, said Anderson, the Pattern Computer method requires purpose-built architecture.

“One thing we are going to talk about [at the launch] is why couldn’t you try to run this on an HPC system today? Why bother with redesigning the entire stack? We actually have come up with a mathematical proof of why it is so hard. And the numbers are rather astonishing. We think it’s somewhere between 10^20 and 10^40 calculations, the numbers of cycles are so high, you couldn’t do it even at Livermore [National Lab]. It’s just too much. We have found ways of reducing the problem so we can deal with very high numbers of variables and yet not have to do what you would normally have to do on a supercomputer.”

Pattern Computer currently has adequate computing resources, according to Anderson, but plans to scale up. “We are already doing runs against databases that are usually done on a supercomputer or a cluster. [Our] datacenter is not huge but it works. It’ll get bigger. At some time in the future we might be free to talk a little bit more about that architecture but it will be different from what you have been used to seeing.”

We’re left with something of a black box quandary, but the highly credentialed technical team and early users convey credibility. It will be interesting to watch how the company fares.
