Kinetica Now Accessible as a Service on Microsoft Azure

October 14, 2021

ARLINGTON, Va., Oct. 14, 2021 — Kinetica, the database for time and space, is now easily accessible as a service on Microsoft Azure, giving organizations real-time contextual analysis and location intelligence on massive data sets with reduced computing infrastructure and lower costs.

Organizations across industries rely on Kinetica’s vectorized database to analyze data from sensors and machines in real time where other technologies can’t keep up. For instance, one of the largest global retailers uses Kinetica to deliver dynamic, real-time inventory replenishment across its entire supply chain; several of the largest global telcos use Kinetica to optimize network planning with coverage visualizations; and the Defense Department uses Kinetica to monitor airborne threats over North America.

“Vectorization is ideal for IoT use cases where streaming geospatial and time-series data gets fused with other static or streaming data at speed. It historically required exotic hardware and specialized–and scarce–skills, putting it out of reach for all but the largest and most well-funded organizations or government entities,” said Nima Negahban, founder and CEO of Kinetica. “With Kinetica’s vectorized database now available as-a-service on Microsoft Azure, that has all changed. Any organization can harness the power of Kinetica for IoT initiatives — and it can be deployed in minutes.”

Kinetica on Microsoft Azure is fully managed by Kinetica, integrated with Microsoft Azure monitoring, and equipped with a modern user interface for ease of use. It provisions in minutes, streamlines data ingestion, and delivers seamless analysis to provide exceptional time-to-value. Its consumption-based pricing lets users pay as they go, choosing between vectorized CPU pricing and GPU pricing.

Breaking the Space-Time Barrier

IoT data is forecast to reach 73 zettabytes by 2025, according to IDC, while a recent study by Deloitte estimates that 40% of IoT devices will be capable of sharing location in 2025, up from 10% in 2020. This makes data with both a time-series and spatial component the fastest-growing category of big data this decade. Adoption of geospatial data is becoming more widespread across the business sector. Emerging high-value use cases that leverage continuous readings over time with geospatial coordinates include proximity-based marketing, smart grid operations management, environmental remediation, contact tracing, spatial determination of health outcomes, connected car services, fleet optimization, and others.

Data across time and space presents organizations with three fundamental challenges:

  • Sensor and machine data is far larger and faster moving than the first generation of big data sets, which consisted mostly of human-generated interactions with the web
  • The value from this data comes from fusing data together for context through geospatial and temporal joins, rather than traditional primary- and foreign-key relationships
  • Insights come from machine learning with advanced capabilities for geospatial and time-series analytics
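The second point above — joining on proximity in space and time rather than on key equality — can be sketched in a few lines. This is a minimal, hypothetical illustration in plain Python (not Kinetica’s engine or API): sensor readings are matched to vehicles that were nearby within a distance threshold and a time window.

```python
from datetime import datetime, timedelta
from math import hypot

# Hypothetical sample data: sensor readings and vehicle positions,
# each stamped with a time and an (x, y) location.
readings = [
    {"ts": datetime(2021, 10, 14, 12, 0, 5), "x": 1.0, "y": 1.0, "temp": 71.2},
    {"ts": datetime(2021, 10, 14, 12, 0, 40), "x": 9.0, "y": 9.0, "temp": 68.4},
]
vehicles = [
    {"id": "truck-7", "ts": datetime(2021, 10, 14, 12, 0, 0), "x": 1.1, "y": 0.9},
]

def geo_temporal_join(readings, vehicles, max_dist=0.5,
                      window=timedelta(seconds=30)):
    """Pair each reading with vehicles close in BOTH space and time —
    a join predicate, not a primary/foreign key match."""
    out = []
    for r in readings:
        for v in vehicles:
            close_in_space = hypot(r["x"] - v["x"], r["y"] - v["y"]) <= max_dist
            close_in_time = abs(r["ts"] - v["ts"]) <= window
            if close_in_space and close_in_time:
                out.append((v["id"], r["temp"]))
    return out

print(geo_temporal_join(readings, vehicles))  # → [('truck-7', 71.2)]
```

Only the first reading matches: it is within 0.5 distance units and 30 seconds of the truck’s position fix, while the second is far away in space. A database built for this workload evaluates such predicates across billions of rows rather than with nested loops.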

The current generation of massively parallel processing (MPP) databases for big data analytics, such as BigQuery, Cassandra, and Snowflake, simply weren’t designed to handle the speed, unique data integration requirements, and advanced spatial and temporal analytics that data across time and space demands. Past approaches to harnessing value from this kind of data have fallen short, resulting in decisions not being made fast enough, a lack of critical context, and suboptimal insights. On top of that, costs are excessive due to inefficiencies in both development effort and compute.

Taking Vectorization From Extreme to Mainstream

Data-level parallelism, or vectorization, dramatically accelerates analytics by performing the same operation on many data elements simultaneously, for maximum performance and efficiency. Data-level parallelism is particularly well suited to the functions required for advanced calculations across time and space, such as window functions, predicate joins, and graph solving. Vectorization underpins AI initiatives and high-performance computing, but vectorized processing has yet to enter the mainstream for analytics workloads such as route optimization, real-time risk assessment, and visual mapping.
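The idea can be shown with a minimal NumPy sketch (an illustration of the general technique, not Kinetica’s implementation): the same time-series delta computed element-by-element in a Python loop versus as a single array operation that the runtime can map onto SIMD units such as AVX-512, or onto GPU threads.

```python
import numpy as np

# Hypothetical time-series column with 100,000 values.
values = np.sin(np.arange(100_000, dtype=np.float64))

def deltas_loop(v):
    """Scalar version: one element per iteration, as a
    non-vectorized engine would process it."""
    out = [0.0]
    for i in range(1, len(v)):
        out.append(v[i] - v[i - 1])
    return out

# Vectorized version: one operation over the whole array at once.
deltas_vec = np.diff(values, prepend=values[0])

# Both produce the same result; the vectorized form is what
# data-level parallel hardware executes efficiently.
assert np.allclose(deltas_vec, deltas_loop(values))
```

The two versions are numerically identical; the difference is that the vectorized form expresses the computation as one bulk operation, which is the shape of work that vector processors and GPUs accelerate.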

Kinetica’s new offering changes that. By harnessing the built-in vectorization capabilities of the latest generation of chips from Nvidia and Intel in the cloud, and making them available as a service, Kinetica now delivers orders-of-magnitude faster processing compared with traditional cloud databases.

At a top financial services firm, for instance, queries that took hours on a 700-node Spark cluster ran in seconds on 16 nodes of Kinetica. At a top retailer, 100 nodes of Cassandra and Spark were consolidated into eight Kinetica nodes. A big pharmaceutical company matched the performance of an 88-node SQL-on-Hadoop cluster with a six-node Kinetica cluster in Microsoft Azure.

“Kinetica’s fully-vectorized database on Microsoft Azure Marketplace significantly outperforms traditional cloud databases for big data analytics,” says Jeremy Rader, GM, Enterprise Strategy & Solutions for the Data Platforms Group at Intel, “and now does so at the same speed as a GPU but at a fraction of the cost by harnessing data-level parallelism using the built-in Advanced Vector Extensions (AVX-512) of our latest 3rd Gen Intel Xeon Scalable processors.”

“We’re pleased to offer Kinetica as a fully vectorized database as-a-service on Microsoft Azure,” says Ramnik Gulati, Director of Product Marketing, Databases, at Microsoft. “Kinetica is the database for space and time, and is now available to more customers and markets with its release into the growing Azure Marketplace ecosystem.”

“Accelerated computing is key to breakthroughs in machine learning, data science, visualization, simulation, and computer-aided design,” said Scott McClellan, senior director at NVIDIA. “Kinetica’s new as-a-service offering on Microsoft Azure enables enterprises to easily speed up the databases that power their work with NVIDIA GPUs.”

Kinetica is available immediately on the Microsoft Azure Marketplace, with an as-a-service offering on AWS to follow later this year. To learn more or to try Kinetica risk free, visit Kinetica on Azure.

About Kinetica

Kinetica helps many of the world’s largest companies solve some of the world’s most complex problems across time and space, including the US Air Force, NORAD, USPS, Citibank, Telkomsel, MSI, OVO, and Softbank, among others. Kinetica is the first fully vectorized database running at scale in the cloud. Organizations across the public sector, financial services, telecommunications, energy, healthcare, retail, automotive, and beyond use Kinetica to load and analyze fast-moving data simultaneously, delivering instant insight. Kinetica offers flexible deployment, pricing, and support models across private and public clouds. Kinetica has a rich partner ecosystem, including Dell, HP, IBM, NVIDIA, and Oracle, and is privately held, backed by leading global venture capital firms Canvas Ventures, Citi Ventures, GreatPoint Ventures, and Meritech Capital Partners. For more information and to try Kinetica, visit kinetica.com.


Source: Kinetica
