NCSA Welcomes 2019-2020 Faculty Fellows

May 21, 2019

May 21, 2019 — The National Center for Supercomputing Applications (NCSA) has named seven new Faculty Fellows for the 2019-2020 academic year. The NCSA Faculty Fellowship is a competitive program for faculty and researchers at the University of Illinois at Urbana-Champaign that provides seed funding for new collaborations that include NCSA staff as integral contributors to the project.

THE BIG PICTURE: MEDIA, CAPITAL AND NETWORKS OF INFLUENCE

Faculty Fellow: Rini Mehta (Dept. of Comparative and World Literature, College of Liberal Arts & Sciences)
NCSA Collaborators: Kalina Borkiewicz, Sandeep Puthanveetil Satheesan, Luigi Marini

Abstract: The Big Picture proposes to build a map of the global network of media, corporate power, and political influence that was engendered by globalization and which in turn continues to shape our world in the 21st century. The world that we inhabit today is caught in the interstices of political, economic, and cultural forces that operate between the fault lines of nations, regions, and ideologies. Visual technologies and arts are as complicit in manufacturing and manipulating history as they are in disseminating it as it unfolds. Our project will connect realpolitik and representation to capture a media history of the current times in a global context. Using methods gleaned from ontology-based models such as FOAF (Friend of a Friend), The Big Picture will bring together media studies, history, statistics, data curation, and advanced visualization to produce a dynamic interface accessible through a website. This project will partner with the PI’s Global Film History from the Edges project, which has been granted $150,000 by the University of Illinois Presidential Initiative for the Advancement of Humanities and the Arts.
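As a purely illustrative aside (not taken from the proposal), the following minimal Python sketch shows how one slice of such an influence network might be encoded as FOAF-style RDF triples using the rdflib library; the entity names and the custom ex:finances predicate are hypothetical placeholders.

```python
# Minimal sketch (not from the proposal): one slice of a media-influence
# network encoded as FOAF-style RDF triples with rdflib. The entities and
# the custom ex:finances predicate are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/bigpicture/")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

studio = EX["ExampleStudio"]
financier = EX["ExampleHoldingCo"]

g.add((studio, RDF.type, FOAF.Organization))
g.add((studio, FOAF.name, Literal("Example Studio")))
g.add((financier, RDF.type, FOAF.Organization))
g.add((financier, FOAF.name, Literal("Example Holding Co.")))

# A custom predicate capturing a capital/influence relationship.
g.add((financier, EX.finances, studio))

print(g.serialize(format="turtle"))
```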

PREDICTING INTERNATIONAL FOOD SECURITY CRISES: A DATA-DRIVEN APPROACH

Faculty Fellow: Hope Michelson (Dept. of Agricultural and Consumer Economics, College of Agricultural, Consumer, and Environmental Sciences)
NCSA Collaborators: Liudmila Mainzer, Aiman Soliman

Abstract: In food crises, faster and more accurate evaluation and response can save lives and resources. Methods currently in use to predict such crises have limitations that delay and impede humanitarian response: they are not model-driven, and they do not engage the full scope of available data. Because government policymakers and non-governmental organizations often fail to recognize specific food-insecure populations, scarce resources to mitigate hunger can arrive too late and in the wrong places. In many parts of the world, crises of this sort are on the rise, requiring improved methods to identify their scale and scope. Developing and deploying an effective early warning system is urgent, given the expectation that climate shocks disrupting agricultural production and market functioning will increase in frequency and severity in the coming decades.

We propose to develop and test a new model-driven method for predicting food crises across the world. Dr. Michelson’s previous work (Lentz, Michelson, Baylis and Zhou, 2018) demonstrates that prediction of food security crises can be improved by exploiting publicly available, high-frequency, spatially resolved data. The proposed collaboration with NCSA’s Data Analytics Group and the NCSA Genomics Group will take this research to the next level: developing new data sources for prediction and applying state-of-the-art machine learning techniques to the prediction problem.
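For illustration only (and not the specification used in Lentz, Michelson, Baylis and Zhou, 2018), a sketch of the kind of data-driven classifier involved might look like the following, with synthetic stand-ins for the high-frequency, spatially resolved indicators:

```python
# Illustrative sketch only: a simple classifier trained on synthetic
# high-frequency, spatially resolved indicators to flag districts at
# risk of a food-security crisis. Feature names and data are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(size=n),   # staple-price anomaly
    rng.normal(size=n),   # rainfall anomaly
    rng.normal(size=n),   # vegetation index (NDVI) anomaly
    rng.normal(size=n),   # recent conflict events (scaled)
])
# Synthetic label: crises more likely with high prices, low rain, low NDVI.
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 1.0 * X[:, 2] + 0.5 * X[:, 3]
y = (logit + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```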

DISCOVERY OF ADVANCED NANODIELECTRICS THROUGH AI-ACCELERATED MULTISCALE SIMULATION FROM FIRST PRINCIPLES

Faculty Fellow: Yumeng Li (Dept. of Industrial and Enterprise Systems Engineering, The Grainger College of Engineering)
NCSA Collaborator: Erman Guleryuz

Abstract: The research objective of this proposal is to create a new-generation, artificial intelligence-enabled multiscale simulation framework and its enabling techniques, rooted directly in first-principles theory, for a comprehensive understanding of the cross-scale multiphysics phenomena in dielectric polymer nanocomposites. In addition to features like easy processing and light weight, polymer nanocomposites demonstrate great potential for realizing highly enhanced combined properties that meet the needs of advanced dielectrics in applications from energy storage to power delivery. However, the current lack of understanding of the fundamental mechanisms behind the property enhancement necessitates modeling of complex phenomena (ranging from nanoscale to macroscale) using a high-performance multiscale simulation framework. The new multiscale simulation framework employs artificial intelligence (AI) for an effective integration of first-principles calculations, physics-based atomistic simulations, and data-driven predictive analytics, thereby concurrently leveraging the high accuracy of first-principles calculations and the high efficiency of AI-enabled predictive data analytics. Built upon the PI’s research experience in both multiscale simulation and polymer nanocomposites, this proposal will focus on four research thrusts to address grand challenges in developing the new framework: 1) developing machine learning potentials for the interface based on high-throughput first-principles calculations, 2) characterizing nanoscale local interfacial electro-mechanical-thermal properties using AI-accelerated atomistic simulations, 3) predicting macroscale electro-mechanical-thermal properties considering interface effects, and 4) validating the proposed multiscale simulation framework.
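As a toy illustration of thrust (1), and not the proposal's actual descriptors or first-principles workflow, the snippet below fits a machine-learning surrogate (a Gaussian process) to synthetic energies along a single, made-up interface coordinate:

```python
# Toy illustration of thrust (1): fit a machine-learning surrogate to
# synthetic first-principles-style energies so the expensive calculation
# could be replaced inside larger atomistic simulations. The 1D "interface
# separation" coordinate and Morse-like energy are stand-ins, not the
# proposal's actual descriptors or DFT data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
r = np.linspace(0.8, 3.0, 40)[:, None]           # interface separation (arb. units)
E = (1 - np.exp(-2.0 * (r - 1.2)))**2 - 1.0       # Morse-like reference energy
E_noisy = E.ravel() + rng.normal(scale=0.01, size=len(r))  # noisy "DFT" samples

kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(r, E_noisy)

r_new = np.array([[1.0], [1.5], [2.5]])
E_pred, E_std = gp.predict(r_new, return_std=True)
for ri, ei, si in zip(r_new.ravel(), E_pred, E_std):
    print(f"r = {ri:.2f}: E approx {ei:+.3f} +/- {si:.3f}")
```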

ENABLING LONG-TERM REUSE OF EXPERIMENTAL AND COMPUTATIONAL DATASETS ON PROTEIN DYNAMICS

Faculty Fellow: Diwakar Shukla (Dept. of Chemical and Biomolecular Engineering, The Grainger College of Engineering)
NCSA Collaborator: Luigi Marini

Abstract: Modern molecular simulations of proteins on high-performance computing resources such as Blue Waters generate extensive, atomistically detailed information about protein dynamics, which could be leveraged to obtain insights into the molecular origins of human disease, the design of therapeutics, and the bioengineering of plants. However, the key challenge is to convert the terabytes of biomolecular dynamics data generated on supercomputers into a format accessible to an experimental researcher. In this proposal, we present an approach that not only generates suggestions for optimal experiments based on simulation data (e.g., for validation of simulations) but also integrates the existing experimental and simulation information to generate comprehensive models of protein dynamics that are missing from the current literature. We have developed algorithms that maximize information gain in the design of experiments given simulation data. We propose to work with NCSA collaborators to implement a cloud-based platform and a user interface for this proposed service. NCSA will benefit from working on this project by gaining more expertise in applying cyberinfrastructure in the realm of biomolecular dynamics. The biggest impact of the proposed study is that it provides an accessible tool for experimental researchers to help harness the knowledge hidden in the large protein simulation datasets generated using Blue Waters and other high-performance computing resources. This work will have a transformative impact on how protein science is conducted by experimental and computational research groups.
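The information-gain idea can be sketched in a few lines; the following simplified example (not the group's actual algorithm) scores candidate experiments by their expected reduction in uncertainty over a small set of candidate models:

```python
# Minimal sketch of information-gain-based experiment selection: given a
# prior over candidate models (e.g., conformational states suggested by
# simulation) and each experiment's predicted outcome distribution per
# model, pick the experiment with the largest expected entropy reduction.
# The numbers are illustrative only.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

prior = np.array([0.5, 0.3, 0.2])        # belief over 3 candidate models

# likelihood[e][m, o]: P(outcome o | model m) for experiment e
likelihood = np.array([
    [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]],   # experiment A discriminates well
    [[0.6, 0.4], [0.5, 0.5], [0.4, 0.6]],   # experiment B is less informative
])

def expected_info_gain(lik, prior):
    h_prior = entropy(prior)
    eig = 0.0
    for o in range(lik.shape[1]):
        p_o = np.dot(prior, lik[:, o])                 # marginal P(outcome o)
        posterior = prior * lik[:, o] / p_o            # Bayes update
        eig += p_o * (h_prior - entropy(posterior))    # weighted entropy drop
    return eig

gains = [expected_info_gain(likelihood[e], prior) for e in range(len(likelihood))]
print("expected information gain (bits):", np.round(gains, 3))
print("best experiment:", int(np.argmax(gains)))
```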

HIGH-PERFORMANCE, MULTI-OBJECTIVE, AND MULTI-PHYSICS DESIGN OPTIMIZATION OF NEXT-GENERATION, PATIENT-SPECIFIC IMPLANT SCAFFOLD AT SCALE

Faculty Fellow: X. Shelly Zhang (Dept. of Civil and Environmental Engineering, The Grainger College of Engineering)
NCSA Collaborator: Erman Guleryuz

Abstract: With recent advances in tissue engineering, the design and fabrication of implant scaffolds have become emerging areas of research, as traditional implants fail to fulfill required functionalities for specific patients. While topology optimization offers a promising method for scaffold design, existing studies are limited in their ability to address multiple design scenarios and to finely control porosity for the highest performance. To address these challenges, the proposed research aims to create a high-performance, multi-physics, and multi-objective topology optimization framework for the design of next-generation, patient-specific implant scaffolds with enhanced multifunctionality. The proposed formulation addresses both mechanical and mass transport design requirements using multi-objective formulations and simultaneously controls the location, size, and shape of porosities through local constraints. To successfully realize the high complexity of the scaffold structures, the proposed research requires large problem sizes (hundreds of millions of degrees of freedom) and 3D multi-physics simulations, which must rely on massively parallel supercomputers. The PI will work closely with NCSA to develop highly scalable algorithms and high-performance computational frameworks for efficient optimization and to utilize large-scale supercomputers in order to achieve ultra-high-resolution designs.

The proposed work will build upon an open-source parallel code based on the PETSc suite of libraries. Through a proof-of-concept benchmark on Blue Waters, the workflow was successfully tested and showed excellent scalability. The supercomputing infrastructure and domain experts at NCSA will provide essential support for the success of this project. The state-of-the-art methods created in this research carry great potential to contribute to the synergy between NCSA and the members of NCSA’s Industry Program from the life sciences sector. Patients receiving the optimized scaffolds developed through this project would benefit from better functionality and better clinical results, ultimately contributing to better health and living conditions.
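For readers unfamiliar with density-based topology optimization, the following small serial sketch shows one standard ingredient, the optimality-criteria density update under a volume constraint. It is illustrative only, with made-up sensitivities, and is not the PETSc-based parallel code the project builds on:

```python
# Small serial illustration of one ingredient of density-based topology
# optimization: the optimality-criteria (OC) update of element densities
# under a volume constraint, with a bisection on the Lagrange multiplier.
# Sensitivities here are fabricated for demonstration.
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2):
    """One OC step: x = densities, dc = compliance sensitivities (<= 0),
    dv = volume sensitivities (> 0), volfrac = target volume fraction."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        x_new = np.clip(x * np.sqrt(-dc / (dv * lmid)),
                        np.maximum(0.0, x - move),
                        np.minimum(1.0, x + move))
        if x_new.mean() > volfrac:
            l1 = lmid          # too much material: raise the multiplier
        else:
            l2 = lmid
    return x_new

rng = np.random.default_rng(2)
n = 1000                                  # number of "elements"
x = np.full(n, 0.4)                       # initial uniform density
dc = -rng.random(n)                       # fake compliance sensitivities
dv = np.ones(n)                           # uniform volume sensitivities
x = oc_update(x, dc, dv, volfrac=0.4)
print("volume fraction after update:", round(x.mean(), 3))
```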

THE WAR ON PROFESSIONAL EXPERTISE: THE GLOBAL SPREAD OF ONLINE MYTHS ABOUT MEDICINE AND HEALTH

Faculty Fellows: Kevin Leicht (Dept. of Sociology, College of Liberal Arts & Sciences), Brant Houston (Dept. of Journalism, College of Media)
NCSA Collaborator: Loretta Auvil

Abstract: The spread of dubious or downright false information (sometimes referred to as “fake news”) is a growing social, cultural, and scientific dilemma, and the situation is especially troubling when it comes to information about medicine and public health. The most recent manifestation of the real-world consequences of dubious medical information is the spread of measles and its link to anti-vaccination websites and memes. But that is only the most recent example; others include the peddling of conspiracy theories and fake cancer cures, organized misinformation about stem cell research, and the spread of dubious claims about alternative medicines. There is further evidence that some of this dubious information is deliberately produced for financial gain or to fuel cultural discord.

The purpose of this project is to examine the routes through which medical misinformation spreads in the news and social media. The research will examine medical misinformation in four areas: (1) vaccinations, (2) cancer cures, (3) the spread of the Ebola virus, and (4) the safety of contraception. Misinformation is defined as publicly available and disseminated information that is not supported by, or is actively contrary to, established medical advice. For this fellowship, we will use the considerable news resources of the Cline Center archive and then use the resources and expertise of NCSA to (1) explore methods for searching and applying models to identify relevant news articles on our selected healthcare topics, (2) develop a model for identifying dubious and false information in these articles, (3) render the data suitable for quantitative analysis, and (4) aid the principal investigators in conducting the analysis. The pilot research from this fellowship will form the basis for a much larger research grant to be submitted to the Knight Foundation or the National Institutes of Health.
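As a minimal, purely illustrative sketch of step (2), a supervised text classifier for flagging dubious articles could be prototyped along these lines; the training examples below are invented placeholders, not data from the Cline Center archive:

```python
# Minimal sketch of step (2): a supervised text classifier that scores
# articles as likely misinformation or not. The training examples are
# invented placeholders; the real labeled corpus and models would be
# developed with NCSA from the Cline Center archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "vaccine schedules reviewed and endorsed by pediatric association",
    "miracle cure eliminates cancer in days, doctors hide the truth",
    "clinical trial reports contraceptive safety findings in journal",
    "secret ebola cure suppressed by the government, share before deleted",
]
train_labels = [0, 1, 0, 1]   # 0 = consistent with medical consensus, 1 = dubious

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

new_article = "essential oils shown to replace measles vaccination"
print("P(dubious) =", round(clf.predict_proba([new_article])[0, 1], 2))
```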

ABOUT NCSA

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.


Source: NCSA
