The Importance of Team Science at XSEDE15

By Faith Singer-Villalobos, Communications Manager, Texas Advanced Computing Center

August 7, 2015

“I’m continuously inspired by her passion, her commitment and her innovative approaches for advancing research, education and the recruitment and retention for a larger and more diverse community of practitioners,” said Scott Lathrop, XSEDE director of Education and Outreach, as he introduced Dr. Ann Quiroz Gates to the podium at the 4th annual XSEDE15 conference.

Dr. Gates is a professor and chair of the Computer Science Department at The University of Texas at El Paso (UTEP). She also directs the NSF-funded Cyber-ShARE Center of Excellence, established in 2007. The mission of Cyber-ShARE is as follows:

To advance and integrate cyber-enhanced, collaborative, and interdisciplinary education and research through technologies that support the acquisition, exchange, analysis, and integration of data, information and knowledge to solve complex problems.

Among her many other accomplishments, Dr. Gates leads the Computing Alliance for Hispanic-Serving Institutions, which focuses on the recruitment, retention and advancement of Hispanics in computing; is a founding member of the National Center for Women & Information Technology; won the 2015 A. Nico Habermann Award and the 2010 Anita Borg Award for Social Impact; and was named by Hispanic Business Magazine as one of the Top 100 Influential Hispanics in 2006.

Her passions are clearly collaborative research and diversity.

“In the last two decades, there has been a surge in investments in large-scale team science projects,” Gates said. “The term team science denotes a team of diverse members who conduct research in an interdisciplinary manner. The term convergent research is also often used in this context. The success of working in large, diverse teams is influenced by a variety of factors that impact efficiency, productivity and overall effectiveness.”

In her plenary talk at the XSEDE15 conference, Gates discussed what some of the experts are researching in this exciting and growing field. Her project, Cyber-ShARE, is an example of team science (aka collaborative science). “Cyber-ShARE is an interdisciplinary team across computer science, geological and environmental science. We support interdisciplinary research and collaborations across campus (at UTEP) that broaden interdisciplinary research.”

More and more research is being conducted on the importance of team science. In discussing it, Dr. Gates refers to the National Research Council’s definition: bringing together small teams and larger groups of diverse members to conduct research in an interdependent manner. Team science can be practiced in a number of ways within and across disciplines, and several terms describe where a team falls on this continuum:

  • Transdisciplinary: integrate and transcend disciplinary approaches to generate fundamentally new conceptual frameworks, theories, models and applications
  • Interdisciplinary: integrate information, data, techniques, tools, perspectives, concepts and theories across disciplines, working jointly
  • Multidisciplinary: incorporate two or more disciplines working independently

“A team science approach is needed because of the complexity of the scientific and social challenges we’re facing in this world,” Gates said. “Addressing complex problems requires contributions from different disciplines, communities and professions.”

There is evidence in the form of publications and patents that large, diverse team efforts result in greater productivity, reach, innovation and scientific impact. “Certainly this arises from the ability of the members to draw on each other’s diverse expertise. Diversity influences how decisions are made and can positively impact the group’s effectiveness.”

However, diversity also brings challenges. Gates broke them down into three major groups: 1) Knowledge negotiation and communication; 2) Shared resources; and 3) Team effectiveness.

“Problems exist around knowledge negotiation and communication, such as the lack of a common vocabulary and the inability to communicate about research goals and integrate solutions around the research problem. Also, oftentimes the teams are geographically dispersed, so shared resources, or the lack thereof, must be considered. In addition, identifying expertise and organizational boundaries brings challenges. Misalignment of goals can also lead to conflict. Disciplinary boundaries evolve, reflecting the changing nature of goals over time,” Gates said.

So, how do you work in a group with a large number of team members?

It requires communication, coordination and high positive interdependence — members working together to accomplish a shared task. As a result, there has to be strong leadership that can assign and facilitate interdependent tasks that integrate the unique talents of the individual members to accomplish shared goals.

The NSF Extreme Science and Engineering Discovery Environment (XSEDE) project is a great example of team science. The project supports the ability of a very large team dispersed around the world to use advanced digital resources and services that are critical to the success of science.

Gates points to the XSEDE Industry Challenge program as an example.

The XSEDE Industry Challenge program brings together researchers, scientists and engineers from academia and industry with interdisciplinary backgrounds, deep knowledge in disciplines, and technical and professional skills. The program is intended to establish a new model for cooperative and collaborative research between industry and academia that transcends traditional disciplinary boundaries.

XSEDE believes this kind of industry-academia research holds potential for future economic and societal benefit in both the industrial and academic worlds.

Gates agrees with XSEDE’s view and notes the need for more support of organizations such as XSEDE that have invested in promoting virtual, interdisciplinary communities and projects.

Team science is crucial for the success of projects that involve students, particularly those from underrepresented groups, who wish to become researchers or computer scientists. The Affinity Research Group (ARG) model identifies students who have the capability, but perhaps not yet the competence, to be involved in research. The model focuses on developing the social and team-building skills needed to be successful researchers, and it encompasses many of the best practices recommended by experts in team science.

“The premise here is to change the culture by preparing students to effectively work in teams. Students are our future workforce — this work has been published in the Journal of Engineering Education.”

The essential elements of the ARG model are as follows:

  • Establish core purpose
  • Structure positive interdependence
  • Practice promotive interaction
  • Teach professional skills
  • Ensure individual accountability
  • Reflect on how well or poorly the group performs

“You have to work on teaching the skills,” Gates explained. “You can’t assume that students know what they need to know to work effectively. Members of a team must know what their individual role is and how it maps back to the bigger goals and sub-goals.”

In essence, to learn is to become a member of a practicing community and to develop a deep commitment to the work and to each other’s success. “Learning takes place in meaningful and authentic activity,” according to Gates. “The work of each individual makes a local contribution as well as a global contribution. Expert participants serve as models for professional practice for novices, imparting the community’s values, tools, language, knowledge and skills through everyday work and interaction. They develop a deep commitment to the work and each other’s development and success.”

Team science is about how the national science community can become more inclusive in what it does, and there is a lot of work being done in the science of team science. Gates concluded by emphasizing that the role of diversity in team science is extremely important and extends to age, gender, ethnicity and culture.

A PDF of “Enhancing the Effectiveness of Team Science” is available for download at http://www.nap.edu/catalog/19007/enhancing-the-effectiveness-of-team-science.
