A Virtual Conference for a Volatile World

By Cathy Davidson

April 22, 2010

This week HASTAC (an acronym for the Humanities, Arts, Science, and Technology Advanced Collaboratory, pronounced “haystack”), a network of networks now 4,500 strong, put on Virtual HASTAC, one of the first international all-virtual conferences, drawing on just about every virtual technology available to us in 2010. Given that a volcano in Iceland has now caused the greatest air traffic stoppage since 9/11, and that many predicted the H1N1 flu would do the same this past winter, HASTAC 2010: Grand Challenges and Global Opportunities could not have been more timely. If this cloud of volcanic ash does not dissipate soon, more and more conferences, business meetings, and other events will have to be held virtually. HASTAC 2010 offers an excellent preview of how that can happen.

First, it took planning. Our HASTAC team at the University of Illinois organized it all, in an exciting collaboration among many units, including the Institute for Computing in Humanities, Arts, and Social Science (I-CHASS) and the National Center for Supercomputing Applications (NCSA). That is a lot of acronyms, but what they add up to is collaboration across virtually every imaginable area of an enormous university. Led by HASTAC Steering Committee member and co-founder Kevin Franklin, the Illinois team sent out a call for papers and selected a full three-day conference roster of over fifty presentations, some prepared in advance and some performed “live” online. Needless to say, if we had had 150 or so presenters flying in from all over the world, plus the 400 conference registrants, a volcano in Iceland would have shut down the conference. I imagine a lot of well-laid plans were scuttled this weekend for this very reason.

The presentation that HASTAC co-founder David Theo Goldberg and I gave, “The Future of Thinking,” started off the round of presentations. Here’s how we made it happen. David and I recorded an hour-long bicoastal conversation: he sat against a backdrop of books in Irvine, CA, I did the same at Duke University, and, using iChat, Sheryl Grant of the University of North Carolina interviewed us about our book, The Future of Thinking (MIT Press, 2010). It turned into a surprisingly lively and live-feeling video conversation. Although none of us was in the same room when we taped it, it plays with the interactive quality of a face-to-face event, but at a fraction of the cost. The technology itself cost less than a hundred dollars. Since we wanted it to look professional, the expert videographers in each place and the excellent editors at the John Hope Franklin Center at Duke took additional care with the final product, adding logos and credits, adjusting the sound, and otherwise polishing it. Even with all of the labor costs added in, the video came to less than $300.

If David, Sheryl, and I had met in the same place and stayed a day for the filming, the technology costs would have been the same, plus we would have lost three days to bicoastal travel and paid for airfares, hotels, meals, and all the rest. The cost of just this one-hour presentation could well have exceeded $2,000. Multiply that by fifty presentations, some with as many as four or five presenters coming from eight countries, and the travel and housing costs of a live conference would have run upwards of $50,000, before a single speaker received an honorarium and before the costs of actual rooms, banquets, hospitality, and the rest. To add to the virtual bottom line: once the technology to host all of the participants’ videos was in place, everything remained there, archived, viewable by anyone who subsequently registers on the site. So far, 400 people have done so, and any of them can go back and watch again.
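For readers who like to see the arithmetic, here is a minimal back-of-envelope sketch of that comparison. The per-presentation figures are the loose estimates cited above, not audited costs, so treat the totals as orders of magnitude rather than a budget:

```python
# Back-of-envelope cost comparison, virtual vs. in-person,
# using the rough per-presentation figures from the article
# (assumptions for illustration, not audited numbers).

VIRTUAL_PER_TALK = 300      # technology plus editing labor for one taped hour
IN_PERSON_PER_TALK = 2_000  # airfare, hotels, meals, and days lost to travel
NUM_PRESENTATIONS = 50      # the three-day conference roster

virtual_total = VIRTUAL_PER_TALK * NUM_PRESENTATIONS        # $15,000
in_person_total = IN_PERSON_PER_TALK * NUM_PRESENTATIONS    # $100,000

print(f"All-virtual:   ${virtual_total:,}")
print(f"All in-person: ${in_person_total:,}+ "
      "(travel and housing alone easily top $50,000)")
```

Even if each estimate is off by half, the order-of-magnitude gap between the two formats stands.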

Topics at Virtual HASTAC represented our network’s interdisciplinary expansiveness. This is not a land of academic turf wars and dancing on the head of a minute disciplinary pin. At HASTAC, we eschew petty academic infighting, and we all contribute, voluntarily, to what we believe is the foundational question of the humanities: what it means to be human in the twenty-first century, including all the history that entails! HASTAC charges no dues; you can become a member simply by registering on the www.hastac.org Web site and begin blogging, suggesting ideas, and contributing today. HASTAC Central, located at Duke University, is a communications hub: you can announce your event on our calendar or tell us your news. HASTAC Scholars, 130 graduate and undergraduate students across North America and beyond, are the “eyes and ears” of HASTAC, reporting on events in their areas, and “area” is broadly and loosely defined.

Papers at Virtual HASTAC included technology presentations that explained how specific technologies, from tele-immersive environments to cloud computing, function and what they can accomplish for you, in your intellectual life and in your community. Others talked about pedagogy and how it makes no sense to have a lot of new toys and then teach in the same old way. Another was about restoring the world’s oldest copy of Homer’s great epic, the Iliad, which for the last hundred years has been disintegrating in an archive in Venice. Now it is not only being digitized; new visualization technologies let us see what is on the page but too faded for the naked eye, while “crowdsourcing” allows many eyes to look and interpret together. Another was about digitally recreating the archive of Soweto, 1976, a powerful historical moment lost in the tumult of political revolution.

There were art presentations, experimental films, and a concert. Mobile Voices, a project based in L.A., showed how it was using technology for literacy and social activism, reaching from college students to Chicano/a day laborers. The Berkman Center at Harvard organized a four-country panel (US, UK, Netherlands, Bulgaria) on matters of urgency, from the way internet data explodes traditional social science methodologies to a protest against a UK bill that would seriously limit internet and WiFi access.

Because it was virtual, the conference could be as expansive as HASTAC itself. Anyone could choose what to participate in and what to ignore. And “participation” wasn’t just watching. Alongside the videos, the developers of the still-beta technology Google Wave hosted an impressive, continuous, three-day-long Wave where anyone, anywhere in the world, could engage in real-time chat with several other participants at a time. At the session David and I did on “The Future of Thinking,” I spent an hour online fielding in-depth responses to our prerecorded conversation, with follow-up questions, suggestions, and further thoughts offered by fifteen or twenty people who typed not only to me but to one another. This conversation was also archived, so anyone can go back and revisit it later. There were some glitches from an overloaded system, but the exchange was still lively, engaged, and interactive. The biggest impediment, though, wasn’t any beta technology. David himself wasn’t able to be online at the time. Why? He was in an airport in Amsterdam and, like the rest of the world, anxiously waiting out a catastrophic volcano.

There were also presentations in the 3D virtual environment Second Life. One panel, led by HASTAC Scholar Ana Boa-Ventura, was on dance and performance, and participants were welcomed in-world. If you wanted, you could pick up a free t-shirt for your avatar, created by HASTAC member Liz Dorland. Those experienced in Second Life helped the newbies, both in the virtual world and via Google Wave. My favorite moment was when Fiona Barnett, Director of the HASTAC Scholars, typed to Jen Guiliano (the amazing graduate student responsible for organizing so much of the conference): “Sorry, Jen, I think I just stepped on you!”

That made for a lot of laughter, but it was also a stellar reminder that we were all part of a very interesting experiment. Since HASTAC had taken on the responsibility of communicating with a larger world, we used Twitter and Facebook as well as the HASTAC blogs with RSS feeds to get the word out. We were IM’ing and, of course, using YouTube. That is a host of technologies, and at one point I found myself watching the conference sessions on my desktop, contributing to a Google Wave conversation on my laptop, and tweeting to our followers on my iPod touch. When the telephone rang, I was paralyzed for a moment!

We were as exhausted at the end of Virtual HASTAC as conference organizers ever are. We also shared a hilarious post-conference sigh of relief when Jen, Fiona, Pam Fox (one of the developers of Google Wave, based in Australia), and I joked about sharing a HASTAC cocktail we would call The Wave. Fiona posted our favorite current music video, the amazing “Tightrope” by Janelle Monáe, and we all danced in our actual spaces, on two continents and in four cities, laughing about it together online, using Google Wave.

That may not be a “real” conference, but it beats waiting out the volcano at your local airport. We happen to think it’s the conference of the future.
