Conference Report: SC15

My week in Austin started out cold and rainy.

This past week, I had the privilege of attending SC, the biggest annual supercomputing conference in North America. I was one of about ten students studying high performance computing (and related fields) who were funded to attend by a travel grant from the HPC topical group of the Association for Computing Machinery. It was a blast, and I learned a ton.

I haven’t had much time to write up any science results, so I figured I’d share a few brief highlights of the conference instead.

Vast Scale

SC15 was by far the biggest conference I’ve ever attended. There were more than 10,000 people registered… and the scale showed. The plenary talks, attended by most of the conference, were like rock concerts, complete with stage lighting and huge crowds.

The plenary sessions for SC15 were like rock concerts.

And in addition to the technical program, there was a massive exhibition, with booths manned by scientists, government organizations, and corporate vendors—anyone with an interest in supercomputing. I spent a long time at the NASA booth chatting with the scientists about their research.

The SC15 exhibition is quite impressive.

A Focus on the Future of Supercomputing

The high-performance computing community is currently working hard to prepare for the next generation of supercomputers, the so-called exascale machines, which will turn on in five years or so. These machines will be orders of magnitude faster and more parallel than current systems. And although this brings opportunity, it also brings huge challenges.

How do you run a program on a supercomputer so large that a component fails every day? How do you write programs that can take advantage of all that computing power? To do so, you essentially need to write many, many programs, each of which runs on a different piece of the supercomputer. (We do this already, but it will be much harder on exascale machines.)
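
To make that last idea concrete, here is a minimal sketch of the pattern used on today's machines, written with MPI, the standard message-passing library (a generic illustration, not code from any particular talk). The same executable is launched many times, and each copy uses its "rank" to pick its own piece of the problem:

```cpp
// Minimal SPMD sketch with MPI: the same executable is launched once per
// process (e.g. `mpirun -np 1000 ./a.out`), and each copy works on its
// own slice of the problem, identified by its rank.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // which copy am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // how many copies are there?

    // Each copy handles a different piece of the (hypothetical) work.
    std::printf("Process %d of %d working on chunk %d\n", rank, size, rank);

    MPI_Finalize();
    return 0;
}
```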

About half of the talks and panels I attended addressed these problems, and people are taking many different approaches. For example, I attended a tutorial on a programming library called HPX, which uses the concept of a future—a promise to return some data after calculating it—to express how parallel programs should run. I also attended a session on Charm++, which treats each piece of a parallel program as an independent object that can communicate and interact with the other pieces of the program. Both of these ideas are designed to help people cope with ultra-parallel programs.
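
To give a flavor of the future concept, here is a small sketch in plain standard C++ (std::async and std::future) rather than HPX itself; HPX's interface is modeled on these same standard futures, extended to work across a whole machine:

```cpp
// A future is a handle to a value that is still being computed. Other work
// can proceed while the computation runs, and .get() blocks only when the
// result is actually needed.
#include <future>
#include <iostream>

long expensive_simulation_step() {            // stand-in for real work
    long sum = 0;
    for (long i = 0; i < 100000000; ++i) sum += i % 7;
    return sum;
}

int main() {
    // Launch the work asynchronously; we get a future back immediately.
    std::future<long> result = std::async(std::launch::async,
                                          expensive_simulation_step);

    std::cout << "Doing other useful work while the step runs...\n";

    // Block only at the point where the value is required.
    std::cout << "Result: " << result.get() << '\n';
    return 0;
}
```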

Highlight: Alan Alda

The plenary speaker on opening night was Alan Alda, the actor. Alda is a major science advocate. In his talk, he not only argued strongly for the need for science communication, but he also argued for his vision of how it should be done. Alda felt that scientists need to be trained as communicators who can read their audience and bring the subject matter to them. To this end, Alda has started an organization that trains scientists to be better communicators: The Alan Alda Center for Communicating Science.

It was a very good talk. I didn’t know about the center, but now I want to take one of those classes!

I took this picture from the Alan Alda Center’s website. Presumably it shows scientists learning to communicate.

Highlight: Reduced Order Modelling

One of the most interesting talks I saw was the one on “reduced order modelling.” The idea is this. Suppose you’re an engineer and you want to use computer simulations to help you design whatever it is you’re designing, say, an airplane. Unfortunately, simulating the air flow over the body of the craft takes a long time… hours or days on a supercomputer. So if you change one thing, you wait hours to see what happens. Not very useful for design. How do you handle that?

Well, a new class of techniques tries to answer exactly this. Basically, the entire space of possibilities can be represented by splicing together the results of just a few simulations… enough to get a representative idea of what’s going on. The techniques that do this are called “reduced order modelling.” In fact, this is exactly how gravitational-wave scientists use numerical models of gravitational waves to predict what detectors like LIGO will see.
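
As a toy illustration of the flavor of these methods (everything here is hypothetical, and real reduced order models project onto a reduced basis computed from the snapshots rather than blending them this crudely), here is a sketch: a few expensive simulations are precomputed at sample parameter values, and a new parameter's solution is estimated by combining them:

```cpp
// Toy sketch of the reduced-order-modelling idea: a few expensive
// simulations ("snapshots") are precomputed offline at sample parameter
// values; online, a new parameter's solution is approximated by blending
// snapshots instead of rerunning the full solver. Real ROMs project onto
// a reduced basis (e.g. via POD); inverse-distance weights stand in here.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

using Field = std::vector<double>;  // e.g. pressure at each mesh point

// Blend precomputed snapshot fields, weighting each by how close its
// parameter (say, angle of attack) is to the requested one.
Field reduced_order_estimate(const std::vector<double>& params,
                             const std::vector<Field>& snapshots,
                             double query) {
    Field out(snapshots[0].size(), 0.0);
    double total = 0.0;
    for (std::size_t i = 0; i < snapshots.size(); ++i) {
        double d = std::abs(params[i] - query);
        if (d < 1e-12) return snapshots[i];   // exact hit: reuse snapshot
        double w = 1.0 / d;                   // inverse-distance weight
        for (std::size_t j = 0; j < out.size(); ++j)
            out[j] += w * snapshots[i][j];
        total += w;
    }
    for (double& v : out) v /= total;         // normalize the weights
    return out;
}

int main() {
    // Hypothetical: three full simulations at angles 0, 5, and 10 degrees.
    std::vector<double> angles = {0.0, 5.0, 10.0};
    std::vector<Field> snaps = {{1.0, 2.0}, {1.5, 2.5}, {3.0, 4.0}};

    Field est = reduced_order_estimate(angles, snaps, 7.5);  // new angle
    std::cout << "Estimated field: " << est[0] << ", " << est[1] << '\n';
    return 0;
}
```

The point of the design is that the expensive part happens once, offline; each new design query is then just a cheap combination of stored results.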

Stanford professor Charbel Farhat gave a very nice overview of the methods and their industrial applications.

Reduced order modelling means that an engineer designing this plane could get near-instant feedback about how it behaves. Credit: David Amsallem

More?

By necessity, I am leaving many amazing talks, workshops, and panels out of this article. But hopefully it gave you a taste of what SC15 was like. I may have more to say in the future. But I think that’s all for now.

2 thoughts on “Conference Report: SC15”

  1. Hello Jonah

    Glad to hear that you attended the conference and wrote down your experiences. It’s nice reading your articles and blog, so I will keep up with your posts. I am a software programmer and a physics major, and I’m really into research nowadays.

    Sanjeeb

    1. Thanks for reading, Sanjeeb. I’m glad you liked it! If you have any questions, feel free to contact me.
