The Geodetic Effect: Measuring the Curvature of Spacetime

gravity probe B in curved spacetime
Figure 1. A promotional image created by P. Eekels for Gravity Probe B (GP-B). It shows the GP-B satellite orbiting Earth in a spacetime distorted by Earth’s rotating mass.

A couple of weeks ago, I described the so-called “classical tests of general relativity,” which were tests of early predictions of the theory. This week, I want to tell you about a much more modern, difficult, and convincing test: a direct measurement of the curvature of spacetime. It’s called the geodetic effect. This is the seventh post in my howgrworks series. Let’s get to it.

We know from general relativity that gravity is a distortion of how we measure distance and duration. And that we can interpret this distortion as the curvature of a unified spacetime. When particles travel through this curved environment, they attempt to travel in straight lines. However, the straightest possible path is curved, and this gives the illusion of an acceleration, which we perceive as the force of gravity.

Triangles

General relativity provides us with a very geometric interpretation of gravity, so it is perhaps not surprising that there are measurable geometric consequences of the theory. For example, the presence of curvature changes the way angles work. To see how, let’s consider the humble triangle, like the one shown in figure 2.

a humble triangle
Figure 2. A humble triangle. In flat space, the sum of the interior angles (angles a, b, and c) of every triangle is always 180 degrees.

In flat space (i.e., what we’re used to), the interior angles of a triangle (that’s angles a, b, and c in figure 2) always add up to 180 degrees, no matter the triangle. It’s a theorem, a mathematical fact. (Although it’s not that hard to prove.)

But what if the space is curved? To answer this question, we first have to define what a triangle is in a curved space. Intuitively, a triangle is a collection of three straight lines. However, as we discussed, in a curved space there may not be any straight lines! We must make do with what we have. So in curved space, we build triangles out of lines that are as close to straight as possible. These are called geodesics.

As a brief example, let’s try to construct a triangle on the surface of the Earth, which is definitely curved. On the surface of the Earth, the geodesics are segments of great circles, the largest possible circles you can draw on the Earth. The equator is one great circle, as is every line of longitude. The lines of latitude are not. One possible triangle is shown in figure 3.

triangle on the earth
Figure 3. One possible triangle on the surface of the Earth. The edges are segments of the great circles.

But there’s a funny feature of our triangle in figure 3. Can you spot it? Each of the angles a, b, and c is ninety degrees! So the sum of the interior angles is 270 degrees! In flat space, the sum of the interior angles of a triangle is always 180 degrees. But in curved space, this isn’t the case. And indeed, the curvature controls the sum of the interior angles of a triangle. This amazing fact is summarized in figure 4. The difference between the sum of the interior angles of a triangle and 180 degrees is called the deficit angle.

a comparison of triangles
Figure 4. In flat space, the sum of interior angles of a triangle adds up to 180 degrees. Not so for curved space.

What this means is that we can use deficit angles to measure curvature.
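In fact, for a sphere the relationship is quantitative: Girard’s theorem says the angular excess (in radians) equals the triangle’s area divided by the square of the sphere’s radius. Here is a minimal sketch of that bookkeeping for the triangle in figure 3, which covers one eighth of the Earth’s surface (the radius value is my own approximate number):

```python
import math

def spherical_excess(area, radius):
    """Angle sum minus 180 degrees for a geodesic triangle on a sphere,
    from Girard's theorem: excess (in radians) = area / radius**2."""
    return math.degrees(area / radius**2)

R = 6_371_000.0                        # Earth's mean radius, meters (approximate)
octant_area = 4 * math.pi * R**2 / 8   # the triangle in figure 3 covers 1/8 of the sphere

excess = spherical_excess(octant_area, R)
print(round(excess, 3))   # 90.0 -> the angle sum is 180 + 90 = 270 degrees
```

Shrink the triangle and the excess shrinks with its area, which is why small triangles on the Earth look perfectly flat.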

Arrows

In the case of the Earth, we can measure the interior angles of a triangle by simply walking around it with a protractor (or a gigantic version thereof).

So imagine that you are a cartoon character on a cartoon world painted with lines and letters as in the illustration. You are at angle a. You are so small that the world looks flat and the lines look straight (and you’ve been told they are straight). You have an arrow. Your assignment: walk along the lines from a to b to c to a, as shown in figure 6, while keeping the arrow pointing in the same direction (it’s pointing toward b now, as shown in figure 5). You set off along ab, carefully keeping the arrow pointing b-ward. You get to b and note that your arrow points at right angles to bc. So you set off along bc, carefully keeping the arrow at right angles to your path. You get to c and note that your arrow is pointing backward along ac. You set off along ac, carefully keeping the arrow pointing to your rear. Arriving at a, you note that the arrow is now at right angles to ab. (For experts, we are using your arrow as a stand-in for a tangent vector.) Moving the arrow in this way is called parallel transport.

the beginning
Figure 5. To measure the deficit angle of the triangle, we start by aiming an arrow from angle c to angle b.
gyroscopes man, how do they work?
Figure 6. We carry our arrow around the edges of the triangle. We find that, quite naturally, the arrow rotates.

Notice something? The arrow rotates ninety degrees, as shown in figure 7! It’s no coincidence that the amount that the arrow rotates is the same as the size of the deficit angle. It turns out that parallel transport is one way to define curvature. (For experts, I am exploiting a special case of the Gauss-Bonnet theorem.)

ninety degrees!
Figure 7. If we parallel transport our arrow along all of the edges of the triangle, we find that it’s rotated by ninety degrees.
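You can check this holonomy numerically. The sketch below is my own rough discretization, not anything from the experiments: it drags a tangent vector along the three great-circle arcs of the octant triangle, projecting it back into the local tangent plane at each small step (a standard discrete approximation of parallel transport):

```python
import numpy as np

def arc(a, b, n=2000):
    """Points along the great-circle arc from a to b on the unit sphere."""
    t = np.linspace(0.0, 1.0, n)
    pts = (1 - t)[:, None] * a + t[:, None] * b
    return pts / np.linalg.norm(pts, axis=1)[:, None]

def transport(path, v0):
    """Discrete parallel transport: at each step, project the vector back
    into the new tangent plane and restore its original length."""
    v = v0.copy()
    for p in path[1:]:
        v = v - np.dot(v, p) * p          # drop the component normal to the sphere
        v *= np.linalg.norm(v0) / np.linalg.norm(v)
    return v

# The octant triangle: the north pole plus two equator points 90 degrees apart.
A = np.array([0.0, 0.0, 1.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])
path = np.vstack([arc(A, B), arc(B, C), arc(C, A)])

v0 = np.array([1.0, 0.0, 0.0])            # tangent at A, pointing toward B
v1 = transport(path, v0)
angle = np.degrees(np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0)))
print(round(angle, 1))   # 90.0: the holonomy matches the deficit angle
```

The recovered rotation matches the 90-degree deficit angle of the triangle, up to discretization error.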

Spacetime and the Geodetic Effect

So, we know that we can use parallel transport on the surface of the Earth to extract the curvature of the Earth. Let’s go back to general relativity. Can we use the lessons we learned on the surface of the Earth to calculate the curvature of spacetime due to the mass of the Earth? The answer is a resounding yes!

It turns out that in a curved three-dimensional space, a gyroscope works exactly like your arrow stuck on a curved surface. In other words, a real-world gyroscope is a good stand-in for a three-dimensional tangent vector.

If we take our gyroscope and put it in orbit around the Earth, the direction it points will rotate due to the curvature of spacetime caused by the mass of the Earth. I show an extremely exaggerated version of this in figure 8. In reality, the rotation of the gyroscope is not detectable by eye. This rotation is called the geodetic effect.

Figure 8. An exaggerated representation of the geodetic effect. A gyroscope placed in orbit about the earth precesses due to the curvature of space around the Earth.
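For a circular orbit, the textbook de Sitter formula gives the size of this precession. Here is a back-of-the-envelope sketch; the constants and orbital altitude are my own approximate values, plugged in for illustration:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8           # speed of light, m/s
M = 5.972e24          # mass of the Earth, kg
r = 6.371e6 + 6.42e5  # orbital radius for a ~642 km altitude orbit, m

# de Sitter (geodetic) precession rate for a circular orbit:
# Omega = (3/2) * (G*M)^(3/2) / (c^2 * r^(5/2))   [rad/s]
omega = 1.5 * (G * M) ** 1.5 / (c**2 * r**2.5)

year = 3.156e7                                    # seconds per year
mas = math.degrees(omega * year) * 3600 * 1000    # milliarcseconds per year
print(round(mas))   # roughly 6600 mas/yr, i.e. about 6.6 arcseconds per year
```

Six and a half arcseconds per year is tiny, and it sets the precision a gyroscope experiment has to reach.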

Frame Dragging

The geodetic effect as I’ve described is actually one of two effects that cause our gyroscope to precess. The other effect is called frame dragging and it comes from the rotation of the Earth. Intuitively, as the Earth rotates, it drags spacetime with it, causing additional curvature as space and time mix into each other.

A real explanation of frame dragging is a bit too technical for me to get into now, although it’s something I’d like to cover in the future. So for now, I’ll just mention that this is a related effect that we have to take into account. As figure 9 shows, the geodetic effect rotates the gyroscope in the polar direction, whereas frame dragging rotates it in the azimuthal direction.
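To see why frame dragging is so much harder to measure, here is a rough order-of-magnitude estimate using the orbit-averaged Lense-Thirring rate for a polar orbit. The constants, especially Earth’s spin angular momentum, are my own approximate values:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8           # speed of light, m/s
J = 5.86e33           # Earth's spin angular momentum, kg m^2/s (approximate)
r = 6.371e6 + 6.42e5  # orbital radius for a ~642 km altitude orbit, m

# Lense-Thirring drift, time-averaged over a polar orbit:
# Omega ~ G*J / (2 * c^2 * r^3)   [rad/s]
omega = G * J / (2 * c**2 * r**3)

year = 3.156e7
mas = math.degrees(omega * year) * 3600 * 1000
print(round(mas))   # order 40 mas/yr -- more than a hundred times
                    # smaller than the geodetic drift
```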

Figure 9. The geodetic effect and the frame dragging effect rotate the gyroscope in different directions. Image due to the Gravity Probe B collaboration.

Gravity Probe B

So, this geodetic deviation stuff is all well and good. It’s a nice idea. But is it really measurable? Can we really do this? The answer is a resounding yes! The most direct measurement was made by Gravity Probe B, shown in figure 10.

Gravity Probe B before launch
Figure 10. The Gravity Probe B satellite before launch.

Gravity Probe B is a real experiment almost exactly like the theoretical one I just described. They made ultra-precise gyroscopes and put them on a satellite, which orbited the Earth. And then they watched the gyroscopes precess. The gyroscopes on Gravity Probe B are real marvels of engineering, by the way. At the time they were created, the quartz spheres used in the gyroscopes, shown in figure 11, were the most perfect spheres ever created by humans. They deviate from a perfect sphere by no more than 40 atoms in thickness.

EINSTEIIIIIN
Figure 11. A close-up of part of the gyroscope in Gravity Probe B. It’s being used as a lens for the picture of Einstein in the background. Credit: Gravity Probe B collaboration.

Gravity Probe B measured the geodetic effect with an accuracy of about 0.3% and the frame dragging effect with an accuracy of about 19%. Both measurements agreed with the predictions of general relativity. As amazing as it sounds, we can directly measure the curvature of spacetime.

Lunar Ranging

Gravity Probe B was not the first experiment to measure the geodetic effect. Although it may not be as aesthetically perfect, any spinning object in space can serve as a gyroscope, so long as we can keep track of the axis about which it rotates. For example… what about the moon?

During the Apollo missions (11, 14, and 15), astronauts planted reflectors (basically fancy mirrors) on the lunar surface. The one from Apollo 11 is shown in figure 12. This allows us to shoot lasers at the moon (yes, you read that right—a ground station is shown in figure 13) and have them reflected back at us. And this, in turn, lets us measure all sorts of things: the distance between the Earth and the moon, the rotation of the moon, the precession of the moon’s axis of rotation, and more.

moon mirror!
Figure 12. The lunar reflector planted during the Apollo 11 mission. Image due to the NASA Apollo archive.

These hugely important measurements are called lunar ranging measurements, and we can learn an awful lot from them. For example, they told us that the moon is moving away from the Earth at a rate of 3.8 centimeters per year. They also told us a lot about the makeup of the moon and allowed us to test the strong equivalence principle, which is a foundational idea behind general relativity.

MOON LASER!
Figure 13. Lunar ranging in action, as viewed from the ground facility at NASA Goddard Space Center. Source: NASA Picture of the Day.

You may also recognize some of the properties I listed as exactly what we need to measure the geodetic effect. And indeed, the lunar ranging experiments did it long before Gravity Probe B, and to similar accuracy. (Gravity Probe B was designed to be much more accurate, but it had some problems with the gyroscopes discovered after launch.)
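The expected lunar signal is easy to estimate: the Earth-moon system acts as one big gyroscope orbiting the sun, so the textbook de Sitter formula for a circular orbit applies again. The constants here are my own approximate values, for illustration only:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the sun, kg
a = 1.496e11       # Earth-sun distance, m

# The Earth-moon system is a gyroscope orbiting the sun, so its
# de Sitter (geodetic) precession follows the circular-orbit formula:
omega = 1.5 * (G * M_sun) ** 1.5 / (c**2 * a**2.5)

year = 3.156e7
mas = math.degrees(omega * year) * 3600 * 1000
print(round(mas, 1))   # about 19 mas/yr -- the slow twist of the lunar
                       # orbit that ranging experiments can detect
```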

LAGEOS

We can apply the same principle to human-made satellites as well. And in fact, we custom-built some satellites precisely for this purpose, the LAGEOS satellites, one of which is shown in figure 14.

reflecting sphere
Figure 14. The LAGEOS 1 satellite. Image courtesy of NASA.

The LAGEOS satellites are basically just reflecting spheres sent into space that we can bounce lasers off of. They have actually been used not only to measure the geodetic effect, but also frame dragging. The original claim was a measurement of frame dragging with an accuracy of about 10%. However, many people are still trying to use the satellites to extract a more accurate measurement, perhaps one that can even rival Gravity Probe B.

Concluding Thoughts

Through general relativity, Einstein provides us with a purely geometric interpretation of gravity. Measurements of gravitational redshift like the Pound-Rebka experiment, which directly measure the distortion of distance due to gravity, are one direct measurement of the geometry of spacetime. But parallel transport and geodetic deviation provide us with another direct measurement, one that makes the curvature of space and time manifest. And that’s very satisfying.

I should note that these experiments have only been performed in situations where gravity is weak. The Earth’s gravity holds us on the surface, but it is far from the most extreme situations…. situations like black holes and neutron stars. Even though these experiments agree with Einstein, we shouldn’t use them to rule out the possibility that general relativity fails for extremely massive objects. We need different experiments for that. And I plan to tell you about one of those in the near future.

Related Reading

This is the seventh part of my series on general relativity. Here are the first parts:

Further Reading

Here is the relevant peer-reviewed and popular material for frame dragging, the geodetic effect and measurements thereof.

Parallel Transport on the Surface of the Earth

I introduced the geodetic effect by describing parallel transport on the surface of the Earth. I learned this material from Differential Geometry and its Applications, by Oprea.

The Geodetic Effect and Frame Dragging

The original calculations of the geodetic effect are in German. However, a translation and modern analysis of the work is available here. Unfortunately, it’s behind a paywall.

Lunar Ranging

There is quite a lot of literature on tests of general relativity using lunar ranging. It’s almost a field into itself. Therefore, I figured the best thing to share with you would be these two review articles, which are free to read and both quite good.

LAGEOS

  • Here’s the original paper measuring frame dragging using the LAGEOS satellites.
  • Here’s a Nature News article on the discovery.
  • Here’s a review of measurements of frame dragging using satellites. (This is a preprint, but it was published in Space Science Reviews.)
  • Here’s a proposal to measure frame dragging with an accuracy of 1% using laser ranging.

Gravity Probe B

  • Because lunar ranging beat it to the geodetic effect and LAGEOS beat it to frame dragging, there’s some controversy about the merit of Gravity Probe B. It’s summarized in this Nature News article.
  • The press release for the results of Gravity Probe B is available on YouTube. You can find it here.
  • The Gravity Probe B results paper is available here, but it’s behind a paywall.
  • The journal Classical and Quantum Gravity has released a focus issue on Gravity Probe B, with many free-to-read papers on the subject. You can find it here.
  • Of particular interest is the summary paper, found here.

Acknowledgements

Thanks to Reddit user John_Hassler for corrections.

 

Posted in Astrophysics, Physics, Relativity, Science And Math | Tagged , , , , , , , , | 1 Comment

Book Review: Beyond the Galaxy

Beyond the Galaxy
Figure 1. The cover of Ethan Siegel’s new book, Beyond the Galaxy.

Earlier this year, I was asked to review Ethan Siegel’s upcoming book, Beyond the Galaxy, shown in figure 1. I got an advance copy, dug in, and really loved what I found. With Ethan’s permission, I wanted to repost my review here so you can all read it.

The Review

The history of science is filled with ideas that were once compelling, but have since been ruled out by empirical evidence. Ethan Siegel’s Beyond the Galaxy understands this fundamental truth of science. With eloquence and clarity, Siegel tells us the story of the universe, from the (inferred) cosmic inflation and the Big Bang at the very beginning to the (predicted) Big Freeze at the very end. In the process, Siegel also tells us the story of the sciences that study the universe.

Many authors describe how we learned that obsolete ideas like geocentrism and tired light are wrong. However, Siegel also illustrates why these ideas were widely accepted in the first place: They explained the world well. Given the limited knowledge of their time, they were good ideas.

And modern science is just as susceptible to revision. Siegel speculates about how our current best ideas may be wrong and enthusiastically describes the mysteries that plague us. In doing so, Siegel teaches readers about both the study of cosmology and the nature of scientific inquiry. Beyond the Galaxy is one of those rare books that not only communicates scientific ideas, but communicates what science itself is all about.

Whet your appetite?

As far as I’m aware, you can’t buy Ethan’s book yet. But here are the links anyway.


The Holometer

Figure 1. A member of the holometer team works on the device. Image due to Fermilab.

You may have heard the buzz about the holometer, shown in figure 1, before. It’s a giant laser interferometer, much like those used to search for gravitational waves, designed to detect quantum fluctuations in the fabric of spacetime. At least, that’s the claim. The holometer team just released a preprint of their first science paper. And of course, a Fermilab press release appears in Symmetry Magazine.

The article is good, and I recommend you read it. And the holometer experiment is good, interesting science. But I have to say, I’m extremely annoyed by how much the holometer team is overselling their experiment. The scientific paper is honest, but the press surrounding the experiment really oversells it. And I blame the science team, or at least the leader of the team.

Pixelization

The headline of the Symmetry Magazine article is “spacetime is not pixelized!” But a more accurate title would be: “one model of quantum gravity, which predicted quantum fluctuations much larger than anybody else believes, has been ruled out! There are many more models not ruled out!” In particular, the model in question is a heuristic description by the head of the experiment, Craig Hogan. And as physicist Sabine Hossenfelder describes, it’s not at all convincing. Hogan also argues that there’s a connection between his model and the holographic principle, but I’ve never understood it.

Also, let me be clear that, when Hogan says “spacetime is pixelized,” he means there’s a fuzziness due to the Heisenberg uncertainty principle. He doesn’t mean spacetime is discrete, which is a different approach to quantum gravity.

Utility as Proof-of-Concept

The holometer team also argues that their experiment is a proof-of-concept that interferometry can be used to test quantum gravity, but I’m extremely sceptical that any model, other than Hogan’s, can be tested. Sorry, I’m about to get a little technical.

The authors claim a sensitivity of ~10^{-20} m/√Hz. But the Planck scale, where we expect quantum fluctuations of spacetime, is many orders of magnitude smaller… like 10^{-32} m/√Hz. It takes a very special model, basically Hogan’s, to predict quantum fluctuations of spacetime on a directly measurable scale.

Other Applications?

Although I don’t think the holometer will ever be a good test of quantum gravity, it may be a useful tool for searching for other stuff. At the 2015 Midwest Relativity Meeting, I saw a talk by a member of the holometer team describing her work. She’s using the holometer data to search for so-called exotic sources of gravitational waves: things that probably don’t exist, like cosmic strings. This is good, valuable research. We don’t think this stuff is out there, and we probably won’t find anything, but it’s worth looking anyway.

Related Reading

If you enjoyed this post, you may enjoy some of these other posts.

  • In Distance Ripples, I describe how gravitational waves work.
  • In Quantum Geometry, I describe my own research in quantum gravity, called Causal Dynamical Triangulations.

Small Corrections

Thanks to Leo Stein‘s corrections, I have changed the content a little bit. Leo points out that it’s a bit unfair to compare the holometer to LIGO, because they are built to measure different frequencies. I also claimed that cross-correlating detectors is questionable, but Leo points out that the noise sources in the holometer frequency range are less likely to cross-talk between detectors than those for LIGO.


Classical Tests of General Relativity

Abandoned steam engine in Uyuni train cemetery
Figure 1. An abandoned steam engine with Einstein’s field equations painted on it. Image due to Jimmy Harris.

Last Wednesday, November 25, was the 100th anniversary of general relativity: one hundred years to the day since Einstein presented his field equations, shown in figure 1, to the world. In celebration of this anniversary, today I present to you some of the early triumphs of general relativity, classical predictions of the theory that have been precisely tested and where theory has exquisitely matched experiment. This is the sixth instalment of my howgrworks series. Let’s get started.

The Perihelion of Mercury

Before Einstein, we believed that the motion of the planets in the solar system was governed by Kepler’s laws of planetary motion. These can be derived from Newton’s laws of motion and universal gravitation, but Kepler discovered them before Newton, through the power of careful observation.

Kepler’s laws are very good. To an extremely good approximation, they describe the motion of a single planet orbiting the sun. They tell us that the orbit of a planet around the sun has the shape of an ellipse, like the one shown in figure 2. The point where the planet is closest to the sun is called the perihelion of the planet.

an elliptic orbit
Figure 2. The orbit of a small planet around the sun. The orbit is elliptical. The orbits of Mercury and most planets are actually ALMOST circular, so the eccentricity of the orbit is greatly exaggerated in this image. (Credit due to Wikimedia Commons contributor Brandir.)

But planets don’t actually follow perfect, closed, elliptical orbits. The ellipse slowly rotates, or precesses, over time, meaning that the perihelion of the planet moves, as shown in figure 3.

precessing Kepler orbit
Figure 3. In actuality, a planet doesn’t quite obey Kepler’s laws. The ellipse isn’t quite closed, so the perihelion of the orbit itself moves around the sun over long periods of time. (Image due to Wikimedia commons contributor WillowW.)

This precession is caused by a number of factors. The gravitational influence of other planets, for instance, contributes. Indeed, for most planets, the gravity of the rest of the solar system adequately explains the precession. But there’s one planet where this isn’t the case: Mercury. If you work out the numbers, the precession of Mercury’s orbit is too big to be explained only by the gravitational effects of the other planets.

For several decades, the perihelion of Mercury was a mystery. People hypothesized that there was another planet in the solar system, closer to the sun than Mercury, called Vulcan. The planet Vulcan turns out not to exist, but its legacy lives on, as shown in figure 4.

Spock
Figure 4. Live long and prosper, Vulcan.

When Einstein developed general relativity, he knew about the mystery of the perihelion of Mercury, and he believed he had the answer. Einstein worked out some of the corrections to a Keplerian orbit due to general relativity, and found they perfectly predicted Mercury’s perihelion.
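Einstein’s correction can be reproduced with the standard leading-order formula for the perihelion advance per orbit. Plugging in rough orbital elements for Mercury (my own approximate values, for illustration):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the sun, kg
a = 5.791e10       # Mercury's semi-major axis, m
e = 0.2056         # Mercury's orbital eccentricity
T = 87.969         # Mercury's orbital period, days

# Leading-order GR perihelion advance per orbit:
# dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))   [radians]
dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / T
arcsec = math.degrees(dphi * orbits_per_century) * 3600
print(round(arcsec))   # 43 -- the famous 43 arcseconds per century
```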

General relativity was off to a great start!

The Deflection of Light

But Einstein wasn’t content with explaining observations we’d already made. He wanted to make a prediction. Ever since Newton proposed his law of universal gravitation, people had wondered whether gravity should affect the path of a beam of light. In 1784, the brilliant Henry Cavendish calculated the gravitational pull of a planet on a small particle moving at the speed of light. At the time, people didn’t know that photons are massless, but it turns out not to matter: the change in the path is independent of the mass of the particle.

General relativity also predicts that gravity should bend light, but for very different reasons. In general relativity, a massive object distorts spacetime itself, and light simply takes the straightest path. You have to work through the numbers, but if you do, you discover that light bends twice as much in general relativity as in Newtonian gravity.
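You can see the factor of two directly. The Newtonian (Cavendish-style) deflection of a grazing ray is 2GM/(c²b), and general relativity predicts exactly twice that. With rough solar values (my numbers, for illustration):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the sun, kg
R_sun = 6.957e8    # solar radius, m: the impact parameter for a grazing ray

newton = 2 * G * M_sun / (c**2 * R_sun)   # Newtonian deflection angle, radians
einstein = 2 * newton                     # GR doubles it: 4GM / (c^2 b)

def to_arcsec(rad):
    return math.degrees(rad) * 3600

print(round(to_arcsec(newton), 2), round(to_arcsec(einstein), 2))
# about 0.88 and 1.75 arcseconds -- the factor of two the eclipse expeditions tested
```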

But how to test this prediction? Einstein proposed that we could use the sun itself. The sun should bend the path of starlight from the stars behind it. This is an example of gravitational lensing, which I’ve discussed before. Of course, usually the light from the sun masks any starlight one might wish to observe. But during a total solar eclipse, like the one shown in figure 5, the moon completely obscures the sun, and the stars should be visible.

2012 total solar eclipse
Figure 5. The 2012 total solar eclipse, viewed from Australia. (Image credit: NASA.)

After hearing Einstein’s prediction, Sir Arthur Stanley Eddington lobbied strongly for an expedition to test it. Eddington saw the expedition both as an exciting scientific opportunity and as a way to heal the wounds of the first world war, which was still raging when he began lobbying. After a long legal battle, Eddington was exempted from the draft and granted 1000 pounds sterling for the expedition. As gravitational physicist Clifford Will writes in his article on the event:

The decision reeks of irony: a British government permitted a pacifist scientist to avoid wartime military duty so that he could go off and try to verify a theory produced by an enemy scientist.

In March 1919, Eddington set sail for Príncipe, an island in the Gulf of Guinea, while his collaborator, Andrew Claude de la Cherois Crommelin, sailed to Sobral in northern Brazil. Each expedition aimed to observe the deflection of light during the 1919 total solar eclipse. This is a very difficult measurement to make, and inclement weather at Príncipe nearly spoiled Eddington’s observations. Between the two expeditions, however, enough good photographic plates were taken to make a definitive measurement, one that confirmed Einstein!

Eddington’s result was widely publicized, and it sky-rocketed Einstein to fame. In 1919, as now, general relativity captured the public’s imagination. After Eddington announced his result, the Illustrated London News ran a major spread, describing the experiment, shown in figure 6.

The Illustrated London News
Figure 6. The Illustrated London News describes the Eddington expedition.

In science, it is not enough to explain observed phenomena; one must make a new, testable prediction. And Einstein’s prediction of the bending of light did just that. It was this victory that convinced the scientific community, and the world, that general relativity was right.

Gravitational Redshift

At the core of Einstein’s theory is the idea of gravitational redshift. If you take a beam of light at the surface of the earth, and somehow transport it up to the top of a tower, it will appear redder in colour than it did at the surface. This is because spacetime is stretching out as we move away from the surface and the wavelength of the light is stretching out with it. I’ve described this in great detail before.

It turns out that we can actually test this prediction experimentally! In 1959, Robert Pound and Glen Rebka used atomic nuclei in a crystal lattice to measure the wavelength of a beam of x-ray light, which they transported from the top of a tower in a Harvard physics building, shown in figure 7, to the basement, 74 feet below. This is the now famous Pound-Rebka experiment.

robert pound
Figure 7. Robert Pound at the top of the tower for the Pound-Rebka experiment. Image taken from Pound’s review article, Weighing Photons.

Of course, even with an extremely precise understanding of the light and the atomic transitions, this experiment would be impossible without some clever tricks. The experiment deserves an article all by itself. So for now, I will point you to this wonderful article in Physical Review. Suffice it to say, Pound and Rebka’s experiment confirmed Einstein’s predictions in exquisite detail.
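The size of the effect Pound and Rebka were chasing follows from the weak-field redshift formula: to first order, the fractional frequency shift across a height h is gh/c². A back-of-the-envelope check, with my own approximate numbers:

```python
g = 9.81      # surface gravity, m/s^2
h = 22.5      # height of the tower, m (74 feet)
c = 2.998e8   # speed of light, m/s

# First-order fractional frequency shift between bottom and top:
shift = g * h / c**2
print(shift)   # ~2.5e-15 -- a shift in the fifteenth decimal place,
               # which is why the experiment needed such clever tricks
```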

And Many More…!

There are many more tests of general relativity, and I plan to tell you about them in another post. So stay tuned!

Related Reading

This is the sixth part of my series on general relativity. Here are the first parts:

Further Reading

I’m far from the first person to write about this.

  • Ethan Siegel has a great article on the Perihelion of Mercury and the bending of light.
  • Brian Koberlein has an excellent article on Eddington’s measurement during the total solar eclipse.

Sources

For the brave or very interested, here are peer-reviewed articles of interest. You will notice a preponderance of articles by Clifford Will. This is no accident, as he’s the world’s foremost expert on experimental tests of general relativity. (He’s also Canadian! Go Canada!)

 


Conference Report: SC15

My week in Austin started out cold and rainy.

This last week, I had the privilege to attend SC, the biggest annual supercomputing conference in North America. I was one of about ten students studying high performance computing (and related fields) who were funded to go by a travel grant from the HPC topical group of the Association for Computing Machinery. It was a blast, and I learned a ton.

I haven’t had much time to write up any science results, so I figured I’d share a few brief highlights of the conference.

Vast Scale

SC15 was by far the biggest conference I’ve ever attended. There were more than 10,000 people registered… and the scale showed. The plenary talks, attended by most of the conference, were like rock concerts, complete with stage lighting and huge crowds.

The plenary sessions for SC15 were like rock concerts.

And in addition to the technical program, there was a massive exhibition, with booths manned by scientists, government organizations, and corporate vendors—anyone with an interest in supercomputing. I spent a long time at the NASA booth chatting with the scientists about their research.

The SC15 exhibition is quite impressive

A Focus on the Future of Supercomputing

The high-performance computing community is currently working hard to prepare for the next generation of supercomputers, the so-called exascale machines, which will turn on in five years or so. These machines will be orders of magnitude faster and more parallel than current systems. And although this brings opportunity, it also brings huge challenges.

How do you run a program on a supercomputer so large that a component fails every day? How do you write programs that can take advantage of all that computing power? To do so, you essentially need to write many, many programs, each running on a different piece of the supercomputer. (We do this already, but it will be much harder on exascale machines.)

About half of the talks and panels I attended discussed these problems, and lots of people have different approaches. For example, I attended a tutorial on a programming library called HPX, which uses the concept of a future—a promise to return some data after calculating it—to express how to write parallel programs. I also attended a session on Charm++, which treats each part of a parallel program as an independent agent that can talk to and interact with the other parts. Both of these ideas are designed to help people deal with ultra-parallel programs.
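I can’t reproduce HPX here, but the future abstraction itself is easy to illustrate with Python’s standard library. This is an analogy, not HPX’s actual API: a task is launched, and the call immediately returns a promise to deliver the result later.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_patch(patch_id):
    """Stand-in for an expensive piece of a parallel computation."""
    return sum(i * i for i in range(patch_id * 1000))

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns immediately with a future, not the answer itself.
    futures = [pool.submit(simulate_patch, i) for i in range(8)]
    # Each .result() blocks only until that particular task finishes,
    # so independent pieces of work overlap instead of running in lockstep.
    totals = [f.result() for f in futures]

print(len(totals))   # 8
```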

Highlight: Alan Alda

The plenary speaker on opening night was Alan Alda, the actor. Alda is a major science advocate. In his talk, he not only argued strongly for the need for science communication, but also laid out his vision of how it should be done. Alda feels that scientists need to be trained as communicators who can read their audience and bring the subject matter to them. To this end, he has started an organization that trains scientists to be better communicators: the Alan Alda Center for Communicating Science.

It was a very good talk. I didn’t know about the center, but now I want to take one of those classes!

I took this picture from the Alan Alda Center’s website. Presumably it is a bunch of scientists learning to communicate.

Highlight: Reduced Order Modelling

One of the most interesting talks I saw by far was the talk on “reduced order modelling.” The idea is this. Suppose you’re an engineer and you want to use computer simulations to help you design whatever it is you’re designing, like an airplane. Unfortunately, the simulation of air flow over the body of the craft takes a long time… hours or days on a supercomputer. So, change one thing and wait hours to see what happens. Not very useful for design. How do you handle that?

Well, a new class of techniques try to answer this. Basically, the entire set of possibilities can be represented by splicing together the results of just a few simulations… enough to get a representative idea of what’s going on. The techniques that do this are called “reduced order modelling” and this is exactly how gravitational scientists are using numerical models of gravitational waves to make predictions about what gravitational wave detectors like LIGO will see.

Stanford professor Charbel Farhat gave a very nice overview talk of the methods and their industrial applications.

reduced order modelling
Reduced order modelling means that an engineer designing this plane could get near instant feedback about how it behaves. Credit: David Ansallem
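To give a flavour of the idea, here is a toy sketch of one common reduced-order-modelling ingredient, proper orthogonal decomposition via an SVD. This is not necessarily the exact method from the talk; the snapshot functions and mode count are arbitrary choices for illustration:

```python
# Take a few "simulation" snapshots, extract a low-dimensional basis with
# an SVD, and represent new states with just a handful of numbers.
import numpy as np

x = np.linspace(0, 1, 200)
# Pretend each column is one expensive simulation result (cheap stand-ins here).
snapshots = np.column_stack(
    [np.sin(np.pi * k * x) * np.exp(-k) for k in (1, 2, 3, 4, 5)]
)

# Proper orthogonal decomposition: the left singular vectors form the basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]  # keep only the 3 most energetic modes

# A new state can now be approximated by 3 coefficients instead of 200 values.
new_state = 0.5 * snapshots[:, 0] + 0.2 * snapshots[:, 2]
coeffs = basis.T @ new_state
reconstruction = basis @ coeffs
print(np.linalg.norm(new_state - reconstruction))  # small: the basis captures it
```

The expensive simulations are run once, offline; afterwards the engineer explores the design space in the tiny reduced basis, which is what makes near-instant feedback possible.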

More?

By necessity, I am leaving many amazing talks, workshops, and panels out of this article. But hopefully it gave you a taste for what SC15 was like. I may have more to say in the future. But I think that’s all for now.

Posted in Science And Math | Leave a comment

Bruno Maddox and the Magnet: A Story of Misconceptions

Insane Clown Posse certainly wonders how magnets work.

This week the ever-inquisitive Gary Matthews pointed me to a 2008 article for Discover Magazine by Bruno Maddox, claiming that physicists cannot explain how magnetism works, and that they are in denial about it. I encourage you to read the article. Maddox is wrong—dead wrong—but his argument displays a number of common misconceptions about science. And I’d like to address some of them. The most important misconceptions Maddox displays are that of first cause, of classical intuition, and of distrust of the abstract. Let’s get started.

(DISCLAIMER: The opinions in this article are my own. I will be describing very little real science here… just philosophy.)

The Misconception of First Cause

Early in his article, Maddox claims that nobody can explain how a magnet works and that nobody seems to be particularly bothered by this.

For one thing, as far as I can tell, nobody knows how a magnet can move a piece of metal without touching it.

He continues,

And for another—more astonishing still, perhaps—nobody seems to care.

I want to talk about the notion of “touch” later. But for now let’s focus on the other part of that quote—that nobody seems to care. What Maddox is getting at, I think, is that science can never answer why something happens… at a fundamental level, it can only offer descriptions and make predictions. It can only tell you how something happens.

A Hypothetical Conversation

Let’s imagine, for a moment, a hypothetical conversation between Maddox and a physicist. If he asks about magnets… the physicist will say something like “oh, the magnetic force is caused by the magnetic field.”

“Okay, so what causes the magnetic field?” Maddox might ask. And to this a physicist might say “Well, the magnetic field is really a relativistic echo of this more fundamental thing, the electromagnetic field tensor. A magnetic field is created by moving charge… but that motion depends on your point of view. The field tensor is invariant.”

Maddox might push further. “What causes that?” And a physicist might tell him that it’s a low-energy limit of the electroweak force.

Maddox, getting really aggravated now, might push again. “But what causes that?” And the physicist, depending on her leanings on quantum gravity, would give him an unworried shrug. “We don’t know. It just is.”

What’s Wrong With Maddox’s Question

Do you see the problem? It’s the same problem as in theology. If you ascribe cause to something, then you must ask what causes the cause. One (very theological) answer is that God is infinite and can get around these petty problems like cause and effect.

But science has a better answer: we don’t know! And moreover, we cannot know! At a fundamental level, science is based on observations of the world around us. We are limited by what those observations can tell us. These observations can tell us a lot. They can tell us what happens—two bars of iron can be made to pull at each other. They can tell us how it happens—the bars attract if they are oriented in a particular way, otherwise they repel. And, with a bit of cleverness, they can give us the tools to make predictions—an electric current will attract an iron bar.

But observations, at some level, will fail to explain something. And that’s perfectly okay. In fact, it’s better than okay. It’s a good thing to know your limits! And this is a fundamental limit. The success of science is built on knowing that whatever Nature does must be the truth, no matter how counter-intuitive.

I believe Maddox knows this. He certainly lampshades it when he comments that

But as far as I can tell—and isn’t the point of science that all its bigger propositions come accompanied by this noble caveat?—[Steven Weinberg] really can’t [explain how magnets work].

But Maddox sees this as a reason to distrust science and it is not. It is science’s greatest strength.

(I don’t mean to imply that science has no explanatory power. It tells us that magnetism in a bar magnet is caused by either atomic spin or electron spin, for example… which is very powerful. But at some point, the chain of causes stops and you can go no further.)

The Misconception of Classical Intuition

Let’s reflect on that for a moment. Whatever Nature does must be truth, no matter how counter-intuitive. This is the second misconception Maddox displays. Maddox finds it unsettling that we cannot explain “how a magnet can move a piece of metal without touching it.”

But… what does it mean to touch? Let’s think about the subatomic realm, the world of quantum mechanics. In the world of atoms and electrons, “touch” is a fuzzy concept. For one thing, there is no such thing as a “particle.” Protons, electrons, neutrons, and even atoms and molecules, are not localized balls, like we’re used to in our world. They’re waves of probability, distributed throughout space. What this means to us in the world of trains and aeroplanes is not totally clear. But it is the nature of Nature. So particles, then, aren’t really particles.

For another thing, when we “touch” a table, there’s a lot of empty space between the atoms in our hands and the atoms in the table! What’s really happening is that the atoms in our hands are repelling the atoms in the table… for a variety of reasons, including the electromagnetic force and the Pauli exclusion principle. There’s none of the “touching” Maddox seeks at all! Maddox is disturbed by the idea that we appeal to “spooky action at a distance,” but a more interesting question is whether there are any forces that aren’t, fundamentally, this sort of spooky action at a distance.

(As a historical note, Einstein described quantum mechanics as “spooky action at a distance” because he was disturbed by the fact that quantum entanglement seemed to violate causality. We know now that it does not violate causality and Einstein was worried for nothing. But the electromagnetic force never bothered Einstein.)

Maddox is falling prey to the fallacy of classical intuition. He believes that because he experiences the world in a particular way, the world must be that particular way. But Nature is not so gentle! We evolved to perceive the world in a way that benefits us evolutionarily… not in the way it really is! Again, the great strength of science as a methodology is that it overcomes this classical intuition and allows us to glimpse the world as it really is. (Or at least, closer to how it really is.)

A Fallacious Distrust of the Abstract

Finally, Maddox says that

When you get right down to it, the mystery of magnets interacting with each other at a distance has been explained in terms of virtual photons, incredibly small and unapologetically imaginary particles interacting with each other at a distance. As far as I can tell, these virtual particles are composed entirely of math and exist solely to fill otherwise embarrassing gaps in physics, such as the attraction and repulsion between magnets.

Well, Maddox is right about one thing. Virtual particles are unapologetically imaginary. This is a complaint that I, and many other scientists, share with Maddox. But this isn’t a problem with the science. It’s a problem with lazy science communication.

As I described above, the notion of a particle is deeply misleading. A particle is a “human-scale” approximation of the true nature of reality, which is made up of fields and waves. Really, force isn’t carried by virtual particles. It’s carried by fields, which interact with each other via waves that travel at speeds no greater than the speed of light. And it just so happens that these waves look like particles to us if we squint. But this doesn’t work all the time. Sometimes the notion of a single particle simply doesn’t make sense.

But, even in the realm of subatomic physics, the idea of a particle is very powerful. It provides intuition and a surprisingly robust computational tool. This is why, historically, high-energy physics has been misleadingly called “particle physics.” (And for those in the know, it’s how the terrible name “second quantization” came to be.) And the notion of a virtual particle, an imaginary particle associated with the excitation of a quantum field, is even more powerful.

So… if it makes good predictions… is a virtual particle really imaginary? Or is it a valid way of interpreting the fundamental nature of reality?

The answer is that, despite my distaste for virtual particles… they’re often exactly as good a description as waves—better, because they’re easy to work with. It’s true that the description fails sometimes, but so what?

(For experts, I’m discussing the occupation-number formalism of quantum field theory, vs. other formalisms. In particular, the occupation number formalism fails when a vacuum cannot be uniquely defined… a la Unruh effect or curved spacetime.)

This is why Maddox is wrong to distrust virtual particles. Maddox’s distrust seems to stem from the fact that virtual particles are purely mathematical and that there is a more general way to describe quantum fields. But he should not distrust this mathematical abstraction. It is the tool we use to make predictions.

Moreover, it’s the only tool we have. Scientists are not explaining why phenomena occur. Really what scientists do is build Lego models of the universe, simulacra that behave like the universe and allow us to make predictions. Equations and mathematical abstraction are the Lego blocks of our models. And the particle picture of quantum field theory is a very good model indeed.

Other Rebuttals

Maddox’s post is quite old… seven years old by now. I am not the first scientist to refute him. In particular, I’d like to recommend this blog post by Sabine Hossenfelder, which is, as usual, excellent.

Posted in Physics, Quantum Mechanics, Science And Math | 8 Comments

The CMB Axis of Evil and the Nature of Randomness

axis of evil planck
Figure 1. Some fluctuations in the cosmic microwave background align around an axis in the sky, called the “axis of evil” and shown in white here. Image due to the Planck collaboration.

This Halloween, Nature News released an article titled Zombie Physics: 6 Baffling Results that Just Won’t Die. It’s a fun article describing several mysteries in physics whose solution sits in a sort of limbo. For fun, I figured, I’d explain some of these mysteries, and give my opinion about possible solutions. And first, I’m going to discuss the CMB Axis of Evil, a strange pattern in the leftover radiation from the Big Bang.

A Much-Too-Short Summary of Cosmic Inflation and the CMB

About 13.8 billion years ago, the universe was extremely hot, so hot that matter couldn’t form at all… it was just a chaotic soup of charged particles. Hot things (and accelerating charges) glow. And this hot soup was glowing incredibly brightly. As time passed, the universe expanded and cooled, but this glow remained, bathing all of time and space in light.

(The reason why the universe was so hot in the first place depends on whether cosmic inflation is true. Either it’s because the Big Bang just happened or it’s because, after cosmic inflation, a particle called the inflaton dumped all of its energy into creating hot matter.)

Even today, the glow remains, filling the universe. As the universe expanded, the glow dimmed and its light changed colors (due to cosmological redshift), until it became microwaves instead of visible or ultraviolet light. This ubiquitous glow is called the Cosmic Microwave Background, or CMB for short, and if you turn an old analogue TV to an unused channel, some of the static you hear is CMB radiation picked up by your TV antenna.
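To put a number on that cooling: the CMB temperature scales with redshift as (1 + z). A back-of-the-envelope calculation, using the standard recombination redshift z ≈ 1100 (a figure not quoted in this post), recovers the roughly 3000-kelvin glow of the young universe:

```python
# The CMB cooled as the universe expanded: T scales as (1 + z).
# z ~ 1100 is the standard redshift of the CMB's release, assumed here.
T_now = 2.725          # CMB temperature today, in kelvin
z_cmb = 1100           # approximate redshift of recombination
T_then = T_now * (1 + z_cmb)
print(round(T_then))   # roughly 3000 K: hot enough to glow in visible light
```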

Since its discovery, the CMB has been one of our most powerful probes of cosmology. It lets us accurately measure how fast the universe is expanding, the relative amounts of normal stuff vs dark energy and dark matter, how the density of matter fluctuated in the early universe, how the Earth is moving relative to the expansion of the universe, and much more.

Some parts of the early universe were more dense and some were less, and this translates to slight, random variation in the color of light in the CMB. And in turn, we can translate this into a temperature. The temperature of the CMB is incredibly consistent across the sky. It’s an almost perfect 2.725 Kelvin. However, there are tiny fluctuations relative to this mean, and these reflect the dynamics of the early universe. Figure 2 shows a map of these fluctuations and I describe how this map is attained in my post on BICEP2.

Planck_CMB
Figure 2. The measured CMB mapped on a flat surface. (Image due to the Planck collaboration.)

The CMB Axis of Evil

It’s very hard to see in figure 2, but with a little massaging, we can see that many of the fluctuations in the CMB align along a single axis, called the axis of evil, as shown in figure 1. (Formally, the quadrupole and octupole moments of the fluctuations align.) At first glance, this is quite strange, because we believe that the fluctuations in the density of the early universe should be randomly distributed in a particular way… and this is exactly the way they are distributed on smaller scales. The mottled look of figure 2 is exactly due to this particular random behaviour of the fluctuations in the CMB.

So what’s going on? There are a couple of possibilities. I’ll go over them and add my opinion (and the scientific consensus or lack thereof).

Errors in Foreground and Modelling

Perhaps the most boring explanation is that we made a mistake when creating the CMB maps like figure 1 and figure 2. As the story of BICEP2 shows, making those maps is very hard. To create them, we have to account for all the other sources of microwave radiation in the universe and carefully remove them from our measurements.

Over time, we’ve gotten incredibly good at this…so good that we can extract all sorts of information about the early universe from the CMB. But that doesn’t mean we’re always right. There could be extra dust in the solar system. Or a confluence of the gravitational pull of distant galaxies on the light of the CMB (called the integrated Sachs-Wolfe effect) could magnify a normal random fluctuation so that it appears significant.

(I am really oversimplifying the integrated Sachs-Wolfe effect here. But that’s a story for another time.)

I think errors in foreground modelling could easily account for the axis of evil.

The Universe is a Doughnut or a Sphere

Imagine an ant living on the surface of a doughnut. The ant is so small that the doughnut appears flat to it. As the ant travels forward, it will eventually return to where it started, no matter what direction it travelled. From our perspective, of course, this is because a doughnut wraps around. But to the ant, this would be quite mysterious! Figure 3 shows the doughnut from both our perspective and the ant’s perspective. This is very similar to how if you travel East on the Earth, you eventually return to your starting place.

travel on a torus
Figure 3. An ant travels on a doughnut. From our perspective (left), the ant returns to where it started because the doughnut wraps around on itself. But from the ant’s perspective (right) it seems to walk in a straight path and eventually return to where it started.

What if our universe was like the doughnut, but in three dimensions? So if you start going in a direction, say towards Andromeda, and keep going for as long as possible, billions of light years, you would eventually get back to where you started (ignoring of course that the universe is expanding and thus the distance you would have to travel would increase faster than you could travel it).

What if we see the same things on both sides of the axis of evil because they are literally the same things, and the universe has wrapped around on itself? In the original paper discussing the axis of evil, the authors discuss this very possibility. It’s a nice idea, but it can actually be tested by trying to match images of stars and galaxies (and fluctuations in the cosmic microwave background) on opposite sides of the sky to see if they look the same. The results, however, are not favourable. So no one takes this idea very seriously… even though it’s very clever.

Cosmic Variance

This one takes a bit of explanation. So bear with me. First, let’s talk about something called a posteriori statistics.

A Posteriori Statistics

Imagine a teacher breaks her students into two groups. She tells one group to flip a coin ten times and record the result as a sequence of heads or tails. The group might record, for example,

HHHHTTTTHT

which would correspond to a string of four heads, then a string of four tails, then one head, and one tail. She tells the other group of students to make up ten coin flips, but try to do so in a way that looks random. The two collections the students return are:

THTTHTHHTT

and

TTHHHHTHTH

And, masterfully, the teacher immediately picks out the truly random sequence.  Which one is it? How does she do it? The second sequence, TTHHHHTHTH, which looks very structured, is the random one.

The human mind is very good at picking out patterns, and attributes a cause to every pattern it sees. But random numbers, very naturally, randomly in fact, appear to make patterns, even though the pattern doesn’t mean anything. It’s just random noise. The teacher takes advantage of this. She knows her students will avoid creating a sequence that looks too structured, because they don’t think random numbers look like that. But random numbers can easily look like that.

Of course, the probability that precisely the second sequence would emerge is less than one percent. But the emergence of some sequence that looks vaguely like the second sequence is vastly more likely.  You can think of this like finding a cool looking cloud, or Jesus in your morning toast. You see the cool looking cloud and you think “Wow! A cloud that looks like an airplane! What are the odds?” But you should be thinking “Wow! A cloud that looks like an airplane! The odds of me finding a cloud that looks like something interesting are quite high because there are a lot of clouds and a lot of things I think are interesting.”

This sort of thinking is called a posteriori statistics. And in general, it causes mistaken analysis.
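Incidentally, a quick simulation backs up the teacher's trick. A run of four or more identical flips, like the one that made the second sequence look "too structured," shows up in nearly half of all truly random length-ten sequences:

```python
# How often does a genuinely random sequence of 10 flips contain a run
# of four-or-more identical results?
import random

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    flips = "".join(random.choice("HT") for _ in range(10))
    if "HHHH" in flips or "TTTT" in flips:
        hits += 1
print(hits / trials)  # roughly 0.46: nearly half of all random sequences
```

So a student who avoids long runs is avoiding exactly the structure that real randomness produces all the time.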

The CMB Axis of Evil

So what does this have to do with the CMB? Well, people who study the CMB are well aware of the danger of a posteriori statistics, so they try to avoid thinking in this way. One way to avoid this sort of thinking is to make many many measurements. If you have a huge number of sequences of coin flips, on average, the randomness (or lack thereof) will become manifest.

And this is indeed what we do for most of the cosmic microwave background. The fluctuations on small scales, which give figures 1 and 2 their mottled texture, are numerous and we can do many statistics on them by looking at different areas of the sky.

But the axis of evil is different. It covers almost the whole sky. And we only have one sky to make measurements of! So it’s not possible to do good statistics. The fact that we have only one universe to measure, which we believe emerged from random processes, and that we can’t do statistics on a whole ensemble of universes is called cosmic variance.
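For a Gaussian sky, the size of this limitation has a standard formula (a textbook result, not derived in the post): the power at multipole l is estimated from only 2l + 1 independent modes, so its fractional uncertainty is sqrt(2 / (2l + 1)) no matter how good the instrument is.

```python
# Cosmic variance for a Gaussian full-sky field: at multipole l there are
# only 2l + 1 independent modes, so the fractional uncertainty on the
# measured power is sqrt(2 / (2l + 1)).
import math

for l in (2, 3, 10, 100):
    print(l, math.sqrt(2 / (2 * l + 1)))
```

The quadrupole (l = 2) carries about 63% uncertainty all by itself; only at high l, where figures 1 and 2 get their mottled texture, do the statistics become good. This is why large-scale features like the axis of evil are so hard to assess.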

And cosmic variance interferes with our ability to avoid a posteriori statistics. It lets us fool ourselves into believing that the way our universe turned out is special, when there may in fact be a multitude of equally probable ways our universe could have been. And it is entirely possible that the axis of evil is one such “fluke.”

It is possible, in principle, to reduce the effects of cosmic variance. If we could move to another position in the universe, we would be able to see a different portion of the CMB (because the light that could have reached us since the CMB was created would come from a different place in the universe). In 1997, Kamionkowski and Loeb suggested using the emissions of distant dust to extrapolate what the CMB looks like to that dust. In principle, it would be possible, but very very hard, to use this trick to test whether or not the axis of evil comes from cosmic variance.

As you may have guessed from the amount of time I devoted to the explanation, I find cosmic variance to be a very compelling cause of the axis of evil.

The Most Likely Story, In My Opinion

So… what do I think is the cause of the axis of evil? The following is my opinion and not rigorous science. But it went something like this. Due to random fluctuations in the way the universe could have been, something that looks like the axis of evil formed in the CMB, but much less significantly. This would be the cosmic variance explanation. To this day, the axis of evil remains statistically insignificant. But, because our models of what cosmic microwave sources and filters look like in the universe and in our solar system are flawed, and because we don’t take the integrated Sachs-Wolfe effect into account, the axis of evil appears much bigger to us than it actually is.

So in my mind the axis is caused both by imperfect experiments and analysis and by the human need to find patterns in everything.

Acknowledgements

I owe a huge thanks to my friend and colleague, Ryan Westernacher-Schneider, who told me this story last spring and compiled a summary and list of references. Ryan basically wrote this blog post. I just paraphrased and summarized his words.

Further Reading

I’m not the first science writer to cover this material. Both Ethan Siegel and Brian Koberlein have great articles on it. Check them out:

  • This is Brian Koberlein’s article.
  • This is Ethan Siegel’s.

For those of you interested in reading about the axis of evil in more depth, here are a few resources.

  • This is the first paper to discuss the axis of evil. It also discusses the possibility that the universe is a doughnut.
  • This paper coined the term “axis of evil.”
  • This paper discusses the possibility of solar-system dust producing the axis of evil.
  • This paper discusses the integrated Sachs-Wolfe effect and how it enhances the axis of evil.
  • This paper proposes a way of reducing cosmic variance.
  • These are the collected published results of the Planck collaboration, which analyse all aspects of the CMB in great depth.

Related Reading

If you enjoyed this post, you might enjoy my other posts on cosmology. I wrote a two-part series on the BICEP2 experiment:

I have three-part series on the early universe:

I have a fun article that describes the cosmic microwave background as the surface of an inside-out star:

Posted in cosmology, Physics, Science And Math | 8 Comments

A Retraction: Backwards Heat is Not Chaotic

Airplane_vortex_edit
Figure 1. Fluid turbulence, such as vortices, hurricanes, and tornadoes, can be described as chaotic. Source: Wikimedia Commons

Yesterday I wrote a post that explored the flow of heat both forwards and backwards in time. I used this as a venue to introduce the notion of entropy and to describe one extreme example of the butterfly effect—where small changes in initial data can create big changes in the final result. That’s all fine and good and I stand by that.

But I said that the reverse heat equation, which runs the flow of heat backwards in time, was an example of chaos. And as this reddit user points out, this is very wrong. I have now fixed the original post so that it doesn’t say anything wrong. But I owe you all an explanation here.

The Heat Equation is Not Chaotic

You can never, ever actually solve the reverse heat equation. It is an example of a so-called ill-posed problem. And understanding which problems are well-posed or ill-posed is a very important topic in both physics and mathematics. (This is actually the reason I’m interested in the reverse heat equation. It’s the archetypical ill-posed problem.)

Truly chaotic systems, on the other hand, are well-posed. Although they depend strongly on their initial conditions, meaning that finding exact solutions is difficult, they can be solved. To illustrate the difference, let’s look again at the reverse heat equation, shown in figure 2.

reverse heat!
Figure 2. The heat equation, run in reverse. Colour shows temperature. Dark blue is coldest and red is hottest.

Temperature differences just build on themselves exponentially until the whole thing becomes completely unmanageable. And this is the problem. Now let’s look at a genuinely chaotic system: the flow of water in a very shallow pond, as shown in figure 3. (You can find another good video here.)

Ryan
Figure 3. Fluid turbulence. Brightness shows vorticity (roughly energy in the vortexes). The small vortices merge into bigger ones. Image made by my friend and colleague, John Ryan Westernacher-Schneider, who works on fluid turbulence.

Notice the vortices that form? The precise initial configuration of the water dramatically changes the positions of the vortices. However, although the vortices merge, they don’t grow so much that we can’t make predictions any more. And this is the important difference. This folding-and-merging behaviour, called topological mixing, is part of the definition of chaos, and it is exactly what the reverse heat equation lacks: its perturbations simply grow without bound.
(There are other technical reasons that the heat equation is not chaotic. But this is the big one, and it’s the thing that I really failed to emphasize in my last post. So I’m emphasizing it here.)

As an aside, notice how small vortices become bigger? This is a property of fluids that are tightly confined in one direction, like in a shallow pond or on the surface of the Earth. It’s actually why hurricanes form. Small vortices merge to become big vortices. In fluids without the confinement, the process goes the other way: big vortices become small.

My Apologies

As a physicist—and not a mathematician—I believed that I knew the definition of mathematical chaos when I did not. And instead of checking my facts, I just blithely went ahead and wrote about it.

Many physicists don’t know about mathematical chaos; I’m not ashamed of my ignorance. But I am ashamed of not doing my homework before writing about a topic with which I am unfamiliar. Many of you trust me as an authority on math and physics, and in yesterday’s post, I failed to live up to that trust.

I promise to be more careful in the future.

Posted in Mathematics, Physics, Science And Math | 2 Comments

Heat, Chaos, and Predictability

A funny comic about the butterfly effect
Figure 1. The butterfly effect: a sinister insect plot?

The butterfly effect, shown comically in figure 1, is the idea that a very small change in one place on Earth can cause a very big change somewhere else. In this case, a butterfly flaps its wings and causes a tornado. This metaphor illustrates the mathematical concept of chaos; the Earth’s atmosphere is a chaotic system. While a single butterfly probably isn’t literally responsible for a tornado, mathematical chaos is very real and important. So this week, I’m going to try giving you some intuition for the butterfly effect using one extreme example from physics.

Heat

Suppose we take a flat, rectangular piece of metal and heat it up at four specific spots. Figure 2 shows what will happen to the metal: The four hot spots (shown in red at the start) will cool off as the heat spreads out, diffusing across the metal until the whole piece reaches the same temperature.

Heat diffusion
Figure 2. The heat from four hot spots on a piece of metal diffuses across the metal. Colour shows temperature. Red is hottest. Dark blue is coolest.

If we isolate the piece of metal, no heat can “escape” it, so it will never cool back down to its original temperature. The total amount of energy in the system will stay the same. The only thing that changes is how the heat is distributed over the metal’s surface. This “flow” of heat is described by the heat equation. Given any distribution of temperature across the metal, we can use the heat equation to know how hot each area of the metal will be at any point in the future.
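This diffusion is easy to sketch numerically. The following is a minimal finite-difference version on a 1D ring of metal rather than the 2D plate of figure 2 (the idea is identical; grid size, time step, and spot positions are arbitrary choices):

```python
# Explicit finite-difference heat equation on a 1D periodic ring.
import numpy as np

r = 0.4                                # alpha * dt / dx^2, below the stability limit of 0.5
T = np.zeros(100)
T[20] = T[40] = T[60] = T[80] = 100.0  # four hot spots

for _ in range(5000):
    # Each point relaxes toward the average of its neighbours. The update
    # conserves the total heat exactly, just like the isolated metal.
    T += r * (np.roll(T, 1) - 2 * T + np.roll(T, -1))

print(T.sum(), T.std())  # total heat unchanged (400); spread nearly zero
```

Run forward, the hot spots smear out until the temperature is essentially uniform, exactly as in figure 2.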

But what if, instead of making a prediction about the future, we want to make a postdiction? What if we want to know the temperature of the metal at some point in the past?

Heat Flow Backwards?

Of course, we know the temperature change originated at the four spots we heated up, but let’s pretend we don’t. Suppose that we only saw our metal piece after its whole surface had reached the same temperature. Furthermore, suppose that we’re just a little uncertain about the temperature of the metal now. Maybe there are a few spots that are slightly hotter or colder than average—say, from us touching it, or from sunlight. Probably the best way to figure out what the metal looked like in the past is to take our best guess as to the temperature now, feed that number into the heat equation, and run it in reverse, right?

I did exactly that and figure 3 shows the result.

reverse heat!
Figure 3. The heat equation, run in reverse. Colour shows temperature. Dark blue is coldest and red is hottest.

That doesn’t look anything like the four dots! What’s going on? The heat equation run in reverse, creatively called the reverse heat equation, suffers from the butterfly effect. Small uncertainties in the known temperature distribution cause huge variations in the “postdicted” temperature distribution. In the case of the reverse heat equation, this effect is so severe that we can’t make any useful statements.

Let’s try to understand what’s going on.

Understanding the Reverse Heat Equation

Why is the reverse heat equation so chaotic? What causes the butterfly effect here? Let’s think about how heat behaves. Heat spreads out, from hot regions into cooler regions. This makes hot regions cool down and cold regions warm up. Eventually everything becomes uniform.

If you reverse this behaviour, like rewinding a video, heat moves from cold regions to hot regions. Hot regions become even hotter and cold regions become even colder! This means that if you take a surface with a uniform temperature and randomly make some spots just a little hotter than others, those random warm spots will just keep getting warmer. Any difference from the average temperature, no matter how small, gets exaggerated exponentially. This means that if we want to work backwards from a near-uniform temperature distribution to find out how it originally looked, we need to be exactly certain of the temperature everywhere. And we can never be exactly certain. Measurement tools are flawed. And even if we did have perfect tools, quantum mechanics forbids infinitely precise measurements (at least, in finite time).
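The exponential exaggeration is easy to quantify. In Fourier space, forward diffusion damps a mode of wavenumber k by a factor exp(-k^2 t) (for unit diffusivity), so running time backwards multiplies it by exp(+k^2 t). A sketch with made-up numbers:

```python
# Reverse diffusion amplifies a mode of wavenumber k by exp(k^2 * t).
# The noise level and time here are arbitrary illustrative choices.
import math

t = 1.0
noise = 1e-6          # a tiny measurement uncertainty, say a microkelvin
for k in (1, 2, 4, 8):
    amplified = noise * math.exp(k**2 * t)
    print(k, amplified)
# Already at k = 8 the microkelvin error has grown by a factor of exp(64),
# to around 6e21, so the "postdiction" is pure garbage.
```

The finer the spatial detail (the higher the k), the faster the blow-up, which is why no amount of measurement care can save the reverse heat equation.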

Worse, since heat diffuses, every original pattern—no matter how strange—leads to a uniform temperature across the metal. So even if the heat spread out perfectly, with every spot exactly the same temperature as every other spot, the reverse heat equation is still useless. Confronted with an infinite number of possible original patterns, it’s forced to just make an arbitrary decision. And while this process isn’t random, the solution that the equation picks will almost certainly be incorrect, since its odds are literally infinity to one.

What Makes Heat Special?

The inability to make postdictions about temperature is surprising. Most of the laws of physics work perfectly well in reverse. If I know the height of waves in a pond—like the one shown in figure 4, for example—at the present moment, then I can say what the pond will be doing at any moment in time, whether past or future. (At least in principle. In reality, friction will convert much of the wave motion into heat. The waves also need to be sufficiently low-energy; otherwise, water can become chaotic. I’ll get to that in a bit.)

wave_evolution
Figure 4. The height of waves in a rectangular pond, neglecting energy loss. Colour represents height. Red is high, blue is low.
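
This time-reversibility is easy to check numerically. Here is a hedged sketch in Python with NumPy (a one-dimensional "pond" rather than the two-dimensional one in figure 4, with fixed ends and no friction, all assumptions of my own for illustration): we step the wave equation forward with a leapfrog scheme, swap the two stored time levels (which flips the sign of the velocities), and then step forward again with the exact same rule. The waves rewind back to where they started.

```python
import numpy as np

n, dx, dt, steps = 200, 1.0, 0.5, 400  # dt/dx = 0.5 satisfies the stability (CFL) condition

def step(u, u_prev):
    """One leapfrog step of the wave equation u_tt = u_xx, with fixed ends."""
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    return u_next

x = np.arange(n) * dx
u0 = np.exp(-0.01 * (x - n * dx / 2) ** 2)  # a Gaussian ripple in the middle of the pond
u_prev, u = u0.copy(), u0.copy()            # equal time levels = zero initial velocity

for _ in range(steps):                      # run the pond forward in time
    u_prev, u = u, step(u, u_prev)

u_prev, u = u, u_prev                       # swap the two time levels: velocities flip sign
for _ in range(steps):                      # ...and the very same rule runs the movie backwards
    u_prev, u = u, step(u, u_prev)

print(np.max(np.abs(u - u0)))               # essentially zero: the waves rewind perfectly
```

Notice that we never wrote a "reverse wave equation": the same update rule runs the waves in both directions, which is exactly what it means for the law to be reversible.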

So why is heat special? Roughly speaking, the temperature of a metal is actually an average of the energy of the atoms that make it up. In principle, we could track the motion of every individual atom and predict how each atom moves after we heat the metal up with a laser. Then we could make a good postdiction by running that atomic motion backwards in time.

Of course, this is impossible in practice. There are way too many particles and way too much information to keep track of, so we’d need a practically infinite amount of computing power. So instead, we use the abstraction of temperature, which averages over the particles.
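
As a cartoon of this idea, consider the simplest possible "metal" I can cook up for illustration: a gas of free, non-interacting particles on a ring. If we track every particle, running the microscopic motion backwards recovers the initial state perfectly. But the temperature, being an average, looks exactly the same for wildly different microscopic configurations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 100_000, 5.0
x0 = rng.uniform(0.0, 1.0, n)      # initial positions of n free "atoms" on a ring
v = rng.normal(0.0, 1.0, n)        # random thermal velocities

temperature = np.mean(v ** 2)      # proportional to the average kinetic energy

x = (x0 + v * t) % 1.0             # track every single atom forward in time...
x_back = (x - v * t) % 1.0         # ...then run each one backwards again

print(np.max(np.abs(x_back - x0))) # the exact microscopic state comes right back
```

Shuffle `x0` however you like and `temperature` doesn’t change: the average has already forgotten which microscopic configuration it came from. The microscopic motion is perfectly reversible; only the averaged description is not.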

This abstraction has a price, however. We are intentionally hiding information from ourselves: the precise configuration of the metal. And so it should come as no surprise that we can’t use the heat equation in reverse. We lack the necessary information to do so! We can even quantify how much information we’ve hidden from ourselves. The quantity that tells us this is the entropy of the system. And one way to understand the Second Law of Thermodynamics (“entropy never decreases”) is that, as we step forward in time using the heat equation, we forget more and more about the initial configuration of our metal.
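
We can make this bookkeeping concrete with a toy calculation (my own illustrative setup, not a full statistical-mechanics treatment). Coarse-grain the metal into eight cells and treat the normalized heat in each cell like a probability distribution; the Shannon entropy of that distribution, the information-theoretic cousin of thermodynamic entropy, then counts in bits how little the temperature map tells us.

```python
import numpy as np

def entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    p = p[p > 0]                   # by convention, 0 * log(0) counts as 0
    return -np.sum(p * np.log2(p))

# Coarse-grain the metal into 8 cells and normalize the heat in each cell.
spots = np.array([0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0])  # four hot spots
uniform = np.full(8, 1.0 / 8.0)                                 # fully diffused

print(entropy(spots))    # 2.0 bits
print(entropy(uniform))  # 3.0 bits: diffusion raised the entropy by a full bit
```

The diffused state has the highest entropy of any distribution on eight cells, which is another way of saying it is the state that tells us the least about where the heat came from.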

(I want to note that, although I’ve been talking about tracking particles, which are classical, quantum mechanics has analogous ideas. Instead of tracking particles, you track—or average over—a wavefunction whose amplitude represents the probability of measuring the positions of a huge number of particles.)

Manageable Chaos

The reverse heat equation is totally unusable. There is no saving it. It is an extreme example of the butterfly effect: so extreme, in fact, that it isn’t even chaotic in the technical sense. True chaos is more manageable because it is well-posed, meaning that predictions are, in principle, possible.

Manageable chaos emerges naturally in many areas of science. If the pressures, temperatures, or speeds involved are high enough, fluids like air and water become chaotic, but in a way that we can handle. Because it takes a lot of computing power to handle the chaos in the atmosphere, it’s very difficult to make concrete predictions about the weather…but it’s not impossible.

Large-scale phenomena, like planetary motion, can also be chaotic. Two objects gravitationally attracted to each other will behave pretty predictably, but adding even one more mass to the system can cause their motion to become chaotic. Satellites under the gravitational influence of both the Earth and the moon, or both the sun and Jupiter, are important examples of such three-body systems.

Understanding chaotic systems is very difficult, but it’s also essential if we are to understand much of the universe. And in many cases, we can manage the chaos.

Related Reading

If you enjoyed this post, you may enjoy some of my other posts on mathematics.

  • In this post, I describe the many sizes of infinity.
  • In this post, I describe the history of imaginary numbers.

Further Reading

  • If you’re curious how I produced those images, I put my code in the IPython notebooks in this bitbucket repository. Feel free to play around with them. I’m afraid there’s no documentation at the moment.
  • You can find a more technical discussion of the heat equation and reverse heat equation in this blog post by an engineering Ph.D. student.
  • And here’s an in-depth discussion of entropy as “lost information.”
  • And for a much more in-depth discussion of chaos, check out this awesome ebook.
Posted in Mathematics, Physics, Science And Math | 1 Comment

In-Falling Geodesics in Our Local Spacetime

ballin!
Figure 1. The path of a ball (rainbow) after I drop it from high above the surface of the Earth. The green surface is our local spacetime. The red line points towards the Earth, the blue line points forwards in time. The black line is the surface of the Earth.

My previous post was a description of the shape of spacetime around the Earth. I framed the discussion by asking what happens when I drop a ball from rest above the surface of the Earth. Spacetime is curved. And the ball takes the straightest possible path through spacetime. So what does that look like? Last time I generated a representation of the spacetime to illustrate.

However, I generated some confusion by claiming that it “should be obvious” that the straightest possible path is curved towards or away from the Earth. When a textbook author says “the proof is trivial,” usually what they mean is that they don’t want to go through the work of writing the proof out. The same is true here: I didn’t want to generate a picture with the path of the ball in it. Since this was confusing, however, I apologize. And to make it up to you, I’ve plotted the path of the ball, shown in figure 1.

Note that it approaches a straight line. That’s because, as it accelerates, it approaches the speed of light (we are neglecting air resistance and exaggerating the distance from the surface of the Earth to make that happen). The path of the ball is curved—it curves with the surface, after all. But it’s as straight as it possibly can be. And that’s what makes it a geodesic.

Note also that light now travels along straight lines at angles wider than 45 degrees. I told you last time that in Minkowski space light travels at 45-degree angles. However, to make the curvature of the spacetime visible, I stretched out lengths radially (the direction of the red arrow) a bit. So light cones in this plot are actually wider. I didn’t think this would be visible when I made the plot before, but it’s quite clear if you include the geodesics. So I apologize for that slight misrepresentation last time.

I’ve updated the previous post to include this plot. So this week’s post is only for those of you who read the last post.
