Pope Francis says Evolution and the Big Bang are Compatible with Catholicism

Pope Francis (left) and Georges Lemaitre (right). Source: Wikimedia Commons

You’ve probably heard the news. Pope Francis has announced that Big Bang cosmology and evolutionary theory are compatible with Catholicism and “may even be required.”

This is, of course, wonderful news. It’s evidence that science and religion are not necessarily incompatible and that people of faith can modify their beliefs based on the evidence around them.

But it should have been this way all along. Indeed, it originally _was_ this way. One of the people who developed Big Bang cosmology, Monseigneur Georges Henri Joseph Édouard Lemaître, was a Catholic priest who believed that his studies of physics brought him closer to the mind of God. Pope Pius XII completely accepted Big Bang cosmology when Lemaître developed it, even going so far as to claim that it supported Catholic beliefs.

At the same time, Pius XII declared that evolution was not at odds with Catholic beliefs. 

What happened?

Where did we go wrong? Blame Pope Emeritus Benedict XVI, who threw away decades of scientific progress and scientific-religious coexistence when he declared that the Church supported intelligent design.

I’m glad to see Benedict XVI’s bad decisions being undone.

Our Story Isn’t Over

The astute reader will remember that there may not have been a Big Bang. We now believe that instead, the early universe may have undergone a period of extremely rapid inflation. To learn about the discovery of the Big Bang and why we feel it might not be true, check out my three-part series on the topic:


Hiatus

I am buried in homework

Hi everyone. You probably haven’t heard from me in a while. This is because I have been completely overwhelmed by class work this semester, which has prevented me from doing the things I want to do, like blogging and doing my own research.

For the time being, I don’t think I can expect myself to blog every week, or even every other week. So I’m putting the blog on hold until the semester ends (which should be around the holidays).

As always, thanks for reading, everybody.


What Space Projects Excite Me: Multi-Messenger Astronomy

The remnants of a core-collapse supernova discovered in 1987. (Image credit: ESO / L. Calçada. Via http://scienceblogs.com/startswithabang/2013/05/24/how-to-find-your-very-own-supernova/)

A few weeks ago, awesome blogger and space advocate Zain Husain asked me to contribute to a roundup post he wrote. He contacted a bunch of people (most of them much more prestigious than me) and asked them one question:

What NASA or space project are you most excited about and why?

You can (and should) read everybody’s response to Zain’s question on his blog, here. However, I wanted to expand on part of my answer and tell you why I’m excited about multi-messenger astronomy.

Supernova Supernova

It all starts with the title image above. That’s an image of SN 1987A, the first supernova discovered in 1987. It’s an example of a core-collapse supernova, where a star runs out of nuclear fuel and implodes on itself, leaving behind a neutron star or a black hole. (There are other types of supernovae, too. For a nicely accessible review, see this article.) The shockwave from the implosion then blows off the star’s outer layers in the explosion we call the supernova.

SN 1987A is really special. It is the only core-collapse supernova in the last century to have exploded close enough to Earth that it could be seen with the naked eye—only 168,000 light years away! But, amazingly, light wasn’t the only way we detected SN 1987A. We also caught the high-energy subatomic particles it gave off, and they arrived hours before the light did.

When a core-collapse supernova explodes, it gives off most of its energy in the form of tiny, extremely energetic particles called neutrinos. On Earth, neutrinos are produced in nuclear reactors, in particle accelerators, and in ordinary radioactive decay. But in space, things are wild, and neutrinos are generated in all sorts of cataclysmic events.

For me, this is why SN 1987A is special.

Multi-Messenger Astronomy

When we talk about astronomy, we imagine using light. We think of an astronomer pointing a telescope into the sky and looking through its eyepiece. Things are more complicated nowadays, of course. We shoot telescopes into space. We take pictures in the infrared and the ultraviolet. But in general, we’re still looking at light to see space…right?

For now, this is still roughly true. But scientists are increasingly moving towards “multi-messenger astronomy,” where astronomers studying some object in space look at the light it emits, the subatomic particles it emits, and even the ripples in spacetime that it causes.

Here are two exciting projects that are just coming online (or came online recently) that move astronomy in this direction.

Advanced LIGO

We know from Einstein that space and time are not separate things, but intertwined to form a single spacetime. And this spacetime can be curved or warped, which we perceive as a distortion of the very notion of distance. The motion of large objects creates “ripples” in spacetime which we perceive as distance itself oscillating in time. We call these ripples “gravitational waves.” Check out this video of a simulation of two black holes colliding and the gravitational wave signature that can be extracted:

Since a gravitational wave is a warping of how we measure distance, we should be able to detect this warp with a big enough ruler. And we’re making a bunch of gigantic rulers to do just that. Meet Advanced LIGO, the gravitational wave observatory:

Advanced LIGO in Louisiana. (Source: http://www.ligo.caltech.edu/)

Actually, this is only one of the two Advanced LIGO systems. This one is in Louisiana, but there’s another in Washington State. The coolest thing about LIGO is that it detects gravitational waves. The second coolest thing about LIGO is how it does it.

Light makes an extremely good ruler, and LIGO takes advantage of this. It’s essentially a giant laser interferometer with four-kilometer-long arms. If that’s not cool, I don’t know what is.
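
To get a rough sense of the numbers involved (these are ballpark figures, not official LIGO specifications): a typical gravitational wave reaching Earth changes lengths by a fractional amount h of only about 10^-21. Stretched over a four-kilometer arm, that works out to a length change of roughly

    \[\Delta L = h \, L \approx 10^{-21} \times 4\ \mathrm{km} \approx 4 \times 10^{-18}\ \mathrm{meters},\]

a few thousandths of the diameter of a proton. That’s why you need something as exquisitely sensitive as a laser interferometer to act as the ruler.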

I know this has been a bit vague, so I promise I’ll write a more in-depth article on gravitational waves and LIGO in the future.

Ice Cube

The IceCube Laboratory on the surface of the Antarctic ice (source: IceCube)

As I mentioned above, astrophysical events like core-collapse supernovae emit subatomic particles called neutrinos. Neutrinos are tricky little things, though. Just as X-rays pass through walls, neutrinos pass through miles of solid rock and meters of dense lead. But where X-rays are stopped by bone and metal and can interact with our bodies, neutrinos pass right through us, so they’re completely harmless. The flip side is that if we build a neutrino detector, the probability that any given neutrino passing through it will stop and be detected is tiny. To catch even one neutrino, then, we need to cast a very wide net.

And when I say wide, I mean WIDE.

Just recently, our widest net came online: IceCube, a neutrino observatory at the South Pole. IceCube’s “net” is a full cubic kilometer of Antarctic ice, instrumented with light detectors lowered into holes drilled deep beneath the surface. This should allow us to detect more energetic neutrinos than ever before. For a sense of scale, the IceCube collaborators helpfully diagrammed the under-ice structure next to the Eiffel Tower:

IceCube underneath the ice. Source: the IceCube Collaboration.

Applications

I mentioned supernovae at the top of the article for a reason: We still don’t completely understand the nature of core-collapse supernovae, and neutrino and gravitational wave observations would really help us understand these objects better.

But the applications are far wider than that. Neutron stars are some of the densest objects known; they’re held up against gravity by little more than the Pauli exclusion principle. However, it’s very hard to make predictions about the exotic fluid they’re made of, and gravitational wave and neutrino observations should help us understand them better. The same goes for black holes and the violent environments around them, which produce both gravitational waves and neutrinos.

Thanks, Zain!

A big thanks to Zain for putting together his post and prompting me to think about this. Zain puts a lot of work into his incredible blog, and it shows.

Related Reading

This is one of many articles I’ve written that involve general relativity. If you’d like to know more, check out these articles.

  • In this article, I talk about special relativity and how it implies that space and time are unified in a single spacetime.
  • In this article, I introduce general relativity as a way to travel faster than light.
  • In this article, I discuss the geometry of spacetime.
  • In a pair of articles here and here, I answer some questions about general and special relativity.

Non-Digital Computers


This is the last installment of my many-part series on computers. Last time we used the notion of a Turing machine to define what a computer is. We discovered something surprising: that not all computers need to be digital, or even electronic! A computer can be mechanical,  made of dominoes, or even just a rules system in a card game.

To give you a flavor of how inclusive the definition of a computer really is, I’ll now give you a whirlwind tour of some notable examples of non-digital computers.

The Antikythera Mechanism

A piece of the Antikythera Mechanism. (source: Wikipedia)

In April of 1900, Greek sponge divers found a shipwreck off the coast of Antikythera. They found a lot of artifacts aboard the ship, but by far the most valuable piece was a device made of bronze gears. This is the Antikythera mechanism, and it’s not only the oldest known complex gear mechanism, but the oldest known purpose-built computer.

The Antikythera mechanism was probably made around 100 BC, and it seems to have been designed to predict the motions of the Sun, Moon, and planets, and even the timing of eclipses. Scientists and historians are still studying it, and several working replicas have been built.

To learn more, check out the Antikythera Mechanism Research Project’s website and their promotional video:

The website.

The video.

Charles Babbage’s Analytical Engine

A piece of Charles Babbage’s analytical engine, constructed by his son in 1910. (Source: Wikipedia.)

In 1837, inspired by his successful creation of a mechanical calculator (the difference engine—see the title image of last week’s post), polymath and all around amazing dude Charles Babbage designed a fully programmable mechanical computer called the analytical engine. This is widely regarded as the first fully programmable computer.

The mathematician (and countess) Ada Lovelace corresponded with Babbage regarding his analytical engine, and she is credited with much of its theoretical underpinning. She also published what is widely regarded as the first algorithm intended for the device, making her the world’s first computer programmer.

Unfortunately, because of the huge scale of the analytical engine and the precision with which the parts needed to be machined, Babbage never finished constructing it.

Tide-Predicting Machines

A tide-predicting machine by Lord Kelvin. (Source: Wikipedia)

In 1872, physicist Lord Kelvin developed a mechanical computer to predict the tides. Using a system of pulleys and wires, it summed up the handful of sine waves that dominate the tide at a given port and traced the predicted tide height onto a roll of paper.

The Enigma Machine

An Enigma Machine. (Source: Wikipedia)

In World War II, Germany encoded and decoded its military orders using an electromechanical machine called Enigma. Enigma was sophisticated enough that the code could be changed daily just by reconfiguring the machine’s rotors and plugboard. Some historians credit a large part of the Allied victory to the codebreakers at Bletchley Park: Alan Turing helped design the electromechanical “bombes” that cracked Enigma’s famously difficult code, and the related effort against another German cipher produced Colossus, arguably the world’s first programmable electronic digital computer.

For more information, check out Numberphile’s excellent video on the device.

Other Examples

Some other, less historically significant examples include:

  • It’s possible to make arbitrarily good approximations of purpose-built Turing machines using dominoes. They’ll never be as good as a real Turing machine, though, because they can only write to a bit in memory once (by knocking down a domino). Check out this awesome video, again by Numberphile.
  • In the card game Magic the Gathering, one can construct a situation where, just by following the rules, the players become components in a universal Turing machine. The “mathematical” proof is here.
  • It’s possible to build a fully working computer using redstone circuitry in the video game Minecraft. Here’s one guy’s computer. And here’s a tutorial on how to make your own.

Any I Missed?

If you know of any cool non-digital computers I missed, let me know in the comments!

Related Reading

This is the eighth article in my multi-part series on computing. These are the other parts:

  • In this article, I describe the basics of Boolean logic, starting with truth tables.
  • In this article, I describe how to make composite logical statements using Boolean logic.
  • In this article, I introduce the triode vacuum tube and describe how it works.
  • In this article, I show how to use triodes to make logic gates.
  • In this article, I explain how to build computer memory.
  • In this article, I explain how a Turing machine works.
  • In this article, I use the idea of a Turing machine to try and nail down what exactly a computer is.

EDIT: Cracked.com covers this topic!

I’m a big fan of the comedy website Cracked.com. They often teach you something in a pretty entertaining way. So imagine my surprise when I discovered this morning that they ran an article covering the same idea as this one, but with some really awesome examples of pre-microchip computers that I’d never heard about!

It’s a beautiful article by the awesome ex-physicist Luke McKinney. You can check out his blog here: lukemckinney.net

Without further ado, here’s the article:
http://www.cracked.com/blog/4-amazing-computers-made-before-microchips-existed/


What Is A Computer, Really?

This is a difference engine, built from Charles Babbage’s blueprints. (Source: Wikipedia)

Look at the picture above. Believe it or not, that person is operating an extremely sophisticated mechanical calculator, capable of generating tables of values for functions called “polynomials.” A graphing calculator can do that, but a pocket calculator certainly can’t. The device above is a mechanical purpose-built computer!

This article is the next installment of my series on computing. In the previous parts, we learned about Boolean logic, the language computers think in. We then learned how to implement this logic electronically and, using our newfound understanding of electronics, how to make computer memory so that computers can record the results of calculations. Finally, last time, we discussed a theoretical computer called a Turing machine (named after, of course, Alan Turing). It is believed that the answer to a problem can be computed if and only if one can build a Turing machine to compute it.

This time, we use our discussion of the Turing machine to try and nail down what a computer really is.

(If you haven’t read the article on the Turing machine, I recommend you do so now. I’ll be relying on it extensively. It’s here.)

The Universal Turing Machine

Recall from our discussion last time that a Turing machine has three parts:

  1. A ticker tape consisting of cells that can be written in. We select an alphabet of symbols we’re allowed to write. For example, you could easily use the real alphabet or every character available in unicode—whatever you like.
  2. A “head” that can read from the tape and write to it. The head also has a finite number of “states” or modes. You can think of these in the same way as the modes available to your printer at home: a “scan” mode, a “print” mode, maybe a “fax” mode. These determine what the printer does with the input it’s given. Similarly, the state of the head determines what it does when it reads a symbol.
  3. A rules table. The table tells the head what to do when it reads a symbol off of the tape, given the state it’s currently in.

Given a computable problem to solve, one can design a Turing machine to solve it by constructing an appropriate rules table. But what if we want to go the other way? What if we want to design a Turing machine that can solve any computable problem?

Turing realized that it’s possible to design a Turing machine that can mimic any other Turing machine. The simplest, most obvious way to do this is to just add more modes to the head. For example, suppose we had two problems we wanted to solve, problem 1 and problem 2. And suppose we know how to construct a Turing machine to solve each problem respectively: machine 1 and machine 2. And, for simplicity, let’s say that both machines use the symbols “1” and “0” and have three modes, “red,” “green,” and “blue.”

Now let’s construct a new Turing machine, machine 3, with SIX modes: “red1,” “red2,” “green1,” “green2,” “blue1,” and “blue2.” We’ll give it the following symbols to use: “1,” “0,” “A,” and “B.”

Then we construct the rules table such that if the state is red1, green1, or blue1, and the symbol is either 0 or 1, then machine 3 behaves like machine 1. The rules for the state red1 correspond to the rules for machine 1 in the red state, and likewise for green1 and blue1.  Similarly, if the state is red2, green2, or blue2 and the symbol on the tape is either 0 or 1, then the machine behaves like machine 2.

The only thing we have to do is determine whether the machine uses the states for machine 1 or for machine 2. We do that with the special symbols A and B. If the machine reads symbol A on the tape, it knows to behave like machine 1. If the machine reads symbol B on the tape, it knows to behave like machine 2.

Now machine 3 can solve either problem 1 or problem 2! And we can keep going like this—if we add an arbitrary number of states, we can solve an arbitrary number of problems. A Turing machine that can mimic any other Turing machine (and thus solve any computable problem) is called a universal Turing machine. A universal Turing machine is the standard model of a fully programmable computer.
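
To make this concrete, here’s a minimal sketch of the bookkeeping in Python. The rules tables are dictionaries mapping (mode, symbol) pairs to (symbol to write, direction to move, next mode), and the dispatch convention (start in a special “start” mode and read an “A” or a “B” to decide which machine to imitate) is my own simplification of the idea, not the one true construction.

    def combine(rules1, rules2, start1="red", start2="red"):
        """Build the rules table of "machine 3" out of the tables of machines 1 and 2."""
        combined = {}
        for tag, rules in (("1", rules1), ("2", rules2)):
            for (mode, symbol), (write, move, next_mode) in rules.items():
                # Tag every mode, so machine 1's "red" and machine 2's "red"
                # become the distinct modes "red1" and "red2" of machine 3.
                combined[(mode + tag, symbol)] = (write, move, next_mode + tag)
        # Dispatch: in the "start" mode, an "A" on the tape hands control to
        # machine 1, and a "B" hands control to machine 2.
        combined[("start", "A")] = (" ", +1, start1 + "1")
        combined[("start", "B")] = (" ", +1, start2 + "2")
        return combined

Every rule of machine 1 survives inside machine 3 under the tagged modes, so running machine 3 from the mode “red1” behaves exactly like running machine 1 from “red,” and likewise for machine 2.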

Can You Build One?

But if we needed an arbitrarily large number of modes to solve all problems and thus make a computer, that would be pretty useless. Since the number of problems we’d like a computer to solve is incredibly huge, it would be impossible to construct a computer with one mode for every single type of problem in existence (including those we haven’t even thought of yet).

So are modern computers only pale approximations of universal Turing machines? Fortunately, no. AI researcher Marvin Minsky showed in 1962 that you can construct a universal Turing machine with only seven modes and a four-symbol alphabet. Later, it was discovered that we only need a binary alphabet (1 and 0) if we use 15 modes. So yes, it’s possible to build a real-life universal Turing machine. That’s good to know, since you’re probably reading this post on one of them!

So What Is A Computer, Really?

We’re finally ready to answer our title question and tie all of this abstract computational theory back to practical electronics. A “computer” is any device that can be shown to be equivalent to a universal Turing machine. (This property is called Turing completeness.) For a device to be Turing complete, it needs the following three pieces, roughly speaking:

  1. A set of logical rules dictating how it should behave given some input. On a Turing machine, this is the rules table. In an electronic computer, it’s the Boolean logic gates we constructed out of vacuum tubes or transistors.
  2. A way to record the results of a calculation and read these results in as new input. On a Turing machine, this is the ticker tape. In an electronic computer, this is the computer memory constructed from electronic flip-flops.
  3. A way to control the device’s logic so that the same input data can be manipulated in many different ways. On a Turing machine, this is enabled by having many modes for the head. I didn’t discuss how to achieve this in an electronic computer, but suffice it to say that it exists.

With these three components, it is possible to construct a universal Turing machine, or at least something equivalent. But notice that these criteria don’t limit us to electronic devices! Any device that meets them can do anything a laptop can do (albeit probably much much slower)!

Next time we’ll take a break from hardcore theory to discuss some cool examples of non-electronic computers.

A Historical Note

Although electronic computers have been shown to be Turing complete, this may or may not have really been considered during their design. Computer pioneer John von Neumann and the people working on early digital computers were certainly inspired by Turing, and von Neumann extended Turing’s work. However, it wasn’t clear at the time that these architectures were truly equivalent to Turing machines. It was only later that mathematician Hao Wang demonstrated that electronic computers are Turing complete.

Other Approaches

Turing’s machines aren’t the only mathematical description of what it means to be a computer. Mathematician Alonzo Church developed a completely equivalent description based on the mathematical definition of a function, called the lambda calculus. I find it beautiful, but I’m not going to try to describe it here because it’s very abstract. Check out the link if you’re a fellow math nerd.

Further Reading

There are many, many resources on the Turing machine, but most skip over what’s special about a universal Turing machine. Here’s what I could find.

Related Reading

This is the seventh article in a multi-part series on computing. These are the other parts:

  • In this article, I describe the basics of Boolean logic, starting with truth tables.
  • In this article, I describe how to make composite logical statements using Boolean logic.
  • In this article, I introduce the triode vacuum tube and describe how it works.
  • In this article, I show how to use triodes to make logic gates.
  • In this article, I explain how to build computer memory.
  • In this article, I explain how a Turing machine works.

That’s all for now. Thanks for reading!


The Turing Machine

Artist’s representation of a Turing machine. (Source: Wikipedia)

This is the sixth part in my multi-part series on computing. In the previous parts, we learned about Boolean logic, the language computers think in. We then learned how to implement this logic electronically. And finally, we learned how to make computer memory, so that computers can record results of calculations.

Now before we conclude the series, we’re going to take a quick detour into computational theory and the Turing machine.

Alan Turing’s Machine of the Mind

In 1936, mathematician, WWII codebreaker, and all around awesome guy Alan Turing wanted to investigate a problem in formal logic. Specifically, he wanted to find the answer to a question first posed by David Hilbert. Simplistically, the problem is this:

We know from Gödel’s incompleteness theorems that some mathematical questions simply do not have answers. But Gödel opened up a Hydra’s den. Are there some mathematical questions that have answers… but answers that are impossible for a human being or machine to discover, even in principle?

To answer this question, Turing devised a theoretical machine designed to compute the answer to any question one can pose in mathematics. You’ve probably heard of it, even if you don’t know what it does. This is the Turing machine.

To Build a Turing Machine

A Turing machine is a pretty simple device, but it sounds complicated when you describe it, so I think the best way to understand what’s going on is by example. Let’s try a little experiment where we play the role of the “brains” of a Turing machine. I’m going to give you three things, and you’re going to manipulate them to form a Turing machine.

First, I’m giving you a tape, say a long ribbon of paper, partitioned into little blocks or cells. With nothing written on it, it looks something like this:

The ticker tape in a Turing machine

(In principle the tape could be infinitely long, but we’ll ignore that.)

Initially, the tape WILL have some symbols written on it. It’ll look like this:

The ticker tape I give you.

But let’s say I wrote on the tape in pencil. This means you can erase any symbol I’ve written on it and replace it with something else. Just make sure you use pencil!

I’m only going to allow you to write one symbol in each of the cells on the tape. And I’m going to restrict what you can write. Here are the symbols you can use: “1”, ” “, “-“,  and “=”, where ” ” represents a blank space.

Next I’m going to give you a spinner like the one you might get for a game of Twister. Ours will have an arrow that can point at one of four colors: Red (R), Green (G), Blue (B), and Yellow (Y).

You won’t just randomly spin the arrow. You’ll move the arrow to keep track of a color. Let’s start the spinner pointing to red, as shown below.

The spinner I give you. Let’s start it pointing to “red.”

Finally, I give you the following table. The table tells you what to do based on the symbol you read on the tape and the color of the spinner. Here’s the table:

The instruction set you have to use.

Let’s work through it.

Now, just start by looking at the symbol in the leftmost cell of the tape, as shown below.

The starting state for our Turing machine.

The symbol is a “1” and the spinner is pointing to “Red.” If you look at your table, you see it tells you to do nothing and just look at the symbol in the cell one to the right. If you do, you see that this symbol is also a “1” and we’re just supposed to move right again.

This process repeats for a while. The instruction table doesn’t tell us to do anything different until we reach the “=” sign shown below.

Where our Turing machine is when things start to get interesting.

If we look at our instruction set, it tells us to “erase the symbol and replace it with a space” and then “move one cell to the left and change the spinner to point to Blue.” So let’s do that. After we do, the Turing machine looks like this.

Our Turing machine after we erase the “=” sign.

Now the rules tell us to replace this symbol with an “=” sign, move one cell to the left, and change our spinner to point to “Green.” Let’s do that. The results are shown below.

Our Turing machine after we erase the “1.”

Now the rules tell us to move one cell to the left and change the spinner to “Yellow.” The results are shown below.

Our Turing machine now that we’ve moved left past the “-” sign.

Now the rules tell us to erase the “1”, replace it with a blank space, move to the right, and change the spinner to red, as shown below.

Our Turing machine now that we’ve changed our state back to “Red.”

Well, we know what to do when the spinner points to “Red.” We move right! This continues until we get to the equals sign, as shown below.

Our Turing machine after we reach the equals sign for a second time.

And we erase the equals sign, as shown below…

Our Turing machine after we erase the equals sign for the second time.

But now our spinner is pointing to “Blue” and we’re looking at a “-” sign. The instructions tell us we’re done!

 So… What?

So what have we done, exactly? Well, our tape started with an expression that looked roughly like

    \[111 - 1 =\]

and we ended up with an expression that looked roughly like

    \[11 -\]

In other words, we used a simple tally mark system to take the number “3” and subtract “1”! This sounds simplistic, but it’s not. We wrote down a set of rules so that given any number of tallies left of the equals sign and any number of tallies right of the equals sign, we can subtract the tallies on the right from the tallies on the left. And we can do so without any thinking at all, just by looking at the rules in our table.
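
If you like code better than tables, here’s a minimal sketch of this machine in Python. The exact rules table lives in the image above, so the table below is my reconstruction of it from the walkthrough; treat it as illustrative rather than authoritative.

    # Each rule maps (spinner color, symbol read) to
    # (symbol to write, move, new spinner color); move is +1 for right, -1 for left.
    RULES = {
        ("Red",    "1"): ("1", +1, "Red"),     # scan right over tallies...
        ("Red",    "-"): ("-", +1, "Red"),     # ...and over the minus sign...
        ("Red",    " "): (" ", +1, "Red"),     # ...and over blanks from earlier passes
        ("Red",    "="): (" ", -1, "Blue"),    # erase the "=" and back up
        ("Blue",   "1"): ("=", -1, "Green"),   # consume one tally on the right
        ("Green",  "1"): ("1", -1, "Green"),   # walk left over the remaining right-hand tallies
        ("Green",  "-"): ("-", -1, "Yellow"),  # cross the minus sign
        ("Yellow", " "): (" ", -1, "Yellow"),  # skip blanks from earlier passes
        ("Yellow", "1"): (" ", +1, "Red"),     # erase one tally on the left and start over
        # No rule for ("Blue", "-"): nothing left to subtract, so the machine halts.
    }

    def run(tape, state="Red", pos=0):
        tape = list(tape)
        while 0 <= pos < len(tape) and (state, tape[pos]) in RULES:
            write, move, state = RULES[(state, tape[pos])]
            tape[pos] = write
            pos += move
        return "".join(tape)

    print(run("111-1="))   # prints "11 -  ": two tallies remain, i.e. 3 - 1 = 2

The same table handles bigger problems too: run("111-11=") leaves a single tally behind.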

So What’s a Turing Machine?

A Turing machine is only a little bit more general than what we just described. It has three parts:

  1. A ticker tape consisting of cells that can be written in. We select an alphabet of symbols we’re allowed to write. In our simplistic example, our alphabet consisted of “1”, ” “, “-“, and “=” but it could easily describe more symbols. For example, you could easily use the real alphabet or every character available in unicode. Whatever you like.
  2. A “head” that can read from the tape and write to it. The head also has a finite number of “states” or modes. You can think of these the same way you think of the modes available to your printer at home. A printer probably has a “scan” mode, a “print” mode, and maybe a “fax” mode. These determine what the printer does given some input. Similarly, the state of the head determines what it does when it reads a symbol. In our example, the states were the four colors: “Red,” “Green,” “Blue,” and “Yellow.”
  3. A rules table. The table tells the head what to do when it reads a symbol off of the tape given the state it’s currently in.

Turing postulated that if there is a problem in mathematics that you can find the answer to, you can build a Turing machine that can find the answer. We currently believe this to be true, though the answer isn’t really known. (Aside: this hypothesis is called the Church-Turing Thesis.)

But, more relevantly (for us anyway), a Turing machine is a model of a single-purpose computer. Next time I’ll explain how, and show how to get from this to a fully programmable computer.

Further Reading

Here are some resources on Turing machines.

  • The Wikipedia article on Turing machines is, of course, a good place to start.
  • Mark C. Chu-Carroll has a very nice introductory article on Turing machines, from which I stole my example machine. You can find it here.
  • I-Programmer also has a nice article on Turing machines. It’s clearly meant for programmers and software engineers, but a layperson should be able to follow it. Find it here.
  • You can download a Turing machine emulator for your android phone here.
  • Want to learn how to build a Turing machine with a Raspberry Pi? Learn how here.
  • Google made a doodle Turing machine for Turing’s birthday. It’s archived here.
  • Want to build a Turing machine out of legos? You can! Find out more here.

Related Reading

This is the sixth article in a multi-part series on computing. These are the other parts:

  • In this article, I describe the basics of Boolean logic, starting with truth tables.
  • In this article, I describe how to make composite logical statements using Boolean logic.
  • In this article, I introduce the triode vacuum tube and describe how it works.
  • In this article, I show how to use triodes to make logic gates.
  • In this article, I explain how to build computer memory.


A Parallel Computing Primer

The Janus Supercomputer at the University of Colorado is one example of a parallel computer.

So, Jonah is moving, and he asked me to write a guest post. Jonah’s recent articles about computing prompted me to write about distributed computing. The question I will answer is: how do you go from computing with a sequential program to computing on many-core machines (a.k.a. parallel computation)?

Parallel Computation

First of all, what is parallel computation? In a nutshell, parallel computation is the science of using many processors to compute faster. You would certainly want to do this if you worked on the stock market, where the faster you calculate, the faster you can buy and sell and make money. If you are trying to render a TV show, you’ll want to finish it before it is supposed to air. Many newer computers have several processors in them, allowing them to take advantage of this kind of speedup.

I now want to discuss what it means to have a sequential program, which is what you have when only one processor is used. To solve most problems, you have a sequence of steps you must complete to get an answer. For example, suppose you want to find the sum of the numbers:
1 2 3 4 5 6 7 8

Computers can only perform binary addition, that is, they can only add two numbers at a time, so we must build the sum out of pairwise additions. Two possible ways of doing this are outlined below:

(((((((1 + 2) + 3) + 4) + 5) + 6) + 7) + 8)
or like this
((1 + 2) + (3 + 4)) + ((5 + 6) + (7 + 8))

The second way might not seem obvious, but it lends itself to parallelization, so we’ll use it as our primary example. Since we are only adding pairs of numbers, we can give each pair a letter to represent it: in this case, A represents the action of adding 1 and 2, B represents adding 3 and 4, and so on. It’s a bit more abstract, but E equals the sum of the results from A and B, and similarly for F and G.

((1 + 2) + (3 + 4)) + ((5 + 6) + (7 + 8)) = ((A + B) + (C + D)) = (E + F) = (G)

To illustrate the sequential case, suppose you have a single processor that can only do one operation at a time. You can picture it as a single channel: commands sent to it are carried out one after the other.
====================================
CPU 0 | A B C D E F G
====================================

This is called the sequential method, where only one processor is utilized. If each step took 1 second to complete, this code would take 7 seconds. The steps could be performed in other orders, as long as A and B come before E, C and D before F, and E and F before G. If that weren’t the case, a step wouldn’t have the outputs it needs from the steps before it, and the code wouldn’t work.

Now, the interesting question arises when you have two processors. How do you decide how to run the commands in parallel? This example is pretty straightforward: you can only run a step once both of its inputs are available. Using the example above, an efficient order could be
=====================================
CPU0 | A | B | E | G
=====================================
CPU1 |  C | D | F |
=====================================

Here A and C, B and D, and E and F are all performed in parallel. We can’t parallelize G, since it just adds the two last numbers together, and that can only be run on one processor. Again, if each operation takes one second, this would take 4 seconds. We seem to have achieved a 3-second speedup in our code.
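
Here’s a minimal sketch of this pairwise-summing strategy in Python, using the standard library’s concurrent.futures module. Each round adds adjacent pairs in parallel (steps A through D, then E and F, then G), although for a list this small the overhead of spinning up real worker processes would of course swamp any speedup.

    from concurrent.futures import ProcessPoolExecutor
    import operator

    def parallel_sum(numbers):
        with ProcessPoolExecutor() as pool:
            while len(numbers) > 1:
                left, right = numbers[0::2], numbers[1::2]
                # One round of pairwise additions, farmed out to the worker pool.
                summed = list(pool.map(operator.add, left, right))
                if len(left) > len(right):   # an unpaired element carries over to the next round
                    summed.append(left[-1])
                numbers = summed
        return numbers[0]

    if __name__ == "__main__":
        print(parallel_sum([1, 2, 3, 4, 5, 6, 7, 8]))   # 36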

Flynn’s Taxonomy

The example above outlines one of the most common ways to run a program in parallel. However, there are many ways to do multi-processing. To help distinguish the different types, Michael J. Flynn created a taxonomy. His taxonomy, simply called Flynn’s Taxonomy, splits all programs into four groups, denoted SISD, MISD, SIMD, and MIMD. You can visualize the taxonomy in the following graphic:

Flynn’s Taxonomy (By Wikipedia’s User:Cburnett)

The four types are distinguished by whether a program has one or multiple instruction streams, and one or multiple data streams. The four combinations yield the four types in Flynn’s Taxonomy.

The four types with their names expanded are as follows:

  1. SISD – Single Instruction Single Data
  2. MISD – Multiple Instruction Single Data
  3. SIMD – Single Instruction Multiple Data
  4. MIMD – Multiple Instruction Multiple Data

The sequential addition code is an example of the SISD type, and our parallel addition example is illustrative of the SIMD (Single Instruction Multiple Data) type. This is because each processor does the same thing: take four numbers and add them pairwise. However, CPU0 gets the numbers 1 2 3 4 and CPU1 gets 5 6 7 8, so the same instructions run on different sets of inputs.

So, you might ask, how do the other two types come into play? The MISD case can arise when, say, you have the medical records of many patients and you want to know whether your model (A) predicts cancer better than another model (B). You want to check both models on the same data, but you are running two different sets of instructions; in this case, A and B.

As for MIMD, this usually comes about when processing large amounts of different data. For example, suppose you are Google and you have billions of photographs. If you want to store them in the most compact form possible, you’ll probably want to compress them. Since different photos lend themselves to different compression algorithms, you might compress each one with two different algorithms (the instructions) and keep the better result. I’m not saying this is how they actually do it, but it does illustrate an instance where you might use a MIMD-type program.

An Example of Parallel Code

Weather simulations need to be fast and accurate. To get the accuracy, the software divides the surface of the Earth up into little regions, each of which carries data for its temperature, humidity, wind, and so on. To make the simulation tractable, each region is represented by a single point, usually located at its center.

To simulate the weather, the program moves air from one point to another to represent wind. The wind carries water and clouds, which change the temperature and, not to make it sound too simple, change the weather. By simulating this, we can predict to a reasonable extent how the weather will unfold tomorrow.

Of course, if the regions are too large, the simulation will not reflect the weather accurately. For instance, if a single region were the size of the US, the simulation would treat the weather everywhere in the US as the same, which certainly isn’t true. In practice, a region is usually about the size of a large neighborhood, as seen below:

A prediction area for south Boulder. (Credit: NWS www.weather.gov)

There is a cost to choosing small regions: you need more memory to store the information for all of them. Additionally, every region you add means more computations, which increases the computing time.

To overcome this, the National Weather Service (NWS) predicts the weather on large distributed computer systems, putting a few regions onto each computer. Each computer then applies the same instructions to its own set of regions, advancing the simulation a small step into the future. This is repeated many times until the desired forecast time is reached.

This way they can have predictions about tomorrow’s weather before tomorrow gets here.
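
Here’s a toy version of that scheme in Python. The grid of regions is split into chunks, one chunk per computer, and every chunk gets the same update rule applied at each time step. The update rule itself is invented purely for illustration; a real forecast model is enormously more complicated.

    def step(region):
        # The same instruction applied to every region: nudge the temperature
        # according to the wind blowing through the cell. (Made-up physics.)
        return {**region, "temperature": region["temperature"] + 0.1 * region["wind"]}

    regions = [{"temperature": 15.0 + i, "wind": 2.0} for i in range(8)]
    chunks = [regions[0:4], regions[4:8]]   # pretend each chunk lives on its own computer

    for _ in range(24):                     # advance 24 small time steps
        # On the real system each chunk would be stepped simultaneously on its
        # own machine; here we just loop over the chunks one after the other.
        chunks = [[step(region) for region in chunk] for chunk in chunks]

    print(chunks[0][0]["temperature"])      # 15.0 + 24 * 0.2 = 19.8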

The Trouble With Parallelized Programs

At this point it might seem as if we could just keep adding computers and get better and better performance. That would be nice, but we run into the issue that not all code is parallelizable. Remember the sum-of-numbers problem: the last step required us to use only one processor to add the two final numbers together. In fact, if we add more processors to that example, we see more steps that can only be run sequentially.

=====================================
CPU0 | A | E | G
=====================================
CPU1 |  B | F |
=====================================
CPU2 | C |
=====================================
CPU3 |  D |
=====================================

Here, if we assume each step takes a second, we use 3 seconds. So with 1 processor the sum took 7 seconds, with 2 it took 4 seconds, and with 4 it took 3 seconds. As we add more processors, we get diminishing returns in speedup; in fact, if we add more than four, there will be no additional speedup at all. This behavior is captured by Amdahl’s Law.

"AmdahlsLaw" by Daniels220 at English Wikipedia - Own work based on: File:AmdahlsLaw.png. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:AmdahlsLaw.svg#mediaviewer/File:AmdahlsLaw.svg
Amdahl’s Law showing potential speedup for different parallel portions. (Daniels220 at Wikipedia)

Amdahl’s Law gives us an equation that depends on the portion of the code that is parallelizable and on the number of processors used. The graphic above shows that, for any proportion of parallelizable code, there is an upper limit to the speedup. Past a certain point, it is not possible to get more speed by adding more processors.
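
For the curious, the usual statement of Amdahl’s Law is easy to write down. If a fraction p of a program’s work can be parallelized and the program runs on N processors, the best possible speedup is

    \[S(N) = \frac{1}{(1 - p) + p/N},\]

which can never exceed 1/(1 - p), no matter how large N gets. If only 90% of your code is parallelizable, no number of processors will ever make it run more than 10 times faster.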

This fact has forced people to look for other ways to speed up their code. Most of the time they find better algorithms and mathematics that simplify what must be done to solve the problem, and that makes the code run faster.

Conclusion

Parallel computing makes it possible to do large calculations in a timely manner, but it is not a simple business and there are certainly caveats to be aware of. But as computers become more and more parallel (which is a discussion for another day), parallel computing becomes a bigger and bigger deal. For example, Google created MapReduce, a system for processing big data efficiently across many machines.

Jonah uses very parallel computers to study gravitational physics and he’ll discuss the intricacies of scientific computing in this environment sometime soon.

Hope this inspired some of you to investigate further. Feel free to ask any questions you might have.


Moving (again)

Cute moving dog

For (hopefully) the last time in the next three years, I’m moving! It’s only one city over, but I want to try and keep up a semblance of work productivity while I pack up and hop. So for the next two weeks or so, the blog will be on hiatus. Sorry all!

 

I will try to put up some fun content sporadically. And hopefully a guest post.


Flip-Flops and the Art of Computer Memory

It’s a poor sort of memory that only works backwards.
~The White Queen to Alice
(Lewis Carroll, Through the Looking Glass)

It may surprise you to learn that flip-flops are an integral part of computing. Just… not the kind of flip-flop you think. (source)

This is the fifth part in my multi-part series on how computers work. Computers are thinking machines, and the first four parts of my series have been on how we teach computers to think.  But all of this logic, electronic or otherwise, is useless unless our computers can remember what they did. After logicking something out, a computer needs to remember the result of all that logicking!

In this post, I describe how to use the logic gates described in part four to electronically implement computer memory.

This post depends heavily on my last few posts, so if you haven’t read them, you might want to.

Still here? Okay. Let’s get to it!

 How Computer Memory Should Work

Before we talk about exactly how we implement computer memory, let’s talk about what we want out of it.

In the world of Boolean logic, there is only true and false. So all we need to do is record whether the result of a calculation was either true or false. As we learned in part three, we represent true and false (called truth values) using the voltage in an electric circuit. If the voltage at the output is above +5 volts, the output is true. If it’s below +5 volts, the output is false.

So, to record whether an output was true or false, we need to make a circuit whose output can be toggled between + 5 volts and, say, 0 volts.

Logic Gate, Logic Gate

To implement this, we’ll use the logic gates we discussed in part four. Specifically, we need an electronic logic gate that implements the nor operation discussed in part two. A nor B, which is the same as not (A or B), is true if and only if neither A nor B is true. We could use other logic gates, but this one will do nicely.

As a reminder, let’s write out the truth table for the logical operations or and nor. That way, we’ll be ready to use them when we need them. Given a pair of inputs, A and B, the truth table tells us what the output will be. Each row corresponds to a given pair of inputs. The right two columns give the outputs for or and nor = not or, respectively. Here’s the table:

The truth table for the logical operations “A OR B” and “A NOR B.” Each row corresponds to a different set of inputs A and B; the last two columns correspond to the appropriate outputs.

Let’s introduce a circuit symbol for the nor gate. This will be helpful when we actually draw a diagram of how to construct computer memory. Here’s the symbol we’ll use:

A symbol for a NOR gate. The two terminals on the left correspond to the inputs. The terminal on the right corresponds to the output.

Crossing the Streams

Okay, so how do we make computer memory out of this? Well, we take two nor gates, and we cross the streams…

Don’t cross the streams. You know, unless you have to. (Source)

We define two input variables, which we call S (standing for set) and R (standing for reset), and two output variables, which we call Q and not Q. The reasons for this peculiar naming scheme will become clear in a moment. Then we wire the output of each of our nor gates so that it feeds back into the other nor gate as an input, as shown below.

One kind of computer memory constructed by cross-wiring two NOR gates. We have two input variables, called S and R, and two output variables, called Q and not Q.

For reasons that will eventually become clear, this circuit is called an SR-Latch, and it’s part of a class of circuits called flip-flops. Now let’s see if we can figure out what’s going on. Assume that both S and R are false for now. And let’s assume that Q is true and not Q is (as the name suggests) false, as shown below. 

An SR latch such that the Q state is true, the NOT Q state is false, and both S and R are false. This state is self-perpetuating. Blue indicates FALSE. Red indicates TRUE.

Is this consistent with the truth table we listed above?

  • S nor Q is false, because Q is true, so it’s not the case that neither input is true. Since we gave the name “not Q” to the output of S nor Q, this means not Q must be false. That’s what we expected. Good.
  • R nor (not Q) is true, because R is false and not Q is false. The output of R nor (not Q) is the same as Q, so Q must be true. Also what we expected!

Okay, so what does this mean? It means that if we start our circuit in this state, with R and S as false, Q as true, and not Q as false, our circuit perpetuates itself in this state. Nothing changes.

Toggle Me This, Toggle Me That

Now let’s alter the state described above a little bit. Let’s change R to true. Since R is true, this changes the output of R nor (not Q) to false. So now Q is false…and since S is still false, the output of S nor Q is now true. Now our circuit looks like this:

An SR latch mid-toggle. We set R to true. Immediately, Q becomes false, which causes NOT Q to become true.

And if we wait a moment further, the cross-wiring takes effect so that both inputs to R nor (not Q) are true. This doesn’t change the output, though, as shown below.

An SR-latch mid-toggle. After NOT Q becomes true, it feeds back into the first NOR gate. Fortunately, this doesn’t seem to affect anything.

Now we can set R to false again, and Q will stay false and not Q will stay true, as shown below.

An SR-latch with Q false and NOT Q true. We got to this self-sustaining state by temporarily setting R to true.

Look familiar? This is the mirror image of the state we started with! And it’s self-sustaining, just like the initial state was. With both R and S set to false, not Q will stay true as long as we like.

Although R is false now, the circuit remembers that we set it to true before! It’s not hard to convince ourselves that if we set S first to true and then back to false, the circuit will revert to its original configuration. This is why the circuit is called a flip-flop: it flip-flops between the two states. Here’s a nice animation of the process, which I stole from Wikipedia:

An animation of the flip-flop process for the NOR-gate based SR-latch we talked about. In this animation, RED signifies TRUE and BLACK signifies FALSE. (source)

This is computer memory. And similar devices are behind your CPU cache and other components inside your laptop.
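
If you want to play with this yourself, here’s a minimal sketch of the latch in Python. It just applies the two NOR equations over and over until the outputs stop changing, which mimics the circuit settling into a stable state. (A real circuit updates both gates at once; updating them one after the other, as here, is a simplification, but it settles into the same stable states.)

    def nor(a, b):
        return not (a or b)

    def settle(S, R, Q, notQ):
        # Feed the cross-wired NOR gates repeatedly until the latch stops changing.
        while True:
            new_notQ = nor(S, Q)        # top gate: S NOR Q gives "not Q"
            new_Q = nor(R, new_notQ)    # bottom gate: R NOR (not Q) gives Q
            if (new_Q, new_notQ) == (Q, notQ):
                return Q, notQ
            Q, notQ = new_Q, new_notQ

    # Start in the self-sustaining state from the article: Q true, not Q false.
    Q, notQ = True, False
    Q, notQ = settle(S=False, R=True,  Q=Q, notQ=notQ)   # pulse R: the latch flips
    Q, notQ = settle(S=False, R=False, Q=Q, notQ=notQ)   # release R: it remembers
    print(Q, notQ)   # False True: the circuit remembers that R was set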

A Whole World of Flip-Flops

Although it’s technically correct, people don’t generally refer to the circuit I described as a flip-flop; it’s more often called a latch. The circuits people usually call flip-flops only change state on the tick of some sort of timer or clock. However, the idea is essentially the same.

In general, clocks are something I didn’t touch on. Real computers use a clock to sync the flipping of memory with the computations they perform using the logic gates I talked about earlier. (Clocks can also be generated using logic gates; the rate of this clock is the “clock speed” of your CPU.) I figured I wouldn’t go into that here because it’s a bit too technical and, in my opinion, not necessary for understanding.

Next Time

We now have enough tools to understand the essentials of how a computer works. It performs logical operations using logic gates, and it remembers what it did using flip-flops. Next time, we’ll take a break from electronic computers to discuss some other ways you can make a computer. Did you know you can make a computer with gears? With ropes and pulleys? With dominoes? With card games? Find out more next time.

Further Reading

Flip-flops are the domain of electrical engineers, so there’s not much non-technical material on them. Here’s what I could find.

  • A flip-flop is a special case of a class of circuits called “bistable multivibrators.” If you like, you can learn about them on their Wikipedia page.
  • Here’s a fairly technical video lecture on flip-flops.
  • This set of lecture notes is at about the same level as my post, but it goes into substantially more depth.
  • If you want to understand clocked flip-flops, you might start here.

Related Reading

If you liked this article, you might enjoy reading my previous entries on computing:

  • In this article, I describe the basics of Boolean logic, starting with truth tables.
  • In this article, I describe how to make composite logical statements using Boolean logic.
  • In this article, I introduce the triode vacuum tube and describe how it works.
  • In this article, I show how to use triodes to make logic gates.

If you like explanations of how hardware works, here are some similar articles:


The Boolean Circuit and Electronic Logic, Part 2

If the presence of electricity
can be made visible
in any part of the circuit,
I see no reason why intelligence
may not be transmitted
instantaneously by electricity.

~Samuel Morse

This sculpture by Peter McFarlane embodies the fear a lot of us feel regarding the innards of a computer. If we touch it, it might bite! Have no fear. Today we learn what all those wires on that board do.

This is the fourth part in my multi-part series on how computers work. Computers are thinking machines, but they can’t do this on their own. We need to teach them how to think. And for this, we need a language of logic. In the first part of the series, I introduced this language of logic, Boolean algebra. In the second part, I described how to formulate complex logical statements using Boolean algebra. In the third part, I described how some of the electronics in old-school computers worked.

This time, I finally get to the meat of things. I describe how you can use the circuits described in part 3 to actually implement Boolean logic electronically.

Voltages

The following discussion is going to depend heavily on an understanding of the triode vacuum tubes that I discussed last time. So if you haven’t already, you probably want to read last week’s article before this one.

That said, some review is probably in order. First let’s review electric voltage. The salient points are these: electrons want to flow from places in a circuit that are at low voltage to places that are at high voltage, and voltage is measured in volts. We want to use voltage as a stand-in for the truth values described in parts one and two of my series on how computers work.

There are a number of ways we can use voltage to represent truth values. But let’s pick the most common way. Let’s say we have a circuit. We’ll connect a voltage-measuring device to a spot on our circuit—perhaps where two wires intersect—and say that, if it reads any voltage higher than +5 volts, then our circuit represents true at that spot. If it reads any lower, then our circuit represents false there.

(There’s an important detail I’m skipping here. Voltages are not absolute numbers. We know that electrons flow from low voltage to high voltage. But we need a baseline for what “zero volts” means. In every circuit, this “zero volts” value might be different, but we always have to set it and choose a value. We usually call “zero volts” ground.)

Vacuum Review

Now let’s review the basics of a triode. As the name suggests, a triode is a circuit component with three terminals, called the cathode, anode, and filament, as shown below. There’s another piece, called the heating element, that we won’t discuss in much detail.

triode schematic 1
A schematic of a triode.

I discuss the triode in great detail in last week’s article. But for now, here’s what you need to know. The filament controls the flow of electrons from the cathode to the anode. Suppose I hook up the triode in a circuit so that the anode is at a high voltage and the cathode is at a low voltage. If the filament is at a low voltage like the cathode, no current will flow. If the filament is at a high voltage like the anode, current will flow.

Fiddly Names

It’s worth noting that the pieces of a triode can have other names. The anode is often called the plate, and what I’ve been calling the filament is often called the grid. In this naming scheme, confusingly, the heating element is often called the filament.

It’s also worth noting that these days triodes have been almost completely replaced by transistors. But transistors have different names for these parts. In a transistor, the cathode is called the source, the anode is called the drain, and the filament (or grid) is called the gate. A transistor does not have a heating element. To avoid confusion, I’ll use vacuum tubes instead of transistors. And I’ll use the names in my diagram.

Boolean Circuity

Now let’s implement Boolean logic! To do this, let’s chain some triodes together, as shown below.

An or gate made out of triodes. We keep the anodes at a very high voltage, much higher than the cathodes could possibly be at. Then we connect both cathodes to a terminal that we call “C.” We name the filament terminals “A” and “B” respectively.

Here we keep the anodes at a very high voltage, much higher than the cathodes could possibly be at. We then connect both cathodes to a terminal that we call C. We name the filament terminals A and B respectively.

Now let’s manipulate the terminals A and B and see what terminal C does. If both A and B are at a low voltage (i.e., false), then current won’t flow through either triode. The result is that C stays at a low voltage, which we’ll call false. But now let’s put terminal A at a higher voltage, say true. (For now we’ll leave B as false.) Then current will flow through the left-most triode and raise the voltage at terminal C so that it becomes true.

Similarly, if we keep A at a low voltage (i.e., false) but raise the voltage at B so that it becomes true, then current will flow through the right-most triode. This too will raise the voltage at terminal C and make it true. And of course, if both A and B are true, then current will flow and raise the voltage at C, making it true.

So if both A and B are false, then C is false. But if either A or B, or both A and B, are true, then C is true too. The astute among you might recognize C as the result of the logical operation A or B. We’ve just made the logical or work electronically! The circuit we just constructed is called an or gate.

More Gates

Can we do the same thing to make a logical and? Let’s try. We’ll still use two triodes. But now let’s connect them in a different way, as shown below. We put the anode of the first triode at a very high voltage. But we connect its cathode to the anode of the second triode. We label the filaments A and B respectively and the cathode of the second triode C.

An and gate implemented with triodes. We keep the anode of one triode at a very high voltage. But we connect the cathode to the anode of a second triode. We call the filament terminals “A” and “B” respectively and the cathode of the second triode “C.”

Now current will be unable to flow unless both terminals A and B are at a high voltage. In other words, C will be true only if both A and B are true. You might then recognize C as the output of the logical A and B, which we’ve now implemented electronically! This circuit is called an and gate.
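
In software terms, the two circuits we just built behave like the following pair of Python functions, where True stands in for “high voltage” and False for “low voltage” at the terminals. It’s a loose analogy, of course, since real gates care about currents and thresholds, as discussed below.

    def or_gate(a, b):
        # Terminal C is pulled high if current flows through either triode.
        return a or b

    def and_gate(a, b):
        # Current can only flow through the chain if both triodes conduct.
        return a and b

    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", "OR:", or_gate(a, b), "AND:", and_gate(a, b))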

Gates in Practice

I showed you how to electronically implement the Boolean operations and and or described in George Boole and the Language of Logic, Part 1. If we wanted to make composite statements as we did in part 2, then we’d also need the not operator. But that’s actually a bit trickier (though not impossible) to make electronically. So I won’t describe it.

Usually, in practice, people use nand and nor gates as opposed to and, or, and not. These are harder to make, however, so I won’t describe them. Hopefully the gates I did describe give you a feel for how this works.

Electronic Caveats

In my description of the and and or gates, I claimed that when current flowed through a triode, the voltage at the cathode changed. In practice things are a lot more complicated. Voltage and current are related. In simple cases, the relationship is given by Ohm’s law, which many of you are probably familiar with. In more complicated cases, where the current changes quickly, you need to take more things into account, like the travel time of electrical signals.

That said, the description I gave gets the basic idea across and, if we added more circuit components (such as capacitors, inductors, and resistors), the gates I described would work.

Next Time

Performing a logical calculation is useless unless you can record your findings. Next time, I’ll describe how to use logic gates to teach computers how to remember.

Further Reading

If you want to know more, here are some resources on how logic gates are implemented.

Related Reading

If you liked this article, you might also like the following articles I’ve written.

I am currently in the middle of a series on how computers work, from the ground up. Here are the previous articles:

  • In this article, I describe the basics of Boolean logic, starting with truth tables.
  • In this article, I describe how to make composite logical statements using Boolean logic.
  • In this article, I introduce the triode vacuum tube and describe how it works.

And if you like articles on how things work, here are some other articles I’ve written in the same vein.
