Tuesday, February 11, 2014

History of the Periodic Table Part 2: What is Atomic Mass?

In the early 1800's, as Johann Wolfgang Dobereiner's triadic version of the periodic table was being developed, John Dalton, Thomas Thomson and Jons Jakob Berzelius were beginning to figure out the relative atomic masses of the elements. At the time, each element's mass was taken as a number relative to the lightest known element, hydrogen, which was assigned the number 1. The logic behind this was that scientists believed each element was built up of atoms of hydrogen. And at this time, knowing nothing about subatomic particles, they considered each atom to be an indivisible unit.

This relative mass idea is known as Prout's hypothesis. Scientists thought that the atomic mass of any element would always be an exact whole-number multiple of hydrogen's mass (1), but soon, to the shock of the scientists involved, this was proved to be wrong. Some measured masses weren't even close. In 1826, Berzelius (shown below right), a man devoted to careful measurement and fastidious lab work, developed a precise way to measure atomic mass through experiment.

He discovered that the atomic mass of chlorine, in particular, fell in between two whole numbers (its mass is approximately 35.45 u). In his Treatise on Chemistry, Berzelius described his procedure for measuring the atomic mass of chlorine:

"I established its [chlorine's] atomic weight by the following experiments: (1) From the dry distillation of 100 parts of anhydrous potassium chlorate, 38.15 parts of oxygen are given off and 60.85 parts of potassium chloride remain behind. (Good agreement between the results of four measurements.) (2) From 100 parts of potassium chloride 192.4 parts of silver chloride can be obtained. (3) From 100 parts of silver 132.175 parts of silver chloride can be obtained. If we assume that chloric acid is composed of 2 Cl and 5 O, then according to these data 1 atom of chlorine is 221.36. If we calculate from the density obtained by Lussac, the chlorine atom is 220 [relative to the atomic weight of oxygen]. If it is calculated on the basis of hydrogen then it is 17.735."

Notice that his measurement is almost exactly half of the modern measurement. The reason is that at the time scientists didn't know hydrogen existed as a diatomic gas, so his hydrogen standard was off by a factor of two.
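If you want to check that factor of two yourself, here is a quick back-of-the-envelope sketch in Python, using Berzelius's hydrogen-based figure from the quote above and the modern atomic weight of chlorine:

berzelius_chlorine = 17.735   # Berzelius's value for chlorine, relative to hydrogen = 1
modern_chlorine = 35.45       # modern atomic weight of chlorine, in u

print(2 * berzelius_chlorine)   # 35.47 - within about 0.1% of the modern 35.45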

Atomic mass can be a confusing concept. Measured in unified atomic mass units (amu, or just u), it is sometimes called atomic weight instead - the two terms are often used interchangeably, and most of the time that's fine. However, these are two of several terms in chemistry that create a lot of problems for students and wreak havoc on their teachers. Chemwiki has an excellent definition chart you can use to clear up the chaos. Strictly speaking, atomic mass is the mass of an individual atom at the microscopic scale, whereas atomic weight is the average mass of an element's atoms, weighted by how abundant each isotope is. Isotopes are the reason why there is a subtle difference between the two definitions.

The Discovery of Isotopes: The Modern Atom Begins to Take Shape

Berzelius didn't know this, and in fact most high school chemistry students don't know this (yet), but you can measure the atomic mass of one sample of pure chlorine to fantastic precision and measure another sample of pure chlorine taken from somewhere else in the world and get a different number. Why?

Almost a century after Berzelius's work, around 1910, isotopes were discovered, and unexpected discrepancies between measurements, like the chlorine example I just mentioned, were shown to be due to an isotope effect: the measured mass of an element reflects the particular mixture of its stable isotopes present in the sample. This means that atoms of the same element (same number of protons) can have different numbers of neutrons in the nucleus, and that variation affects each atom's mass.
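Here is a rough sketch of that isotope effect in Python, using approximate textbook masses and average abundances for chlorine's two stable isotopes; a sample enriched in either isotope would give a noticeably different average:

cl35_mass, cl35_abundance = 34.969, 0.7577   # chlorine-35: mass in u, typical abundance
cl37_mass, cl37_abundance = 36.966, 0.2423   # chlorine-37

average_mass = cl35_mass * cl35_abundance + cl37_mass * cl37_abundance
print(average_mass)   # ~35.45 u, the familiar atomic weight of chlorine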

Most people credit the discovery of radioactivity to Henri Becquerel in 1896. He noticed that uranium salts blackened photographic plates, due to some kind of radiation, thought at first to be X-rays. It was not long until researchers such as Ernest Rutherford, Paul Villard and Pierre and Marie Curie realized that the radiation that Becquerel detected was more complex than first thought. And the implications were unsettling.

Around this time, researchers working with thorium, a radioactive element found in naturally occurring thorite minerals, discovered that the naturally found thorium in the mineral emits beta particles (electrons). A thin sheet of thorium in argon is shown below. Pure thorium is a silvery white lustrous metal but when some thorium oxide is present, as it usually is, it eventually tarnishes to black.

alchemist-hp;Wikipedia
They found that thorium isolated from decaying uranium emitted an entirely different particle - the alpha particle. Thorium has over 30 isotopes (all of them radioactive), most of which decay by emitting an alpha particle, though some decay through beta decay.

They didn't know what alpha and beta particles were, but they could detect that the two kinds moved in opposite directions when placed in an electric or magnetic field, and that one kind (beta) always traveled a lot further than the other kind (alpha). It only took a few years for Becquerel to realize that the beta particle was an electron, based on its mass-to-charge ratio, which matched Thomson's results. Otherwise, the two thorium samples were identical. These results flew directly in the face of Dalton's atomic theory: if two atoms have the same number of electrons they must also have the same number of protons in their nuclei, and therefore they should behave identically. And yet these researchers knew that something had to be different between these two thorium samples.

There are many different kinds of radioactive decay, and some of them change, or transmute, one kind of atom into another kind by changing a neutron into a proton or vice versa. Protons were discovered in 1917, when Rutherford expanded on Prout's idea that hydrogen was a basic building block of all heavier elements. Hydrogen contained only one of the newly discovered positively charged particles, the same particles emitted through some kinds of radioactive decay (proton emission), while heavier atoms contained more of them. Rutherford named these positive particles protons in 1920.

Along with the isotope mystery, something about atoms was way off. Scientists, looking at the various elements, knew that the relative atomic mass of an atom always seemed to be a bit more than double the atomic number, Z. They also knew that almost all the mass of an atom was concentrated in a tiny volume at the centre (thanks to Rutherford). The atom, as far as they knew, consisted only of protons and electrons, and the atomic mass data meant there had to be roughly twice as many protons as electrons. This was a mystery because they also knew that atoms are electrically neutral. They thought that perhaps half the electrons were bound up with the protons inside the nucleus, cancelling their positive charge somehow. It wasn't a very satisfactory explanation, and the newly formulated uncertainty principle implied that an electron confined within the tiny positive nucleus would need far more energy than the atom had available to keep it there.

Neutrons were discovered in 1932 by James Chadwick. It's a bit of a story and the link explains how he did it. His discovery of the neutron finally put the lingering isotope mystery on firm conceptual ground. Isotopes have the same number of protons but different numbers of neutrons in their nuclei. The alpha particle was finally found to be a helium nucleus, consisting of two protons and two neutrons.

Relative atomic mass (also called standard atomic weight) is now expressed on a scale where one unified atomic mass unit is defined as 1/12 of the mass of a carbon-12 atom, so carbon-12 itself is exactly 12 u by definition. The value listed for the element carbon, 12.0107 u*, is the average mass of its two stable isotopes (carbon-12 and carbon-13) weighted by their relative abundance on Earth. However, the isotopic makeup of samples from different sources on Earth can vary quite a bit, and this turns out to be both a little problem and a very useful scientific tool. On the plus side, you can pinpoint the original location of archeological samples of bone, teeth, iron tools, glass and lead-based pigments based on their isotopic profile. On the minus side, this variation leads to inaccuracy when relying on a single relative or average value for mass. Mapping the geographical variance of the isotopic profile for various elements is still a work in progress, as scientists work out with increasing precision the relative isotopic abundance of elements not just at various locations on Earth, but in the universe as a whole.

As a result, in 2010 the International Union of Pure and Applied Chemistry (IUPAC) changed how it reports standard atomic weights. The atomic weights of hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine and thallium are now written as intervals rather than as single numbers. Some modern applications require very precise atomic masses, so this change was necessary for accuracy. *Carbon is now listed as 12.0107 +/- 0.0008 u, a reflection of the varying abundance of its two stable isotopes, carbon-12 and carbon-13, depending on the geographical origin of the sample. Only ten elements have so far been updated, either because the others exist as a single stable isotope or because their upper and lower mass limits haven't been measured yet. IUPAC also regularly updates atomic masses as measuring precision improves.

Below is a screenshot from Wikipedia showing the average relative atomic masses of the elements. It doesn't reflect the new mass intervals of ten elements.


Lead (Pb, Z=82, atomic weight = 207.2) is the heaviest stable element. All elements with atomic numbers over 82 have no stable isotopes. The atomic mass of these elements is taken as the mass of the longest-lived isotope, and for some very short-lived elements it is an estimate only.

Mass Defect

When isotopes were discovered, scientists figured that a pure isotope (no mixture) should at least have an atomic mass very close to a whole number. In other words, its mass should equal the sum of the individual masses of its protons, neutrons and electrons (proton and neutron masses are almost identical, and each is close to 1 u). However, this too turns out to be incorrect, and we will use the element helium as an example to explain why.

Helium exists almost entirely as helium-4 on Earth. Based on what we know so far, we would expect its mass to be almost exactly 4.000 (2 neutrons + 2 protons = 4, plus a small mass contribution from the electrons). We find by Googling helium that its average measured atomic mass (standard atomic weight) is 4.003.

However, helium-3 is also present in trace amounts on Earth (it is the only other stable isotope of helium). Looking up helium-3, we find it has an isotope mass of 3.016. Wait a minute. We expect it to be almost exactly 3.000 because it is a pure isotope. Why is it so far off?

This leaves us with two questions: (1) why isn't helium-3, a single isotope, almost exactly 3.000, and (2) why is the average atomic mass of helium slightly HIGHER than 4.000 (4.003) rather than lower, given that helium-3 (with its smaller mass) makes an (albeit very small) contribution?

The answer is called mass defect. To illustrate what mass defect is, let's add up the known masses of all the subatomic particles in an atom of helium-4. It has two electrons, two protons and two neutrons. The masses of these particles (in amu) are all known to at least six significant digits so:

2 X proton (1.007276) = 2.014552
2 X neutron (1.008665) = 2.017330
2 X electron (0.000549) = 0.001098

Total = 4.032980

Why isn't this value 4.003, the measured mass of helium? First, let's recap what we know: 4.000 is the value we would naively expect from helium-4's mass number (2 protons + 2 neutrons), but it is not the actual measured atomic mass. If we look up helium-4's unified atomic mass, it is 4.002603 amu (or just u), rounded to 4.003 as shown above. This number is quite a bit different from both the naive whole-number value (4.000) and the value we got by adding up its components (4.032980).

The difference between the sum of the components and the measured value is called the mass defect. Energy is released when the helium nucleus is assembled from its protons, neutrons and electrons, so the assembled helium atom has lower potential energy (reflected by 4.003 rather than 4.033). The mass difference, 0.030377 u, is the mass equivalent of the energy that is released. This released energy is called the binding energy. The helium nucleus has lower potential energy than its separate nucleons but it has higher binding energy, and this is what makes the formation of helium atoms thermodynamically favourable. If we wanted to split a helium-4 atom apart, we would have to add that energy back.
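Here is the same bookkeeping as a short Python sketch; the factor 931.494 MeV per u is the standard conversion between atomic mass units and energy:

proton, neutron, electron = 1.007276, 1.008665, 0.000549   # particle masses in u
he4_measured = 4.002603                                     # measured atomic mass of helium-4, in u

parts = 2 * proton + 2 * neutron + 2 * electron   # 4.032980 u
mass_defect = parts - he4_measured                # ~0.030377 u
print(mass_defect * 931.494)                      # ~28.3 MeV of binding energy released on assembly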

You might wonder why we have to add energy to helium to get it to break apart. After all, doesn't the fission going on in nuclear plants release energy? The answer is in the size of the atom. Iron is the most stable of all atoms: it has the highest binding energy per nucleon, so neither fusing it nor splitting it releases energy. Atoms smaller than iron release energy when they fuse into larger atoms; the resulting larger atom has higher binding energy and lower potential energy. Atoms larger than iron (including those used in nuclear fission reactions, such as uranium) release energy when they are split apart.

If we add up the components of helium-3, we get:

2 X proton (1.007276) = 2.014552
1 X neutron (1.008665) = 1.008665
2 X electron (0.000549) = 0.001098

Total = 3.024315

Helium-3's isotopic mass is 3.0160293 u, so 3.024315 - 3.0160293 gives it a mass defect of 0.0082857 u, about a quarter of helium-4's 0.030377 u.
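The same bookkeeping can be wrapped in a little function to compare isotopes directly. In this sketch I have added iron-56 (atomic mass 55.9349 u, a standard reference value) for comparison, since binding energy per nucleon is what the curve discussed a little further down actually plots:

proton, neutron, electron = 1.007276, 1.008665, 0.000549   # masses in u
U_TO_MEV = 931.494                                          # energy equivalent of 1 u

def binding_energy(Z, N, atomic_mass_u):
    # mass defect of a neutral atom with Z protons, Z electrons and N neutrons, converted to MeV
    parts = Z * proton + N * neutron + Z * electron
    return (parts - atomic_mass_u) * U_TO_MEV

for name, Z, N, mass in [("helium-3", 2, 1, 3.016029),
                         ("helium-4", 2, 2, 4.002603),
                         ("iron-56", 26, 30, 55.934936)]:
    be = binding_energy(Z, N, mass)
    print(name, round(be, 1), "MeV total,", round(be / (Z + N), 2), "MeV per nucleon")

Helium-3 comes out near 2.6 MeV per nucleon, helium-4 near 7.1, and iron-56 near 8.8 - helium-4's sharp spike and iron's peak on the binding energy curve.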

ALL atoms, including those with radioactive unstable nuclei, have at least some positive binding energy. However, the binding energy in some atoms is not strong enough to hold the nucleus together indefinitely. These atoms will lose neutrons or protons (decay) until they reach a product that is stable. Very unstable nuclei may decay in microseconds while almost stable nuclei may take up to billions of years to decay.

The real measured atomic mass of an element therefore depends not only on its isotopic makeup but on its mass defect as well, and that depends on the particular atom's binding energy. An atom's binding energy consists of its nuclear binding energy (huge contribution since the strong force is involved) as well as its electron binding energy, better called ionization energy (a much smaller contribution since the far weaker electromagnetic force is involved). Ionization energy is the energy required to strip the atom of its electrons, to ionize it in other words.

Atoms with especially stable arrangements of neutrons and protons have especially high mass defects, especially low potential energy, and especially high binding energy. Atoms with unstable nuclear arrangements are radioactive. Predicting stability and decay rates is complicated; it is an example of a many-body problem in physics. Physics Stack Exchange provides a really good explanation of it, if you are curious. Radioactive nuclei go through a decay process. There are three basic possibilities: a nucleus can change a proton into a neutron or vice versa (beta decay), it can eject an alpha particle (a helium nucleus) or a single proton, or it can eject an even larger nuclear fragment. These decays result either in a new isotope of the same element (if only the number of neutrons changes) or in a whole new element (if the number of protons changes). The latter process is called transmutation, and it is the only real way, along with fusion, to change one kind of element into another. This is the real-life version of the philosopher's stone mentioned in the previous article.

Helium, with its two neutrons, two protons and two electrons, forms an unusually stable atomic arrangement. The graph below compares binding energy with nuclear size.


Iron (Fe), mentioned earlier, has the highest binding energy per nucleon (the graph peak), while helium-4's binding energy forms an unusually sharp upward spike at the left end of the graph. Helium-4 has a remarkably stable nucleus, giving it a significantly lowered atomic mass. If you look at helium-3 above, you see that it is right in line with where it is supposed to be: its nuclear arrangement of two protons and just one neutron gives it an average binding energy.

Helium is extremely inert, which means it is chemically unreactive under all normal conditions, so it won't form any compounds. It is also almost always a monatomic gas, condensing to a liquid only at the very cold temperature of 4.22 K (-269°C), which is close to the temperature of the vacuum of outer space (2.73 K). Like those of all elements, helium's chemical properties come from its electron configuration, which is shaped by the electrons' interactions with each other and with the nucleus. Helium's unusually stable nucleus and high binding energy are why element formation in the early universe pretty much stopped after helium nuclei formed (all larger atomic nuclei have been created in stars).

Why isn't hydrogen chemically inert too? Iron, even though it has an extremely stable nucleus, is fairly chemically reactive. Because of its particular electron configuration, it can either lose or accept electrons (usually from water or oxygen) to form various ionic compounds, such as iron oxide (rust). It is the electron configuration, not nuclear stability, that determines the physical and chemical properties of the elements, aside from their radioactivity of course. Chemical bonding between atoms was explained by Gilbert Newton Lewis in 1916 as an interaction between the electrons of the atoms involved.

Between 1800 and the early 1900's, the idea of what an atom is evolved at an explosive rate because so many great minds were at work on the concept of the atom. Around the same time as the proton and neutron were discovered, the electron configuration of the atom was being sorted out. Bohr's model of the atom hinted that electrons have specific energies within the atom; they can gain or lose only discrete packets, or quanta, of energy. In the 1920's the quantum mechanical model of the atom was formulated. The evolution from Rutherford's atomic model to the modern quantum mechanical model marks one of the greatest breakthroughs ever in both chemistry and physics, with spin-off progress in biology, engineering, geology and practically every other scientific discipline there is.

In 1817, when Johann Wolfgang Dobereiner was putting together his law of triads, none of these things - isotopes, nuclear binding energy, ionization energy - was known. No one knew that mass and energy were equivalent. No one knew exactly how atoms interact with each other, why they give off light and other radiation, or how they transmit heat. No one knew how the fundamental forces make atoms what they are. All they knew was that the Earth seemed to be composed of a growing list of simple substances, substances that seemed at the time to be fundamental (meaning they could not be broken down into anything smaller), and that some substances reacted with other substances to make yet different substances while others did not react at all. Unknown to the researchers of this time, they were taking the first steps toward an amazing new era of science.

Friday, February 7, 2014

History of the Periodic Table Part 1: From Alchemy to Mendeleev

Memorizing the periodic table and old dead guys is a quick way to turn people off chemistry. But, like many things, the background and context bring it to life. For teachers, I hope this is a refreshing second look. In this six-part series of articles, we will explore not only the timeline itself but some aspects that are less often discussed, such as how the concept and measurement of atomic mass were developed, how X-ray spectroscopy works and what it has to do with the periodic table, and what actinides and lanthanides are and why they are kept separate. We will also explore the rare earths in particular and why their story is currently newsworthy. Finally, we will close with a glimpse into the table's future.

There are fascinating tales of mystery, intrigue, competition and greed behind the evolution of chemistry, and the history of the periodic table is really about that. It is about how magic evolved into science, and yet the elements themselves are interesting in their own right. They are puzzles, and their relationships to one another offer clues into the deepest nature of the atom itself. We will explore this in detail in this series. We may be surprised to find that the periodic table we all had to learn may be about to change.

Here is Wikipedia's classic periodic table of elements:

DePiep;Wikipedia
My favourite table comes from the Royal Society of Chemistry (RSC). If you click on any element there you will get a scroll-down index of all kinds of information about it. Wikipedia also does a good job if you want to search an individual element by name. If you look up iron for example, you will see a photo of it and a useful chart at the right with electron configuration (we'll find out why this is so important), melting point, oxidation states, etc.

The following 11-minute video is a nicely done introduction to the history of the periodic table created by CrashCourse.



It Begins with Gold and Black Magic

The first known elements were those that stood out visually and were accessible. Ancient people found nuggets of gold where they lay scattered about on the soil or just beneath it. Gold, along with copper, was collected, melted down and shaped into decorative objects as early as about 6000 BC. It wasn't until about 330 BC, however, that people such as Aristotle looked at these pure materials and began to wonder whether these and other less visually distinct materials ultimately come from some basic "prima materia" or first matter. Earlier, Plato, thinking along similar lines, had suggested there are four basic building blocks of all matter - earth, water, air and fire - and he named them "stoicheia," the Greek word for elements. For many centuries afterward, no one knew how these elements formed all the different materials on Earth, but it was thought that everything stemmed from a single mysterious formless source variously named chaos, quintessence or the aether.

Nuggets of gold are beautiful in their own right, and it's not surprising that people were not only curious about how these quite rare materials formed but also about how they could get more of them. In the meantime, between 6000 BC and 750 BC, a variety of other useful metals were discovered. In order of discovery they were silver, lead, iron, tin and mercury. The craft of smelting - getting a metal to melt and separate from its ore - improved over time and made many of these discoveries possible. While gold, copper and silver can be found in their native form, other metals like lead, tin and iron (except for pure iron found in meteorites) are not. Metals, hard and malleable, were a huge boon for ancient people. Metal tools and weapons were vastly superior to those made of wood, rock or bone. Unknown to them, these metals were the first elements to be identified, extracted and purified.

At around 300 CE, or possibly even before this, what would become the legend of the philosopher's stone took shape. People thought that some alchemical substance with magical properties could turn more common base metals such as lead and iron into more rare and highly valuable gold or silver, and thus began the alchemical race for the philosopher's stone. This might seem stupid to us living in the quantum age, but back then, without scientific knowledge to rest on, it would have been only logical to conjure up magical origins for phenomena we don't understand. There was a significant magical component to very early medicine as well. And, if you look around you, you will come across many vestiges of magical thinking today. Consider superstitions.

This stone was not only considered to be a physical material but a symbol of perfection and an elixir of life and immortality as well. Prima materia was thought to be a starting ingredient in a recipe for the philosopher's stone. The quest to find that recipe went on for many centuries.

Along the way, scientific progress was made, if by sheer trial and error. An alchemist in the 8th century, Jabir ibn Hayyan, surmised that every metal must be made of a combination of four principles. Elaborating on the four original elements, these principles were hot/dry, cold/dry, hot/wet and cold/wet. By rearranging these principles and applying some kind of elixir (the philosopher's stone), he thought he could transmute one metal into another metal. Although gold is generally found in its native pure form as nuggets, he thought that yet more gold resided hidden within different principle mixtures (we call them alloys and ores today) and it was just a matter of treating those mixtures with an elixir to release them. Smelting (roasting the ore or mineral over fire to release molten metals such as copper, lead, silver, tin, iron and mercury) was known since 6000 BC, but chemical metallurgy was not. By the 14th century, one such chemical treatment, called aqua regia (royal water), was discovered. Not the pretty solution you might expect an elixir to be, it was a fuming highly corrosive orange/yellow mixture of powerful acids that can dissolve certain metals such as gold from minerals and alloys so they can be recovered in pure form. This was the first chemical extraction of an element, as opposed to the physical (heat) extraction method called smelting.

Henning Brand - From Pee to Phosphorus, or, From Alchemy to Chemistry

Still, as of 1649, alchemists remained on the hunt for the elusive philosopher's stone, a substance that could do one better than extract a substance - it could create it anew. A German merchant called Henning Brand, trying to find the stone, ran experiments on distilled human urine, and discovered not gold but a white substance that glowed pale green in the dark, which he named phosphorus, the name owing itself to the Greek word for "light bearer."

Of all substances, why urine? At the time philosophers believed that man's body is a microcosm of the universe, so bodily fluids should contain, like the world itself, gold among all other materials. It is reported that he eventually came across a recipe in a then fairly recent tome called "400 Auserlesene Chemische Process" that called for using a mixture of alum, saltpeter and concentrated urine to turn base metals into silver (it didn't work). So, I imagine that book's promise placed gold in the realm of possibility too. I can't help but chuckle here. Men.

This discovery would have been awesomely horrific: imagine a giant cauldron, with a fire roaring underneath it, boiling with pee. Eventually the urine concentrates into syrup. A glowing liquid trickles out the bottom spigot - itself entirely aflame.

The chemistry of the process is this:

Urine is rich in dissolved salts, including phosphates. Evaporating it produces, among other salts, ammonium sodium hydrogen phosphate, or (NH4)NaHPO4. Heating the evaporate decomposes it into sodium metaphosphate, ammonia and water:

(NH4)NaHPO4 → NaPO3 + NH3 + H2O

Heating the sodium metaphosphate with charcoal reduces it to white phosphorus, along with carbon monoxide and sodium pyrophosphate (not an especially risk-free reaction):

8NaPO3 + 10C → 2Na4P2O7 + 10CO + P4 (white phosphorus)
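As a quick sanity check that the two equations above are balanced, here is a small Python sketch that simply counts the atoms of each element on both sides:

from collections import Counter

def count_atoms(substances):
    # substances: list of (coefficient, {element: atoms per formula unit}) pairs
    total = Counter()
    for coefficient, atoms in substances:
        for element, n in atoms.items():
            total[element] += coefficient * n
    return total

# (NH4)NaHPO4 -> NaPO3 + NH3 + H2O
left = [(1, {"N": 1, "H": 5, "Na": 1, "P": 1, "O": 4})]
right = [(1, {"Na": 1, "P": 1, "O": 3}), (1, {"N": 1, "H": 3}), (1, {"H": 2, "O": 1})]
print(count_atoms(left) == count_atoms(right))   # True

# 8 NaPO3 + 10 C -> 2 Na4P2O7 + 10 CO + P4
left = [(8, {"Na": 1, "P": 1, "O": 3}), (10, {"C": 1})]
right = [(2, {"Na": 4, "P": 2, "O": 7}), (10, {"C": 1, "O": 1}), (1, {"P": 4})]
print(count_atoms(left) == count_atoms(right))   # True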

This experiment (as well as Boyle's famous phosphorus experiment to follow) is recreated in the first episode of the BBC series Chemistry: A Volatile History (a link to watch it can be found at the end of this article).

Though it was not precious gold, I am sure he reveled in his man-made product, aglow with some kind of mysterious life force! Below is a sample of white phosphorus (as a solid) under water.

BXXXD; de.wikipedia
Not a substance to mess around with, white phosphorus, also known as Willie Pete, is highly reactive (hence the water) and volatile. It has often been used as an incendiary weapon of war. Set it aflame and everything it touches, including human flesh, catches fire, stays on fire, and gives off a thick smothering smoke. It was also used in early friction matches, before safer forms of phosphorus replaced it.

Brand's journey was captured in the famous 1771 painting "The Alchemist in Search of the Philosopher's Stone," shown below.

This painting romanticizes the actual process that was described in 1730 as requiring 50-60 pails of urine that was both putrid and "bred worms" (chuckling again).

Joking aside, this was the first element to be chemically discovered and its discovery was the catalyst (sorry) that ushered in the age of modern chemistry as we know it. There was a lot of fascination around this new product and Brand sold his secret recipe to anyone willing to meet his price. At the time, alchemy was a shadowy secretive world, filled with arcane symbols and recipes and procedures that were rarely shared. The RSC Periodic table has an alchemical version where you can see the (very beautiful) alchemical symbols for various elements that were known at the time. They are quite fascinating. For example, the symbol for iron (right) is also the symbol for Mars and for masculinity.


An ironically fun fact here: phosphates (salts of phosphorus) are one of three essential nutrients (nitrogen, potassium and phosphorus) for plants. There are some movements underway around the world to once again harness the phosphates in urine to use in fertilizer, as geological supplies of phosphate rock dwindle.

Phosphorus, later rediscovered by Robert Boyle, led many others to wonder what exactly an element is. In 1661, Boyle opened up current alchemical knowledge to the world by publishing a book called The Sceptical Chymist in plain English. He defined an element as "any substance that can't be broken down into a simpler substance by a chemical reaction," a good working definition that serves well even after the discovery of subatomic particles (particularly electrons) in the late 19th century with the work of J.J. Thomson and others.

The transition from the Middle Ages to the Age of Enlightenment marked the gradual transition from alchemy to chemistry, as notions of transmutations and the philosopher's stone gave way to the hunt for new "simple substances."

Antoine-Laurent de Lavoisier - the Father of Modern Chemistry

Antoine-Laurent de Lavoisier was the first person to categorize a list of all then known "simple substances." He placed it in a book called Traité Elémentaire de Chimie (Elementary Treatise of Chemistry). It's quite the volume, over 500 pages. You can see a translation of it online created by Project Gutenberg. He brought the concepts of balancing equations and the conservation of mass to chemistry. The law of conservation of mass is the rationale behind balancing a chemical equation. This is his formulation of the law translated into English:

'We may lay it down as an incontestible axiom, that, in all the operations of art and nature, nothing is created; an equal quantity of matter exists both before and after the experiment; the quality and quantity of the elements remain precisely the same; and nothing takes place beyond changes and modifications in the combination of these elements.'

He also explained combustion in terms of combination with oxygen, a breakthrough over an earlier theory in which combustible substances were filled with a fire-like liquid called phlogiston that was released when the substance burned. Below is an elegant portrait of him and his wife (who was also a chemist) painted in 1788.

Often considered the first modern chemistry textbook, it included elements such as oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc and sulphur, among others. When Lavoisier's chemistry book was published (1789), over 20 elements were known. His list also included light and "caloric," which were thought at the time to be basic material substances too. Light is, well, light, and caloric is what they thought heat was made of - a fluid that flowed from hot objects toward cold objects. The book's chemical classification was simple but classic: metals and nonmetals.

During the next century many more elements were discovered - 56 by the year 1850. The RSC periodic table has a history version where you can simply plug in any date from 1 CE to 2014 and all elements known at that date are highlighted. You can click on each element to read a short story about its discovery. Wikipedia has a good timeline of element discoveries as well.

In the 1800's, people were beginning to wonder how this wide variety of substances related to one another. There was a human need to get a handle on them by comparing their physical and chemical properties and categorizing them. In 1817, Johann Wolfgang Dobereiner noticed that there were trends in the properties of elements, so he came up with a way to classify them accordingly. He organized the elements into groups of three so that each element in a group shared related known properties.

John Dalton - the Atom Within the Element

Just a few years prior to Dobereiner's work, John Dalton and others achieved a huge breakthrough that would be tremendously helpful in categorizing these substances. Dalton figured out that substances are made of atoms. This first atomic model was Dalton's model. It was pretty simple - an atom was just a small indivisible object. You can think of it as a tiny solid sphere. Dalton, a physicist (shown right), revolutionized chemistry in the first decade of the 1800's with a series of papers on his atomic theory.

Based on his work with gases, he realized that each pure element is made up of identical atoms, and that atoms of different elements combine with each other in fixed ratios, a revolutionary idea. He also discovered that atoms could be told apart from each other by their unique relative atomic weights. Below is a scan of the first page of his "A New System of Chemical Philosophy," published in 1808.


Its quality isn't great, but you can see at the bottom of the diagram how much work he did figuring out how atoms combine with each other to make various compounds. The names of many of Dalton's atoms reveal a work in progress. For example, elements 8, 9 and 10 are called lime (actually a group containing many calcium-containing compounds), soda (there are actually many sodium salts) and potash (there are many potassium salts). We now know that number 37 is not a 'septemary' element composed of atoms of sugar, with each atom made of 1 atom of alcohol (33, left) and 1 atom of carbonic acid (28, left). We know that sugar is a compound, composed of carbon, hydrogen and oxygen atoms. But Dalton based his results on what he knew from experimentation - that sugar ferments into two products, an acid and an alcohol. He knew that water (21) is made of oxygen (4) and hydrogen (1), except that he thought the ratio was 1 oxygen to 1 hydrogen, because at the time no one knew hydrogen was a diatomic gas, and he also thought of water as a binary atom rather than a compound.

Johann Wolfgang Dobereiner was the first person to try to classify the elements based on Dalton's work. He grouped them into clusters of three, arranged in order of increasing relative atomic mass so that the mass of the middle element was close to the mean of the outer two. The next article in this series, History of the Periodic Table Part 2, explores in detail how the concept of atomic mass was developed and how its measurement was, and continues to be, refined.
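Here is the triad rule in numbers, using modern approximate atomic masses for three of his triads:

triads = {
    "Li/Na/K": (6.94, 22.99, 39.10),
    "Ca/Sr/Ba": (40.08, 87.62, 137.33),
    "Cl/Br/I": (35.45, 79.90, 126.90),
}
for name, (light, middle, heavy) in triads.items():
    print(name, "middle element:", middle, "mean of outer two:", round((light + heavy) / 2, 2))

In each case the middle element's mass lands within a unit or two of the mean of its neighbours.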

Examples of Dobereiner's Triads (numbers are relative atomic masses) are shown below:


An example of one of his triads - lithium, sodium and potassium - is shown below left.

These three elements have a lot in common. They are all a silvery colour. You can squash them with a knife. They float on water. They melt very easily. And, they are all highly reactive. They have to be stored under oil but even under oil they gradually corrode, reacting with traces of oxygen that eventually seep through the oil. Now we group these elements as alkali metals.

This particular grouping works but in general the triad concept wasn't very successful. Many similar elements (judged by appearance and reactions) such as the transition metals (nickel, copper, chromium, zinc, platinum, etc.) can't be arranged in triads, while chemically dissimilar elements can be placed in triads. However, the triad arrangement did provide a clue about the relationship between element appearance/behaviour and atomic mass.

Several more attempts were made to categorize the elements, none with much more success than the triad table. In 1865, John Newlands devised a 'law of octaves' for the elements (56 of them known by then). He noticed that when the elements were arranged in order of increasing atomic weight, similar elements tended to recur at every eighth position, so he organized the elements into rows of eight and gave each element its own number, an early version of the atomic number. A scan of his original list is shown below.




(credits for element photos respectively: Tomihahndorf;Wikipedia, Dnn87;en.wikipedia, http://images-of-elements.com/potassium.php;wikipedia)





For example, lithium (Li, atomic number Z=3) and sodium (Na, Z=11) (his atomic numbers are a bit off) have similar properties, as we just saw. Both are soft and can be cut with a knife, both are good electrical conductors, and both react exothermically with water.

Beryllium (Be, atomic number Z=4) (below left) and magnesium (Mg, Z=12) (below right) are also fairly similar to each other in appearance and chemistry. Both are strong rigid metals and both form a thin layer of oxide in air.


(Alchemist-hp;Wikipedia and Wanut Roonguthal;Wikipedia)

The problem with Newlands' scheme is not that it was fundamentally wrong; the problem was that he compared it to a musical octave. The fairly recently formed Chemical Society of London thought his musical analogy was far too ridiculous to publish, and it was not until the octet theory of chemical bonding was established in 1916 that the importance of his work was finally recognized. The octet rule we know today echoes Newlands' rule of octaves, with corrections. Both systems have limitations; for example, the eight-fold pattern works well only for the lighter elements, from lithium (Z = 3) to neon (Z = 10).

Dmitri Mendeleev, the Father of the Periodic Table

It was not long afterward (1869) that the man we know as the father of the periodic table, Dmitri Mendeleev (below), published his periodic table in an obscure Russian journal.


His arrangement provided spaces for elements that had not yet been discovered, and it could predict some of these unknown elements' characteristics based on their locations in the table. The only group missing was the noble gases, a group of odourless, colourless, almost nonreactive gases that were discovered later. And thallium, lead, mercury and platinum were in the wrong groups.

This is what his handwritten version looked like (below right), not the neat and colourful castle turret block diagram we know today:


X-ray Spectroscopy Refines Mendeleev's Table

In 1914, X-ray spectroscopy was a new and very au courant investigative tool thanks to the work of W.H. and W.L. Bragg and Maurice de Broglie. Physicist Henry Moseley took advantage of this tool to find a relationship between the wavelength of an element's characteristic X-rays and its atomic number, Z. This was the first ordering of the elements grounded directly in the physics of the atom, and it also refined Ernest Rutherford's model of the atom, which had been published just a few years earlier, in 1911. Rutherford discovered that the atom, rather than being a uniform sphere, consists of an intense central positive charge concentrated in a tiny volume at the centre, surrounded by a more diffuse but equal negative charge. Moseley's work showed that the central positive charge comes in whole-number multiples of a unit charge, and that this number equals the element's atomic number in the table. To explore in detail how Moseley's work brought the periodic table into the quantum age, see History of the Periodic Table Part 3: Spectroscopy and the Quantum Atom.
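To give a feel for the relationship Moseley found, here is a rough Python sketch of his law for the strongest (K-alpha) X-ray line, using the standard Rydberg frequency; the elements chosen are just examples:

rydberg_frequency = 3.29e15   # Rydberg constant times the speed of light, in Hz
c = 2.998e8                   # speed of light, m/s

for element, Z in [("calcium", 20), ("iron", 26), ("copper", 29)]:
    frequency = 0.75 * rydberg_frequency * (Z - 1) ** 2   # Moseley's law for the K-alpha line
    print(element, round(c / frequency * 1e9, 3), "nm")

Copper's K-alpha line comes out near 0.155 nm, close to the measured 0.154 nm, and plotting the square root of the frequency against Z gives Moseley's famous straight line.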

Mendeleev's periodic table doesn't look much like our modern version. I created a block version of his table, shown below, with Dobereiner's triads marked as yellow squares and the elements known to the ancients marked with dark green dots. The green squares are elements, such as the noble gases, that were unknown in Mendeleev's time; he left those gaps to be filled in later. Approximate atomic masses are shown beneath each element's symbol.

Beyond Mendeleev's Table

Many more elements are known to us today. Our modern version of the periodic table organizes elements into blocks, shown below.

Roshan220195;Wikipedia
This block arrangement reflects a deeper quantum mechanical understanding of the different atoms. Many scientists, working at the beginning of the 20th century, ushered in a new era called quantum physics. With this came a new atomic model, called the quantum mechanical model. Beginning with J.J. Thomson's discovery of the electron in 1897, and working from the Bohr and Rutherford models in which electrons orbit a positive nucleus in concentric orbits with increasing energy as one goes outward, rapid progress was made in determining exactly how electrons behave inside an atom. Physicists Albert Einstein, Louis de Broglie, Erwin Schrodinger, Werner Heisenberg, and Max Born all contributed to the modern model of the atom in which electrons orbit in orbitals defined by a probability distribution that is determined by the electron's angular momentum.

Blocks s, p, d and f are derived from the orbital configurations of the electrons in the atom, which themselves derive from the quantum angular momentum numbers of the electrons. To learn about orbitals and subshells and to review them, read Atoms Part 4A: Atoms and Chemistry - Atomic Orbitals and Bonding. Orbital configuration will also be explained in detail in History of the Periodic Table Part 4: Lanthanides and Actinides - Elemental Misfits?

The s-block contains the alkali metals and alkaline earth metals. The p-block contains all the nonmetal elements with the exception of hydrogen and helium. The d-block contains the transition metals. The f-block contains the inner transition elements - the lanthanides and actinides.

s-block elements exhibit well-defined trends in their physical and chemical properties, which can be explained by the increasing number of valence electrons (the highest-energy, outermost electrons involved in chemical interactions) filling up the s subshell. The valence electrons in p-block elements fill up the p subshell, valence electrons in d-block elements fill up the d subshell, and f-block elements have valence electrons that fill the f subshell. What sets the f-block elements apart is that after the first element in each series (for example, lanthanum, Z=57, in the lanthanide series), the energy of the 4f subshell falls below that of the 5d subshell, so electrons from cerium (Z=58) onward start to fill up the 4f subshell before any more electrons are added to the 5d subshell. This places them in the f-block even though technically they fit into the d-block (see the light and dark pink squares in the table at the beginning of this article). We will explore this in greater detail in History of the Periodic Table Part 4.
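Here is a rough sketch of that filling order in Python, using the usual aufbau (Madelung) rule of ordering subshells by n + l and then by n. It is a simplification - it ignores the handful of real exceptions such as chromium and copper, and the lanthanide subtleties described above - but it reproduces the block structure:

SUBSHELL_CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}
L_VALUES = {"s": 0, "p": 1, "d": 2, "f": 3}

# all subshells up to n = 7, sorted by the Madelung rule: lowest n + l first, then lowest n
subshells = sorted(
    [(n, l) for n in range(1, 8) for l in "spdf" if L_VALUES[l] < n],
    key=lambda nl: (nl[0] + L_VALUES[nl[1]], nl[0]),
)

def configuration(Z):
    remaining, parts = Z, []
    for n, l in subshells:
        if remaining == 0:
            break
        electrons = min(remaining, SUBSHELL_CAPACITY[l])
        parts.append(f"{n}{l}{electrons}")
        remaining -= electrons
    return " ".join(parts)

print(configuration(17))   # chlorine: ends in 3p5, so it sits in the p-block
print(configuration(26))   # iron: 4s fills before 3d, and the 3d6 electrons put it in the d-block
print(configuration(58))   # cerium: the 4f subshell starts filling here, the beginning of the f-block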

An additional, hypothetical g-block would consist of elements heavier than any created so far, in which electrons would begin to fill g orbitals; all of them would be unstable and therefore radioactive. In fact, all elements above Z = 82 (lead) are unstable, and the half-lives of elements above lead generally decrease (meaning they are increasingly unstable) as atomic number increases. No elements larger than Z = 118 (ununoctium) have been created or discovered. Ununoctium has a half-life of just 0.89 milliseconds, and only a few atoms of it have ever been created in a collider. However, researcher Walter Greiner predicts there are more elements to discover and, in fact, there may not be a highest possible element, at least theoretically.

Some elements may exist in what Glenn Seaborg, a nuclear chemist, called an island of stability: a group of nuclides with a neutron number (N) around 178 and an atomic number (Z) around 118 that would be unusually stable, with half-lives at least minutes long and perhaps far longer, shown below in a 3-dimensional map.
InvaderXan;Wikipedia
This hypothesis is built on the nuclear shell model, in which the atomic nucleus is built up of energy shells analogous to electron energy shells; completely filled nuclear shells impart extra stability to the nucleus. The model implies that certain numbers of nucleons are magic numbers - they create extra-stable nuclei. Furthermore, the shells for protons and the shells for neutrons are expected to be independent of each other, so nuclei with magic numbers of both protons and neutrons (doubly magic nuclei) are even more stable. One such doubly magic combination is predicted at around 178 neutrons and 118 protons - the island of stability shown above.
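For reference, here are the experimentally established magic numbers, with a quick check of a few familiar doubly magic nuclei (a small sketch; the nuclei listed are just well-known examples):

MAGIC_NUMBERS = {2, 8, 20, 28, 50, 82, 126}

for name, Z, N in [("helium-4", 2, 2), ("oxygen-16", 8, 8), ("lead-208", 82, 126)]:
    print(name, "doubly magic:", Z in MAGIC_NUMBERS and N in MAGIC_NUMBERS)

The island of stability is the prediction that further magic numbers exist beyond these, out in the superheavy region of the chart.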

These additional elements could be placed in an extended version of the periodic table, shown below. This is hard to see so you can find a large version of it here.


The extended periodic table would include additional blocks of elements such as super-actinides and eka-super-actinides.

In History of the Periodic Table Part 5, we will focus on the lanthanide series in the f-block - the rare earths. These elements have unique properties that are essential to the high-tech industry, a story that is gaining political energy. Parts 4 and 5 continue our story of the history of the periodic table and bring it into the quantum mechanical age.

Finally, in History of the Periodic Table Part 6, we will explore the future of the periodic table further. In light of quantum mechanics, some experts believe that the periodic table is long overdue for a makeover. The extended table above is a hint at what may come.

As a closing note, I recommend a BBC series of three one-hour documentaries called Chemistry: A Volatile History (released 2010) as a must-watch for those of us fascinated by the history of the periodic table and of chemistry in general. It is not easy to find and doesn't seem to be available for purchase, but you can watch the series (complete with episode descriptions) here at brainpickings.org.

Thursday, January 23, 2014

Fractal Universe Part 3

Fractal Spacetime - A Whole New Paradigm

You can't just fall in love with fractals and decide, what the Hell, space-time itself (the well from which fractals are drawn) must be fractal too. As you have no doubt discovered in your reading, all phenomena are described mathematically in physics. In order to put fractal spacetime in a context that can be tested and explored, physicists must figure out a mathematically consistent theory for fractal spacetime. Much of this article is devoted to that process. It is unavoidably complex. I know many of us are uncomfortable with differentiation (calculus) let alone variations on it. I don't think it is necessary to understand every process described here (I wouldn't want to be tested on some of them!). Instead, I think it's much more interesting to see how physicists go through the process. These kinds of mathematical journeys are at the heart of physics. They tell as much about what and how the people are thinking, as they do about the phenomenon at hand. A subtext of this series of articles is how the thinking of scientists evolves, how the process of science moves.

Fractal spacetime is a possibility in quantum mechanics. In 1948, Richard Feynman described the paths of photons as they struck a mirror and reflected off of it in a shocking and brilliant way: each photon of light actually has not just one trajectory but all possible paths to the mirror and back. What we see as a single reflected path of light is the probability amplitude for all the possible paths, even crazy, long, curved paths. Ultimately, these possibilities can be mathematically described and given a geometry. And in this geometry, each photon pathway is a continuous non-differentiable trajectory, making the geometry itself fractal, if we recall from the previous article that fractal math is continuous non-differentiable math. These photon trajectories can be described by a fractal dimension that jumps from nonfractal behaviour (whole integer dimensions; regular spacetime) at large everyday scales to fractal behaviour (a dimension that may lie in between whole integers) at the quantum scale of physics. These in-between fractal dimensions are strange and impossible to visualize; they remind me of Platform 9 3/4 at King's Cross Station in Harry Potter. This classical-to-quantum/fractal transition is thought to occur at the de Broglie scale (in the nanometer (nm), or billionth of a meter, range). Above this scale, we see light reflected in ordinary straight lines as we expect it to be. Below this scale, things get weird. If we could see it, reflected light would be individual photons going every which possible way, but with the vast majority of them following (fractal) trajectories that are very close to the classical trajectory.
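Here is a toy numerical version of that "all paths" picture for light reflecting off a mirror (the geometry and numbers are invented purely for illustration). Each strip of the mirror contributes a little arrow, exp(i 2 pi L / lambda), where L is the length of the path that bounces off that strip:

import numpy as np

wavelength = 500e-9                               # 500 nm green light
source, detector = (-0.02, 0.01), (0.02, 0.01)    # both 1 cm above a 4-cm-wide mirror

x = np.linspace(-0.02, 0.02, 800_001)             # mirror strips, spaced about 50 nm apart
path = np.hypot(x - source[0], source[1]) + np.hypot(detector[0] - x, detector[1])
arrows = np.exp(2j * np.pi * path / wavelength)   # one phase arrow per strip

total = arrows.sum()
central = arrows[np.abs(x) < 0.0005].sum()        # only the middle 1 mm of the mirror
print(abs(total), abs(central))

The arrows from the outer strips spin rapidly from one strip to the next and largely cancel, while the strips near the middle - the paths close to the classical, equal-angle reflection - are nearly in phase and account for almost all of the total amplitude.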

A Classical to Quantum/Fractal Transition at the Very Small Scale

Why a transition at the de Broglie scale? Every subatomic particle has a wavelength thanks to its wave nature, and that wavelength is inversely proportional to the particle's momentum. A 1 eV photon has a wavelength of around 1240 nm. An electron with the same energy, on the other hand, has a de Broglie wavelength about a thousand times shorter than that. Unlike the photon, it has rest mass-energy that gives it much more momentum, and hence a much shorter wavelength. This shorter wavelength is why electron microscopes, which use electrons rather than photons, have much higher resolution. What is a de Broglie wavelength?
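The factor of a thousand is easy to check with the standard formulas (lambda = hc/E for a photon, lambda = h/sqrt(2mE) for a slow electron):

import math

h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron rest mass, kg
eV = 1.602e-19   # one electronvolt, in joules

photon_wavelength = h * c / (1.0 * eV)                   # 1 eV photon
electron_wavelength = h / math.sqrt(2 * m_e * 1.0 * eV)  # 1 eV (non-relativistic) electron

print(photon_wavelength * 1e9)     # ~1240 nm
print(electron_wavelength * 1e9)   # ~1.2 nm, roughly a thousand times shorter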

De Broglie figured out how electron orbits inside atoms remain stable rather than crashing into the nucleus, which is what classical physics said should happen. De Broglie realized that (a) electrons must act like waves and (b) these waves must fit around the nucleus as whole numbers of wavelengths, called standing waves. Electrons can increase their energy in an atom only by moving to a new, shorter-wavelength standing wave. This means that electron energy comes in discrete packets, and it gives electrons their observed quantum (packet-like) energy levels. It also gives a resonant scale structure to atoms. Feynman suggested that at scales smaller than these standing waves, particle behaviour transitions into continuous non-differentiable behaviour, which is characteristic of fractals.
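To see the kind of discrete energies that standing-wave condition produces, here is a minimal sketch for hydrogen using the textbook Bohr-model formulas and standard constants:

import math

h = 6.626e-34      # Planck's constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

for n in (1, 2, 3):
    r_n = eps0 * n**2 * h**2 / (math.pi * m_e * e**2)     # allowed orbit radius, metres
    E_n = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2) / e   # orbit energy, eV
    print(n, round(r_n * 1e9, 4), "nm", round(E_n, 2), "eV")

The n = 1 orbit comes out at about 0.053 nm and -13.6 eV, and the allowed energies jump in discrete steps from there - exactly the packet-like energy levels described above.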

De Broglie wavelength is thought to be the large-end cut-off for fractal quantum behaviour. There is also a smallest possible cut-off in fractal geometry in general, and it might come as a surprise that it is not zero. The Planck length is the lowest possible universal scale in physics, about 1.6 x 10^-35 m. Most physicists working in this area take the Planck length to be fractal geometry's lowest limit, and the reasons for it are quite technical. I think it's worth noting, and quite interesting, that while the mathematics of fractals allows a fractal structure such as the Koch snowflake to have infinite magnification - it can theoretically approach infinitely small segment lengths, smaller than the Planck length - spacetime fractal behaviour that is measurable in any meaningful way has a Planck length limit imposed on it. To be described by our current physical laws, fractal geometry has to follow the universal limit of meaningful scale in physics. This doesn't necessarily mean that reality has a Planck length limit; it seems to be more of an uncomfortable compromise between old physics and new physics, at least to me.

Quantum Fractal Behaviour: From a Differential Equation to a Partial Differential Equation

This leads us to the mathematics behind fractals and the mathematics behind quantum mechanics. Can they be reconciled? In order to describe how a quantum state changes over time, physicists turn to the Schrodinger equation, formulated in 1925. This incredibly important equation is the quantum equivalent of Newton's second law, a classical law that describes the motion of a system. Newton's second law is often written as an ordinary differential equation, in which motion is described as a smoothly changing system. Schrodinger's equation is written as a partial differential equation. A partial differential equation is a differential equation that contains unknown multivariable functions and their partial derivatives. An ordinary differential equation, in contrast, deals only with functions of a single variable and their derivatives. Because of this, a partial differential equation can offer a more complete description of a complex real system. These kinds of equations are used in many areas of physics such as electrostatics, electrodynamics, fluid flow and elasticity in solid state physics - in addition to quantum mechanics. They have a unique ability to treat distinctly different phenomena as variations on a single theme, bringing their dynamics together into a single language, in other words. Because of their multivariable functions, partial differential equations can also describe a system changing in multiple dimensions rather than just one, another big advantage. The cost is that they are usually far more difficult to solve than ordinary differential equations.

Partial differential equations can be generalized as stochastic partial differential equations, a word we will explore shortly. It is this generalization that paves the way toward bringing the math of quantum mechanics and fractal geometry together.

From Brownian Motion To Fractal Quantum Mechanics

The Schrodinger equation describes both the wave function of the quantum system and the evolution of the quantum state of the system over time. Interestingly, although we think of the Schrodinger equation as being useful only for describing quantum scale behaviours, it is a partial differential equation that can be generalized to describe any physical system, including macroscopic systems. The trick will be going from this kind of equation to a non-differentiable description, one that truly captures the close relationships between quantum mechanics, fractal geometry and chaos.

Luckily there is a phenomenon that offers a potential bridge between fractals and quantum mechanics, and that is Brownian motion, a phenomenon that has humble origins. The name, Brownian motion, was coined in 1827 to describe the random motion of particles suspended in a liquid or a gas. When you see dust particles dancing in a beam of sunlight, you are observing this kind of motion. Below is a simulation of five yellow particles that collide with a set of 800 particles, each one leaving behind a blue trail of perfectly random motion. One red velocity vector for one yellow particle is also shown. This is a computer model of Brownian motion.

Lookang;Wikipedia
Brownian motion, discovered in the context of classical physics, can be shown to underlie modern particle theory. This motion is a very simple example of something called a continuous stochastic (or probabilistic) process. A continuous stochastic process is a collection of random variables that can be used to represent the evolution of some random value or system over time. This sounds pretty close to what a partial differential equation describes, doesn't it? It is the opposite of a deterministic process, which can evolve in only one way and can be described using an ordinary differential equation. An example of a simple continuous deterministic process is the curved trajectory of a projectile: you can reverse time and trace the motion back to its origin. Brownian motion, on the other hand, is an example of a chaotic system. It's impossible to trace back this kind of motion because it is random. In fact, you can think of most stochastic processes as idealizations of a more primary chaotic process, one which also underlies fractal geometry and is described in Part 1 of this series.
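A minimal stand-in for that kind of simulation (a sketch, not the model used for the figure): each particle takes independent random steps, so its trail looks statistically the same at different magnifications and cannot be traced backward from its endpoint:

import math, random

def brownian_trail(steps, step_size=1.0):
    x = y = 0.0
    trail = [(x, y)]
    for _ in range(steps):
        angle = random.uniform(0.0, 2.0 * math.pi)   # a random kick in a random direction
        x += step_size * math.cos(angle)
        y += step_size * math.sin(angle)
        trail.append((x, y))
    return trail

trail = brownian_trail(10_000)
end_x, end_y = trail[-1]
print(math.hypot(end_x, end_y))   # typical net displacement grows only as sqrt(steps), ~100 here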

Nelson's stochastic quantum mechanics formally puts this connection between fractal mathematics and quantum mechanics together. It describes a quantum particle, such as an electron, as being subject to an underlying Brownian motion of unknown origin, which, in turn, is described by two Markov-Wiener processes, one backward and one forward in time. Combining these processes gives the electron its wave function, and it transforms Newton's dynamics into the Schrodinger equation, which describes quantum behaviour, as we saw earlier. The Schrodinger equation is a partial differential equation that describes how the quantum state of the electron, for example, changes over time - it describes its wave function, in other words. If you remember, at the beginning of this article I mentioned that, according to Feynman, the (classical) reflection of light breaks down into the continuous non-differentiable trajectories of individual photons at the de Broglie scale. The fractal dimension of these trajectories at the quantum scale is also the fractal dimension of Brownian motion, described mathematically as a Markov-Wiener process.

In other words, by introducing quantum Brownian motion (the Markov-Wiener process) into the Schrodinger equation (a partial differential system), quantum mechanics can be described in terms of fractal geometry, a continuous but non-differentiable system.

One thing that makes this Brownian motion idea so interesting is that it changes how we think of spacetime at the quantum scale. Spacetime is a bit of a mystery at this scale. General relativity doesn't seem as relevant here because gravity's influence is minuscule, while the strong force, which holds atomic nuclei together, becomes important. Spacetime viewed at the quantum scale also appears to have a mysterious non-zero energy that is not associated with any of the fundamental forces, called vacuum energy. Nelson's stochastic quantum mechanics turns the concept of vacuum energy on its head. Vacuum energy is the energy at any point in spacetime that allows virtual particles and their antimatter twins to pop into existence. Once they form, they immediately annihilate each other, and this quantum froth of activity results in a fluctuating vacuum energy at a very tiny, quantum, scale. If we take the underlying Brownian motion to be a description of these quantum fluctuations, then the vacuum fluctuations of spacetime become the driver of quantum mechanics; they are the "Brownian motion of unknown origin" mentioned earlier. Usually, physicists think of it the other way around: quantum mechanics (the uncertainty principle) is the driver of quantum fluctuations.

These Brownian processes suggest that spacetime at the quantum level is fractal rather than flat and Minkowskian (three dimensions of space plus one dimension of time), if we assume that the trajectories of Feynman's virtual photons are part of a fractal curve. Because fractal spacetime is non-differentiable, there must be an infinite number of geodesics (virtual trajectories) for the photons to choose from.
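One way to make the idea of a fractal trajectory quantitative is through the resolution-dependence of its length, a standard relation for fractal curves (quoted here for illustration, not derived):

```latex
% The measured length of a fractal curve depends on the resolution \varepsilon used to trace it:
\mathcal{L}(\varepsilon) \propto \varepsilon^{\,1-D}
% For a smooth curve D = 1 and the length converges; for Brownian paths (and for quantum
% paths below the de Broglie scale) D = 2, so the length grows without bound as \varepsilon \to 0.
```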

A fractal interpretation of quantum mechanics means that the physical properties defining not only the virtual photon but any particle (properties such as mass, momentum, spin and velocity) can be defined as geometric structures of its fractal trajectory. A photon, or any particle, is no longer a point with momentum that follows a trajectory (more specifically, one of its virtual trajectories, recalling Feynman's work). Now the particle IS the fractal structure of its trajectory.

Quantum spin is now a purely geometrical property of the particle's virtual trajectories. The whole infinite family of possible geodesics can be used to describe the particle's wave-particle nature. The probability cloud of an electron, for example, arises from the non-differentiability of fractal spacetime and the infinite family of geodesics that results from it.

From Fractals to Quantum Jewels

A very recent article (2013) by Natalie Wolchover, called A Jewel at the Heart of Quantum Physics, describes an intriguingly similar geometric description of particle-particle interactions that, like fractals, challenges our current understanding of spacetime. A jewel-like geometric object, calculated by physicist Nima Arkani-Hamed and his doctoral student, Jaroslav Trnka, encodes the probabilities of outcomes for particle interactions in its volume, drastically simplifying calculations of particle interactions in the process. Click on the link above to see an approximate image of this beautiful and mesmerizing jewel. Like fractal quantum theory, this object suggests that quantum interactions are a consequence of geometry. The object, called an amplituhedron, forms the basis of a new quantum field theory that might help researchers find a quantum theory of gravity, one that would seamlessly connect two currently incompatible theories - quantum mechanics and general relativity. This would amount to nothing less than discovering the theory of everything. As in fractal quantum theory, the probabilities associated with quantum phenomena are natural outcomes of the object's geometry.

The new object really shines in the field of high-energy physics, and that is where it was born. Calculating all the possible outcomes of even a fairly simple gluon-gluon collision, for example, requires the calculation of millions of different scattering amplitude terms. This is traditionally done by running Feynman diagrams through a powerful computer program and, even with the best technology available, complex collision probabilities can be practically unsolvable. A construction that took decades to come together, the amplituhedron effectively collapses all of these calculations into one function. Instead of tediously tracking millions of position and time variables, the amplituhedron couches the scattering process in terms of variables called twistors. A handful of twistor diagrams can describe very complex particle interactions. These twistor diagrams correspond to the volumes of pieces that fit together to construct the amplituhedron. The Feynman diagrams can piece the amplituhedron together too, but they are far, far less efficient.

Arkani-Hamed and Trnka also found a "master" amplituhedron with an infinite number of facets. Its volume represents the total amplitude of all physical processes, while lower-dimensional amplituhedra, representing the interactions of limited numbers of particles, live on the faces of this master structure. I am very eager to see how this very new theory plays out and how fractal quantum theory relates to it. Are these two different aspects of a single description?

Fractal Spacetime in General Relativity?

Fractal geometry seems to meld much more successfully with quantum mechanics than it does with the much larger scales where general relativity becomes the basic description of spacetime, and at the largest, super-galactic scales, fractal geometry runs into possibly fatal trouble.

The geometry of spacetime is currently described by the field equations of general relativity. These equations treat spacetime as a smooth manifold: a metric, described by continuous (smoothly changing) differential geometry, specifies how the manifold is curved by mass, energy and momentum, and we experience that curvature as gravity.
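Written out, the field equations relate the curvature of spacetime to its matter and energy content; the point to notice for our purposes is that every term is a smooth, differentiable function of the metric:

```latex
% Einstein's field equations: curvature (left-hand side) is sourced by the energy,
% momentum and pressure of matter and radiation (right-hand side, the stress-energy tensor):
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
```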

There is the hope, as I mentioned earlier in this series, that by coming up with this kind of fractal theory of spacetime, what physicists observe as the effects of dark energy could turn out to be distortion-like effects of an underlying fractal geometry of spacetime becoming significant over great distances. In view of the large-scale structure of the universe (the scale of galaxy clusters and larger), some physicists, such as Luciano Pietronero (1987), have attempted to model the distribution of galaxies on a fractal pattern. He claimed that a fractal dimension could be detected over a wide range of scales in the universe, one which seemed to hint that there is both randomness and hierarchical structuring at work at these very large scales. Other researchers have also examined the large-scale structure of the universe for signs of fractal geometry. An interpretation of more recent cosmology data (David Hogg et al., 2005) suggests that, while the universe is fairly clearly homogeneous at the largest scales of galaxy distribution (mass is smoothly spread out, as evidenced by early Sloan survey results), it may exhibit a fractal dimension at smaller scales, below a few hundred million light-years.
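The fractal claim can be stated in terms of simple galaxy counts (a standard way of framing it, quoted here for illustration):

```latex
% In a fractal distribution of dimension D, the number of galaxies within a
% distance r of any typical galaxy grows as a power law:
N(<r) \propto r^{D}
% D = 3: galaxies fill space uniformly (a homogeneous universe)
% D < 3: galaxies cluster hierarchically, leaving ever larger voids (a fractal universe)
```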

While fractal geometry offers an incredibly enticing new paradigm in physics, recent galaxy survey data cannot be ignored, and in fact they tear a big and problematic hole in fractal cosmology. The very same galaxy surveys (for example, the WiggleZ Survey, concluded in 2012) that recently confirmed the existence of dark energy (the accelerating expansion of the universe) tell us that the universe is very homogeneous at very large scales. Despite the earlier claims described above, this homogeneous galaxy distribution shows no sign of fractal-like patterning at this scale. Where a fractal universe would predict galaxy clustering continuing at ever-larger scales, none is seen; there is no deviation from the smooth distribution of mass you would expect in a homogeneous universe. The Sloan Digital Sky Survey, scanned over a decade and completed in 2013, created the largest-ever three-dimensional map of galaxies in the universe - a map that would require half a million HD TVs to view at full resolution and contains over a trillion pixels of information. These results, together with the WMAP data, show that both the large-scale structure of the universe (at scales larger than about 250 million light-years) and the cosmic background radiation are evenly distributed, or homogeneous: there is no ordering, fractal or otherwise, at the scale of galaxy clusters and above. They do not lend support to fractal geometry as the explanation for dark energy or dark matter, whose effects show up at the largest scales of spacetime, and they pose a problem for any universal fractal spacetime theory.

So, is the universe fractal or not? One possible way around the problem is to think of the universe as snow: up close it is made up of fractal flakes, but it blends into a smooth, uniform sea of white as you step back. In the same way, the fractal nature of spacetime may only be observable at the quantum scale.

To describe spacetime in terms of fractal geometry, the theory needs some kind of scale relativity and a continuous, non-differentiable geometry that depends on the scale of observation, so that at scales larger than the de Broglie scale the scaling part is dominated by the differentiable geometry of general relativity. The fractal geometry still underlies the theory, but it is hidden at this scale. At scales smaller than the de Broglie scale, the scaling part is dominated by fractal geometry instead, as the differentiable general relativity part becomes less relevant. This theoretical change from the quantum scale to the everyday scale to the galactic scale is a bit like what happens during symmetry-breaking in gauge particle theory, or during a phase transition, except that here the underlying scale symmetry always remains intact.
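The natural candidate for the transition scale in this picture is the de Broglie wavelength, which shrinks as the momentum of a system grows - one reason large, everyday objects never show this proposed fractal side:

```latex
% The de Broglie wavelength of a system with momentum p:
\lambda_{dB} = \frac{h}{p}
% Above \lambda_{dB}: the smooth, differentiable description (classical mechanics,
% general relativity) dominates. Below \lambda_{dB}: the scale-dependent, fractal
% part of the geometry is proposed to take over.
```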

I should make a distinction here between this kind of overall scale relativity and the scale invariance of fractals themselves. Fractal images reappear no matter what scale you view them at. Some people call this an example of scale invariance, but it is more precisely described as self-similarity, a closely related property.

The Fractal Advantage

The universe at its largest scale does not appear to have any fractal geometric ordering. And yet hints of an underlying fractal nature can be seen everywhere around us. As described above, it is possible to describe a transition from the (classical) macroscopic scale of physics to quantum mechanics, where an underlying fractal geometry seems to show much promise (we will see more examples of this promise in a moment). It may still be possible that fractal geometry underlies spacetime at all scales, but at macroscopic and larger scales it must be hidden from observation. Even though spacetime at these scales does not exhibit any fractal nature, fractal-like ordering seems to be at work in various biological processes and structures, and in geology. What makes this kind of ordering favourable in living systems in particular? Is it a cost-saving or simplification advantage? Does it impart structures (like shells and trees, for example) with greater strength or durability? A 2012 paper called Fractal Structures Do More With Less investigates possible advantages of fractal-like design and hierarchical structuring in construction. Researchers found, for example, that when more hierarchical levels are added to a structure, less material is needed to support a given load. Trabecular (spongy) bone is a similar architectural example taken from biology, in which evolution favours hierarchical, fractal-like ordering to maximize strength while minimizing both the amount of material required and the overall weight. Similarly, tree branching reveals fractal-like ordering, a quality that Leonardo da Vinci noticed and called his "rule of trees" some 500 years ago. This kind of ordering is especially obvious after deciduous trees lose their leaves in the fall. Tree branching may offer two advantages - hydrological (this arrangement efficiently transports sap) and structural (it increases the tree's resistance to stresses such as snow load and wind). A 2011 study based on computer-modelled trees suggests that fractal ordering in trees protects them against wind damage in particular (a toy sketch of da Vinci's branching rule follows below). There is a great deal of current research on the appearance and roles of fractal-like ordering in nature, but perhaps the most convincing evidence for fractal ordering in matter comes from solid-state physics.
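Here is the promised toy illustration of da Vinci's branching rule (a hypothetical sketch of my own, not taken from the 2011 or 2012 studies mentioned above): each branch splits into two daughters whose cross-sectional areas add up to the parent's, while the branching angle and length ratio are arbitrary choices.

```python
# A toy recursive tree following da Vinci's "rule of trees": at every split,
# the daughter branches' cross-sectional areas sum to the parent's area
# (d_parent^2 = d_left^2 + d_right^2 for two equal daughters).
import numpy as np
import matplotlib.pyplot as plt

def branch(x, y, angle, length, diameter, depth, ax):
    if depth == 0:
        return
    x2 = x + length * np.cos(angle)
    y2 = y + length * np.sin(angle)
    ax.plot([x, x2], [y, y2], color="saddlebrown", linewidth=diameter)
    # Two equal daughters: each gets diameter d / sqrt(2), so total area is conserved.
    child_d = diameter / np.sqrt(2)
    branch(x2, y2, angle + 0.4, length * 0.75, child_d, depth - 1, ax)
    branch(x2, y2, angle - 0.4, length * 0.75, child_d, depth - 1, ax)

fig, ax = plt.subplots()
branch(0, 0, np.pi / 2, 1.0, 8.0, depth=9, ax=ax)   # trunk grows straight up
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```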

Evidence of Fractals from Solid State Physics

There is increasing evidence that fractal geometry underlies processes at the molecular, atomic and quantum scale.

For example, fractal geometry pops up during phase transitions from regular conductors into superconductors, when electrons in a material organize themselves in ways that resemble bosonic (force-particle) behaviour rather than the fermionic behaviour that particles of matter normally display. Even very good conductors experience at least some electrical resistance. During this phase change, however, the wave functions of electrons spread out over the whole material in a special way that allows it to conduct electricity without any resistance at all. High-temperature superconductors are especially mysterious, because the molecular jiggling that goes on at temperatures well above absolute zero should destroy the kind of ordering that is necessary for a spread-out wave function. Physicists have recently found a clue about how this phenomenon is made possible. In 2010, physicist Antonio Bianconi discovered that oxygen atoms inside ceramic high-temperature superconductors appear to sit in seemingly random positions that nonetheless form complex geometries displaying self-similarity - a fractal behaviour. Larger fractals correspond to higher superconducting temperatures. No one yet knows exactly how fractal ordering seems to stabilize the wave functions and make high-temperature superconductivity possible.

Ali Yazdani and his colleagues at Princeton University in the US observed a fractal pattern created when electrons interfere with one another. They watched the material gallium manganese arsenide undergo a phase transition under a scanning tunneling microscope (which gives atomic-scale resolution) and found that a fractal pattern appears as the material changes from a metal into an insulator. When this happens, the wave functions of the electrons change from being shared across the whole material (the metallic state) to being localized at individual atomic lattice sites (the insulating state). During the transition, the electron wave functions get squashed together and begin to affect each other in a complicated pattern of constructive and destructive interference, and this is when a fractal pattern develops. Their results were published in 2010.

Last year (2013), physicists found the first experimental evidence of a decades-old theoretical fractal pattern called the Hofstadter butterfly. Douglas Hofstadter, then a graduate student in the 1970's, studied what happens to electrons confined inside a crystalline atomic lattice when a powerful magnetic field forces them to race around in circles. When the allowed electron energies are plotted against the strength of the magnetic field, they reveal a fractal pattern that looks like a butterfly, shown below as a computer rendering, although fractals were not yet known by that name at the time.


In the diagram above, the horizontal axis is energy and the vertical axis is the magnetic flux through the material. Warm and cold colours represent positive and negative values of the Hall conductance (a measure of the sideways current that flows across the material in a magnetic field), respectively. Like all fractal images, it shows self-similarity: small fragments of the structure contain a (distorted) copy of the entire structure.
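For readers who want to see the butterfly emerge for themselves, here is a rough sketch (my own illustration, not the rendering described above) based on Harper's equation, the tight-binding model Hofstadter studied. The lattice size and the single Bloch momentum used here are simplifying assumptions, so the plot is schematic rather than exact.

```python
# A minimal sketch of the Hofstadter butterfly: for each rational magnetic flux
# alpha = p/q per lattice plaquette, the allowed energies are the eigenvalues of
# a q x q Harper matrix. Plotting energy against flux traces out the butterfly.
import numpy as np
import matplotlib.pyplot as plt
from math import gcd

q_max = 40
fluxes, energies = [], []

for q in range(1, q_max + 1):
    for p in range(0, q + 1):
        if gcd(p, q) != 1:
            continue
        alpha = p / q
        n = np.arange(q)
        # Harper matrix at Bloch momentum k = 0 (a crude but recognizable approximation).
        H = np.diag(2 * np.cos(2 * np.pi * alpha * n)) \
            + np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
        # Close the chain into a ring by coupling the first and last sites.
        H[0, -1] += 1
        H[-1, 0] += 1
        for E in np.linalg.eigvalsh(H):
            fluxes.append(alpha)
            energies.append(E)

# Energy on the horizontal axis, flux on the vertical axis, as in the diagram above.
plt.scatter(energies, fluxes, s=0.1, color="k")
plt.xlabel("energy")
plt.ylabel("magnetic flux per plaquette (p/q)")
plt.title("Hofstadter butterfly (schematic)")
plt.show()
```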

Pablo Jarillo-Herrero's group at MIT in Cambridge found that by stacking a sheet of graphene on a sheet of boron nitride and applying a magnetic field, they could observe discrete, stepwise changes in conductivity corresponding to the same splitting of electron energy levels that Hofstadter predicted. Wolfgang Ketterle, also at MIT, is currently trying to go a step further by making supercooled rubidium atoms act like electrons: the atoms are trapped in regularly spaced pockets and guided with lasers and gravity to mimic the circular motion of electrons in a magnetic field. If he succeeds, he may be able to show fractal ordering at the atomic level.

Some physicists are wondering whether this kind of fractal organization, observed with electrons and atoms, might offer quantum-level clues about why living systems tend to show a preference for fractal-type structures. It is possible that, like many natural and geologic structures, the natural world at the quantum level favours fractal structures.

Other researchers are exploring a possible link between string theory and fractal geometry. General relativity treats spacetime as four-dimensional, with three spatial dimensions and one time dimension. String theory predicts the existence of extra dimensions in spacetime; M-theory, for example, predicts 11 dimensions. A new possibility is that the dimensions of spacetime change with scale, allowing small scales to exhibit fractal properties. Such a theoretical framework could describe quantum relativity - quantum gravity, in other words - where gravity at quantum scales appears fractal. This expands upon the idea that fractal spacetime has non-integer dimensions, rather than the whole-integer dimensions (at all scales) of the spacetime described by Euclidean space, Minkowski space and the curved spacetime of general relativity. By giving spacetime non-integer, scale-dependent dimensions, its very properties come to depend on the scale of observation.

Fractals Push Against the Differential Heart of Physics

Fractal geometry at the quantum level seems to be gaining momentum because there is so much promise, as well as some enticing experimental evidence coming together, as physicists try to sew together a consistent fractal theory for quantum particle behaviour. Meanwhile, research into fractal biology, geology and several other fields is taking off. But the possibility of a fractal cosmos seems far less promising. At best, a possible underlying fractal nature seems to be hidden from view.

Physicist Tim Palmer thinks that fractal geometry might be alive and well in the cosmos after all - if we look for it in the right place. He argues that each physical system around us has an invariant set, a kind of mathematical ground state in which it is unable to lose any more information. Take a large star, for example: it holds an enormous amount of data within all the atoms that make it up (information such as quantum spins, mass, momentum and energy states). When it starts to collapse in on itself at the end of its life, some of that data is lost. When it collapses all the way into a black hole, much more data is lost. The black hole is a minimal-information ground state where no more data can be lost, and this is the invariant set which underlies that star's information. This kind of logic can be extended to the universe as a whole, and the invariant set of the universe might be fractal in nature. This approach could lead to an explanation for some of the most puzzling paradoxes in spacetime, such as nonlocality - the way two entangled particles remain correlated across vast distances of space, or the ability of a single particle to exist in more than one location at the same time. It is reminiscent of the holographic approach to spacetime.

Perhaps the greatest promise offered by fractals is the possibility of creating more accurate mathematical models of how nature works. It seems, as we saw earlier, that reforming quantum mechanics into a non-differentiable framework opens up a whole new way of investigating quantum processes. A 1993 paper by Laurent Nottale discusses the movement away from differential math toward non-differential math in physics. Since the time of Isaac Newton, differential calculus has been used to describe most physical phenomena. There are countless examples of physical and biological processes (any phenomenon that changes over time) that are described in terms of one or more differential equations. This Wikipedia link lists many examples of differential calculus at work in physics, engineering, biology and economics, describing processes such as radioactive decay, diffusion, animal population dynamics and evolutionary change, to name just a few.
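To take just one example from that list, radioactive decay is governed by a simple differential equation whose solution is the familiar exponential decay law:

```latex
% The decay rate is proportional to the amount N(t) of material present
% (\lambda is the decay constant, N_0 the initial amount):
\frac{dN}{dt} = -\lambda N
\quad\Longrightarrow\quad
N(t) = N_{0}\,e^{-\lambda t}
```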

Calculus (differentiation and integration), developed in the second half of the 17th century, was a great breakthrough in physics because it offered a way to model continuous change in systems. Yet there is no underlying principle that says the fundamental laws of physics must be differentiable. What if the basic reality of the universe is more accurately modelled using non-differentiable mathematics, and fractal geometry underlies all physical processes even though it may be hidden at larger scales? If the universe really does have a fundamentally fractal backbone, it would mean reconstructing physical laws in terms of continuous but non-differentiable equations. Quantum mechanics, for example, becomes mechanics in non-differentiable spacetime.

Fresh new possibilities like this remind us that even established paradigm-making theories are not sacred. There may be other overlooked assumptions waiting to be questioned.

A Few Final Questions:

Fractal geometry seems to impart some kind of efficient process to particle interactions, an efficiency that nature at larger everyday scales seems to draw from. Is it the geometry itself or is there something deeper from which it draws? Is fractal geometry universal, and if so how is it hidden from view at the cosmic scale? Will fractals be the link between the macro universe and the micro universe that allows physicists to find a theory for quantum gravity?