How do we know climate change is occurring: Ice Core Stratigraphy

There are many ways to build a paleotemperature record and chart temperature and atmospheric gas concentrations going back thousands of years. One of the most common, and most widely known, is ice core stratigraphy: the use of ice core data to reconstruct past temperature and atmospheric conditions.

So how does this work? As snow compacts into ice it traps bubbles of air, and as more ice accumulates on top, the older ice is compressed and the trapped gas is sealed in. Scientists can therefore go to Greenland or Antarctica and drill a core of compressed snow and ice that dates back hundreds of thousands of years. Using very careful methods, they can extract the gas from the core and analyze it for concentrations of carbon dioxide, methane, and other greenhouse gases.

Then scientists have to “date” the ice cores, that is, determine when the gases they are analyzing were trapped, to get an accurate picture of how atmospheric conditions have changed over time. This is why the method is called ice core stratigraphy: the strata, or layers, in the ice serve as an indicator of time. Annual layers, produced by the seasonal cycle of freezing and melting, are identifiable in the core; counting these layers gives an estimate of when the gases were trapped in the ice.

However, this is not always accurate; counting year by year is difficult, and sometimes the individual layers are not visible enough to give a reliable count. So scientists also look at impurities in the ice. They can cut out a portion of the core, melt it down, and analyze the water for impurities, which vary with the seasons. For example, spring storms in Greenland bring in large amounts of dust, which shows up as a high calcium concentration; the insoluble dust (the dust that does not dissolve in water) can be detected by shining light through the meltwater and looking at how the light scatters off the particles. Other impurities, such as sodium, ammonium, and nitrate, also vary by season. Looking at the relative concentrations of these impurities gives scientists a more accurate estimate of when the gases were trapped in the ice.
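To make the layer-counting idea concrete, here is a minimal Python sketch that counts annual spring dust peaks in a calcium-concentration profile. The calcium numbers are synthetic, generated just for this example; real cores are measured continuously along their length.

```python
# A minimal sketch of annual-layer counting from impurity data.
# The calcium profile below is synthetic (illustrative only): one
# sharp spring dust peak per simulated year, riding on background noise.
import math
import random

random.seed(0)

SAMPLES_PER_YEAR = 12          # assumed sampling resolution along the core
N_YEARS = 50                   # true number of annual layers in the fake core

# Build a fake calcium-concentration profile (arbitrary units).
calcium = []
for _ in range(N_YEARS):
    for i in range(SAMPLES_PER_YEAR):
        seasonal = math.exp(-((i - 3) ** 2) / 2.0) * 10.0   # spring dust spike
        calcium.append(1.0 + seasonal + random.gauss(0, 0.3))

# Count annual layers: every local maximum above a threshold is taken
# to be one spring dust peak, i.e. one year.
threshold = 5.0
years_counted = sum(
    1
    for i in range(1, len(calcium) - 1)
    if calcium[i] > threshold
    and calcium[i] > calcium[i - 1]
    and calcium[i] > calcium[i + 1]
)

print(f"True annual layers: {N_YEARS}, layers counted: {years_counted}")
```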

So now we have the atmospheric composition and an estimate of when that atmosphere existed. To relate these atmospheric conditions to temperature, we also need a paleotemperature record, an idea of how temperature fluctuated over time. For this, scientists look at isotopic concentrations. We have discussed isotopes in previous posts: isotopes of an element differ only in their number of neutrons (which gives them different masses), and they occur in different abundances. The most common isotope of oxygen is oxygen-16, whose atoms weigh about 16 atomic mass units; there is also a rarer, heavier isotope called oxygen-18. Water molecules can contain either oxygen-16 or oxygen-18, and those with oxygen-18 are heavier. When water evaporates and travels through the atmosphere, the heavier oxygen-18 molecules are the first to rain back down.

So water molecules carrying oxygen-18 and oxygen-16 evaporate (mostly near the equator, where the water is warmest), and as they move away from the equator in clouds, they cool. As the clouds cool they begin to precipitate, and as discussed above, the water molecules containing oxygen-18 are the first to fall out. So the highest concentration of oxygen-18 rains out near the equator, and by the time the clouds reach the poles the remaining moisture is dominated by oxygen-16-carrying water molecules. Thus the ice cores drilled in Greenland and Antarctica are heavily enriched in oxygen-16 and depleted in oxygen-18.

However, this cycle changes with temperature. When the climate is warmer, more oxygen-18 makes it to the poles, because in a warmer atmosphere the cooling that rains out most of the oxygen-18 happens later and later along the clouds’ journey. Thus scientists can use the isotope concentrations in polar ice to create a temperature record.
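For the curious, here is a small sketch of the arithmetic involved. The measured ratio and the linear temperature calibration are illustrative placeholders (roughly the right magnitude for Greenland snow), not values taken from the sources cited below.

```python
# A minimal sketch of how an oxygen-isotope ratio becomes a temperature
# estimate.  The sample ratio and the linear calibration below are
# illustrative placeholders, not values from the sources cited in the post.

R_VSMOW = 0.0020052          # 18O/16O ratio of the ocean-water standard (VSMOW)

def delta_18O(r_sample: float) -> float:
    """delta-18O in per mil: how depleted the sample is relative to ocean water."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

def temperature_estimate(d18o: float, slope: float = 0.67, intercept: float = -13.7) -> float:
    """Invert an assumed linear calibration d18O = slope*T + intercept (deg C).
    The slope and intercept are placeholder values of roughly the right size for
    Greenland snow; real reconstructions calibrate them independently."""
    return (d18o - intercept) / slope

r_ice = 0.0019350            # hypothetical 18O/16O ratio measured in polar ice
d = delta_18O(r_ice)
print(f"delta-18O = {d:.1f} per mil -> rough temperature {temperature_estimate(d):.1f} deg C")
```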

Scientists can also use the fossils of ancient marine creatures to extend this temperature record. At the equator, when the climate is warmer, the water holds a lower concentration of oxygen-18 because more of it is carried toward the poles. Marine creatures take oxygen from the water and incorporate it into their skeletons, so when they fossilize, those isotopes are trapped in the skeletons. Scientists can extract those isotopes to estimate past temperature and use radiometric dating (such as the radiocarbon dating discussed in the previous post) to determine when that temperature occurred.

This is just one of the ways we can get an understanding of how the atmosphere and temperature have changed over thousands of years.

Works Cited

http://www.iceandclimate.nbi.ku.dk/research/drill_analysing/cutting_and_analysing_ice_cores/analysing_gasses/

http://www.iceandclimate.nbi.ku.dk/research/strat_dating/annual_layer_count/dating_using_impurities/

http://www.iceandclimate.nbi.ku.dk/research/strat_dating/annual_layer_count/ice_core_dating/

http://www.iceandclimate.nbi.ku.dk/research/past_atmos/past_temperature_moisture/fractionation_and_temperature/

http://www.iceandclimate.nbi.ku.dk/research/past_atmos/composition_greenhouse/

http://www.iceandclimate.nbi.ku.dk/research/drill_analysing/cutting_and_analysing_ice_cores/isotope_measurement/

http://www.iceandclimate.nbi.ku.dk/research/past_atmos/past_temperature_moisture/

Svensson, A., et al., “A 60,000 year Greenland stratigraphic ice core chronology,” Climate of the Past, http://www.clim-past.net/4/47/2008/cp-4-47-2008.pdf

http://earthobservatory.nasa.gov/Features/Paleoclimatology_OxygenBalance/

Combating Data Manipulation V: Law of Large Numbers

You conduct two polls to determine the percentage of people in a town who like Italian food. In the first poll, you ask and collect data from 5 people and determine that 80% of the town likes Italian food. In the second, you ask and collect data from 1,000 people and determine that 60% of the town likes Italian food. Which poll should you trust?

We explored this idea a little in a previous post about polling, but almost everyone will agree that the poll that sampled 1,000 people is more accurate than the poll that sampled 5 people. But why is that? The answer lies in a theorem developed by Jacob Bernoulli, now known as Bernoulli’s Theorem, or the Law of Large Numbers.

The idea is that the more trials you run, the closer the observed frequency gets to the true probability or, according to Wolfram, “as the number of trials of a random process increases, the percentage difference between the expected and actual values goes to zero.”

So what does this NOT mean? Say we somehow know that exactly 62% of the people in our town from the example above like Italian food, and after sampling 1,000 people we have 60%. Some may say (understandably) that the next 1 or 2 or even 100 results should come in above 62% to correct the difference and bring us closer to the true value of 62%. This is the gambler’s fallacy, the faulty idea that after 100 or 200 (or even more) tries at a slot machine, the gambler is “due” for a win. It is not correct: the probability that any given person in that town likes Italian food is always 62%, so there is a 62% chance that the 1,001st person will like Italian food, but also a 38% chance that they won’t.

This is related to another fallacy, called the law of small numbers: people tend to assume that a small sample of a population or of trials is representative of the larger population or of the true probability, which is not necessarily true. If you take respondents 500 through 600 of the Italian-food poll, there is no guarantee that 62% of them like Italian food. The Law of Large Numbers simply states that as you take more and more observations, the overall proportion will get closer and closer to 62%, as the simulation below illustrates.
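Here is a quick simulation of the polling example, assuming the true share really is 62% as above; notice how rough the estimate is after 5 respondents and how it settles down after thousands.

```python
# A quick simulation of the Law of Large Numbers for the Italian-food poll.
# We assume the "true" share of the town that likes Italian food is 62%,
# as in the example above, and watch the sample percentage converge.
import random

random.seed(1)

TRUE_SHARE = 0.62
likes = 0

for n in range(1, 10_001):
    if random.random() < TRUE_SHARE:   # each respondent independently likes it with p = 0.62
        likes += 1
    if n in (5, 100, 1_000, 10_000):
        print(f"after {n:>6} respondents: {100 * likes / n:.1f}% like Italian food")
```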

This law is important to keep in mind when looking at polling results and voting results, when insurance companies estimate the probability that some event will happen, and in many other situations.

The next post will be about Bayes’ Theorem and its legal applications 🙂

Works Cited

http://mathworld.wolfram.com/LawofLargeNumbers.html

http://math.arizona.edu/~jwatkins/J_limit.pdf

http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter8.pdf

http://hspm.sph.sc.edu/COURSES/J716/a01/stat.html

http://www.logicalfallacies.info/relevance/gamblers/

http://pirate.shu.edu/~hovancjo/exp_read/tversky.htm

Greenhouse Gases

The Greenhouse Effect

How do carbon dioxide and other greenhouse gases cause warmer temperatures? Sunlight comes through the atmosphere; some of it is reflected directly back into space (for example, by ice at the poles) and some hits the ground. Part of that energy is absorbed by the ground, which then re-emits it at longer wavelengths, in the infrared part of the electromagnetic spectrum. Some of that infrared radiation goes straight out into space, and some gets absorbed by certain gases called greenhouse gases. The absorbed energy stays trapped in our atmosphere and warms it. This phenomenon is not inherently a bad thing; in fact, it is necessary for our survival, and without it the Earth’s temperature would plummet. However, too much of it is also a bad thing, because it causes the Earth to warm too much.
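The claim that the temperature would plummet can be checked with a standard back-of-the-envelope energy-balance calculation (not taken from the sources cited below): balance the sunlight a bare, greenhouse-free planet absorbs against the heat it radiates.

```python
# A standard back-of-the-envelope calculation (not from the post's sources)
# of Earth's temperature with no greenhouse effect at all: balance absorbed
# sunlight against blackbody emission.
SOLAR_CONSTANT = 1361.0      # W/m^2, sunlight arriving at the top of the atmosphere
ALBEDO = 0.30                # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8              # Stefan-Boltzmann constant, W/(m^2 K^4)

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4          # averaged over the whole sphere
T_no_greenhouse = (absorbed / SIGMA) ** 0.25          # emission temperature in kelvin

print(f"Earth without a greenhouse effect: about {T_no_greenhouse:.0f} K "
      f"({T_no_greenhouse - 273.15:.0f} deg C), versus roughly 15 deg C observed")
```

This gives about 255 K, around -18 °C, some 33 degrees colder than the Earth we actually live on.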

But why are certain gases greenhouse gases and not others? Some molecules are nonpolar (meaning that the electron density is distributed evenly over the molecule) but can become momentarily polar as they vibrate. Other molecules cannot do this. For example, CO2 and H2O can vibrate in several different ways that shift their electron density, but a gas like N2, made of only two identical atoms, has no vibration that does so:

[Figure: vibrational modes of CO2 and H2O]

Gases made of only two identical atoms have no vibration that changes their polarity.

The infrared radiation emitted by the ground gets absorbed by these gases precisely because it sets them vibrating in ways that turn a nonpolar molecule momentarily polar, or change the polarity of a polar one (this essentially just means that, as the molecule vibrates, its electrons become unequally distributed). The molecule gains vibrational energy, absorbing the heat. More carbon dioxide in the atmosphere means more infrared radiation absorbed by these molecules, and therefore more heat retained in the atmosphere.
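To make the “vibration creates polarity” idea concrete, here is a toy model of CO2 as three point charges on a line. The charges and displacements are illustrative numbers, not real molecular parameters, but they show that the symmetric stretch leaves the molecule nonpolar while the asymmetric stretch gives it a momentary dipole.

```python
# A minimal sketch of why vibration matters: model CO2 as three point charges
# on a line and compare its dipole moment at rest, during the symmetric
# stretch, and during the asymmetric stretch.  Charges and displacements are
# illustrative numbers, not real molecular parameters.

def dipole(positions, charges):
    """Net dipole moment (1D) of point charges: sum of charge * position."""
    return sum(q * x for x, q in zip(positions, charges))

charges = [-0.5, +1.0, -0.5]          # O, C, O partial charges (illustrative)
rest = [-1.16, 0.0, +1.16]            # O=C=O at its resting geometry

# Symmetric stretch: both oxygens move outward by the same amount.
symmetric = [-1.26, 0.0, +1.26]

# Asymmetric stretch: one C=O bond lengthens while the other shortens.
asymmetric = [-1.06, 0.0, +1.26]

for name, pos in [("at rest", rest), ("symmetric stretch", symmetric),
                  ("asymmetric stretch", asymmetric)]:
    print(f"{name:>18}: dipole = {dipole(pos, charges):+.2f} (arbitrary units)")
```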

Watch out for a post about the evidence for climate change coming soon!

Works Cited

http://www.elmhurst.edu/~chm/vchembook/globalwarmA5.html

http://www.sciencemuseum.org.uk/climatechanging/climatescienceinfozone/exploringearthsclimate/1point5/1point5point2.aspx

Radiocarbon Dating

We often hear references to dated fossils; we know generally when the dinosaurs lived, when they went extinct, around when plants and vertebrates became terrestrial, and even when the Earth formed. But how do we know all these dates?

The most familiar type of dating is radiocarbon dating, which is used to determine the age of fossils of things that were once alive: animals, plants, or bacteria. It uses isotopes of carbon; isotopes are atoms of the same element that have different weights (due, typically, to different numbers of neutrons). Some isotopes are radioactive, meaning they decay, emitting particles and turning into another atom; scientists say the parent isotope decays into the daughter isotope. Although we cannot say when any individual atom will decay, we can measure very precisely how long it takes for half of the parent atoms to decay into daughters. This period is the half-life.

Radiocarbon dating uses an isotope of carbon called carbon-14. It is radioactive and will decay, and its half-life is about 5,700 years. So scientists can measure how much of the daughter isotope exists relative to the parent, determine how many half-lives the carbon has gone through, and therefore how old the fossil is.

But how do we know how much of the parent isotope we started with? Scientists speak of a “closure time”: the point at which isotopes can no longer escape from the sample, for example when molten or semi-molten rock cools. Before closure, any daughter isotopes escape from the sample, so the parent and daughter concentrations measured today should add up to the initial amount of parent. That means that when the parent and daughter concentrations are equal, one half-life has been completed, and the fossil is around 5,700 years old.
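Here is a small sketch of that calculation, following the parent-plus-daughter bookkeeping described above; the isotope amounts are made-up example numbers.

```python
# A minimal sketch of the age calculation described above: measure how much
# parent (carbon-14) is left relative to what was there at closure, and count
# half-lives.  The isotope amounts are made-up example numbers.
import math

HALF_LIFE_C14 = 5_700.0   # years, as quoted in the post

def radiocarbon_age(parent: float, daughter: float) -> float:
    """Age in years, assuming parent + daughter equals the initial parent amount."""
    initial = parent + daughter
    half_lives = math.log2(initial / parent)
    return half_lives * HALF_LIFE_C14

# Example: equal parent and daughter -> exactly one half-life.
print(f"{radiocarbon_age(parent=50.0, daughter=50.0):.0f} years")   # ~5,700

# Example: only a quarter of the carbon-14 remains -> two half-lives.
print(f"{radiocarbon_age(parent=25.0, daughter=75.0):.0f} years")   # ~11,400
```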

Of course, radiocarbon dating is only really useful for fossils up to around 30,000 years old; beyond that, it does not tell us much. Other radioactive isotopes can be used to date older material: potassium-argon dating, uranium-lead dating, and so on work on rocks and sediments that contain elements the fossils of organisms do not. There are also a few strategies for dating fossils that do not use radioactive isotopes at all; to get the relative timing of fossils, we can simply look at the strata (or layers) in the rock. Fossils that are deeper in the rock are older than fossils closer to the surface. Unlike radiometric dating, however, this gives only relative timing instead of absolute timing.

Works Cited

http://www.nhm.ac.uk/nature-online/science-of-natural-history/the-scientific-process/dating-methods/index.html

http://www.actionbioscience.org/evolution/benton.html

http://www.nature.com/scitable/knowledge/library/dating-rocks-and-fossils-using-geologic-methods-107924044

http://www.chem.uwec.edu/Chem115_F00/nelsolar/chem.htm

http://weber.ucsd.edu/~jmoore/courses/anth42web/DATINGmethods.pdf

Combating Data Manipulation IV: Voting

A little bit about how error interacts with voting:

Counting is difficult. And yet voting, which has such a huge impact in the United States, relies mainly on people’s ability to count. And, as I’m sure most people know, counting large numbers of anything always leads to mistakes; error is inherent in every election. Typically these errors don’t make much of a difference in declaring a winner, because the error is much smaller than the difference in votes between the candidates: if you take the error (let’s call it x) and add or subtract x from the number of votes each candidate received, it would not change who won the election. In every election a few hundred votes go missing or are simply miscounted, but these are typically ignored and never mentioned because they wouldn’t make a big enough impact on the outcome.

But in a few elections the error has made a huge difference. The example that comes to most people’s minds is the Bush v. Gore election in 2000. Many votes were miscounted because of poor ballot layout or because people filled out their ballots improperly. As Charles Seife writes, “Even under ideal conditions, even when officials count well-designed ballots with incredible deliberation, there are errors on the order of a few hundredths of a percent.  And that’s just the beginning.  There are plenty of other errors in any election.  There are errors caused by people entering data incorrectly.  There are errors caused by people filling out ballots wrong, casting their vote for the wrong person…[b]allots will be misplaced.  Ballots will be double-counted” (163). When the winner is in doubt, the first thing people do is order a recount, without recognizing that every single count carries its own errors, different from one recount to the next. It is impossible to get the “correct” number of votes for each candidate.
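To see how this plays out, here is a small simulation of repeated counts of the same hypothetical election, with each candidate’s total picking up a counting error of a few hundredths of a percent, in line with Seife’s figure; all of the numbers are made up for illustration.

```python
# A small Monte Carlo sketch of the point above: several careful counts of the
# very same ballots can disagree by more than the candidates' margin.  The vote
# totals and the size of the counting error are made-up illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)

TRUE_A, TRUE_B = 2_912_000, 2_911_500   # hypothetical "true" ballots for candidates A and B
ERROR_SD = 0.0003                       # counting error: ~0.03% of each candidate's total

def count_once() -> int:
    """One full count: each candidate's total picks up an independent counting error."""
    counted_a = TRUE_A + rng.normal(0, TRUE_A * ERROR_SD)
    counted_b = TRUE_B + rng.normal(0, TRUE_B * ERROR_SD)
    return int(counted_a - counted_b)

print(f"true margin (A minus B): {TRUE_A - TRUE_B}")
for recount in range(1, 6):
    print(f"count #{recount}: measured margin = {count_once():+d}")
```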

According to Seife and many other political thinkers and mathematicians, the 2000 election was effectively a tie; the error was so much larger than the difference between the candidates that no one can conclusively say one candidate won over the other. As Seife writes, “It’s hard to swallow, but…the 2000 presidential election should have been settled with a flip of a coin” (166). However, no one likes the idea that an election can be a tie, so we operate under the idea that we can find the real victor if we recount enough times.

Next in the series, the law of large numbers and probability!

Works Cited

http://www.politicalmath.com/supremecourt/bushgore.php

Charles Seife, Proofiness

http://www.ncpp.org/?q=node/20

http://www.ams.org/notices/200804/tx080400448p.pdf

Jonathan K. Hodge, The Mathematics of Voting and Elections: A Hands-On Approach