This blog provides background for and explanation of current topics in science.

Tuesday, September 20, 2011

The Paleocene-Eocene Temperature Maximum (PETM)


The Paleocene-Eocene transition, about 55 million years ago, is studied using a variety of temperature proxies. One is the negative δ18O excursion of more than 1‰ in foraminifera shells that grew in both surface and deep ocean waters during this period. Others include the Mg/Ca ratio in foraminifera shells and the ratios of particular organic compounds used for TEX86, a paleothermometer based on the membrane composition of marine picoplankton, the Crenarchaeota.1,2,3

There is clear evidence of a massive injection of 13C-depleted carbon at the Paleocene-Eocene transition. First, there is a sharp negative shift in δ13C, the ratio of the stable carbon isotopes 13C and 12C relative to a standard, in carbon compounds from this time, indicating an influx of isotopically light carbon. Second, there is widespread carbonate dissolution, which indicates the acidification of the ocean that occurs when atmospheric CO2 increases.

The source of this carbon dioxide is unknown. Massive volcanic activity might have been to blame, but there is no evidence of sufficiently large volcanic activity, such as the ash and lava deposits one would expect. There are volcanic traps in Siberia that formed about this time, but that activity alone would not account for the volume necessary to produce the likely peak atmospheric CO2 content of over 1,700 ppm (parts per million). This estimate is based on a base value of about 1,000 ppm near the end of the Paleocene plus an addition of about 3,000 GtC (gigatonnes of carbon), an amount determined by taking into account the carbonate compensation depth (CCD), itself determined from the carbonate depletion of shallow ocean-bottom cores.4

A comet or asteroid impact that caused continent-wide fires was considered and rejected: no layer of soot has been found in ocean-bottom cores, as would be expected, nor has a crater of the right age been found. A third possibility is that the large deposits of methane hydrate on the deep continental shelves were destabilized and released. Methane hydrate is a single molecule of methane, CH4, enclosed in a "cage" of water molecules, H2O. This compound is stable only in a narrow range of temperatures and pressures; a small change in temperature or a physical disturbance of the surrounding water would have been enough to destabilize it and release the CH4. Methane is roughly 20 times as effective as CO2 at retaining heat, so a large release of methane would have produced an immediate, rapid surge in temperature. Methane decays over a period of about ten years to carbon dioxide, which then lingers for over one hundred years, heating the atmosphere more slowly for a longer period of time.

The total temperature rise of the ocean water, both shallow and deep, was about 9 °F, with surface waters near the poles reaching 77 °F. There was virtually no ice anywhere on Earth at this time, since the climate had already been warm prior to the PETM. The poles warmed more than the rest of the planet because, with the lack of snow, the albedo decreased, causing a larger temperature excursion.1
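For readers unfamiliar with the delta notation used above, δ13C expresses how a sample's 13C/12C ratio compares to a reference standard, in parts per thousand (‰). Here is a minimal sketch of the calculation in Python; the standard ratio is the commonly cited VPDB value, and the sample value is invented for illustration:

    # Delta notation for carbon isotopes: negative values mean the sample
    # is depleted in the heavy isotope 13C relative to the standard.
    VPDB_RATIO = 0.0112372  # approximate 13C/12C ratio of the VPDB standard

    def delta13C(sample_ratio, standard_ratio=VPDB_RATIO):
        """Return delta-13C in per mil (parts per thousand)."""
        return (sample_ratio / standard_ratio - 1) * 1000

    print(delta13C(0.0112))  # a 13C-depleted sample -> about -3.3 per mil

An injection of 13C-depleted carbon, as at the PETM, shifts measured δ13C values in the negative direction, which is exactly the excursion seen in the sediment record.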

The transition from the Paleocene epoch to the Eocene epoch is marked by a sharp decrease in δ13C, the dissolution of carbonate in all the ocean basins, and the extinction of many terrestrial mammals and of benthic foraminifera (those living in the mud on the ocean floor). It was also marked by the origin of several new mammalian orders, including primates, artiodactyls (even-toed ungulates), and perissodactyls (odd-toed ungulates). The δ13C shift and the dissolution of carbonate indicate that the ocean experienced a geologically rapid acidification and temperature rise. The acidification strongly implies a rapid rise in the amount of carbon in the atmosphere. This increase occurred over a period of about 20,000 years, a rate that is about one-tenth of the rate of increase of CO2 we have experienced during the past 150 years.
Humans are currently adding about 30 GtC annually. At this rate, anthropogenic carbon additions will equal what was released at the P-E boundary in 100 to 200 years, some 100 times faster than what occurred 55 million years ago.1,5 Because we have geologic evidence of what the climate was like during the P-E boundary period, we have an idea of what the climate will be like in our near future if we continue on our current path. Periods of 100 °F temperatures in the North American Southwest will last for months, around the clock. Droughts will be longer and more frequent than now. Flooding will be worse in those areas that commonly flood. All the glaciers will melt, raising the sea level by about 200 feet; it was 220 feet higher at the P-E boundary than it is now. Since there are vast stores of methane under the permafrost, if the permafrost melts, and that now seems inevitable, the released methane will exacerbate the problem and hasten the warming. There are also large deposits of methane hydrate on the continental shelves that could be released if the ocean temperature exceeds the threshold.
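The timing claim above is simple arithmetic on the post's own figures; here is a quick sketch using the 3,000 GtC and 30 GtC/yr numbers cited above (the post's estimates, not independently derived values):

    # Back-of-envelope comparison of PETM and modern carbon release rates.
    petm_release_gtc = 3000.0   # carbon added at the P-E boundary (post's figure)
    petm_duration_yr = 20000.0  # time over which it was added (post's figure)
    modern_rate_gtc = 30.0      # post's figure for current annual emissions

    years_to_match = petm_release_gtc / modern_rate_gtc
    speedup = modern_rate_gtc / (petm_release_gtc / petm_duration_yr)

    print(years_to_match)  # 100.0 years
    print(speedup)         # 200.0 -- on the order of the "100 times faster" above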
Only the broad features of the climate are known for the roughly 150,000 years of the transition from the Paleocene to the Eocene, the time it took for CO2 levels to return to those of the late Paleocene after this large carbon dioxide excursion. One can hope that further study will provide a more detailed understanding, along with indications of how this scenario might be mitigated.
  1. National Geographic, October 2011, pp. 90 – 109
  2. http://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum
  3. http://en.wikipedia.org/wiki/TEX86
  4. http://www.realclimate.org/index.php/archives/2009/08/petm-weirdness/
  5. http://www.realclimate.org/index.php/archives/2004/12/how-do-we-know-that-recent-cosub2sub-increases-are-due-to-human-activities-updated/ 

Wednesday, September 14, 2011

Scientist claims that he has created life-like cells based on metal

Lee Cronin at the University of Glasgow claims that he may have developed cell-like structures capable of adapting to changing environments.1 The materials he has used to create what he calls iCHELLs (inorganic chemical cells) are known as polyoxometalates: compounds made of various metal atoms combined with oxygen (O) and phosphorus (P). Tungsten (W) is the most common metal used.

One of the iCHELLs.
Image from NewScientist.com.
He forms large negatively charged ions of these metal oxides and creates a salt by combining them with protons (positively charged hydrogen [H+]) or sodium ions (Na+).  He next injects a solution of this salt into a solution containing an organic salt with large organic cations (positive ions) and small anions (negative ions).  What results is a salt of the metal oxide anion with the organic cation that precipitates from the solution in the form of a small bubble-like structure.  He is able to manipulate the form of these structures to give them some of the characteristics of organic cell membranes, such as selective permeability that controls what chemicals reside within the bubble.  He has created bubbles within bubbles to give the appearance of the internal structure of living cells.  He has attached photosensitive dyes to the iCHELLs, which enabled them to mimic rudimentary photosynthesis, including the ability to split water into hydrogen ions (H+), oxygen, and electrons (e-), an important step in the process.

It is during his current experiment, scheduled to run for seven months, that he has given some indication of succeeding in modifying the iCHELLs so that they will adapt to the environment they are in.  The final results are still a few months away, and the details have not yet been published.  Last year he also showed that polyoxometalates could serve as templates for self-replication, analogous to how DNA and RNA operate.  Taken together, these could be good first steps toward creating artificial life.

Reproduction, self-repair, adaptation, growth, and evolution are all characteristics of single-celled and more complex life as we know it.  If Cronin is able to induce these iCHELLs to exhibit all of these characteristics, would that constitute a new form of life?  It would seem to be.  If he is able to accomplish this, and it is not certain that he will, it would revolutionize our understanding of what life forms might be possible on other planets in our solar system and elsewhere in the universe.  It may expand the Goldilocks zone and drastically increase the possibility of finding other life.

1. http://www.newscientist.com/article/dn20906-lifelike-cells-are-made-of-metal.html



Tuesday, August 2, 2011

Contributions to Global Warming and Climate Change

The Sun and Global Warming

Of the many trends that appear to cause fluctuations in the Sun's energy, those that last decades to centuries are the most likely to have a measurable impact on the Earth's climate in the foreseeable future. Many researchers believe the steady rise in sunspots and faculae since the late seventeenth century may be responsible for as much as half of the 0.6 °C of global warming over the last 110 years (IPCC, 2001).


The effects of orbital mechanics on global warming - the Milankovitch cycles
The variation of insolation due to orbital mechanics was worked out by Milankovitch in the 1930s.
There are three relevant cycles arising from four contributing factors: precession of the orbit, meaning that the location of the perihelion (the point of the Earth's closest approach to the Sun) rotates around the Sun; precession of the Earth's spin axis relative to the orbital axis (like a spinning top whose axis of rotation is tilted); change in the eccentricity of the orbit between a perfect circle and an ellipse slightly different from a circle; and variation of the tilt of the Earth's axis between about 23° and 24.5°.
The combination of the variation of the orbit's eccentricity and the Earth's spin-axis precession creates climate change on roughly a twenty-thousand-year cycle. The oscillation of the spin axis relative to the orbital plane between 23° and 24.5° has a 40,000-year cycle. The eccentricity change by itself causes small variations in insolation with a cycle of about 100,000 years, but its effect is only about 0.1% of the others. These factors are collectively known as Milankovitch forcing.
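To make the relative periods concrete, here is a toy superposition of the three cycles just described, with made-up amplitudes (an illustration of the periodicities only, not a real insolation model):

    import math

    # Toy Milankovitch forcing: three sinusoids with the periods named above.
    # Amplitudes are invented; the eccentricity term is ~0.1% of the others.
    def toy_forcing(t_years):
        precession   = 1.000 * math.sin(2 * math.pi * t_years / 20_000)
        obliquity    = 1.000 * math.sin(2 * math.pi * t_years / 40_000)
        eccentricity = 0.001 * math.sin(2 * math.pi * t_years / 100_000)
        return precession + obliquity + eccentricity

    for t in range(0, 200_001, 25_000):
        print(t, round(toy_forcing(t), 3))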


These cycles were correlated with the timing of the ice ages and hypothesized to actually cause them. Currently, these variations should have been cooling the Earth's mean temperature over the past 150 years, but the global mean temperature has in fact been rising.


Relationship between CO2 emissions and rise in temperature
The relationship between CO2 emissions and temperature rise, one tonne (one tonne = 1000 kilograms = 2,200 lb = one long ton) of added carbon causing a 1.5 × 10⁻¹² °C rise in temperature, is based on an article in Nature.  In the USA, one tonne of carbon is emitted per person annually. Assuming that worldwide emissions average one-tenth of a tonne per person, and that this rate is steady (it is not; it is increasing), the additional carbon emitted each year will raise the temperature by about 9 × 10⁻⁴ °C. However, this also assumes that the temperature has reached equilibrium with the current level of CO2 in the atmosphere, which it has not.
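The arithmetic behind the 9 × 10⁻⁴ °C figure is straightforward; here is a quick check using the paragraph's own numbers (the per-capita and population values are the rough assumptions stated in the text):

    # Reproducing the paragraph's estimate of warming per year of emissions.
    deg_per_tonne = 1.5e-12   # °C per tonne of added carbon (post's Nature figure)
    tonnes_per_person = 0.1   # assumed worldwide average, per the text
    population = 6e9          # rough 2011 world population (assumption)

    annual_tonnes = tonnes_per_person * population   # 6e8 tonnes of carbon
    print(deg_per_tonne * annual_tonnes)             # 9e-4 °C per year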


This chart shows the normal lag of CO2 concentration behind temperature rise, and that the relationship has reversed in the present, i.e., the CO2 concentration has continued to rise even as the temperature has been relatively stable. Based on the past record from the ice cores, one would expect the temperature to be dropping, with the CO2 concentration either stable or decreasing as well.

When the Milankovitch cycles cause an increase in solar energy striking the Earth, this initiates a melting of some of the glaciers and at least one of the polar ice caps (depending on the season). As the article states, 18,000 years ago this increase in the intensity of the sunlight occurred during the spring in the Southern Hemisphere, which led the Antarctic ice cap to melt more than it had been. The decrease in the area of ice reduced the amount of sunlight reflected, with a corresponding increase in the amount absorbed. This initiated a positive feedback loop in which reduced reflection and ocean warming led to more temperature rise than would be expected from Milankovitch forcing alone. Currently, if Milankovitch forcing were the primary factor, global temperature should be falling.


As the temperature of the ocean increases, it can hold less carbon dioxide, so the carbon dioxide level in the atmosphere increases. This increased CO2 level creates a second positive feedback loop that causes more warming than can be explained by Milankovitch forcing, oceanic warming, and reduced reflection alone. For the last 450,000 years, the lag of CO2 concentration behind temperature rise has normally been about 600 ± 400 years, based on several different sources (primarily ice cores).  For about the last one hundred years, however, the rise in CO2 has been leading the rise in temperature.


Another point is that with the melting of the permafrost, the risk of large methane releases increases. If current trends continue, it isn't a matter of "if" but "when." Methane has 25 times the warming effect of carbon dioxide. This is being studied in Alaska, where new shallow summer lakes are forming on top of the permafrost, the permafrost is melting more deeply than in recent history, and increasing amounts of methane are being released. It is uncertain where the threshold for large releases lies.


For a more detailed explanation and the history of the theory that links CO2 to temperature rise, see http://www.aip.org/history/climate/co2.htm.

Wednesday, June 15, 2011

Maunder Minimum and Climate Change

A Maunder Minimum is a period of low sunspot activity. The period from about 1650 to 1720 was the original Maunder Minimum and coincided with a period known as the Little Ice Age. Recently, Space.com posted an article, Sun's Fading Spots, stating that we may be at the beginning of a new Maunder Minimum. In the following, I attempt to assess the probable influence this would have on the climate.

It is apparently correct that reduced sunspot activity is correlated with reduced solar irradiance, and the Maunder Minimum during the latter half of the 17th century did coincide with, and likely contributed to, the Little Ice Age. However, there is also strong evidence that the rise in greenhouse gases has become the dominant agent affecting global temperature, as can be seen in this graph:

Courtesy of http://lasp.colorado.edu/images/science/solar_infl/Surface-Temp-w-paleo.jpg

The golden line with diamonds is the measured or reconstructed total solar irradiance. The solid blue line is the reconstructed global surface temperature. The dashed blue line is the measured temperature in the Northern Hemisphere (NH). One can see that solar irradiance and the reconstructed global temperature follow one another fairly closely. The measured NH temperature also tracks total solar irradiance until about the last 40 years, when it begins to diverge sharply. This graph shows the recent data at higher resolution:

Courtesy of http://www.skepticalscience.com/pics/Solar_vs_Temp_basic.gif

The divergence is even clearer here, since total solar irradiance has been declining while global mean temperature has been rising. One can see on the first chart that there is a very close correlation between solar irradiance and temperature. The three major volcanic eruptions that are noted each caused an immediate drop in the solar irradiance reaching the surface and a corresponding drop in temperature. So the implication of the second chart is that there must be at least one factor other than solar irradiance influencing global mean temperature, and that this factor is overriding the drop in solar irradiance.

If we are indeed entering another Maunder Minimum of solar activity, it will at most give us a temporary reprieve and slow the rise in temperature. It will not cause long-term (i.e., greater than 100 years) cooling, and it will not reduce the importance of reducing anthropogenic greenhouse gases.

Saturday, April 23, 2011

Critique of Junk Science on Climate Change

An acquaintance cited this as a critique of climate science that debunks the “myth” of anthropogenic contributions to climate change:
http://junksciencearchive...reenhouse/What_Watt.html.  My response follows. 
From the beginning of this article, she minimizes the significance of the role of CO2 in determining the mean global temperature.  She also claims that the mean global temperature is not a significant number.  She writes, “… many excellent researchers who have pointed out that the concept of a globally averaged forcing constant is flawed and they are right. Sensitivity absolutely depends on where the forcing is applied and when. Unfortunately a globally averaged forcing value is applied via climate models and cited in IPCC documents and we are thus stuck with addressing an invalid value in common use.”  However, she doesn't name these excellent researchers, nor does she explain why globally averaging the value is invalid.  When she states that the sensitivity depends on where and when, she is correct with respect to local effects.  But because of the convective action of the atmosphere and ocean, these local effects are naturally averaged out over the globe.  By her reasoning, since the air coming from the vents when a furnace is running causes convection near the vents and not so much away from them, calculating the mean temperature rise in your house from the total heat supplied by the furnace minus the heat lost through the walls and ceiling would also be invalid.  Her reasoning, at least as stated, is incorrect.
She next states, “climate models make much of water vapor "feedbacks" -- a multiplier effect due to a small warming from carbon dioxide increasing evaporation and thus adding to the major greenhouse gas, water vapor -- this in turn is supposed to increase the greenhouse effect, leading to more evaporation and yet more warming and so on. The amount of water vapor in the atmosphere is not a simple function of evaporation, however. All of the water vapor that is being continuously evaporated from the Earth's surface must eventually return to the surface as precipitation. The climate system strikes a balance, allowing only so much water vapor to accumulate before it is depleted by either rain or snow."

A fact that she is overlooking is that the amount of water vapor in the atmosphere depends on the temperature of the air.  The warmer the air, the more water vapor it can contain, and on average will contain.  This is why we talk about relative humidity instead of absolute humidity: relative humidity is the amount of water vapor present compared to the amount the air can hold.  The dew point is the temperature at which relative humidity reaches 100%, and it varies with the amount of water vapor actually present.  As an illustrative example, if the relative humidity is 70% at 50 °F, the dew point is about 41 °F, while at 70 °F and 70% relative humidity, the dew point is about 60 °F.  One can see this in the observation that, on above-freezing winter days, dew does not form until a lower temperature is reached than in summer.  The atmosphere will approach a dynamic equilibrium in the amount of water vapor present based on the global mean temperature.  It will vary from place to place and with the time of day, always changing everywhere, but on average the vapor content will rise as the temperature rises.  There will be both more precipitation and more evaporation, but the balance will increase with increasing temperature.  See http://www.giss.nasa.gov/research/briefs/lacis_01/ for a discussion based on recent satellite measurements.
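The dew-point figures in the example above can be checked with the standard Magnus approximation; the coefficients below are one common choice, and this sketch is my illustration, not something from the article under discussion:

    import math

    def dew_point_f(temp_f, rh_percent):
        """Dew point in °F from temperature (°F) and relative humidity (%)."""
        t_c = (temp_f - 32) * 5 / 9
        a, b = 17.625, 243.04  # Magnus coefficients for temperature in °C
        gamma = math.log(rh_percent / 100) + a * t_c / (b + t_c)
        dp_c = b * gamma / (a - gamma)
        return dp_c * 9 / 5 + 32

    print(round(dew_point_f(50, 70)))  # ~41 °F
    print(round(dew_point_f(70, 70)))  # ~60 °F

Note how the same 70% relative humidity corresponds to a much higher dew point at the higher temperature: warmer air holds more water vapor.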
She states, “climate models use λ values of 0.75 ± 0.25 °C per W m⁻², 5-10 times greater than empirical measures support.”  This is false; she has chosen the lowest empirical value available.  A value of ~0.7 K per W m⁻² is empirically derived, as shown in this article from Wikipedia: http://en.wikipedia.org/wiki/Climate_sensitivity, which includes citations giving a range of values for the climate sensitivity parameter.  The rest of her article is similarly skewed: she consistently cherry-picks the data and research that support her point of view.  Consider her reference to an article by Lyman et al.  It initially claimed that there had been a cooling of the ocean surface temperature in the early 2000s, but the authors later published a correction stating that there had been no cooling; the temperature had been stable during the three-year period of their study.  She includes this information, which is good, but then does not correct her conclusions to account for it.
In her discussion of cloud cover effects, she emphasizes only the negative feedback.  This is unwarranted.  Based on recent studies, it appears that thick clouds have a negative feedback (net cooling) effect, but thin cloud cover, such as cirrus clouds or contrails, has a net positive feedback (net warming) effect.  The balance between the two is unclear, but the most recent satellite measurements suggest a net positive feedback.  See http://sciencetrends.blogspot.com/2011/04/positive-and-negative-feedback-in.html for more detail.
Based on my analysis of her article, she is strongly biased against anthropogenic global warming.  She does not consider the data that contradict her position.  If this article is an example of how she analyzes scientific results, “Junk Science” applies more to what she has written than to the science she is trying to debunk.  Good climate scientists, as illustrated in the cited Wikipedia article, report a range of results, trying to be as representative as possible.  There is much uncertainty in the climate models and in some of the data, as well as a glaring lack of data in some areas.  However, on balance, the available data do support the contention that the global climate is warming, and that it is warming at a rate higher than can be accounted for without considering the effect of anthropogenic CO2.
Similarly, Bryson (http://drywind.net/blog/science/climatologist-who-discovered-jet-stream-debunks-human-caused-global-warming/75/) has discounted recent data and seems to dismiss climate model projections because he doesn't think they are good enough.  He may be right, but the fact is that the mean results of the computer models of twenty years ago underestimated both the temperature rise and the sea level rise.  Current models are improved because of more accurate and finer-grained data.  The results published in the latest IPCC report, an imperfect but useful summary of the status of climate science, tend toward the conservative end of the projections.  If one actually considers all the data, and not just the data that fit one's preconception or bias, the rising temperature trend is clear, and the fact that human activity is contributing is clear.  It is not clear exactly how much of the warming is due to human activity, but it appears to be at least 30%.
It has also been suggested that higher CO2 content is actually beneficial because some plants grow faster.  It is true that some plants are growing faster, but that is only one effect (and does anyone in Georgia really want kudzu to grow any faster than it already does?).  The warming resulting from the increasing CO2 level is also eliminating the habitats of several species and possibly contributing to the decline of amphibians by enhancing the ability of certain fungal diseases to spread.  The warming is causing the Greenland ice cap and many continental glaciers to melt at an increasing rate, raising the sea level and threatening low-lying coastal areas and islands.  Parts of the Antarctic ice cap are increasing while other parts are decreasing, but overall there is a net decrease in the ice contained there that is also contributing to the sea level rise.  Rising temperatures are also improving the agricultural potential of northern areas such as Greenland and Iceland.  What have historically been considered tropical diseases, such as malaria and dengue fever, are now spreading to mid latitudes.  The ocean is acidifying, threatening the health of the plankton that form the base of the global food chain.  So obviously the effects are varied and complex.  The net effect, though, is deleterious to many species of land animals, including humans.  If the acidification continues, it will negatively impact nearly every species except bacteria.

Sunday, April 17, 2011

Fossil Mammal Found with Transitional Middle Ear

The discovery of a complete fossil of Liaoconodon hui, in which the three ossicles of the mammalian middle ear are attached by cartilage to the groove in the lower jaw that had long been something of a puzzle, provides a clear intermediate form between early and modern mammals.  This find, in Mesozoic rock in China, was predicted by evolutionary theory.  It helps to demonstrate a gradual transition from the reptilian jaw to the mammalian middle ear.
  
Liaoconodon lived about 125 Mya (million years ago) and would have been about 14 inches (36 cm) long. Its ossicles were used only for hearing, but they were still closer to the lower mandible than are the bones of the middle ear in modern mammals.


An earlier mammal, Morganucodon, which lived 200 Mya, had bones that were more reptilian. Morganucodon was a small shrew-like animal, only about four inches long. The bones that became the ossicles in later mammals were smaller in Morganucodon than in reptiles, but they still functioned as part of the jaw even though they were also used for hearing.

Thursday, April 7, 2011

Positive and Negative Feedback in Global Warming Models


According to an article in Scientific American, "Climate Change: Larch in the Lurch," another potential positive feedback is occurring in Siberia. As temperatures increase, larch (deciduous, needled trees) retreat north and are replaced by evergreens such as spruce and fir. Because larch lose their needles in winter, they expose more snow on the ground to sunlight than evergreens do. This means that less light is reflected back into space as the larch are replaced by fir and spruce.

The light coming from the Sun is the energy the Earth receives that keeps the planet warm enough for life to exist, but there is a somewhat delicate balance. The amount of heat that is retained is determined by how much of the light is absorbed and how much is reflected. As the amount of light absorbed increases, the Earth tends to get warmer; the more light that is reflected, the cooler the Earth will tend to be. The overall reflectivity of the Earth is known as the albedo. Larch trees, by allowing more snow to be exposed during the winter, increase the albedo; fir and spruce trees, by covering up the snow, reduce it. Warming temperatures tend to reduce the area covered by larch and increase the area covered by fir and spruce. Fewer larch and more evergreens mean a lower albedo, so more light is absorbed, which leads to more warming. This is what is known as a positive feedback loop: an initial warming leads to more warming. It tends to accelerate the rate of temperature increase of the Earth; it makes global warming happen faster.

There are negative feedback loops as well; these tend to decrease the rate of warming as temperatures rise. Clouds have both positive and negative feedback effects. Sunlight is reflected from the tops of clouds, as everyone knows. As temperatures rise, more water evaporates and creates more clouds. More clouds mean more light is reflected, i.e., the Earth's albedo increases, tending to slow the warming. However, clouds also trap more of the light energy that is transmitted through them, which increases the warming effect of the sunlight. For very thick clouds, the amount of light reflected exceeds the amount trapped, so overall, thick clouds have a cooling effect. With very thin clouds, such as high cirrus clouds or contrails, the effect is not as clear, though measurements seem to indicate that the overall effect is a positive feedback. It thus is important to know the proportion of thick and thin clouds to determine whether the overall effect of increasing cloud cover due to warming is positive or negative. This remains an area of active research.
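As a toy illustration of why a positive feedback amplifies rather than merely adds, consider an initial warming of which some fixed fraction is returned as further warming. All numbers here are invented for illustration; real feedback strengths are an active research topic, as noted above:

    # Toy positive feedback: each increment of warming returns a fixed
    # fraction of itself as additional warming (a geometric series).
    initial_warming_c = 1.0   # hypothetical initial warming, °C
    feedback_gain = 0.3       # hypothetical fraction returned per round

    total = 0.0
    increment = initial_warming_c
    while increment > 1e-9:
        total += increment
        increment *= feedback_gain  # e.g., less snow -> lower albedo -> more warming

    print(round(total, 3))  # 1.429 °C, i.e., 1/(1 - 0.3) times the initial warming

As long as the gain is less than 1, the feedback amplifies the warming to a finite total of 1/(1 - g) times the initial amount rather than running away.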

Aerosols also affect whether clouds have a net positive or negative feedback effect. Aerosols consist of tiny particulates, one to ten microns (micrometers, millionths of a meter) across, that are emitted into the atmosphere. The first effect of aerosols is to provide "seeds" for the condensation of water droplets (technically, nucleation sites). The more particulates there are, the more droplets the clouds contain and the smaller those droplets are; both of these effects increase the reflectivity (albedo) of the clouds and cause a net cooling effect. However, particulates also absorb sunlight, warming the air, raising the dew point, and causing the clouds to dissipate, a net warming effect. Modeling based on measurements made over the Amazon has shown that neither effect always predominates; instead, there is an ebb and flow of which effect dominates at any time. At lower levels of aerosols, the first effect, increased reflection and cooling, tends to predominate, while at higher aerosol concentrations, the evaporation of clouds and warming is predominant. However, this is just one study, and more work still needs to be done. This study does lend a cautionary note to those who wish to engage in geoengineering by injecting aerosols (usually sulfur compounds are recommended). Without further study and a better understanding of the interplay of the many factors influencing the atmosphere, the risk of unintended consequences remains uncomfortably high.

Tuesday, April 5, 2011

Response to "All Those Darwinian Doubts" by David Berlinski


The main text in black is the complete text of Berlinski's article as published on his site at the Discovery Institute. My comments are enclosed in square brackets and [colored blue].

ALL THOSE DARWINIAN DOUBTS (http://www.discovery.org/scripts/viewDB/index.php?command=view&printerFriendly=true&id=2450)
By: David Berlinski
Wichita Eagle
March 9, 2005

Original Article
NOTE: The article below is the full version by Dr. Berlinski. The Wichita Eagle opted to shorten the piece to only 400 words.

The defense of Darwin's theory of evolution has now fallen into the hands of biologists who believe in suppressing criticism when possible and ignoring it when not. It is not a strategy calculated to induce confidence in the scientific method. A paper published recently in the Proceedings of the Biological Society of Washington concluded that the events taking place during the Cambrian era could best be understood in terms of an intelligent design—hardly a position unknown in the history of western science. The paper was, of course, peer-reviewed by three prominent evolutionary biologists. Wise men attend to the publication of every one of the Proceedings' papers, but in the case of Steven Meyer's "The origin of biological information and the higher taxonomic categories," the Board of Editors was at once given to understand that they had done a bad thing. Their indecent capitulation followed at once. [Detailed coverage of the incident can be found at http://en.wikipedia.org/wiki/Sternberg_peer_review_controversy] 

Publication of the paper, they confessed, was a mistake. It would never happen again. It had barely happened at all. And peer review?

The hell with it.

"If scientists do not oppose antievolutionism," Eugenie Scott, the Executive Director of the National Center for Science Education, remarked, "it will reach more people with the mistaken idea that evolution is scientifically weak." Scott's understanding of "opposition" had nothing to do with reasoned discussion. It had nothing to do with reason at all. Discussing the issue was out of the question. Her advice to her colleagues was considerably more to the point: "Avoid debates."

Everyone else had better shut up.

In this country, at least, no one is ever going to shut up, the more so since the case against Darwin's theory retains an almost lunatic vitality.

Look — The suggestion that Darwin's theory of evolution is like theories in the serious sciences — quantum electrodynamics, say — is grotesque. Quantum electrodynamics is accurate to thirteen unyielding decimal places. Darwin's theory makes no tight quantitative predictions at all. [Evolution is a historical science (http://www.stephenjaygould.org/library/gould_fact-and-theory.html); the discovery of Tiktaalik is one recent and well-publicized example of a prediction made on the basis of evolutionary theory. The entire field of genetics supports the fact that all current species descended from earlier ones, going back to a common ancestor.]

Look — Field studies attempting to measure natural selection inevitably report weak to non-existent selection effects. [This is blatantly false. 1. Lenski's experiment: http://en.wikipedia.org/wiki/E._coli_long-term_evolution_experiment; 2. Observations of Darwin's finches on the Galapagos islands by Peter and Rosemary Grant: http://www.nature.com/news/2009/091116/full/news.2009.1089.html.  One counter example is sufficient to disprove his statement.  I have provided two and there are many more.]

Look — Darwin's theory is open at one end since there is no plausible account for the origins of life. [The theory of evolution makes no statement about the origin of life (http://www.darwiniana.org/abiogenesis.htm); it concerns the origin of extant species.]

Look — The astonishing and irreducible complexity of various cellular structures has not yet successfully been described, let alone explained. [False. This was debunked at the Kitzmiller v. Dover Board of Education trial (http://www.pamd.uscourts.gov/kitzmiller/kitzmiller_342.pdf). Behe introduced the flagellum as an example of irreducible complexity; it was shown to have evolved from a previous structure similar to a hypodermic needle. See http://www.talkdesign.org/faqs/flagellum.html]

Look — A great many species enter the fossil record trailing no obvious ancestors and depart for Valhalla leaving no obvious descendents. [The fossil record is incomplete, as is well known.   The theory of symbiogenesis may account for sudden speciation events.   Acquiring Genomes, A Theory of the Origins of Species, by Lynn Margulis and Dorion Sagan; "Introduction to Symbiogenesis" by D. Kiehl, http://sciencetrends.blogspot.com/2011/04/introduction-to-symbiogenesis.html]

Look — Where attempts to replicate Darwinian evolution on the computer have been successful, they have not used classical Darwinian principles, and where they have used such principles, they have not been successful. [On the Origin of Species is not treated by scientists as the Bible is by Christians, as if it were unassailable. It is a scientific work subject to investigation, critique, and correction. Darwin's theory of natural selection as a mechanism of evolution and speciation has been borne out by 150 years of observation and experimentation, but it is not the only mechanism at work. Genetic mutation is a source of random variation that had not been discovered in Darwin's time, but it provides one source of the variation he observed that natural selection works upon. Polyploidy and symbiosis are two other sources of variation that had either not been discovered or were not well understood when Darwin published his seminal work on evolution. Expecting that only the principles presented in Darwin's original work should be used is absurd, and Berlinski should know that.]

Look — Tens of thousands of fruit flies have come and gone in laboratory experiments, and every last one of them has remained a fruit fly to the end, all efforts to see the miracle of speciation unavailing. [False. He should have done a literature search before making such a claim: http://www.talkorigins.org/faqs/faq-speciation.html]

Look — The remarkable similarity in the genome of a great many organisms suggests that there is at bottom only one living system; but how then to account for the astonishing differences between human beings and their near relatives — differences that remain obvious to anyone who has visited a zoo? [The 2% or so genetic difference is sufficient to produce the observed differences between species.  How is this statement even considered a criticism of evolutionary theory?]

But look again — If the differences between organisms are scientifically more interesting than their genomic similarities, of what use is Darwin's theory since its otherwise mysterious operations take place by genetic variations? [What does this even mean? What mysterious operations? This is insinuation, not argument. He has stated no facts that counter the current theory of evolution. Some of his criticisms are valid with respect to Darwin's original work, On the Origin of Species, but have no relevance given what is now known.]

These are hardly trivial questions. Each suggests a dozen others. These are hardly circumstances that do much to support the view that there are "no valid criticisms of Darwin's theory," as so many recent editorials have suggested. [There are valid criticisms of the theory of evolution, and there are raging controversies over some issues of how evolution works. Berlinski just hasn't mentioned any. The "criticisms" listed above are standard Discovery Institute pap, unworthy of someone with his credentials.]

Serious biologists quite understand all this. They rather regard Darwin's theory as an elderly uncle invited to a family dinner. The old boy has no hair, he has no teeth, he is hard of hearing, and he often drools. Addressing even senior members at table as Sonny, he is inordinately eager to tell the same story over and over again.

But he's family. What can you do?

David Berlinski holds a Ph.D. from Princeton University. He is the author of On Systems Analysis, A Tour of the Calculus, The Advent of the Algorithm, Newton's Gift, The Secrets of the Vaulted Sky, and, most recently, Infinite Ascent: A Short History of Mathematics. He is a senior fellow with Discovery Institute's Center for Science and Culture.

Berlinski's criticisms are all directed at Darwin's theory as presented in On the Origin of Species, as if nothing has been learned since.  That is rather like criticizing Newton's theory of gravitation by pointing out its failure to properly account for the observed bending of light by massive objects.  Newton's theory accounted for all the data known at the time; as new data were discovered, Einstein's theory was needed to account for them. Likewise, the entire field of genetics has developed and expanded our understanding of how evolution works.  The fossil record does, in fact, support the theory of evolution.  It does not necessarily support Darwin's gradualism, but it does support punctuated equilibrium and symbiogenesis.


Monday, April 4, 2011

Introduction to Symbiogenesis


Symbiogenesis is the theory that describes speciation arising from symbiosis: the beneficial association of two or more different species of organisms. For convenience, this discussion will consider only the symbiosis of two species, with the understanding that situations exist in nature involving more than two. Such associations differ in intimacy, from species simply living in close proximity where each derives some benefit from the other, to species so closely intertwined that neither can survive for long without the other except through artificial means. A cleaner shrimp that picks tiny food scraps from the mouth of a moray eel is one example of a very loose symbiotic relationship: the shrimp gains a safe and easy source of food, and the eel gains good oral hygiene. Lichens are the archetypal representatives of a very intimate association, between an alga (or cyanobacterium) and a fungus, in which the two, if separated, would survive only a short time in the wild.

This theory originated in tsarist Russia in the latter half of the 19th century and the early part of the 20th. Andrey Famintsyn did much of the seminal research and was one of the first to enunciate the theory. Konstantin Merezhkovsky first published the theory in 1905 in his paper, "The Nature and Origins of Chromatophores in the Plant Kingdom." Famintsyn published his version two years later, also on the topic of chromatophores. Although the two men communicated frequently before either paper was published, and Merezhkovsky referred to experimental work done by Famintsyn, they appear to have arrived independently at the conclusion that chromatophores in plants were the result of the symbiotic incorporation of bacteria into another species.

Russia's plentiful and varied lichen biota provided a natural laboratory for the investigation of symbiogenesis. Gelatinous lichens in particular were found to range from loose associations to symbioses so intimate that it was difficult to distinguish the alga and the fungus as separate organisms. In the process of cataloging and studying the rich variety of lichens, scientists were able to see the progression from two species into a new, third species. In addition, Famintsyn's and Merezhkovsky's studies of plastids, such as chloroplasts, in plant cells, compared with certain very simple species of cyanobacteria, provided direct evidence that these plastids had once lived as separate species and had been symbiotically incorporated into another species to give rise to a third. Russian botanists continued to investigate and refine this theory into the 1960s and 1970s, but their papers were published only in Russian, so the theory remained unknown in Western Europe and the USA.

Lynn Margulis rediscovered this important mechanism of evolution, which augments but does not replace the accumulation of mutations as a mechanism of speciation. Margulis is a microbiologist who developed her own hypothesis of symbiogenesis - speciation through symbiosis - based on her research into the structure of protists and protoctists, in which she was able to establish that chloroplasts, mitochondria, flagella, and other cell organelles had once been independently living organisms that were incorporated by another organism to produce new species. Her work in many ways paralleled that of the two Russians, Famintsyn and Merezhkovsky; it was only after she had published her work that she became aware of the earlier work in Russia.

In the case of simple organisms, protoctists in particular, symbiogenesis is well established as scientific fact. The chlorophyllic plastids in algae, for example, were once independent cyanobacteria. Margulis has argued that the nuclei of eukaryotes, as well as cilia, arose from flagellated bacteria (see Acquiring Genomes: A Theory of the Origins of Species by Lynn Margulis and Dorion Sagan for more information). This mechanism of speciation operates in addition to the gradual accumulation of mutations, one of the first mechanisms proposed. Natural selection works on both of these sources of variation, symbiosis and mutation, to favor the individuals and populations that adapt best to the environment in which they live.

The fossil record actually supports this and similar phenomena as important mechanisms of evolution. The punctuated equilibrium theory developed by Stephen Jay Gould and Niles Eldredge is, in my opinion, more consistent with symbiogenesis than with phyletic gradualism. That theory describes long periods of stasis (little change in the biota, as evidenced by fossils) interrupted by periods of relatively rapid speciation: think of the demise of the nonavian dinosaurs and the adaptive radiation of mammals in the few million years around the Cretaceous-Tertiary boundary, demarcated by the iridium-rich layer signifying the asteroid impact about 65 Mya (million years ago).

In addition to the research concerning microbes, there are well-documented cases among sea slugs (Elysia viridis, for example). The ancestors of this newly evolved slug are gray animals that eat algae. In some of them, instead of being digested, the algae became part of the animal - a plant-animal hybrid, in fact. Elysia viridis are always green, and they do not eat as adults; they derive all their nourishment from the symbiotic algae and so spend their time sunbathing instead of grazing on algae like their gray ancestors. Based on studies of the genomes of the green and gray slugs, it is clear that the green slugs evolved from the gray slugs by symbiotically incorporating the same algae that serve as fodder for the gray slugs. This is a very clear case of evolution by symbiogenesis.

Thursday, March 31, 2011

What are Safe Radiation Limits?


The short answer is that there is no absolute level that can be certified as safe.  What is known is that the average annual dose a person receives from naturally occurring radiation, mostly from airborne radon with smaller contributions from cosmic rays and internal body radiation, is about 300 millirems at sea level.  In Denver, one mile above sea level, the annual background radiation is about 400 millirems.  The 300 millirems is roughly equivalent to 30 chest x-rays or one-third of a whole-body CT scan.  Radiation dosage is cumulative throughout one's life, so if you are 50 years old and have received no radiation other than average background radiation, your lifetime dose would be 50 × 300 mrem = 15,000 mrem, or 15 rem total.  The fact that the cancer rate is lower in Denver than the average for the rest of the USA (Bulletin of the Atomic Scientists, Letters, Bertram Wolfe, President of the American Nuclear Society, San Jose, CA, November 1986, p. 55) indicates that it is probably reasonable to assume that this level of radiation is relatively safe; that is, the number of deaths due to background radiation is likely small.


At the other extreme, based on data from the aftermath of the atomic bombs dropped on Hiroshima and Nagasaki, a dose of 600,000 millirems is 100% fatal: anyone receiving this dose of ionizing radiation will die of radiation sickness.  Fifty percent of the people exposed to a total dose of 450,000 millirems died.  You can determine your approximate annual exposure at this handy website: Radiation Dose Chart.


Each nation sets the limits it considers safe for its citizens.  For the USA, dosage limits are as follows:
  Astronauts:  25,000 millirems per mission
  Occupational, adult:  5,000 millirems per year (reduced from 15,000 mrems/yr. in 1957, which in turn was reduced from 25,000 mrems/yr. in 1950)
  Lifetime exposure:  1,000 mrem/yr. multiplied by one's age in years
  Occupational, minor:  500 mrem/yr.
  Fetus of pregnant worker:  500 mrems total dose; recommended maximum of 50 mrem/month (limit created in 1994)


  Note:  All these levels are in addition to the background radiation one receives.
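Putting the background figure and the lifetime guideline together, here is a minimal sketch of the arithmetic; the 300 mrem/yr background and the 1,000 mrem/yr × age guideline are the figures given above:

    # Lifetime background dose vs. the lifetime exposure guideline above.
    background_mrem_per_yr = 300   # sea-level average background (from the post)
    guideline_mrem_per_yr = 1000   # lifetime guideline: 1,000 mrem/yr times age

    age = 50
    lifetime_background = age * background_mrem_per_yr  # 15,000 mrem = 15 rem
    lifetime_guideline = age * guideline_mrem_per_yr    # 50,000 mrem = 50 rem

    print(lifetime_background, lifetime_guideline)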


So how much additional radiation is a person living on the west coast of the USA likely to receive from Japan?  Most likely about 300 microrems per year, roughly one-thousandth of the normal background radiation exposure coastal residents experience.  So fear not the radiation Americans will receive from Japan; hope for the best for the Japanese living within 50 miles of the Fukushima nuclear plant.  The radiation levels there are continuing to increase.  Most recently, traces of plutonium were found in soil samples around the plant site, which can only occur when the shielding around the core has been breached.  Reports indicate that this nuclear situation is approaching the severity of Chernobyl.

Tuesday, March 29, 2011

Conversion of Radiation Units, a Follow up to the 15 March 2011 Post


The media have begun using a variety of units to measure radiation: sieverts, becquerels, millirems, etc.  How many people know all these units, or whether 10 sieverts/hour is more or less dangerous than 100 becquerels/hour?  To help with the confusion, here is a conversion table that may be of use:

Conversion Factors

  To convert from                         To                                      Multiply by
  curies (Ci)                             becquerels (Bq)                         3.7 × 10¹⁰
  millicuries (mCi)                       megabecquerels (MBq)                    37
  microcuries (µCi)                       megabecquerels (MBq)                    0.037
  millirads (mrad)                        milligrays (mGy)                        0.01
  millirems (mrem)                        microsieverts (µSv)                     10
  milliroentgens (mR)                     microcoulombs/kilogram (µC/kg)          0.258
  becquerels (Bq)                         curies (Ci)                             2.7 × 10⁻¹¹
  megabecquerels (MBq)                    millicuries (mCi)                       0.027
  megabecquerels (MBq)                    microcuries (µCi)                       27
  milligrays (mGy)                        millirads (mrad)                        100
  microsieverts (µSv)                     millirems (mrem)                        0.1
  microcoulombs/kilogram (µC/kg)          milliroentgens (mR)                     3.88

Radiation Measurements

                  Radioactivity       Absorbed Dose    Dose Equivalent    Exposure
  Common Units    curie (Ci)          rad              rem                roentgen (R)
  SI Units        becquerel (Bq)      gray (Gy)        sievert (Sv)       coulomb/kilogram (C/kg)

From: http://orise.orau.gov/reacts/guide/measure.htm#Conversions
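For convenience, the conversion factors above can be encoded in a small helper; this is just the table restated as code, with unit abbreviations of my own choosing:

    # Unit conversions from the table above; reverse conversions divide.
    FACTORS = {
        ("Ci", "Bq"): 3.7e10,
        ("mCi", "MBq"): 37.0,
        ("uCi", "MBq"): 0.037,
        ("mrad", "mGy"): 0.01,
        ("mrem", "uSv"): 10.0,
        ("mR", "uC/kg"): 0.258,
    }

    def convert(value, src, dst):
        if (src, dst) in FACTORS:
            return value * FACTORS[(src, dst)]
        if (dst, src) in FACTORS:
            return value / FACTORS[(dst, src)]
        raise ValueError(f"no conversion factor for {src} -> {dst}")

    print(convert(1, "Ci", "Bq"))       # 3.7e10 becquerels
    print(convert(100, "uSv", "mrem"))  # 10.0 millirems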

So the answer to the question is: neither.  A becquerel (Bq) is a measure of the radioactivity of a sample, in counts per second, where one count represents the decay of one nucleus. It is an SI (Système International) unit related to the older, non-SI curie (Ci); one Ci is equal to the activity of one gram of radium-226.  To relate activity to the amount of material present, a sample must be normalized to the atomic mass (and half-life) of the isotope being measured.


The absorbed dose is what actually matters when determining whether a radiation level is of concern, but radioactivity is the number most easily measured.  Different types of ionizing radiation (radiation that will change a neutral atom into an ion by removing an electron, similar to creating a free radical, which can then damage cells and DNA) are absorbed at different levels.  Absorbed dose is measured in joules per kilogram, or grays (Gy), where joules measure the energy absorbed from the radiation and kilograms measure the mass of the absorbing medium (a person's tissue).  The effective dose is determined by multiplying the radiation exposure by a conversion factor that is a weighted average of the absorption of different organs (heart, liver, skin, e.g.).  It is neither a simple calculation nor an exact one.
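As a sketch of the weighted-organ idea just described (the weights and doses below are invented placeholders, not the actual tissue weighting factors used in practice):

    # Effective dose as a weighted sum of per-organ absorbed doses.
    # Both dictionaries hold hypothetical illustrative values.
    organ_weight = {"lung": 0.12, "liver": 0.04, "skin": 0.01}
    organ_dose = {"lung": 2.0, "liver": 1.0, "skin": 0.5}  # arbitrary units

    effective = sum(organ_weight[o] * organ_dose[o] for o in organ_weight)
    print(effective)  # 0.285 in these made-up units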

So when a reporter speaks of so many becquerels of radiation, he is reporting how many counts are being recorded; it is an indication of how much radioactive material is present.  By also determining which isotopes of which elements are present, one can evaluate the relative risk to humans and other living things.


Monday, March 28, 2011

Relative Risk of Power Generation Methods


The nuclear disaster in Japan has focused the world's attention on the dangers of nuclear power generation.  Germany has temporarily shut down seven reactors, China has suspended approval of new facilities, and the usual hue and cry has arisen from the anti-nuclear-energy crowd in the USA.  No one denies that when a nuclear reactor malfunctions, a large number of deaths is possible, and that it is essential to take proper precautions to prevent such disasters.  However, when assessing the dangers associated with nuclear power to determine whether this form of power generation is worthwhile, one must compare its risks to those of other forms of power production such as coal, oil, natural gas, and hydroelectric.  One must consider not only the extent of a possible tragedy but also the chance of it occurring.  One must also consider the full life cycle of the power production process for each fuel type: fuel extraction, plant construction, fuel transport and processing, the plant's productive lifetime, decommissioning, and the environmental impact of all of these phases.

According to a study in China, based on experience there, coal-fired energy production has caused twelve times as many deaths per gigawatt-year as the nuclear energy chain.  By implementing known, needed improvements, this could be reduced to a fourfold disadvantage for coal.  However, this does not include any nuclear accidents, since none have occurred in China to date.  The report also found that the radiation exposure from coal, totaled over the entire product cycle, is nearly twice as high as that from nuclear power; again, this does not include estimates for nuclear accidents such as those at Three Mile Island, Chernobyl, or Fukushima.  As another means of comparison, a study by the Clean Air Task Force attributes 13,200 deaths to coal plant emissions annually in the USA, not counting deaths from any other portion of the energy production chain.

In China in 1975, a one-hundred-year flood caused 30 dams built for hydroelectric power to fail, drowning at least 230,000 people.

The point of this information is to highlight the fact that no means of energy production is without risk.  Nuclear power generation seems riskier for two reasons: when there is an accident, it tends to be a disaster, killing and injuring a large number of people at once and causing environmental damage over a large area; and it is relatively new, with many unknowns.  Nuclear disasters are blatantly obvious, and the news media take advantage of people's fear to raise their ratings, making the consequences seem even more dire than they already are.  Predictably, people react in fear and want to stay with power sources they know.  Coal-powered plants are "silent" killers.  Except for the occasional mining disaster, one doesn't hear about the 13,000 annual deaths from breathing particulates emitted by coal-fired plants, because they don't happen in bunches, nor is it always obvious that coal is the culprit, except to a doctor who performs an autopsy and sees the person's blackened lungs or measures radioactive isotopes characteristic of coal in the person's body.

Whether nuclear power should remain an option should be decided by fact-based reasoning.  One must compare all the costs associated with each means of power production, from acquisition of the fuel until the plant is decommissioned and torn down or buried: not only the direct human impact, some of which has been highlighted here, but also the environmental impact on the Earth as a whole and on the other organisms that share this planet with us.  Currently, fossil-fuel-powered plants do not pay for their effect on the environment; there is little social cost assessed on carbon emissions or the emission of other pollutants.  To make economically sound decisions for the long term, these costs must be included.  Making such an assessment is beyond the scope of today's post but will be the subject of a future one.


