Part 3 - Predicting earthquakes
and next week’s rain: forecasting successes and forecasting failures
I have a number of friends and
colleagues in the Earthquake Science Center of the US Geological Survey. As a
group, they are some of the smartest people I’ve ever met. Most of them have
PhDs, and they have spent their lives pouring almost all their thought and
effort into one quixotic quest: to predict earthquakes. The ultimate goal is
laudable at the highest level: to save human lives.
However, the 220 scientists in that
science center, along with at least twice that many in East and Southeast Asia and in Europe, have little to show for their efforts after 50 years or more
of trying. One estimate I’ve heard suggests that upwards of $70 billion has been spent on earthquake prediction research. Sadly, after all that spending we are just as far away from predicting earthquakes as we
were a half century ago, and every scientist I’ve talked to about this readily
agrees. We understand faults better - we understand (after the fact) why the
Great Tohoku earthquake of 2011 was so massive, so devastating, for instance.
The thrust fault was shallow, meaning there was a much larger than usual fault plane lying above the high-pressure, high-temperature “plastic” zone of the upper mantle that could accumulate strain. Enormous resources have been
poured into a drill-hole project (“SAFOD”) that snakes down past and then into
the deep San Andreas Fault. This has been done to better understand the physics
of what is happening to rocks in and adjacent to a major transform fault.
After all this, we still cannot
predict earthquakes.
But in this research effort an
interesting observation popped out (yes, if you apply thoughtful analysis to
vast amounts of data there sometimes CAN be a payout). The crucial discovery:
earthquakes follow a power law, also known as the Gutenberg-Richter Law. There is a verbal way to explain this and a graphical way. The verbal
way is this: if you have X number of magnitude 4 earthquake events, you will have fewer magnitude 5 earthquakes by a certain factor (about a tenth as many). You will then have even fewer magnitude 6 earthquakes by the same factor. Larger earthquakes will be proportionally fewer, until magnitude 8 earthquakes happen
only about once a year on average worldwide. Magnitude 9 earthquakes (like
Tohoku in 2011 and Chile in 1960) are very rare - but they happen. There does appear to be an upper limit on earthquake magnitude: magnitude scales roughly with the area of the fault surface that ruptures. Consider the San Andreas Fault: along most of its length the fault plane is roughly perpendicular to the Earth’s surface. This means that
down around 10+ kilometers, the rock is so hot and under so much overlying
pressure that it turns plastic. It won’t break, but instead deforms and flows,
so no more earthquakes. So a magnitude 7.3 is about all you will get from the
San Andreas. It’s more than enough to flatten most houses and pancake most
hospitals, however.
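To make the fault-area argument concrete, here is a rough back-of-the-envelope sketch in Python. It uses the standard seismic-moment relation (moment = rigidity × rupture area × slip) and the usual moment-magnitude formula; the rigidity, fault dimensions, and slip values are illustrative assumptions, not measurements of any real fault.

import math

MU = 3.0e10   # assumed crustal rigidity, in pascals (a typical textbook value)

def moment_magnitude(length_km, width_km, slip_m):
    """Moment magnitude from rupture length, down-dip width, and average slip."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)   # rupture area, m^2
    m0 = MU * area_m2 * slip_m                        # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A vertical strike-slip rupture confined to ~12 km of brittle crust (hypothetical numbers):
print(round(moment_magnitude(length_km=100, width_km=12, slip_m=2), 1))    # ~7.2
# A gently dipping subduction thrust can rupture a far wider plane (hypothetical numbers):
print(round(moment_magnitude(length_km=450, width_km=200, slip_m=30), 1))  # ~9.2

The point of the sketch is simply that a vertical fault runs out of brittle rock a dozen or so kilometers down, while a shallow-dipping subduction fault like Tohoku’s can keep accumulating strain over an enormously wider plane.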
With this power law behavior comes the inevitability of ever-larger numbers of ever-smaller earthquakes, until you literally have millions of tiny events that most humans will never
feel - they are only picked up by the most sensitive instruments. Think:
sensing a garbage truck driving by your house.
The graphical way to show this power law behavior is again by using a log-log plot. If one axis is magnitude (itself already a logarithmic measure) and the other is the logarithm of the frequency of occurrence, then earthquakes fall on a straight line.
With this you can give a probabilistic estimate of the likelihood of an earthquake of a given size happening on, say, the Hayward Fault east of San Francisco (about a 31% chance for an M = 6.7 event in the next 30 years). But you cannot predict WHEN. You CAN budget funds to retrofit your home against such an event, however, and the power law (and some basic information about the length and dip of the fault) will give you a maximum bound for the event.
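Here is a minimal sketch of that arithmetic in Python. The a and b constants are illustrative (b ≈ 1 roughly matches the factor-of-ten falloff described above, and a ≈ 8 reproduces the roughly one magnitude 8 event per year worldwide), and the per-fault rate in the last line is simply back-solved to reproduce a 31%-in-30-years style number under a Poisson assumption - it is not the USGS’s actual Hayward Fault model.

import math

a, b = 8.0, 1.0   # assumed Gutenberg-Richter constants: log10(N) = a - b * M

def annual_rate(magnitude):
    """Expected number of events per year at or above the given magnitude."""
    return 10 ** (a - b * magnitude)

def prob_in_years(rate_per_year, years):
    """Probability of at least one event in the window, assuming a Poisson process."""
    return 1.0 - math.exp(-rate_per_year * years)

for m in (4, 5, 6, 7, 8):
    print(f"M >= {m}: about {annual_rate(m):,.0f} per year")

# The same machinery lies behind statements like "a 31% chance of an M >= 6.7
# event in the next 30 years" - here with an assumed rate of one such event
# per ~80 years on a single fault (a made-up number chosen for illustration):
print(f"P(at least one M >= 6.7 in 30 yr) = {prob_in_years(1 / 80, 30):.0%}")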
Forecasting weather is one of
the bright spots of the forecasting story. Twenty years ago weathermen
generally avoided admitting to people what their jobs were at cocktail parties.
They would get flogged for a failed forecast - yesterday, or last week, or on
that person’s birthday or planned party. Today TV weathermen still have the same abysmal forecasting success record - worse than a 50% coin toss - but now
the failure is deliberate. Like Fox “News” or MSNBC talking heads, the object
is NOT to dispense truth, but to make people feel better. If TV weathermen gave the same predictions as weather.noaa.gov (a free service available to everyone), they would miss some of the rain squalls that hit in nearly every community when the percentage chance of rain is given as, say, 20%. They get
flogged for missing these. However, if they deliberately over-predict wet
weather and it doesn’t come to pass, there is no punishment, no societal memory
of a failure. This is just human nature. And advertisers follow the Nielsen
Ratings closely.
However, the National Weather
Service has gotten dramatically better at weather prediction. In part this
is because of better and faster computers, computers that permit modeling
(Silver calls this “heuristics”) of ever-finer weather cells. The more
pressure, temperature, and wind-speed data available, the more resolution they
can provide. But weather is not the same where you are now standing as it is
at the top of your highest neighboring volcano. And THAT weather (temperature,
wind-speed, wind-direction, pressure, humidity) is not the same as at the
altitudes where a commercial jet flies.
This difference between elevations gives rise to lenticular clouds forming over volcanoes - often the source of 911 calls about a flying saucer or a volcanic eruption. They are simply caused by warm, humid air being lifted from lower elevations to higher elevations where lower pressures and colder temperatures prevail, causing the water vapor to condense into droplets: a cloud. These disappear as the air passes down the other side of the mountain. What you see, however, is a cap cloud that appears to be more or less stationary (or, in the case of a double lenticular cloud over Mount Hood, two stationary clouds).
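As a rough illustration of why the cap cloud sits at a fairly fixed height, here is a sketch using the common rule of thumb that the lifting condensation level lies about 125 meters above the surface for every degree Celsius of dew-point depression. The temperatures below are made-up examples, not observations from any particular mountain.

def lcl_height_m(temp_c, dew_point_c):
    """Approximate lifting condensation level above the starting elevation, in meters."""
    return 125.0 * (temp_c - dew_point_c)   # ~125 m per degree C of dew-point depression

# Humid low-level air at 15 C with a 12 C dew point condenses after only ~375 m of lift,
# so a tall volcano forcing that air upward wears a nearly stationary cloud cap:
print(lcl_height_m(15.0, 12.0))   # 375.0
# Drier air (dew point 2 C) has to rise much farther before any cloud forms:
print(lcl_height_m(15.0, 2.0))    # 1625.0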
This is a long way of saying
that computer models of the atmosphere must take into account the fact that
there is not just a horizontal grid, but a 3rd - vertical -
dimension that has to be modeled. Converting any model from 2D to 3D
dramatically amps up the computing time necessary - everything is multiplied by
the number of vertical levels you want to use. However, Moore’s Law (that
computing power doubles about every 18 months) has come to the fore in recent
years, and the models have become more and more sophisticated.
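A quick bit of arithmetic shows how fast the cell count grows when you refine the grid and add vertical levels. The domain size, cell sizes, and level count below are illustrative assumptions, not any actual National Weather Service model configuration.

def n_cells(domain_km, cell_km, levels=1):
    """Cell count for a square horizontal domain, with an optional number of vertical levels."""
    per_side = int(domain_km / cell_km)
    return per_side * per_side * levels

DOMAIN_KM = 4000   # a continental-scale domain (illustrative)

flat = n_cells(DOMAIN_KM, cell_km=25)                 # one level, 25 km cells
layered = n_cells(DOMAIN_KM, cell_km=25, levels=50)   # add 50 vertical levels
fine = n_cells(DOMAIN_KM, cell_km=12.5, levels=50)    # also halve the cell size

print(f"{flat:,} -> {layered:,} -> {fine:,} cells")
# Adding 50 levels multiplies the work by 50; halving the horizontal cell size
# quadruples it again - and the time step usually has to shrink too, so the
# real compute cost grows even faster than the raw cell count.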
But any model implies that you
understand the relationships - the physics of how the global weather system
works. This is stunningly complicated; weather modelers also have to incorporate
effects of mountains (rain shadows), metropolitan “heat islands”, and large
water bodies. The temperature is always moderated near a coastline because water has such a high heat capacity.
Another water-body effect is the so-called Lake Effect. Buffalo, New York, for instance, gets far more snowfall than Toronto, which sits on the opposite, upwind side of Lake Ontario. To an increasing degree, these physics and geographic details - these system understandings - can be folded into the modeling algorithms.
But the critical recent successes have come from making the weather forecast both local and hybrid, by incorporating human beings into the forecasting process. Experienced weather experts, familiar with a local region, can refine a computer model’s prediction (those blurry-looking green clouds rapidly moving over your metro-area map on TV), tell you when the precip will hit, and do this in a probabilistic way.
This probabilistic expression of
what will unfold in the future is a hallmark of sophisticated modern
forecasting, and builds into it a crucial bit of information: the uncertainty
in any model.
What about the other weather
forecasters out there? Here’s a dirty little secret: they use the same government weather forecasts and provide
“value-added” information like pollen counts and advertising (e.g.,
Accuweather.com). They are good at
presentation, but the substance is borrowed from the US government, whose minions
never complain.
Next: Predicting terrorist
attacks