Artificial Intelligence Takes On Earthquake Prediction

In May of last year, after a 13-month slumber, the ground beneath Washington’s Puget Sound rumbled to life. The quake began more than 20 miles below the Olympic Mountains and, over the course of a few weeks, drifted northwest, reaching Canada’s Vancouver Island. It then briefly reversed course, migrating back across the U.S. border before going silent again. All told, the monthlong earthquake likely released enough energy to register as a magnitude 6. By the time it was done, the southern tip of Vancouver Island had been thrust a centimeter or so closer to the Pacific Ocean.

Because the quake was so spread out in time and space, however, it’s likely that no one felt it. These kinds of phantom earthquakes, which occur deeper underground than conventional, fast earthquakes, are known as “slow slips.” They occur roughly once a year in the Pacific Northwest, along a stretch of fault where the Juan de Fuca plate is slowly wedging itself beneath the North American plate. More than a dozen slow slips have been detected by the region’s sprawling network of seismic stations since 2003. And for the past year and a half, these events have been the focus of a new effort at earthquake prediction by the geophysicist Paul Johnson.

Johnson’s team is among a handful of groups that are using machine learning to try to demystify earthquake physics and tease out the warning signs of impending quakes. Two years ago, using pattern-finding algorithms similar to those behind recent advances in image and speech recognition and other forms of artificial intelligence, he and his collaborators successfully predicted temblors in a model laboratory system — a feat that has since been duplicated by researchers in Europe.

Now, in a paper posted this week on the scientific preprint site arxiv.org, Johnson and his team report that they’ve tested their algorithm on slow slip quakes in the Pacific Northwest. The paper has yet to undergo peer review, but outside experts say the results are tantalizing. According to Johnson, they indicate that the algorithm can predict the start of a slow slip earthquake to “within a few days — and possibly better.”

“This is an exciting development,” said Maarten de Hoop, a seismologist at Rice University who was not involved with the work. “For the first time, I think there’s a moment where we’re really making progress” toward earthquake prediction.

Mostafa Mousavi, a geophysicist at Stanford University, called the new results “interesting and motivating.” He, de Hoop, and others in the field stress that machine learning has a long way to go before it can reliably predict catastrophic earthquakes — and that some hurdles may be difficult, if not impossible, to surmount. Still, in a field where scientists have struggled for decades and seen few glimmers of hope, machine learning may be their best shot.

Sticks and Slips

The late seismologist Charles Richter, for whom the Richter magnitude scale is named, noted in 1977 that earthquake prediction can provide “a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers.” Today, many seismologists will tell you that they’ve seen their fair share of all three.

But there have also been reputable scientists who concocted theories that, in hindsight, seem woefully misguided, if not downright wacky. There was the University of Athens geophysicist Panayiotis Varotsos, who claimed he could detect impending earthquakes by measuring “seismic electric signals.” There was Brian Brady, the physicist from the U.S. Bureau of Mines who in the early 1980s sounded successive false alarms in Peru, basing them on a tenuous notion that rock bursts in underground mines were telltale signs of coming quakes.

Paul Johnson is well aware of this checkered history. He knows that the mere phrase “earthquake prediction” is taboo in many quarters. He knows about the six Italian scientists who were convicted of manslaughter in 2012 for downplaying the chances of an earthquake near the central Italian town of L’Aquila, days before the region was devastated by a magnitude 6.3 temblor. (The convictions were later overturned.) He knows about the prominent seismologists who have forcefully declared that “earthquakes cannot be predicted.”

But Johnson also knows that earthquakes are physical processes, no different in that respect from the collapse of a dying star or the shifting of the winds. And though he stresses that his primary aim is to better understand fault physics, he hasn’t shied away from the prediction problem.

More than a decade ago, Johnson began studying “laboratory earthquakes,” made with sliding blocks separated by thin layers of granular material. Like tectonic plates, the blocks don’t slide smoothly but in fits and starts: They’ll typically stick together for seconds at a time, held in place by friction, until the shear stress grows large enough that they suddenly slip. That slip — the laboratory version of an earthquake — releases the stress, and then the stick-slip cycle begins anew.

When Johnson and his colleagues recorded the acoustic signal emitted during those stick-slip cycles, they noticed sharp peaks just before each slip. Those precursor events were the laboratory equivalent of the seismic waves produced by foreshocks before an earthquake. But just as seismologists have struggled to translate foreshocks into forecasts of when the main quake will occur, Johnson and his colleagues couldn’t figure out how to turn the precursor events into reliable predictions of laboratory quakes. “We were sort of at a dead end,” Johnson recalled. “I couldn’t see any way to proceed.”

At a meeting a few years ago in Los Alamos, Johnson explained his dilemma to a group of theoreticians. They suggested he reanalyze his data using machine learning — an approach that was well known by then for its prowess at recognizing patterns in audio data.

Together, the scientists hatched a plan. They would take the roughly five minutes of audio recorded during each experimental run — encompassing 20 or so stick-slip cycles — and chop it up into many tiny segments. For each segment, the researchers calculated more than 80 statistical features, including the mean signal, the variation about that mean, and information about whether the segment contained a precursor event. Because the researchers were analyzing the data in hindsight, they also knew how much time had elapsed between each sound segment and the subsequent failure of the laboratory fault.
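
As a rough sketch of what that windowing and feature extraction could look like (the window length, the specific statistics and the function below are illustrative placeholders, not the team's published pipeline):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_features(signal, window_size, time_to_failure):
    """Chop a 1-D acoustic signal into fixed-size segments and summarize
    each one with simple statistics. Illustrative only: the actual feature
    set and window length used in the study are not reproduced here."""
    rows = []
    n_segments = len(signal) // window_size
    for i in range(n_segments):
        seg = signal[i * window_size:(i + 1) * window_size]
        rows.append({
            "mean": np.mean(seg),                        # mean signal level
            "variance": np.var(seg),                     # fluctuation about the mean
            "skew": skew(seg),                           # asymmetry of the amplitude distribution
            "kurtosis": kurtosis(seg),                   # heavy tails, sensitive to bursts and precursors
            "amp_90th": np.percentile(np.abs(seg), 90),  # strength of large-amplitude activity
            # Known in hindsight: time remaining until the next laboratory slip.
            "time_to_failure": time_to_failure[(i + 1) * window_size - 1],
        })
    return rows
```

Each row of the output pairs the acoustic statistics of one segment with the label the algorithm will later learn to estimate: the time remaining before the next slip.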

Armed with this training data, they used what’s known as a “random forest” machine learning algorithm to systematically look for combinations of features that were strongly associated with the amount of time left before failure. After seeing a couple of minutes’ worth of experimental data, the algorithm could begin to predict failure times based on the features of the acoustic emission alone.
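
In scikit-learn terms, that training step might look something like the sketch below, which continues from the feature table built above; the hyperparameters are assumptions, not the values used in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# `rows` is the list of per-segment feature dictionaries from the sketch above.
features = pd.DataFrame(rows)
X = features.drop(columns=["time_to_failure"])
y = features["time_to_failure"]

# An ensemble of decision trees, each fit to a random subset of segments and
# features; the forest's prediction is the average over all trees.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Given only the acoustic statistics of new segments, estimate the time
# remaining before the next slip (in practice, on held-out experimental runs).
time_left_estimates = model.predict(X)
```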

Johnson and his co-workers chose to employ a random forest algorithm to predict the time before the next slip in part because — compared with neural networks and other popular machine learning algorithms — random forests are relatively easy to interpret. The algorithm is essentially a collection of decision trees, each branch of which splits the data set according to some statistical feature. The trees thus preserve a record of which features the algorithm used to make its predictions — and the relative importance of each feature in helping the algorithm arrive at those predictions.
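
That record is easy to read off in practice. Continuing the sketch above, scikit-learn exposes a per-feature importance score for a fitted forest:

```python
# Rank the statistical features by how much, on average, splits on them
# reduced the forest's prediction error (mean decrease in impurity).
ranked = sorted(zip(X.columns, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name:>12s}  {score:.3f}")
```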

When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake. Rather, it was the variance — a measure of how the signal fluctuates about the mean — and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.
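
The quantity itself is elementary: the mean squared deviation of the signal from its mean over a window. A generic rolling computation, written here with NumPy and not tied to the team's actual data, would be:

```python
import numpy as np

def rolling_variance(signal, window_size):
    """Variance of a 1-D signal in consecutive non-overlapping windows.
    In the laboratory runs, this quantity reportedly creeps upward through
    each stick-slip cycle as shear stress builds toward failure."""
    n_windows = len(signal) // window_size
    windows = np.asarray(signal)[:n_windows * window_size].reshape(n_windows, window_size)
    return windows.var(axis=1)
```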

The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events. The Los Alamos result suggested that everyone had been looking in the wrong place — that the key to prediction lay instead in the more subtle information broadcast during the relatively calm periods between the big seismic events.

To be sure, sliding blocks don’t begin to capture the chemical, thermal and morphological complexity of true geological faults. To show that machine learning could predict real earthquakes, Johnson needed to test it out on a real fault. What better place to do that, he figured, than in the Pacific Northwest?

Out of the Lab

Most if not all of the places on Earth that can experience a magnitude 9 earthquake are subduction zones, where one tectonic plate dives beneath another. A subduction zone just east of Japan was responsible for the Tohoku earthquake and the subsequent tsunami that devastated the country’s coastline in 2011. One day, the Cascadia subduction zone, where the Juan de Fuca plate dives beneath the North American plate, will similarly devastate Puget Sound, Vancouver Island and the surrounding Pacific Northwest.

The Cascadia subduction zone stretches along roughly 1,000 kilometers of the Pacific coastline from Cape Mendocino in Northern California to Vancouver Island. The last time it ruptured, in January 1700, it begot a magnitude 9 temblor and a tsunami that reached the coast of Japan. Geological records suggest that throughout the Holocene, the fault has produced such megaquakes roughly once every half-millennium, give or take a few hundred years. Statistically speaking, the next big one is due any century now.

That’s one reason seismologists have paid such close attention to the region’s slow slip earthquakes. The slow slips in the lower reaches of a subduction-zone fault are thought to transmit small amounts of stress to the brittle crust above, where fast, catastrophic quakes occur. With each slow slip in the Puget Sound-Vancouver Island area, the chances of a Pacific Northwest megaquake ratchet up ever so slightly. Indeed, a slow slip was observed in Japan in the month leading up to the Tohoku quake.

For Johnson, however, there’s another reason to pay attention to slow slip earthquakes: They produce lots and lots of data. For comparison, there have been no major fast earthquakes on the stretch of fault between Puget Sound and Vancouver Island in the past 12 years. In the same time span, the fault has produced a dozen slow slips, each one recorded in a detailed seismic catalog.

That seismic catalog is the real-world counterpart to the acoustic recordings from Johnson’s laboratory earthquake experiment. Just as they did with the acoustic recordings, Johnson and his co-workers chopped the seismic data into small segments, characterizing each segment with a suite of statistical features. They then fed that training data, along with information about the timing of past slow slip events, to their machine learning algorithm.

After being trained on data from 2007 to 2013, the algorithm was able to make predictions about slow slips that occurred between 2013 and 2018, based on the data logged in the months before each event. The key feature was the seismic energy, a quantity closely related to the variance of the acoustic signal in the laboratory experiments. Like the variance, the seismic energy climbed in a characteristic fashion in the run-up to each slow slip.
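
In outline, that is a strictly chronological train/test split: fit on the earlier years, predict on the later ones, so the model never sees the future it is asked to forecast. A minimal sketch, assuming a hypothetical feature table (the file name, column names and exact split date below are placeholders, not details from the paper):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical table: one row per window of the Cascadia seismic catalog,
# with statistical features (including a seismic-energy proxy) and the
# known time remaining until the next slow slip event.
data = pd.read_csv("cascadia_features.csv", parse_dates=["window_end"])

cutoff = pd.Timestamp("2013-01-01")          # approximate split date; an assumption
train = data[data["window_end"] < cutoff]    # roughly the 2007-2013 training period
test = data[data["window_end"] >= cutoff]    # roughly the 2013-2018 evaluation period

label = "time_to_next_slow_slip"
feature_cols = [c for c in data.columns if c not in (label, "window_end")]

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(train[feature_cols], train[label])

# Forecasts are made only from data logged before each held-out event.
predictions = model.predict(test[feature_cols])
```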

The Cascadia forecasts weren’t quite as accurate as the ones for laboratory quakes. The correlation coefficients characterizing how well the predictions fit observations were substantially lower in the new results than they were in the laboratory study. Still, the algorithm was able to predict all but one of the five slow slips that occurred between 2013 and 2018, pinpointing the start times, Johnson says, to within a matter of days. (A slow slip that occurred in August 2019 wasn’t included in the study.)

For de Hoop, the big takeaway is that “machine learning techniques have given us a corridor, an entry into searching in data to look for things that we have never identified or seen before.” But he cautions that there’s more work to be done. “An important step has been taken — an extremely important step. But it is like a tiny little step in the right direction.”

Sobering Truths

The goal of earthquake forecasting has never been to predict slow slips. Rather, it’s to predict sudden, catastrophic quakes that pose danger to life and limb. For the machine learning approach, this presents a seeming paradox: The biggest earthquakes, the ones that seismologists would most like to be able to foretell, are also the rarest. How will a machine learning algorithm ever get enough training data to predict them with confidence?

The Los Alamos group is betting that their algorithms won’t actually need to train on catastrophic earthquakes to predict them. Recent studies suggest that the seismic patterns before small earthquakes are statistically similar to those of their larger counterparts, and on any given day, dozens of small earthquakes may occur on a single fault. A computer trained on thousands of those small temblors might be versatile enough to predict the big ones. Machine learning algorithms might also be able to train on computer simulations of fast earthquakes that could one day serve as proxies for real data.

But even so, scientists will confront this sobering truth: Although the physical processes that drive a fault to the brink of an earthquake may be predictable, the actual triggering of a quake — the growth of a small seismic disturbance into full-blown fault rupture — is believed by most scientists to contain at least an element of randomness. Assuming that’s so, no matter how well machines are trained, they may never be able to predict earthquakes as well as scientists predict other natural disasters.

“We don’t know what forecasting in regards to timing means yet,” Johnson said. “Would it be like a hurricane? No, I don’t think so.”

In the best-case scenario, predictions of big earthquakes will probably have time bounds of weeks, months or years. Such forecasts probably couldn’t be used, say, to coordinate a mass evacuation on the eve of a temblor. But they could increase public preparedness, help public officials target their efforts to retrofit unsafe buildings, and otherwise mitigate hazards of catastrophic earthquakes.

Johnson sees that as a goal worth striving for. Ever the realist, however, he knows it will take time. “I’m not saying we’re going to predict earthquakes in my lifetime,” he said, “but … we’re going to make a hell of a lot of progress.”
