The Azimuth Project
Experiments in El Niño analysis and prediction (Rev #3, changes)


Background

A short video explains El Niño issues in a simple way:

Tutorial information on climate networks, and their application to El Niño signal processing:

This paper explains how climate networks can be used to recognize when El Niño events are occurring:

This paper on El Niño prediction created a stir:

A lot of the methodology seems to come from this free paper:

Action plan for improving on the paper

John said:

We would first need to get ahold of daily temperature data for “14 grid points in the El Niño basin and 193 grid points outside this domain” from 1981 to 2014. That’s 207 locations and 34 years. This data is supposedly available from the National Centers for Environmental Prediction and the National Center for Atmospheric Research Reanalysis I Project.

The paper starts by taking these temperatures, computing the average temperature at each day of the year at each location, and subtracting this from the actual temperatures to obtain “temperature anomalies”. In other words, we want a big array of numbers like this: the temperature on March 21st 1990 at some location, minus the average temperature on all March 21sts from 1981 to 2014 at that location.

Then they process this array of numbers in various ways, which I can explain… They consider all pairs of locations, so at some point they are working with 207 × 207 × 365 × 34 numbers. Is that a lot of numbers these days?
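The anomaly computation John describes can be sketched in a few lines of NumPy. This is only an illustration with random numbers in place of the real data: the array shapes follow the quote (207 locations, 34 years, 365 days), but the variable names and the synthetic temperatures are assumptions, not the paper's actual pipeline or the NCEP/NCAR data format.

```python
import numpy as np

# Hypothetical daily temperatures, shape (locations, years, days-of-year).
# The real data would come from the NCEP/NCAR Reanalysis I Project;
# here we just fill in random numbers with the stated dimensions.
rng = np.random.default_rng(0)
n_locations, n_years, n_days = 207, 34, 365
temps = rng.normal(15.0, 5.0, size=(n_locations, n_years, n_days))

# Climatology: the average temperature at each location on each day of
# the year, taken over all years (e.g. the mean of all March 21sts).
climatology = temps.mean(axis=1, keepdims=True)

# Anomalies: actual temperature minus that day-of-year average.
anomalies = temps - climatology

# By construction, each (location, day-of-year) anomaly series averages
# to zero across the years.
assert np.allclose(anomalies.mean(axis=1), 0.0)

# The pairwise analysis over all location pairs involves this many numbers:
print(n_locations * n_locations * n_days * n_years)  # 531756090
```

About half a billion numbers, i.e. a few gigabytes of floats: large, but well within reach of an ordinary workstation today.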

category: climate, experiments