The Azimuth Project
Experiments in El Niño analysis and prediction (Rev #5)

Background

A short video explains El Niño issues in a simple way:

Tutorial information on climate networks, and their application to El Niño signal processing:

This paper explains how climate networks can be used to recognize when El Niño events are occurring:

This paper on El Niño prediction created a stir:

A lot of the methodology seems to come from this free paper:

Action plan for improving on the paper

John said:

We would first need to get ahold of daily temperature data for “14 grid points in the El Niño basin and 193 grid points outside this domain” from 1981 to 2014. That’s 207 locations and 34 years. This data is supposedly available from the National Centers for Environmental Prediction and the National Center for Atmospheric Research Reanalysis I Project.

The paper starts by taking these temperatures, computing the average temperature at each day of the year at each location, and subtracting this from the actual temperatures to obtain “temperature anomalies”. In other words, we want a big array of numbers like this: the temperature on March 21st 1990 at some location, minus the average temperature on all March 21sts from 1981 to 2014 at that location.

Then they process this array of numbers in various ways, which I can explain… They consider all pairs of locations, so at some point they are working with 207 × 207 × 365 × 34 numbers. Is that a lot of numbers these days?
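For scale, 207 × 207 × 365 × 34 is about 5.3 × 10^8 values, roughly 4 GB in double precision, so it is large but well within reach of an ordinary workstation. The anomaly computation itself is simple. Here is a minimal R sketch, assuming (hypothetically) that the daily temperatures have already been reshaped into a 3D array indexed by location, day of year, and year, with leap days ignored:

```r
# Hypothetical layout: temps[location, day of year, year], so
# temps[i, d, y] is the temperature at location i on day d of year y.
n.locations <- 207
n.days <- 365
n.years <- 34

# Placeholder data, for illustration only
temps <- array(rnorm(n.locations * n.days * n.years),
               dim = c(n.locations, n.days, n.years))

# Climatology: for each (location, day) pair, the average over all years
climatology <- apply(temps, c(1, 2), mean)

# Anomalies: subtract the climatology from each year's temperatures
anomalies <- sweep(temps, c(1, 2), climatology)
```

After this, anomalies[i, d, y] is exactly the quantity described in the quote: the temperature at location i on day d of year y, minus the average temperature at that location on that day of the year over all 34 years.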

A first look at some of the data

This shows surface air temperatures over the Pacific for 1951.

Yearly mean temperatures over the Pacific in 1951

To see how this image was made: R code for pacific1951 image
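The linked page has the actual code. As a rough idea of what is involved, here is a hedged sketch, not the linked implementation, using the ncdf4 package. It assumes the NCEP/NCAR Reanalysis I daily surface air temperature file for 1951 (assumed here to be named air.sig995.1951.nc, with temperatures in Kelvin) has been downloaded:

```r
library(ncdf4)

nc <- nc_open("air.sig995.1951.nc")
air <- ncvar_get(nc, "air")   # assumed dimensions: lon x lat x day
lon <- ncvar_get(nc, "lon")
lat <- ncvar_get(nc, "lat")
nc_close(nc)

# Mean over all days of 1951, converted from Kelvin to Celsius
mean.temp <- apply(air, c(1, 2), mean) - 273.15

# Restrict to (roughly) the Pacific; latitudes come in decreasing
# order in these files, so reverse them (and the matrix columns)
# to satisfy image(), which wants increasing coordinates
pac <- lon >= 120 & lon <= 290
image(lon[pac], rev(lat), mean.temp[pac, ncol(mean.temp):1],
      xlab = "longitude", ylab = "latitude",
      main = "Mean surface air temperature, 1951 (°C)")
```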

Second look

Pacific temperatures 1955-1961

To see how this image was made: R code to display 6 years of Pacific temperatures
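Again the linked page has the actual code; a possible multi-panel version along the same lines as the sketch above, assuming one NCEP/NCAR file per year named air.sig995.YYYY.nc, might look like this (the linked page's title says 6 years, so 1955–1960 is used here for illustration):

```r
library(ncdf4)

years <- 1955:1960            # six years, one map per panel
par(mfrow = c(2, 3))
for (y in years) {
  nc <- nc_open(sprintf("air.sig995.%d.nc", y))
  air <- ncvar_get(nc, "air")
  lon <- ncvar_get(nc, "lon")
  lat <- ncvar_get(nc, "lat")
  nc_close(nc)

  # Yearly mean in Celsius, restricted to the Pacific as before
  mean.temp <- apply(air, c(1, 2), mean) - 273.15
  pac <- lon >= 120 & lon <= 290
  image(lon[pac], rev(lat), mean.temp[pac, ncol(mean.temp):1],
        xlab = "longitude", ylab = "latitude", main = y)
}
```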

category: climate, experiments