The Azimuth Project
Experiments in El Niño analysis and prediction (Rev #14)

Background

A short video explains El Niño issues in a simple way:

Tutorial information on climate networks, and their application to El Niño signal processing:

This paper explains how climate networks can be used to recognize when El Niño events are occurring:

This paper on El Niño prediction created a stir:

A lot of the methodology seems to come from this free paper:

This paper is also relevant:

Ludescher et al on El Niño forecasting by cooperativity detection

This paper:

uses data available from the National Centers for Environmental Prediction and the National Center for Atmospheric Research Reanalysis I Project:

More precisely, there is a collection of files here containing worldwide daily average temperatures on a 2.5° latitude × 2.5° longitude grid (144 × 73 grid points), from 1948 to 2010. If you go here, the website will help you extract data for a chosen rectangle of the grid and a chosen time interval. These are “netCDF files”; an R package for working with these files is here, and some information on their layout is here.
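To make the grid layout concrete, here is a small Python sketch (the project's own scripts are in R) mapping a latitude and longitude to indices on this 2.5° grid. It assumes the usual NCEP/NCAR layout of latitudes running from 90°N down to 90°S and longitudes eastward from 0° to 357.5°; check the actual file's coordinate variables before relying on this.

```python
def grid_indices(lat, lon):
    """Map (lat, lon) in degrees to (row, col) on the 2.5-degree grid.

    Assumes latitudes run from 90 N down to -90 (73 rows) and
    longitudes run eastward from 0 to 357.5 (144 columns), the
    usual layout of the NCEP/NCAR reanalysis grid.
    """
    row = round((90.0 - lat) / 2.5)   # 90 N -> row 0, 90 S -> row 72
    col = round((lon % 360.0) / 2.5)  # 0 E -> col 0, 357.5 E -> col 143
    return row, col
```

For example, the equator at the date line (0°, 180°E) lands at row 36, column 72, the centre of the grid.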

The paper uses daily temperature data for “14 grid points in the El Niño basin and 193 grid points outside this domain” from 1981 to 2014, as shown here:

That’s 207 locations and 34 years. The paper starts by taking these temperatures, computing the average temperature for each day of the year at each location, and subtracting this from the actual temperatures to obtain “temperature anomalies”. In other words, a typical entry in the resulting array is the temperature on March 21st 1990 at some location, minus the average temperature over all March 21sts from 1981 to 2014 at that location.
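The anomaly computation just described can be sketched in Python (the project's own scripts are in R). This toy version assumes the data covers whole years of equal length and ignores leap days:

```python
def anomalies(temps, days_per_year):
    """temps[d][s]: temperature on day d at site s, over whole years.

    Returns temps minus the multi-year average for the same day of
    year and site -- e.g. the March 21st value at a site minus the
    mean of all March 21sts at that site.
    """
    n_days = len(temps)
    n_sites = len(temps[0])
    n_years = n_days // days_per_year
    anom = [[0.0] * n_sites for _ in range(n_days)]
    for doy in range(days_per_year):
        for s in range(n_sites):
            mean = sum(temps[y * days_per_year + doy][s]
                       for y in range(n_years)) / n_years
            for y in range(n_years):
                d = y * days_per_year + doy
                anom[d][s] = temps[d][s] - mean
    return anom
```

With two 2-day "years" at one site, `anomalies([[10.0], [20.0], [14.0], [24.0]], 2)` subtracts the day-of-year means 12 and 22, giving `[[-2.0], [-2.0], [2.0], [2.0]]`.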

They process this data as explained here and attempt to use the result to predict the Nino3.4 index:

which is the area-averaged sea surface temperature (SST) in the region 5°S–5°N and 170°–120°W.
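As an illustration of that definition, here is a hedged Python sketch of an area-weighted average over the Niño3.4 box. The cos(latitude) weighting is a standard approximation for equal-area averaging on a latitude–longitude grid; this is not the official index computation, just the idea behind it.

```python
import math

def nino34(sst, lats, lons):
    """Area-weighted mean SST over the Nino3.4 box (5S-5N, 170W-120W).

    sst[i][j] is the SST at latitude lats[i], longitude lons[j]
    (longitudes in degrees east, 0-360).  Grid cells are weighted
    by cos(latitude) to approximate equal-area averaging.
    """
    total = weight = 0.0
    for i, lat in enumerate(lats):
        if not -5.0 <= lat <= 5.0:
            continue
        w = math.cos(math.radians(lat))
        for j, lon in enumerate(lons):
            if 190.0 <= lon <= 240.0:   # 170W-120W in degrees east
                total += w * sst[i][j]
                weight += w
    return total / weight
```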

Here is what they get:

A first look at some of the data

This shows surface air temperatures over the Pacific for 1951:

Yearly mean temperatures over the Pacific in 1951

To see how this image was made: R code for pacific1951 image

A second look

To see how this image was made: R code to display 6 years of Pacific temperatures

Third look

Temperatures in the Pacific in early 1957 and 1958

The rectangle is roughly the area where the El Niño index NINO3.4 is defined.

Conversion code

Here is an R script to convert netCDF data to a simple “flat” format which can be read by other programs. The output format is described in the code.

Some correlations and covariances

The images below show local correlations and covariances of temperatures over the Pacific, calculated over the year 1951: correlations on the left, covariances on the right. The correlations are shown on a scale where black is zero and white is 1; all of the values were positive, the smallest correlation being 0.26. The cube roots of the covariances are shown on a scale where black is zero and white is the maximum value; a linear mapping of the values shows just a few pale pixels near the corners, with the rest black. Summary values for the set of covariances:

    Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
 0.09237  0.22520  0.40570  1.28000  1.11200 22.77000 
Correlations and covariances in Pacific temperatures
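For reference, the statistics behind these images can be sketched in Python (the images themselves were made in R): sample covariance, Pearson correlation, and the cube-root grey-scale mapping described above.

```python
def covariance(x, y):
    """Sample covariance of two equal-length daily series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

def correlation(x, y):
    """Pearson correlation: covariance scaled by the two standard deviations."""
    return covariance(x, y) / (covariance(x, x) * covariance(y, y)) ** 0.5

def grey_level(cov, cov_max):
    """Cube-root mapping to [0, 1]: black = 0, white = the maximum value.

    A linear mapping (cov / cov_max) would leave all but the largest
    covariances nearly black, as noted above.
    """
    return (cov / cov_max) ** (1.0 / 3.0)
```

For instance, a covariance one eighth of the maximum maps to a mid-grey of 0.5, rather than the near-black 0.125 a linear scale would give.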

More covariances

The PDF file Covariances near equator shows covariances between different places near the equator in the Pacific and at two different time delays. Most of the details are in the PDF, but some things are not:

  • the graphs show the median of the covariances over the region

  • black means zero geographical displacement, and paler greys show displacements increasing by 2.5 degrees.

The idea is to plot, every 5 days from 1951 through 1979, for a region straddling the equator, for delays of 1 and 5 days, and for 0 to 7 eastward steps of 2.5 degrees, the covariances of the temperature over six months (183 days). (The exact construction is ambiguous.) Here are the results:
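One such covariance can be sketched in Python as follows, under one reading of the construction above: a base series and a second series delayed by a given number of days, compared over a 183-day window. The eastward displacement just means the second series comes from a grid point k × 2.5° further east; the medians over the region are then taken as in the bullets above.

```python
def lagged_cov(series_a, series_b, start, window=183, delay=1):
    """Covariance, over `window` days starting at day `start`, between
    series_a and a copy of series_b delayed by `delay` days.

    series_a, series_b: daily temperature series at two grid points,
    e.g. displaced eastward by k steps of 2.5 degrees.
    """
    x = series_a[start:start + window]
    y = series_b[start + delay:start + delay + window]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
```

To reproduce the plots' time axis one would evaluate this at `start = 0, 5, 10, …`, for `delay` in (1, 5) and each of the 8 displacements.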

category: climate, experiments