Climateprediction.net, or CPDN, is a distributed computing project to investigate and reduce uncertainties in climate modelling. It aims to do this by running hundreds of thousands of different models (a large Climate ensemble) using the donated idle time of ordinary personal computers, thereby leading to a better understanding of how models are affected by small changes in the many parameters known to influence the global climate.
The project relies on the volunteer computing model, using the BOINC framework: participants download tasks from the project's servers, run them on their personal computers during idle time, and return the results when complete.
CPDN, which is run primarily by Oxford University in England, has harnessed more computing power and generated more data than any other climate modelling project. It has produced over 35 million model years of data so far. As of January 2008, there were more than 145,000 participants from 201 countries, with a total BOINC credit of around 4.5 billion.
The aim of the Climateprediction.net project is to investigate the uncertainties in the various parameterizations that have to be made in state-of-the-art climate models (see "Modelling The Climate"). The model is run thousands of times with slight perturbations to various physics parameters (a 'large ensemble') and the project examines how the model output changes. These parameters are not known exactly, and the variations are kept within what is subjectively considered a plausible range. This allows the project to improve understanding of how sensitive the models are to small changes, and also to external factors such as changes in carbon dioxide levels and the sulphur cycle. In the past, estimates of climate change have had to be made using one or, at best, a very small ensemble (tens rather than thousands) of model runs. By using participants' computers, the project can improve understanding of, and confidence in, climate change predictions far beyond what would be possible using the supercomputers currently available to scientists.
The Climateprediction.net experiment should help to "improve methods to quantify uncertainties of climate projections and scenarios, including long-term ensemble simulations using complex models", identified by the Intergovernmental Panel on Climate Change (IPCC) in 2001 as a high priority. It is hoped that the experiment will give decision makers a better scientific basis for addressing one of the biggest potential global problems of the 21st century.
As shown in the graph above, the various models have a fairly wide distribution of results over time. For each curve, on the far right, there is a bar showing the final temperature range for the corresponding model version. As would be expected, the further into the future the model is extended, the wider the variance among the results. Roughly half of the variation depends on the future climate forcing scenario rather than on uncertainties in the model. Any reduction in those variations, whether from better scenarios or from improvements in the models, would be welcome. Climateprediction.net is working on the model uncertainties, not the scenarios.
The crux of the problem is that scientists can run models and see that x% of the models warm y degrees in response to z climate forcings, but how do we know that x% is a good representation of the probability of that happening in the real world? The answer is that scientists are uncertain about this and want to improve the level of confidence that can be achieved. Some models will be good and some poor at reproducing past climate when given past climate forcings and initial conditions (a hindcast). It makes sense to place more trust in the models that recreate the past well than in those that do so poorly; models that do poorly are therefore downweighted.
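The downweighting idea can be sketched in a few lines. The scheme below is purely illustrative (an inverse-square weighting chosen for exposition, not the project's published method), and all member sensitivities and hindcast errors are made-up numbers:

```python
def skill_weights(hindcast_errors):
    """Turn hindcast errors into normalised weights: the smaller the
    error against past climate, the larger the weight. An illustrative
    inverse-square scheme, not the project's actual method."""
    raw = [1.0 / (e * e) for e in hindcast_errors]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_probability(sensitivities, weights, threshold):
    """Weighted fraction of ensemble members warming more than `threshold`."""
    return sum(w for s, w in zip(sensitivities, weights) if s > threshold)

# Four hypothetical ensemble members: climate sensitivity (C) and hindcast error
sens = [2.1, 3.4, 5.2, 9.8]
errors = [0.8, 0.6, 1.0, 2.5]

w = skill_weights(errors)
# The unweighted fraction above 4.5 C is 0.5; downweighting the
# poorly-hindcasting high-sensitivity member reduces it.
print(round(weighted_probability(sens, w, 4.5), 3))
```

The point of the sketch is only that a hindcast score converts a raw ensemble count into a skill-adjusted probability.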
The different models that Climateprediction.net has distributed and will distribute are detailed below in chronological order. Anyone who has joined recently is therefore likely to be running the Transient Coupled Model.
Following a presentation at the World Climate Conference in Hamburg in September 1999 and a commentary in Nature entitled Do it yourself climate prediction in October 1999, thousands signed up to this supposedly imminently available program. The bursting of the dot-com bubble did not help, and the project realised it would have to do most of the programming itself rather than outsourcing it.
The 2003 launch offered only a Windows "classic" client. On 26 August 2004 a BOINC client was launched, supporting Windows, Linux and Mac OS X. "Classic" will continue to be available for a number of years in support of the Open University course. BOINC has stopped distributing classic models in favour of sulphur cycle models. A more user-friendly BOINC client and website called GridRepublic, which supports climateprediction.net and other BOINC projects, was released in beta in 2006.
A thermohaline circulation slowdown experiment was launched in May 2004 under the classic framework to coincide with the film The Day After Tomorrow. This program can still be run but is no longer downloadable. The scientific analysis has been written up in Nick Faull's thesis. A paper about the thesis is still to be completed. There is no further planned research with this model.
A sulphur cycle model was launched in August 2005. These models took longer to complete than the original ones, having five phases instead of three, and each timestep was also more computationally demanding.
By November 2005, the number of completed results totalled 45,914 classic models, 3,455 thermohaline models, 85,685 BOINC models and 352 sulphur cycle models. This represented over 6 million model years processed.
In February 2006, the project moved on to more realistic climate models. A BBC Climate Change Experiment was launched, attracting around 23,000 participants on the first day. The transient climate simulation introduced realistic oceans. This allowed the experiment to investigate changes in the climate response as the climate forcings change over time, rather than the equilibrium response to a single large change such as doubling the carbon dioxide level. The experiment has therefore moved on to running a hindcast of 1920 to 2000 as well as a forecast of 2000 to 2080. This model takes much longer to run.
The BBC gave the project publicity with over 120,000 participating computers in the first three weeks.
In March 2006, a high resolution model was released as another project, the Seasonal Attribution Project.
Climate sensitivities of greater than 5 °C are widely accepted as being catastrophic. The possibility of such high sensitivities being plausible given observations had been reported prior to the Climateprediction.net experiment but "this is the first time GCMs have produced such behaviour".
Even the models with very high climate sensitivity were found to be "as realistic as other state-of-the-art climate models". The test of realism was done with a root mean square error test. This does not check on realism of seasonal changes and it is possible that more diagnostic measures may place stronger constraints on what is realistic. Improved realism tests are being developed.
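The root mean square error test amounts to comparing a model's simulated fields with observations point by point. A minimal sketch, with made-up values standing in for the project's actual diagnostics and observational data:

```python
import math

def rmse(model, obs):
    """Root mean square error between simulated and observed values."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Illustrative annual-mean surface temperatures (degrees C) at a few locations;
# a real test would span many grid points and variables.
observed = [26.1, 14.2, 2.7, -18.5]
model_run = [25.4, 15.0, 1.9, -16.8]

print(round(rmse(model_run, observed), 3))  # a single aggregate skill score
```

As the text notes, a single aggregate score like this says nothing about, for example, the seasonal cycle, which is why stronger diagnostics are being developed.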
To obtain a reliable probability distribution function (pdf) of climate outcomes, the experiment needs to sample a very wide range of behaviours, even if only to rule some of them out as unrealistic. Without the whole range, one could not be confident that the pdf was reliable. It is therefore good to see that models with climate sensitivity as high as 11 °C are included. More worrying is the lack of models with climate sensitivity of less than 2 °C. The sulphur cycle experiment is likely to extend the range downwards.
When an internally consistent representation of the origins of model-data discrepancy is used to calculate the probability density function of climate sensitivity, the 5th and 95th percentiles are 2.2 K and 6.8 K respectively. These results are sensitive, particularly the upper bound, to the representation of the origins of model data discrepancy.
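Percentiles such as these are read off the ensemble's distribution of sensitivities. A minimal sketch of a linear-interpolation percentile, applied to a made-up sample of sensitivities (the project's actual pdf is built from weighted model runs, not a raw sample like this):

```python
def percentile(samples, p):
    """Percentile of a sample by linear interpolation, 0 <= p <= 100."""
    s = sorted(samples)
    k = (len(s) - 1) * p / 100.0  # fractional rank within the sorted sample
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical ensemble of climate sensitivities (degrees C)
sens = [2.0, 2.5, 3.0, 3.2, 3.5, 4.0, 4.6, 5.3, 6.1, 7.4, 9.0]

print(percentile(sens, 5), percentile(sens, 95))  # 5th and 95th percentiles
```

The reported 2.2 K and 6.8 K bounds come from the same kind of read-off, after the model-data discrepancy weighting described above.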
Each downloaded model comes with a slight variation in the various model parameters.
There is an initial "calibration phase" of 15 model years in which the model calculates the "flux correction"; extra ocean-atmosphere fluxes that are needed to keep the model ocean in balance (the model ocean does not include currents; these fluxes to some extent replace the heat that would be transported by the missing currents).
Then there is a "control phase" of 15 years in which the ocean temperatures are allowed to vary. The flux correction ought to keep the model stable, but feedbacks developed in some of the runs. There is a quality control check, based on the annual mean temperatures, and models which fail this check are discarded.
Then there is a "double CO2 phase" in which the CO2 content is instantaneously doubled and the model run for a further 15 years, which in some cases is not quite sufficient model time to settle down to a new (warmer) equilibrium. In this phase some models which produced physically unrealistic results were again discarded.
The quality control checks in the control and 2*CO2 phases were quite weak: they suffice to exclude obviously unphysical models but do not include (for example) a test of the simulation of the seasonal cycle; hence some of the models passed may still be unrealistic. Further quality control measures are being developed.
The temperature in the doubled-CO2 phase is extrapolated exponentially to estimate the equilibrium temperature. The difference between this and the control-phase temperature then gives a measure of the climate sensitivity of that particular version of the model.
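Because the remaining warming decays roughly exponentially, the yearly temperature increments form an approximately geometric series, and the equilibrium temperature can be estimated from the tail of that series. A sketch of this extrapolation, assuming a simple exponential approach T(t) = T_eq − A·exp(−t/τ); the project's exact fitting procedure may differ:

```python
import math

def equilibrium_temperature(temps):
    """Extrapolate a warming curve approaching equilibrium.

    Assumes annual-mean temperatures follow T(t) = T_eq - A * exp(-t / tau),
    so successive yearly increments decay geometrically; the warming still
    to come is then the sum of a geometric series.
    """
    diffs = [b - a for a, b in zip(temps, temps[1:])]
    # Estimate the common ratio from successive increments.
    ratios = [d2 / d1 for d1, d2 in zip(diffs, diffs[1:]) if d1 != 0]
    r = sum(ratios) / len(ratios)
    remaining = diffs[-1] * r / (1.0 - r)  # geometric tail beyond the last year
    return temps[-1] + remaining

def climate_sensitivity(control_mean, doubled_co2_temps):
    """Sensitivity = extrapolated equilibrium minus the control-phase mean."""
    return equilibrium_temperature(doubled_co2_temps) - control_mean

# Synthetic 15-year doubled-CO2 run whose true equilibrium warming is 3.4 C
control = 13.8
run = [control + 3.4 - 3.4 * math.exp(-t / 5.0) for t in range(15)]
print(round(climate_sensitivity(control, run), 3))  # recovers 3.4
```

This shows why 15 years can suffice even though the model has not fully settled: the extrapolation fills in the geometric tail that the run itself never reaches.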
The real-time desktop visualisation for the model launched in 2003 was developed by NAG, enabling users to track the progress of their simulation as cloud cover and temperatures change over the surface of the globe. Other, more advanced visualisation programs in use include CPView and IDL Advanced Visualisation, which have similar functionality. CPView was written by Martin Sykes, a participant in the experiment. The IDL Advanced Visualisation was written by Andy Heaps of the University of Reading (UK), and modified to work with the BOINC version by Tessella Support Services plc.
Only CPView allows you to examine unusual diagnostics beyond the usual temperature, pressure, rainfall, snow and cloud data. Up to five sets of data can be displayed on a map, and it also offers a wider range of functions, such as max, min and additional memory functions.
The Advanced Visualisation offers graphs of local areas over one, two and seven days, as well as the more usual graphs of seasonal and annual averages (which both packages provide). There are also latitude-height and time-height plots.
CPView has a much smaller download and works with Windows 98. Running the visualisation/screensaver may slow down the processing, so leaving it running is not recommended.