
Last Updated: 04/29/2022

CME Arrival Time and Impact Working Team


Team Leads: C. Verbeke (KU Leuven), M.L. Mays (CCMC), A. Taktakishvili (CCMC)
Scoreboard Leads: M.L. Mays, co-lead TBD
Communications: See the ISWAT team page for an up-to-date list of participants


Eric Adamson* · Tanja Amerstorfer · Anastasios Anastasiadis · Nick Arge · Michael Balikhin* · David Barnes* · Francois-Xavier Bocquet · Yaireska Collado-Vega* · Pedro Corona-Romero* · Jackie Davies · Curt de Koning* · Craig DeForest* · Manolis K. Georgoulis* · Carl Henney · Bernard Jackson* · Lan Jian · Masha Kuznetsova* · Kangjin Lee · Noé Lugaz · Anthony Mannucci* · Periasamy K Manoharan* · Daniel Matthiä* · Leila Mays* · Mike McAleenan* · Slava Merkin* · Marilena Mierla · Joseph Minow* · Christian Moestl · Karin Muglach* · Teresa Nieves · Nariaki Nitta · Marlon Nunez · Dusan Odstrcil* · Mathew Owens · Evangelos Paouris · Athanasios Papaioannou · Spiros Patsourakos · vic pizzo · Pete Riley · Alexis Rouillard · Camilla Scolini · Howard Singer* · Robert Steenburgh* · Aleksandre Taktakishvili* · Manuela Temmer · W. Kent Tobiska* · Christine Verbeke* · Angelos Vourlidas · Katherine Winters* · Alexandra Wold* · KiChang Yoon · Emiliya Yordanova* · Jie Zhang ·

Tarek Al-Ubaidi* · Suzy Bingham* · Steven Brown* · Baptiste Cecconi · David Falconer · Natalia Ganushkina* · Laura Godoy* · Bernd Heber · Christina Kay · Adam Kellerman* · Burcu Kosar* · Alexander Kosovichev* · Yuki Kubo · Peter MacNeice* · Chigomezyo Ngwira* · Steve Petrinec* · Nikolai Pogorelov* · Lutz Rastaetter* · Ian Richardson* · Neel Savani* · Barbara Thompson* · Karlheinz Trattner* · Rodney Viereck · Brian Walsh · Chunming Wang* · Daniel Welling* · Yongliang Zhang* · Yihua Zheng* ·

*attended 2017 CCMC-LWS working meeting

Working Team Goals

This team will evaluate how well different models/techniques can predict CME arrival time and impact for a set of historical events, with open communication with the community. The work is complementary to the CME Scoreboard activity, whose goal is to collect and display real-time CME predictions and to facilitate the validation of real-time predictions.

  • Evaluate where we stand with CME arrival time and impact prediction
  • Establish metrics agreed upon by the community
  • Provide a benchmark against which future models can be assessed

User Needs

  • All types of CMEs should be taken into account, since forecasters deal with all types. We will keep track of the different CME types so that validation can be performed for different subsets.
  • Modelers, researchers, and users each have different needs. Skill scores will be computed that serve all of these needs, and the skill scores useful for users will be clearly described in an easy-to-understand context.
  • We will compute contingency table skill scores using different hit definition intervals of 3, 6, 12, 18, 24, and 36 hours, as different users have different needs.
  • Users would appreciate a confidence interval along with the predicted arrival times. For models that produce a confidence interval, skill scores will also be computed using this interval.
  • Users would appreciate a system that keeps track of CME misses (ICME observed but not predicted); this could be added to the CME Scoreboard in some manner. An option to submit a prediction of "no arrival" to the CME Scoreboard is also desired.
  • Users are interested in arrival time but also impact strength and duration (in that order).
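For concreteness, the hit/miss bookkeeping implied by these points can be sketched in a few lines. This is an illustrative sketch only: the function name and the convention that an arrival predicted outside the hit window counts as a false alarm are assumptions, not team decisions.

```python
from datetime import datetime

def classify_prediction(predicted, observed, window_hours):
    """Classify one CME arrival prediction against the observed ICME arrival.

    predicted : datetime or None (None means a "no arrival" prediction)
    observed  : datetime or None (None means no ICME was observed)
    window_hours : half-width of the hit definition interval (e.g. 3..36 h)
    """
    if predicted is None and observed is None:
        return "correct rejection"
    if predicted is None:
        return "miss"          # ICME observed but not predicted
    if observed is None:
        return "false alarm"   # arrival predicted, nothing observed
    error_h = abs((predicted - observed).total_seconds()) / 3600.0
    # Convention assumed here: a predicted arrival outside the hit window
    # counts as a false alarm (conventions vary between studies).
    return "hit" if error_h <= window_hours else "false alarm"
```

With event B, for example, a prediction of 2013-03-17 02:48 UT against an observed arrival of 2013-03-17 05:59 UT (an error of about 3 h 11 min) is a hit at the 6-hour window but not at the 3-hour window.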

Working Team Deliverables

  • Catalog of metrics and how they relate to user needs and validation needs.
  • Model assessments with selected metrics for selected time intervals.
  • Online database of model inputs, outputs, and observations.
  • Publication describing model assessment results summarizing where we stand with CME arrival time and impact prediction.

By April 2017: Deliver the first two items for at least two time intervals and a handful of models, and make a start on the third item through collaboration with the Information Architecture for Interactive Archives (IAIA) working team. Collaborate with the working team: Assessment of Understanding and Quantifying Progress Toward Science Understanding and Operational Readiness.

Summary of Working Team Tasks

  • Identify and discuss user needs
  • Discuss and select time intervals to study — expand as needed
  • Discuss and develop a set of relevant skill scores, and relate them to user needs
  • Identify sources of uncertainty
  • Produce model/technique output for intervals of study
  • Perform model assessments with selected metrics

Participating Models

Draft list of models/techniques participating in this working team:

  • DBM (Vrsnak & Zic)
  • ElEvo (Ellipse Evolution) (Moestl)
  • ElEvoHI (Ellipse Evolution based on HI)
  • Enhanced drag-based model (Hess & Zhang) [set 1 results]
  • SARM (Núñez) [set 1 results]
  • WSA-ENLIL+Cone (Arge, Odstrcil) [set 1 results]

Contact us to add your model.
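Several of these techniques are drag-based. As background, here is a minimal sketch of the analytic drag-based model solution (the standard DBM equation of motion, dv/dt = -γ(v - w)|v - w|), with the 1 AU arrival time found by bisection. The default parameter values are illustrative assumptions, not inputs from this study.

```python
import math

AU_KM = 1.496e8   # 1 AU in km
RS_KM = 6.957e5   # 1 solar radius in km

def dbm_state(t, r0_km, v0, w, gamma):
    """Analytic DBM solution: (distance [km], speed [km/s]) at time t [s]
    for initial distance r0_km, initial CME speed v0 [km/s], ambient solar
    wind speed w [km/s], and drag parameter gamma [km^-1]."""
    s = 1.0 if v0 >= w else -1.0
    r = s / gamma * math.log(1.0 + s * gamma * (v0 - w) * t) + w * t + r0_km
    v = (v0 - w) / (1.0 + s * gamma * (v0 - w) * t) + w
    return r, v

def dbm_arrival(r0_rs=21.5, v0=800.0, w=450.0, gamma=0.2e-7, target_km=AU_KM):
    """Bisect for the time at which the CME front reaches target_km.
    Returns (transit time [h], impact speed [km/s])."""
    r0 = r0_rs * RS_KM
    lo, hi = 0.0, 30 * 86400.0          # bracket: 0 to 30 days
    while hi - lo > 60.0:               # 1-minute tolerance
        mid = 0.5 * (lo + hi)
        r, _ = dbm_state(mid, r0, v0, w, gamma)
        if r < target_km:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    _, v = dbm_state(t, r0, v0, w, gamma)
    return t / 3600.0, v
```

For these illustrative inputs the sketch gives a transit time of roughly 2-3 days and an impact speed between the ambient wind speed and the initial CME speed, as expected for a decelerating event.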


List of Time Intervals in this Study
First set of events: a small core selection of 4 events to be modeled by the working meeting in April 2017. For each event, we will release a set of CME model parameters. Users are free to use those or their own parameters for this set of events. The modeled runs will be used for the first tests of our metrics before moving to a larger set of events.

Of the four events, two will be hits, one problematic hit, and one false alarm. The two hits will overlap with the IMF Bz working team’s event list to reduce the overall modeling burden (for those models that predict both arrival and Bz). If desired, the CME parameters provided below may be used as model input.

A) 3 April 2010 10:33 UT (hit)
B) 15 March 2013 07:12 UT (hit)
C) 15 March 2015 01:48 UT (hit; problematic, many models predict a late arrival)
D) 7 January 2014 18:24 UT (false alarm; only a weak discontinuity arrives)

Larger set of events: The selection of a larger set of 100 events is currently under discussion; please fill out the survey with your thoughts. During the April 2014 meeting, we agreed to use a set of 100 events for statistical significance. Both users and scientists agreed that all types of CMEs should be taken into account, but we shall flag each CME with its corresponding category for later use and statistics.

For the CME arrival time and plasma parameters from observations, the following has been proposed:

  • CME arrival times will be taken from currently existing CME catalogues. For cases where the CME can be found in multiple catalogues, one catalogue will be chosen at random.
  • 1-hour-averaged OMNI data will be used for solar wind impact parameters such as the peaks of the density and velocity.
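As an illustration of the second point, extracting a peak impact parameter from the 1-hour-averaged data could be done along these lines. The helper name and the skipping of fill values are assumptions for the sketch, not a team-specified procedure.

```python
from datetime import datetime, timedelta

def peak_in_interval(times, values, start, end):
    """Peak of an hourly-averaged in-situ series (e.g. OMNI density or
    speed) within an ICME interval [start, end].

    Fill values are represented as None and skipped; returns None if no
    valid samples fall inside the interval.
    """
    window = [v for t, v in zip(times, values)
              if start <= t <= end and v is not None]
    return max(window) if window else None
```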

Our goal is to see where each model stands: which arrival and impact solar wind parameters your model predicts well and which it predicts poorly, so that we can advance and improve our models. To this end, we would like to fix the model input for the 100 events.

We have come up with the following proposal:

  • Each model will provide results for the 100 events using the fixed input parameters provided by the 3D CME kinematics team, as well as a fixed magnetogram. The 3D CME kinematics team will provide the GCS model inputs. If your model does not use such input, or uses slightly adjusted input, an "in-between" program/algorithm may be used to process the provided input parameters. This is only acceptable if all 100 events are processed in the exact same manner.
  • For those models that cannot provide results for all 100 events, a subset will be provided.
  • Optionally, modelers can also submit results for the 100 events using their own best parameters.
  • There will be an extra (smaller) set of events that have CME observations at other 1 AU spacecraft. It is not necessary for the models to perform their validation on this set; however, doing so is highly appreciated, as it will quantify how well we can perform compared to observations at Earth.

In agreement with the IMF Bz team, we will have some event overlap so that we can lower the burden for those models that are providing data/model outputs for both teams. Considerations for event selection are listed under Ongoing Discussion Items.

CME Parameters for initial event set

See Table (Google Sheets).

Optional CME Parameters for core 4 event subset

See Table (Google Sheets).

Results for core 4 event subset

Summary Tables

See Table (Google Sheets).

Metrics and Quantities

Physical Quantities and Metrics for Model Validation

ICME quantities:

  • arrival time (RMSE, Mean absolute error, Mean error)
  • max magnetic field magnitude, speed, density, temperature (correlation coefficient)
  • duration
  • resulting geomagnetic storm strength (Kp, Dst, ...)
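The arrival-time error metrics in the first bullet can be written out directly; a minimal sketch follows (the sign convention predicted minus observed, so a positive mean error means late predictions, is an assumption here):

```python
import math

def arrival_time_errors(predicted_h, observed_h):
    """Mean error, mean absolute error, and RMSE of arrival-time errors,
    all in hours; errors are taken as predicted minus observed."""
    errs = [p - o for p, o in zip(predicted_h, observed_h)]
    n = len(errs)
    me = sum(errs) / n                              # bias: late (+) / early (-)
    mae = sum(abs(e) for e in errs) / n             # typical error magnitude
    rmse = math.sqrt(sum(e * e for e in errs) / n)  # penalizes large outliers
    return me, mae, rmse
```

Note that a model can have zero mean error (no systematic bias) while still having a large mean absolute error, which is why all three are listed.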


  • Categorical (yes/no) predictions:
    • skill scores based on contingency tables (Hit rate (POD), False alarm rate, False alarm ratio, Bias, Accuracy, Threat Score, Base Rate, Proportion Correct, HK, and HSS); compute all scores and arrival time errors using different hit definition intervals of 3, 6, 12, 18, 24, and 36 hours
    • probabilistic and continuous predictions can be converted to categorical using a threshold
  • Probabilistic predictions:
    • Reliability diagram, Brier Skill Score, ...
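The contingency-table scores listed above follow directly from the 2x2 table of hits (a), false alarms (b), misses (c), and correct rejections (d). A sketch using the standard definitions (the cell labels a, b, c, d are conventional notation, not from this page):

```python
def contingency_scores(hits, false_alarms, misses, correct_rejections):
    """Skill scores from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct rejections."""
    a, b, c, d = hits, false_alarms, misses, correct_rejections
    n = a + b + c + d
    return {
        "POD": a / (a + c),              # hit rate / probability of detection
        "FAR": b / (a + b),              # false alarm ratio
        "POFD": b / (b + d),             # false alarm rate
        "bias": (a + b) / (a + c),       # >1 over-forecasts arrivals
        "TS": a / (a + b + c),           # threat score (critical success index)
        "PC": (a + d) / n,               # proportion correct / accuracy
        "base_rate": (a + c) / n,        # climatological event frequency
        "HK": a / (a + c) - b / (b + d), # Hanssen-Kuipers discriminant
        "HSS": 2.0 * (a * d - b * c) /
               ((a + c) * (c + d) + (a + b) * (b + d)),  # Heidke skill score
    }
```

Note that the false alarm ratio b/(a+b) and the false alarm rate b/(b+d) are distinct quantities, which is why both appear in the list above.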

See related discussion items on the Ongoing Discussion Items page

Model Results

CME Arrival Time and Impact Working Team: Enhanced drag-based model (Hess & Zhang) results for set 1

20100403
Carrington Lon: 260.785  HEEQ Lat: -26.273  Tilt: 14.535  HA: 24.975  Aspect Ratio: 0.378943

v0: 827.09 km/s vsw: 512.446 km/s

Shock Arrival Time: 2010/04/05 09:25 UTC Shock Speed at 1 AU: 740.526 km/s

Flux Rope Arrival Time: 2010/04/05 15:02 UTC Flux Rope Speed at 1 AU: 651.813 km/s

20130315
Carrington Lon: 57.0204  HEEQ Lat: -6.7086  Tilt: 54.2232  HA: 27.1116  Aspect Ratio: 0.388

v0: 1199.0272 km/s vsw: 429.3 km/s

Shock Arrival Time: 2013/03/17 02:48 UTC Shock Speed at 1 AU: 767.10 km/s

Flux Rope Arrival Time: 2013/03/17 20:45 UTC Flux Rope Speed at 1 AU: 508.708 km/s

20140107
Carrington Lon: 140.504  HEEQ Lat: -30.199  Tilt: 45.8388  HA: 35.496  Aspect Ratio: 0.373

v0: 1611.8 km/s vsw: 380.5 km/s

Shock Arrival Time: 2014/01/09 03:21 UTC Shock Speed at 1 AU: 793.76 km/s

Flux Rope Arrival Time: 2014/01/09 10:25 UTC Flux Rope Speed at 1 AU: 622.94 km/s

20150315
Carrington Lon: 209.07  HEEQ Lat: -11.733  Tilt: -54.783  HA: 27.39  Aspect Ratio: 0.40

v0: 782.2 km/s vsw: 420.1 km/s

Shock Arrival Time: 2015/03/17 07:32 UTC Shock Speed at 1 AU: 558.87 km/s

Flux Rope Arrival Time: 2015/03/17 12:33 UTC Flux Rope Speed at 1 AU: 504.24 km/s

CME Arrival Time and Impact Working Team: SARM results for set 1

CME observed at 21.5 Rs: 2010-04-03 17:16
Time at C2: 2010-04-03 09:54
Predictions for Earth:
- In-situ shock speed: 549.40 km/s
- Shock arrival time: 2010-04-06 04:23

CME observed at 21.5 Rs: 2013-03-15 09:03
Time at C2: 2013-03-15 06:54
Predictions for Earth:
- In-situ shock speed: 869.46 km/s
- Shock arrival time: 2013-03-17 04:05

CME observed at 21.5 Rs: 2015-03-15 04:40
Time at C2: 2015-03-15 02:00
- Associated flare: C9.1 (S22W29). Peak at 2015-03-15 01:15
Predictions for Earth:
- In-situ shock speed: 585.32 km/s
- Shock arrival time: 2015-03-17 12:38

CME observed at 21.5 Rs: 2014-01-07 19:48
Time at C2: 2014-01-07 18:24
Predictions for Earth:
- In-situ shock speed: 1082.58 km/s
- Shock arrival time: 2014-01-09 07:40

Taking into account DONKI's IP shock arrival times, the prediction errors were:
CME1: Observed shock arrival time at 2010-04-05T07:54Z (a). Prediction error: 20 h 29 min
CME2: Observed shock arrival time at 2013-03-17T05:59Z (a). Prediction error: -1 h 54 min
CME3: Observed shock arrival time at 2015-03-17T04:05Z (a). Prediction Error: 8 h 33 min
CME4: Observed shock arrival time at 2014-01-09T19:32Z (b). Prediction Error: -11 h 52 min
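These errors follow the convention predicted minus observed, so a negative value means an early prediction. A small sketch reproducing the arithmetic (the helper name is illustrative):

```python
from datetime import datetime

def signed_error_str(predicted, observed):
    """Signed arrival-time prediction error (predicted - observed),
    formatted as 'H h MM min'; a leading '-' means an early prediction."""
    total_min = round((predicted - observed).total_seconds() / 60.0)
    sign = "-" if total_min < 0 else ""
    h, m = divmod(abs(total_min), 60)
    return f"{sign}{h} h {m:02d} min"
```

For CME2, for example, the 2013-03-17 04:05 prediction against the 05:59 observed arrival gives -1 h 54 min, matching the value listed above.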

About these results
SARM makes predictions of shock speeds and arrival times. It is a drag-based model described by a 1D differential equation that was calibrated with a dataset of shocks from 1997 to 2016, observed from 0.72 to 8.7 AU. The most recent calibration, using NASA/GSFC's DONKI database, yields a POD of 63%, a FAR of 43%, and a mean absolute error of arrival time predictions of 11 h 48 min for the period 2010-2016.

Recently, SARM was set up as a real-time application whose predictions are triggered by CME observations at 21.5 Rs from DONKI. For each CME, SARM automatically checks if there is an associated flare, in which case it also uses flare data (from NOAA/SWPC database) to refine its forecasts. Predictions may be 'no shock' depending on the triggering criteria described in the aforementioned paper, as well as new criteria found from recent data.

The real-time SARM module has a front-end form (http://spaceweather.uma.es/shock/predictions) that allows interested users to run this model with historical data for specific dates. The SARM results for set 1 are shown above.

The paper on the SARM model is: Núñez, M., T. Nieves-Chinchilla, and A. Pulkkinen (2016), Prediction of Shock Arrival Times from CME and Flare Data, Space Weather, 14, doi:10.1002/2016SW001361.

CME Arrival Time and Impact Working Team: WSA-ENLIL+Cone (time-dependent) (Arge, Odstrcil) results for set 1

Modeled runs:

Run Number             | Key Words                                            | Model Type  | Model | Model Version | Carrington Rotation Start | Carrington Rotation End | Start Date | End Date
Leila_Mays_033117_SH_1 | 2010-04-03 CME. CME arrival time working team set 1. | Heliosphere | ENLIL | 2.8f          | 0                         | 0                       | -          | -
Leila_Mays_033117_SH_2 | 2010-04-03 CME. CME arrival time working team set 1. | Heliosphere | ENLIL | 2.8f          | 0                         | 0                       | -          | -
Leila_Mays_033117_SH_3 | 2014-01-07 CME. CME arrival time working team set 1. | Heliosphere | ENLIL | 2.8f          | 0                         | 0                       | -          | -
Leila_Mays_033117_SH_4 | 2010-04-03 CME. CME arrival time working team set 1. | Heliosphere | ENLIL | 2.8f          | 0                         | 0                       | -          | -

Ongoing Discussion Items

➡ We invite you to voice your opinion on the preliminary decisions that we made during the April 2017 meeting by filling in this questionnaire (it takes only a few minutes of your time).

A draft list of potential discussion items is below; the team will keep adding items and sorting them by priority.

  • Do we want to fix the CME input parameters and input magnetograms (if applicable) for all models? If so, is the team comfortable using CME parameters from a catalog to remove bias? Modelers may also submit another set of results with their best performing CME parameters in addition to the fixed ones.
  • See event selection discussion items.
  • How do interacting or multiple CME events impact the chosen metrics?
  • Do we consider multi-spacecraft event validation?
  • What is a good baseline model or climatology to compare against?
  • What are the effects of the model inputs (model parameters, CME parameters, input magnetograms, etc.) on the CME arrival time and impacts?
  • For the hit calculation:
    • How do we define a categorical yes/no for "model predicted arrival": by human analysis of model results or by an algorithm? And with what analysis method?
  • How do we quantify uncertainty in the skill score results, based on validation sample size, uncertainties in observations, and any other sources?
  • Consider extracting an "impact parameter" from model results and validating this.
  • Over what interval should average or max in-situ plasma observations be derived?
  • How can we validate and quantify the effect of the background solar wind prediction on the arrival time prediction?
  • Event selection: Should we have a "training set", "validation set", and "test set" — where the "test" set is not revealed until a later stage?
  • What are the best ways to determine the uncertainty/confidence of the arrival time prediction?


Meetings, Reports & Resources



Resources: Observation Data
There are a variety of ICME catalogs that can be used by this working team:

Latest News



  • Publication from our working team: Verbeke et al. (2019), Benchmarking CME Arrival Time and Impact: Progress on Metadata, Metrics, and Events, Space Weather, 17, doi:10.1029/2018SW002046. [arXiv]