Model/Technique Registration (*=required):
Information about your group/project:
Name(s) and e-mail(s) (please list primary contact first)*:
Mike Terkildsen (m.terkildsen@bom.gov.au)
Graham Steward (g.steward@bom.gov.au)
Associated Institution/Project name/Group name*:
Space Weather Services
Australian Bureau of Meteorology
Website url(s):
www.ips.gov.au
Information about your method:
Forecasting method name*:
Data-driven probabilistic flare forecast model
Shorthand unique identifier for your method (methodname_version, e.g. ASSA_1, ASAP_201201):
BoM_flare1
Short description*:
The BoM-flare1 flare forecast model is based on a logistic regression, driven by recent flare history and region characteristics such as magnetic and McIntosh classifications, area, and sunspot numbers. It is a fully automated model, without forecaster input.
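To illustrate the general form of such a model (not the trained BoM-flare1 itself), the sketch below shows a logistic regression mapping region features to a flare probability. The feature set, weights, and bias are hypothetical placeholders for illustration only.

```python
import math

def flare_probability(weights, bias, features):
    """Logistic regression: P(flare) = sigmoid(w . x + b).

    `weights` and `bias` here are illustrative values, not the
    trained BoM-flare1 parameters.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature vector: [recent M-flare count, scaled region
# area, scaled sunspot count, McIntosh-class complexity score]
features = [2.0, 0.5, 1.2, 0.8]
weights = [0.9, 0.6, 0.3, 0.7]
p = flare_probability(weights, -2.0, features)
```

For this illustrative input, `p` comes out around 0.73, i.e. the model would forecast a ~73% chance of an M-class flare in the prediction window.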
References:
Further Model Details:
(1)* Please specify if the forecast is human generated, human generated but model-based, model-based, or other.
BoM-flare1 is purely model-based
(2)* Does your flare prediction method forecast 'M1.0-9.9' or 'M and above'? Would it be difficult for you to adapt your method from one to another? Which forecast binning method do you prefer?
M and above (>=M1.0)
The statistical model is trained on the basis of >=M1.0. We could re-train the model on M1.0-9.9 if need be.
We prefer the >=M approach since our expectation is that users who are impacted by M-class flares will similarly be impacted by X-class (e.g. HF users are impacted by anything >=M). Our expectation therefore is that the >=M class probability covers users interested in flares at a lower cutoff threshold (and above), whereas the >=X class probability covers users interested in only the very large flares.
(3)* How do you specify active regions in your model? Do you relate their location to other schemes such as NOAA or Catania? If so, what are your criteria to relate them?
We independently identify regions from analysis of solar magnetograms, and match them to NOAA regions by angular distance (a match requires a separation of less than 6 degrees).
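A minimal sketch of this matching step, assuming regions are given as heliographic (latitude, longitude) pairs and NOAA regions as (number, latitude, longitude) tuples; the data layout and the helper names are assumptions, only the 6-degree threshold comes from the answer above.

```python
import math

def angular_distance(lat1, lon1, lat2, lon2):
    """Great-circle angular separation in degrees between two
    heliographic positions (spherical law of cosines)."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    cos_d = (math.sin(p1) * math.sin(p2)
             + math.cos(p1) * math.cos(p2) * math.cos(l1 - l2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))

def match_to_noaa(region, noaa_regions, threshold=6.0):
    """Return the nearest NOAA region tuple within `threshold`
    degrees of `region` (lat, lon), or None if no region is close
    enough."""
    best, best_d = None, threshold
    for noaa in noaa_regions:
        number, lat, lon = noaa
        d = angular_distance(region[0], region[1], lat, lon)
        if d < best_d:
            best, best_d = noaa, d
    return best
```

Taking the nearest region under the threshold (rather than the first) avoids ambiguity when two NOAA regions sit close together.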
(4)* Uncertainties, given as upper and lower bounds, are an optional field. Does your model provide uncertainties for the forecasted probability?
The current model does not produce uncertainties for the forecasted probabilities. Representing uncertainty in the forecasts is a work in progress. We currently use the model validation metrics to understand the uncertainties on the predictions.
(5)* For each forecast, what prediction window(s) does your method use? E.g. Does your method predict for the next 24 hours, 48, and 72 hours or the next 24 hours, 24-48 hours, 48-72 hours? (This information is useful for displaying only comparable methods together).
The current model forecasts flare probabilities for the next 24 hours.
(6)* Calibration levels are optional fields. Do you have calibration for the probabilities from your model? E.g. is a 40% forecast a 'high' probability for your method?
We do not provide general calibration levels for the probabilities. The models give explicit probabilities for flare events to occur, reflecting our true certainty (or uncertainty) that a flare will occur.