I’ve recently gotten quite interested in the interpretability problem for time-series modelling, i.e. how might I develop a system to interpret the output of a black-box model that classifies an input time series?

I recently found LASTS, a similar endeavour focused on single-sample counterfactual analysis. In essence, LASTS provides three outputs (a rough sketch of how they might fit together follows the list):

  1. A saliency map,
  2. Exemplar and counter-exemplar time series,
  3. A shapelet tree classifier.
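
To make the three outputs concrete, here is a minimal Python sketch of how a single-sample explanation could be bundled together. This is my own hypothetical structure for illustration, not the actual LASTS API; the names (`TimeSeriesExplanation`, `summarise`, the field names) are assumptions.

```python
# Hypothetical sketch (not the actual LASTS API) of bundling the three
# outputs of a single-sample time-series explanation.
from dataclasses import dataclass
import numpy as np


@dataclass
class TimeSeriesExplanation:
    saliency_map: np.ndarray             # per-timestep importance, same length as the input series
    exemplars: list[np.ndarray]          # similar series classified the same way as the input
    counter_exemplars: list[np.ndarray]  # similar series classified differently
    shapelet_rule: str                   # human-readable rule extracted from the shapelet tree


def summarise(expl: TimeSeriesExplanation) -> None:
    """Print a compact summary of a single-sample explanation."""
    top = np.argsort(expl.saliency_map)[-3:][::-1]
    print(f"Most salient timesteps: {top.tolist()}")
    print(f"{len(expl.exemplars)} exemplars, {len(expl.counter_exemplars)} counter-exemplars")
    print(f"Shapelet rule: {expl.shapelet_rule}")
```

The point of the dataclass is just that the three artefacts describe the *same* prediction from complementary angles: where the model looked (saliency), what it considers similar or dissimilar (exemplars vs. counter-exemplars), and a symbolic rule it can be reduced to (the shapelet tree).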
