The Nash–Sutcliffe Efficiency (NSE) is a widely used metric for evaluating hydrological models, particularly for comparing observed and simulated streamflows. It measures how well a model's predictions match observed data.
NSE Formula
The NSE is given by:
$$
\mathrm{NSE}=1-\frac{\sum_{i=1}^n\left(Q_{\mathrm{obs},i}-Q_{\mathrm{sim},i}\right)^2}{\sum_{i=1}^n\left(Q_{\mathrm{obs},i}-\bar{Q}_{\mathrm{obs}}\right)^2}
$$
where:
- $Q_{\mathrm{obs},i}$ = observed streamflow at time step $i$
- $Q_{\mathrm{sim},i}$ = simulated streamflow at time step $i$
- $\bar{Q}_{\mathrm{obs}}$ = mean of the observed streamflows
- $n$ = number of observations
NSE Interpretation
- $\mathrm{NSE}=1 \rightarrow$ perfect model (simulated $=$ observed)
- $\mathrm{NSE}>0.75 \rightarrow$ good model performance
- $0.5 \leq \mathrm{NSE}<0.75 \rightarrow$ acceptable performance
- $\mathrm{NSE}=0 \rightarrow$ model performs no better than predicting the mean of the observations
- $\mathrm{NSE}<0 \rightarrow$ model is worse than using the mean (unacceptable)
🧠Python Implementation: NSE Calculation
https://gist.github.com/viadean/ffc82b0055092c1445a7a0bb5f3d0ee0
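The linked gist is not reproduced here, but a minimal NSE implementation follows directly from the formula above. The sketch below uses only the standard library; the function name `nse` and the example streamflow values are illustrative, not taken from the gist.

```python
def nse(q_obs, q_sim):
    """Nash–Sutcliffe Efficiency: 1 - SSE / sum of squared deviations
    of the observations from their mean."""
    if len(q_obs) != len(q_sim):
        raise ValueError("observed and simulated series must have equal length")
    mean_obs = sum(q_obs) / len(q_obs)
    sse = sum((o - s) ** 2 for o, s in zip(q_obs, q_sim))
    denom = sum((o - mean_obs) ** 2 for o in q_obs)
    if denom == 0:
        raise ValueError("observed series is constant; NSE is undefined")
    return 1 - sse / denom

# Illustrative streamflow values (m^3/s)
obs = [10.0, 12.0, 15.0, 11.0, 9.0]

# Perfect simulation reproduces the observations -> NSE = 1
print(nse(obs, obs))  # 1.0

# Predicting the observed mean at every step -> NSE = 0
mean = sum(obs) / len(obs)
print(nse(obs, [mean] * len(obs)))  # 0.0
```

The two checks at the end mirror the interpretation table: a perfect match gives $\mathrm{NSE}=1$, and a model that always predicts the observed mean gives $\mathrm{NSE}=0$.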
Why Use NSE?
- Easy to interpret (compares to the mean of observations).
- Common in hydrological modeling (e.g., SWAT, HEC-HMS, rainfall-runoff models).