1. The Problem with Before-and-After Comparisons

The simplest approach to measuring energy savings is to compare consumption before and after an intervention. If a facility used 1,000,000 kWh last year and 920,000 kWh this year, did the intervention save 80,000 kWh?

Not necessarily. This year might have been milder — less air conditioning in summer, less heating in winter. Occupancy might have dropped. A production line might have shut down. A new piece of equipment might have been added. Any of these factors could explain the change, wholly or partly, without the intervention having had any effect at all.

To make a credible savings claim, you need to answer a harder question: what would consumption have been this year, under this year’s conditions, if the intervention had never been installed? The difference between that prediction and actual consumption is the true saving.

The core equation
Savings = Predicted baseline consumption − Actual post-intervention consumption ± Adjustments

Where “predicted baseline” is what the facility would have consumed under actual post-period conditions (weather, occupancy, production) if nothing had changed.
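In code, the core equation is a one-liner. The sketch below uses the facility from the example above, with a hypothetical model prediction of 965,000 kWh (what the baseline model says the facility would have used under this year's milder weather):

```python
def verified_savings(predicted_baseline_kwh: float,
                     actual_kwh: float,
                     adjustments_kwh: float = 0.0) -> float:
    """Savings = predicted baseline - actual consumption +/- adjustments."""
    return predicted_baseline_kwh - actual_kwh + adjustments_kwh

# The naive before/after comparison would claim 80,000 kWh saved.
# If the baseline model predicts only 965,000 kWh under this year's
# (milder) conditions, the verified saving is smaller:
savings = verified_savings(965_000, 920_000)
print(savings)  # 45000.0
```

The 965,000 kWh figure is illustrative; in practice it comes from the weather-normalised regression model described in the next section.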

2. Weather Normalisation

Weather is the single largest variable affecting energy consumption in most buildings. Weather normalisation is the process of mathematically removing weather’s influence so that genuine savings become visible.

How it works

During the baseline period (typically 12 months before the intervention), metered energy data is paired with weather data from a nearby weather station — temperature, humidity, solar radiation. A regression model is built that describes the mathematical relationship between weather conditions and energy consumption at that specific facility.

After the intervention, the model is fed actual post-period weather data and predicts what consumption would have been without any changes. The difference between this prediction and the actual metered consumption is the weather-normalised saving.

1. Collect baseline data: 12+ months of metered energy data paired with concurrent weather data. The baseline must cover the full range of seasonal conditions.
2. Build the regression model: establish the mathematical relationship between energy use and independent variables (temperature, humidity, occupancy, production). The model must pass statistical validation tests.
3. Install the intervention: power quality correction, lighting upgrade, HVAC optimisation — whatever the energy conservation measure is.
4. Collect post-period data: continue metering energy consumption under actual operating conditions. Collect the same weather and operational data.
5. Calculate normalised savings: feed actual post-period weather into the baseline model to predict “what would have happened.” Subtract actual consumption. The difference is the verified saving.
Why this matters
Without weather normalisation, a mild winter could make an ineffective intervention look successful, or a harsh summer could make a genuinely effective one look like it failed. Weather normalisation removes this noise and reveals the true signal.
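The workflow above can be sketched as a minimal degree-day regression. Everything here is hypothetical (the monthly figures are synthetic and the model is deliberately simple); real projects use metered data and must pass the validation tests in section 4:

```python
import numpy as np

# Step 1: hypothetical 12 months of baseline data — energy (kWh) vs
# heating and cooling degree days (HDD/CDD) from a nearby weather station.
hdd = np.array([420, 350, 280, 150, 60, 10, 0, 5, 70, 180, 310, 400])
cdd = np.array([0, 0, 5, 30, 110, 220, 300, 280, 140, 40, 0, 0])
rng = np.random.default_rng(0)
energy = 50_000 + 25.0 * hdd + 40.0 * cdd + rng.normal(0, 500, 12)

# Step 2: fit the baseline regression  E = b0 + b1*HDD + b2*CDD
X = np.column_stack([np.ones_like(hdd), hdd, cdd])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)

# Step 5: feed actual post-period weather into the baseline model to
# predict what consumption would have been without the intervention.
post_hdd, post_cdd = 300.0, 150.0      # actual post-period weather
predicted = coef @ np.array([1.0, post_hdd, post_cdd])
actual = 58_000.0                      # metered post-period kWh
print(round(predicted - actual))       # weather-normalised saving, kWh
```

Note that the saving is computed against the model's prediction for this month's weather, not against the same month last year — that is the whole point of normalisation.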

3. The Verification Frameworks

Three internationally recognised frameworks govern how energy savings should be measured and verified. They are complementary, not competing.

IPMVP — International Performance Measurement and Verification Protocol

Maintained by the Efficiency Valuation Organization (EVO), IPMVP is the overarching framework used worldwide. It defines four options depending on the type of intervention and available data:

| Option | Name | Method | Best For |
|--------|------|--------|----------|
| A | Retrofit Isolation — Key Parameter | Measure the most critical parameter; estimate the rest | Simple, single-system changes (e.g., lighting) |
| B | Retrofit Isolation — All Parameters | Continuously measure all parameters of the affected system | Complex single systems with variable loads |
| C | Whole Facility | Analyse utility meter data using regression | Multiple measures, whole-building savings |
| D | Calibrated Simulation | Calibrated energy model | When baseline metering was unavailable |

Option C is the most common for power quality projects because the savings affect the entire facility — reduced I²R losses in every cable, cooler transformers, lower demand charges. These benefits are distributed and best captured at the utility meter level.

ASHRAE Guideline 14

ASHRAE Guideline 14-2014: Measurement of Energy, Demand, and Water Savings provides the detailed statistical criteria that IPMVP references. It specifies how to build regression models, what validation tests they must pass, and how to calculate savings uncertainty. It is the quantitative backbone behind IPMVP’s qualitative framework.

ISO 50015

ISO 50015:2014 (Energy management systems — Measurement and verification of energy performance) provides an internationally recognised M&V framework that complements both IPMVP and ASHRAE. It is particularly relevant for organisations with ISO 50001 energy management systems, as it provides the M&V methodology to demonstrate ongoing energy performance improvement.

4. What Makes Results Credible

A savings claim is only as good as the statistics behind it. The following metrics determine whether results should be trusted.

| Metric | What It Measures | Acceptable Threshold |
|--------|------------------|----------------------|
| CV(RMSE) | How well the model fits the data (lower is better) | ≤15% for monthly data, ≤30% for hourly |
| NMBE | Systematic bias in predictions (closer to 0% is better) | ≤5% for monthly, ≤10% for hourly |
| p-value | Probability that results are due to chance | <0.05 (ideally <0.01) |
| R² | Proportion of variance explained by the model | >0.75 (though CV(RMSE) is more meaningful) |
| Cohen’s d | Practical significance / effect size | ≥0.8 = large effect (0.5 = medium) |
| Fractional Savings Uncertainty | Uncertainty range of reported savings | <50% of savings at 68% confidence |
Statistical significance vs practical significance
A result can be statistically significant (low p-value) but practically insignificant (tiny effect size). Conversely, a large effect size with weak statistics means the effect might be real but the data isn’t conclusive. Credible M&V requires both: a low p-value (the savings aren’t due to chance) and a meaningful effect size (the savings are large enough to matter).
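The two model-fit metrics in the table are straightforward to compute. The sketch below follows the ASHRAE Guideline 14 convention of dividing by n − p, where p is the number of model parameters; the monthly figures are made up for illustration:

```python
import numpy as np

def cv_rmse(actual, predicted, n_params=2):
    """Coefficient of variation of RMSE, in percent (lower is better)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(actual)
    rmse = np.sqrt(np.sum((actual - predicted) ** 2) / (n - n_params))
    return 100.0 * rmse / actual.mean()

def nmbe(actual, predicted, n_params=2):
    """Normalised mean bias error, in percent (near zero = no systematic bias)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(actual)
    return 100.0 * np.sum(actual - predicted) / ((n - n_params) * actual.mean())

# Hypothetical 12 months of actual vs model-predicted consumption (MWh)
actual = [100, 110, 95, 105, 98, 102, 99, 101, 97, 103, 104, 96]
predicted = [101, 108, 96, 104, 99, 101, 100, 100, 98, 102, 103, 97]
print(cv_rmse(actual, predicted))  # well under the 15% monthly threshold
print(nmbe(actual, predicted))     # well under the 5% monthly threshold
```

A model that passes both tests fits the data well and is not systematically over- or under-predicting; failing either one means its savings estimates should not be trusted.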

5. Common Pitfalls

Cherry-picking baselines
Selecting an unrepresentative baseline period — for example, a year with unusually high consumption — inflates apparent savings. Baselines must cover the full range of normal operating conditions (all seasons, typical occupancy, representative production).
Insufficient data
A minimum of 12 months of baseline data is needed for weather-sensitive facilities to capture all seasonal variation. Shorter baselines risk missing heating or cooling seasons entirely, producing models that fail outside the conditions they were trained on.
Ignoring non-routine events
Production changes, equipment additions, occupancy shifts, or operational changes between the baseline and post periods must be identified and adjusted for. Otherwise savings are misattributed — a factory that reduced production will show lower consumption regardless of any intervention.
Overfitting
Including too many variables relative to available data points produces a model that fits historical data perfectly but predicts poorly. A good regression model is parsimonious — it uses only the variables that genuinely drive consumption.
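The overfitting pitfall is easy to demonstrate on synthetic data. Below, the true driver is linear, but a model with as many parameters as data points reproduces the history almost perfectly — including the noise — which is exactly why its in-sample fit is meaningless as evidence of predictive power:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 8)                      # only 8 baseline points
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.2, 8)   # the true relationship is linear

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyval(np.polyfit(x, y, 1), x)      # parsimonious: 2 parameters
over = np.polyval(np.polyfit(x, y, 7), x)     # 8 parameters for 8 points

print(r_squared(y, lin))    # high, and an honest measure of fit
print(r_squared(y, over))   # ~1.0: "perfect" fit to history, fitted to noise
```

The degree-7 model interpolates the data exactly, so its R² is essentially 1.0 regardless of how noisy the data is — a reminder that a near-perfect in-sample fit from a many-parameter model is a warning sign, not a credential.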

6. Why This Matters for Power Quality

Power quality improvements — harmonic filtering, power factor correction, voltage stabilisation — typically reduce facility consumption by 5–15%. That’s a genuine and valuable saving, but it’s modest enough that weather variation, production changes, or seasonal shifts can easily mask or exaggerate it.

Without rigorous M&V, a facility manager has no way to know whether a 10% drop in consumption was caused by the power quality intervention, a mild winter, or a production slowdown. With it, the savings are isolated, quantified, and statistically validated — giving confidence to everyone from the facility manager to the CFO to the board.

What HarmoniQ provides
Every HarmoniQ deployment includes comprehensive measurement and verification. We meter at 15-minute intervals using Class 0.2 equipment, build weather-normalised regression models, and validate results to recognised statistical thresholds. Our savings reports are independently verifiable, fully documented, and suitable for submission to utility incentive programmes, ESG reporting, and regulatory compliance.

The result: when we say a facility saved 8% on its electricity consumption, that number has been weather-normalised, statistically validated, and verified against international standards. It’s not an estimate. It’s a measurement.

Want this assessed for your own site?

Our engineers run a free site assessment, measure your actual power profile, and give you a quantified savings figure.

Book a Free Site Assessment