Can modelling techniques be improved to correctly predict overheating in homes?
Head of Research and Development, Ben Abel
Since the formation of its specialist building physics team some 20 years ago, Hilson Moran has striven to improve the accuracy and robustness of its dynamic thermal modelling (DTM) techniques, so when the chance arose to compare our modelling against 'real life' measured results, I jumped at it. At the 2018 CIBSE Technical Symposium, Ben Roberts, a PhD student from Loughborough University, presented his work replicating the CIBSE TM59 overheating tests in a pair of semi-detached houses. This sparked the idea of modelling these houses in the dominant DTM software packages (TAS and IES) to see how closely the computer predictions came to the measured 'real' results.
The detailed results of this research have been published in a recent issue of CIBSE's BSERT journal and demonstrate that, although all the models correctly predict overheating trends in broad terms, we are potentially over-predicting the risk. The positive spin is that designs produced using the TM59 methodology are likely to be on the conservative side (i.e. higher-performing glass, more shading, bigger window/vent opening areas, etc.), which also builds in some resilience to climate change.
However, this leaves the question: what could we be doing better? Are the models themselves correct? This got me thinking. The underlying equations and methodology formulations were all developed in the 1980s and early 1990s, and in general these have not changed since. Moreover, they were chiefly constructed around generating heating and cooling demands.
Predicting overheating in a naturally ventilated room puts far more emphasis on correctly predicting surface temperatures and heat flow into and out of the structure. This is not necessarily the case in a mechanically cooled system, where some of the effects of the surface temperatures will be mitigated by the cooling. The push now is to look more widely at how the calculations are made and to ask whether any of the assumptions or empirical formulas used require greater scrutiny. This research points towards a future in which building simulation will have to challenge some of the long-held assumptions implicit in many simulation codes if our models are to match the real world more closely and allow better-targeted designs.
“building simulation will have to challenge some of the long-held assumptions implicit in many simulation codes if our models are to match the real world more closely and allow better-targeted designs”
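To give a concrete example of the kind of empirical formula in question (my own illustration, not one singled out in the paper): many simulation codes estimate the internal convective heat transfer coefficient, $h_c$, from correlations fitted to chamber experiments in the early 1980s, such as the Alamdari and Hammond (1983) expression for buoyancy-driven convection at a vertical surface:

\[ h_c = \left\{ \left[ 1.5 \left( \frac{\Delta T}{H} \right)^{1/4} \right]^{6} + \left[ 1.23\,\Delta T^{1/3} \right]^{6} \right\}^{1/6} \]

where $\Delta T$ is the surface-to-air temperature difference (K) and $H$ is the surface height (m). In a free-running room, any error in $h_c$ feeds straight into the predicted air and surface temperatures, and hence into the predicted overheating hours; in a mechanically cooled room, the plant absorbs much of that error. It is exactly this sort of long-standing assumption that natural ventilation studies now force us to revisit.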
As part of this work, Hilson Moran collaborated with the keen minds at both Loughborough University and the consultancy Inkling LLP to examine, in detail, the driving forces behind the differences between the modelled and measured results. It should be pointed out that the research is based on a single study of a single building typology, so this could be an outlying result. Even so, it suggests that what we thought of as industry-standard, approved tools may require further examination and improvement if they are to keep up with the demands and ambitions of building design now and into the future.