Every year at the BuildingEnergy Boston Conference + Trade Show, there are several presentations related to computer simulation. We model energy, air currents, heat, and moisture to reconcile design decisions with intended use. Advancements in modeling have given us a new way to view buildings and an opportunity to build better ones. However, each presentation on this topic calls to mind the often-repeated saying “garbage in, garbage out”.
What is the computer assuming? Is the entered data accurate? Are we capturing the accuracy of our data in the model? What value do we assign the modeled result?
I was reminded of the importance of modeling accuracy when investigating the thermal behavior of composite walls on a past project, specifically the relationship between the as-tested and modeled results. The models were used to determine the location within the wall where water would condense: a hygrothermal analysis meant to settle the optimal type and position of the vapor control layer. The lessons learned from the as-tested physical model differed from those of the computer-generated model. This raised the question: what is the best way to make the modeled results more compatible with the empirical ones?
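As a toy illustration of the kind of calculation involved (not the actual method used on that project), a steady-state temperature profile through a wall assembly can be compared against the indoor air’s dew point to flag interfaces where condensation could occur. Every layer R-value and boundary condition below is invented for the sketch:

```python
import math

def dew_point_c(t_c, rh):
    """Magnus approximation of the dew point (deg C) from temperature and RH (0-1)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh) + a * t_c / (b + t_c)
    return b * gamma / (a - gamma)

def interface_temps(layers, t_in, t_out):
    """Steady-state temperatures at each layer interface, listed inside to outside."""
    r_total = sum(r for _, r in layers)
    q = (t_in - t_out) / r_total           # heat flux per unit area
    temps, t = [t_in], t_in
    for _, r in layers:                    # layers ordered inside -> outside
        t -= q * r
        temps.append(t)
    return temps

# Invented assembly (R-values in m2*K/W) and winter design conditions.
layers = [("gypsum", 0.08), ("cavity insulation", 3.5),
          ("sheathing", 0.10), ("cladding", 0.07)]
t_in, t_out, rh_in = 20.0, -5.0, 0.50
dp = dew_point_c(t_in, rh_in)
temps = interface_temps(layers, t_in, t_out)
for (name, _), t in zip(layers, temps[1:]):
    flag = "condensation risk" if t < dp else "ok"
    print(f"outboard face of {name:18s} {t:6.1f} C  ({flag})")
```

Note that comparing every interface to the indoor dew point is a worst case that ignores the wall’s vapor resistance; a real hygrothermal (e.g. Glaser-style) analysis tracks vapor pressure layer by layer against the local saturation pressure.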
I am hoping to hear related commentary at two BuildingEnergy Boston presentations this year, both of which rely on and focus on modeling. The first, “Break It or Lose It: Thermal Bridging in Building Envelopes”, discusses the R-value evaluation of as-built wall systems compared to the designed R-value. The presenters claim that up to 70% less R-value was observed than intended. However, that comparison relies on the design R-value, which is itself a model. It may be a simple model, aggregating the R-values of the wall components one-dimensionally, or it may be a more complex two-dimensional model.
Either way, the question remains: “was the wall truly so much worse than the model – or was the model bad?”
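For reference, a one-dimensional design R-value is just the series sum of layer R-values along a single path, and even the next step up in fidelity, an area-weighted parallel-path calculation, already predicts a substantial loss once framing is accounted for. A sketch with invented numbers (illustrative IP units):

```python
def series_r(layer_rs):
    """One-dimensional design R-value: layer R-values summed along one path."""
    return sum(layer_rs)

def parallel_path_r(path_rs, area_fractions):
    """Area-weighted parallel-path R-value: average the U-values (1/R), not the R-values."""
    u_eff = sum(f / r for r, f in zip(path_rs, area_fractions))
    return 1.0 / u_eff

# Invented wood-framed wall: identical layers except at the studs,
# where framing lumber replaces the cavity insulation.
cavity_path = series_r([0.45, 13.0, 0.6, 0.6])   # gypsum, R-13 batt, sheathing, siding
stud_path   = series_r([0.45, 4.4, 0.6, 0.6])    # gypsum, wood stud, sheathing, siding
effective = parallel_path_r([cavity_path, stud_path], [0.77, 0.23])
print(f"design (1-D) R: {cavity_path:.1f}, parallel-path R: {effective:.1f}")
print(f"degradation from framing alone: {1 - effective / cavity_path:.0%}")
```

With these made-up values, framing alone erases roughly a quarter of the nominal R-value; two- and three-dimensional heat-flow paths at corners, slab edges, and balconies can take the shortfall further, which may be part of how large discrepancies arise.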
I’m looking forward to understanding how the presenters deal with this question and to hearing their recommendations. Are they advocating changing our modeling techniques as well as our building techniques? If so, what physical considerations are our models not incorporating? What is the bias? Or are we entering “bad data”?
Another presentation related to modeling is “A Prototype Visualization Tool for Hygrothermal Analysis”, which discusses a tool that its creators claim can analyze the risk of mold growth and building component failure. What I would like to understand following this presentation is how the creators represent the accuracy of their results. Is a probability associated with the analysis? Is that probability based on the accuracy of the information entered, or on physical phenomena that are always at work? I’m looking forward to hearing how the creators dealt with the inaccuracies inherent in our work on buildings.
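One plausible way for such a tool to attach a probability is to treat the entered data as distributions rather than point values and run a Monte Carlo over them. In this toy sketch, every distribution and value is invented (not taken from the tool): uncertain indoor conditions and R-values are sampled, and the fraction of samples in which the sheathing falls below the indoor dew point is reported as a risk probability.

```python
import math
import random

def magnus_dew_point(t_c, rh):
    """Magnus approximation of the dew point (deg C) from temperature and RH (0-1)."""
    a, b = 17.62, 243.12
    g = math.log(rh) + a * t_c / (b + t_c)
    return b * g / (a - g)

random.seed(1)                             # fixed seed for reproducibility
trials, risky = 20_000, 0
for _ in range(trials):
    # Treat the "entered data" as uncertain: sample instead of assuming exact values.
    t_in  = random.gauss(20.0, 1.0)        # indoor temperature, deg C
    rh_in = random.uniform(0.30, 0.55)     # indoor relative humidity
    t_out = random.gauss(-5.0, 3.0)        # outdoor temperature, deg C
    r_in  = random.gauss(3.6, 0.4)         # R inboard of the sheathing, m2*K/W
    r_out = random.gauss(2.5, 0.5)         # exterior insulation + cladding outboard
    q = (t_in - t_out) / (r_in + r_out)    # steady-state heat flux
    t_sheathing = t_in - q * r_in          # temperature at the sheathing plane
    if t_sheathing < magnus_dew_point(t_in, rh_in):
        risky += 1
print(f"Condensation risk at the sheathing in {risky / trials:.0%} of samples")
```

The mechanism is the point here, not the numbers: the output probability reflects only the uncertainty you chose to model, so it answers my question about entered data, but not the one about physics the model leaves out.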
One thing is for sure: modeling is to be deployed with care, caution, and experience. Everyone at BuildingEnergy Boston is interested in progress, not in assuming that what we’re doing is enough. The presentations discussed here push the envelope and add to our knowledge, and I’m very excited to attend. I hope to see you there, as well!