In a previous article, we discussed the challenges of establishing correlation levels across CFD, wind tunnel and racetrack performance. The uncertainty in the measurements, and therefore in any comparison derived from them, is fundamental to determining those correlation levels.

Within the experimental testing domain, the role of random uncertainty is widely discussed in the literature. Its sources include, for example, variability in force and moment measurements due to strain-gauge hysteresis, non-linearities and thermal sensitivity shift, flow non-uniformity, run-to-run variations in the freestream conditions, model pitch/roll/yaw alignment, and so on. The most effective way of estimating random uncertainty is through “repeat runs”.
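As a minimal sketch of how this works, the snippet below reduces a set of repeat runs to a mean and an expanded random uncertainty. The drag-coefficient values and the coverage factor of 2 are illustrative assumptions, not real data or a prescribed standard.

```python
import math

# Drag-coefficient readings from nominally identical wind tunnel runs
# (illustrative values only).
repeat_runs = [0.3102, 0.3110, 0.3095, 0.3108, 0.3101, 0.3106]

n = len(repeat_runs)
mean = sum(repeat_runs) / n

# Sample standard deviation (n - 1 in the denominator).
std = math.sqrt(sum((x - mean) ** 2 for x in repeat_runs) / (n - 1))

# Standard uncertainty of the mean, expanded with a coverage factor of
# k = 2 (~95% for a normal distribution; a Student t-value would be
# more rigorous for small n).
u_random = std / math.sqrt(n)
expanded = 2.0 * u_random

print(f"CD = {mean:.4f} +/- {expanded:.4f} (95% random uncertainty)")
```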

Likewise, numerical uncertainty is a commonly discussed subject in the computational domain. Its sources include the variability of the results due to the mesh, the numerical schemes, rounding errors, turbulence modelling approximations and others.
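One widely used way of estimating the mesh contribution is Roache's Grid Convergence Index. The sketch below applies it to three lift-coefficient results from systematically refined meshes; the values and the refinement ratio are illustrative assumptions, not results from a real study.

```python
import math

# Lift-coefficient results from three systematically refined meshes
# (coarse -> fine); values are illustrative only.
f_coarse, f_medium, f_fine = 0.9812, 0.9734, 0.9701
r = 2.0  # constant grid refinement ratio (assumed)

# Observed order of convergence from the three solutions.
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Grid Convergence Index on the fine mesh, with the usual safety
# factor of 1.25 for a three-mesh study.
rel_err = abs((f_medium - f_fine) / f_fine)
gci_fine = 1.25 * rel_err / (r**p - 1.0)

print(f"Observed order p = {p:.2f}")
print(f"GCI (fine mesh)  = {gci_fine * 100:.2f}% of CL")
```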

When drawing comparisons across domains, epistemic uncertainty becomes a significant factor. Epistemic uncertainty derives from ignorance, or a lack of knowledge, of the system or environment; such uncertainties are often called the “unknown unknowns”.

Here at Sabe, we created a sub-group of epistemic uncertainty called “Model Qualification Uncertainties”. These relate to the uncertainties created when we, as engineers, make assumptions in the construction of a model to represent a real-life phenomenon. The concept applies to both CFD and wind tunnel testing.

Examples of Model Qualification Uncertainties include assuming that a given CAD or wind tunnel model part represents the real surface, that the surface deformation under aerodynamic load is zero or follows a given function, or that the tyre contact patch shape, ground roughness and freestream conditions take assumed forms, among many others.

As we have attempted to illustrate here, propagating these uncertainties is a highly complex task. Moreover, even where established methodologies are available for certain parts of the process, one has to accept the possibility of “unknown unknowns”.
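For the parts of the chain where input distributions can at least be assumed, a Monte Carlo sweep is one simple way of propagating them. The sketch below pushes assumed density, speed and drag-coefficient distributions through a basic drag model; all figures are illustrative and, crucially, the result only covers the uncertainties we chose to model, never the unknown unknowns.

```python
import random
import statistics

random.seed(0)

def drag(rho, v, cd, area=1.5):
    """Drag force from dynamic pressure, reference area and CD."""
    return 0.5 * rho * v**2 * area * cd

# Assumed input distributions (means and standard deviations are
# illustrative, not measured values).
samples = [
    drag(
        rho=random.gauss(1.225, 0.005),  # air density, kg/m^3
        v=random.gauss(50.0, 0.25),      # freestream speed, m/s
        cd=random.gauss(0.95, 0.01),     # drag coefficient
    )
    for _ in range(100_000)
]

mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"Drag = {mean:.1f} N +/- {2 * std:.1f} N (95%, propagated)")
```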

Most engineers accept that a single CFD simulation does not return the “correct” figure for an aerodynamic variable. Likewise, experience tells us that repeating the same experiment in different wind tunnel facilities produces significantly different figures.

We believe many of the published aerodynamic comparisons are far too simplistic. When comparing CFD to an experimental test, the metric should incorporate an estimate of the numerical error; it should account for the modelling assumptions and approximations used in the simulation; it should include an estimate of the random errors in the experimental data; and, most importantly, it should be based on a number of experimental replications, ideally in different facilities.
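A hedged sketch of what such a metric could look like, in the spirit of the ASME V&V 20 validation approach, is shown below; every figure in it is an illustrative assumption, not data from a real comparison.

```python
import math

# Illustrative figures only: one CFD result and the mean of several
# experimental replications of the same configuration.
s_cfd = 0.3140   # simulation result (e.g. CD)
d_exp = 0.3104   # experimental mean across repeat runs

# Standard uncertainties feeding the comparison (all assumed here):
u_num = 0.0015    # numerical (e.g. from a grid convergence study)
u_input = 0.0020  # simulation inputs / modelling assumptions
u_d = 0.0012      # experimental random uncertainty of the mean

# Comparison error and validation uncertainty: the two domains are
# only distinguishable if |E| clearly exceeds u_val.
error = s_cfd - d_exp
u_val = math.sqrt(u_num**2 + u_input**2 + u_d**2)

print(f"E     = {error:+.4f}")
print(f"u_val = {u_val:.4f}")
if abs(error) <= 2.0 * u_val:
    print("CFD and experiment agree within the combined uncertainty.")
else:
    print("Discrepancy exceeds the estimated uncertainty band.")
```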

Fundamentally, the comparisons should not be mean versus mean; they should always include an estimated uncertainty band for each domain.