Removing the warm-up period from the output of a time-dependent simulation
Simulation software often computes a configuration in a step-response fashion: it starts from an approximate initial state and iteratively advances towards a statistically converged state that satisfies the physical constraints of the model. In Computational Fluid Dynamics (CFD), for instance, the conservation laws are progressively enforced, resulting in global signals with similar characteristics. Consequently, before reaching a statistically steady state, there is a warm-up period that is unsatisfactory from the modeling point of view and should be discarded from the data collection.
Traditionally, this warm-up is determined using a rule of thumb, which is prone to errors. To address this issue, we present in this post a Python implementation of a method that automatically estimates the appropriate amount of data to discard. This method is outlined in the research paper titled “Detection of initial transient and estimation of statistical error in time-resolved turbulent flow data” by Charles Mockett et al. (2010). You can find the paper at the following link: Mockett et al., 2010.
To illustrate these concepts of warm-up and statistically converged state, we take the minimal temperature as a function of time in a combustion chamber:
As you can see, the minimal temperature behaves differently before and after t ≈ 0.03: until t ≈ 0.03, the flow is evolving from the initial conditions towards the statistically steady state.
For reliable statistics and quantification of the minimal temperature, we would like to get rid of the warm-up period, i.e. find the transient time tt at which the system switches from the warm-up state to the statistically steady state.
Mean and standard deviation of the signal
Before diving into finding the transient time of this signal, some mathematical background is needed to explain the methodology.
Let’s imagine a time-dependent signal f(t), where t is the time. This signal is assumed to be extracted from a continuous, stationary, ergodic random process. We can then define the mean μf and the standard deviation σf of the signal as
$$\mu_f = \mathbb{E}[f(t)],$$
$$\sigma_f = \sqrt{\mathbb{E}\left[\left(f(t) - \mu_f\right)^2\right]},$$
where 𝔼[X] is the expected value of X. The exact μf and σf can only be obtained from an infinite sample length. For a signal of length T = tend − tini, where tini and tend are respectively the beginning and the end of the signal, μf and σf can be approximated by
$$\hat{\mu}_T = \frac{1}{T} \int_{t_{\mathrm{ini}}}^{t_{\mathrm{end}}} f(t)\,\mathrm{d}t,$$
$$\hat{\sigma}_T = \sqrt{\frac{1}{T} \int_{t_{\mathrm{ini}}}^{t_{\mathrm{end}}} \left(f^2(t) - \hat{\mu}_T^2\right) \mathrm{d}t}.$$
For a finite number of samples [f(t0),f(t1),…,f(tn)] with t0<t1<⋯<tn and tn−t0=T, this becomes
$$\hat{\mu}_T = \frac{1}{n} \sum_{i=0}^{n} f(t_i),$$
$$\hat{\sigma}_T = \sqrt{\frac{1}{n} \sum_{i=0}^{n} \left(f(t_i) - \hat{\mu}_T\right)^2}.$$
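As a quick standalone illustration (on synthetic data of our own, not part of the method), these finite-sample estimators can be written directly in NumPy:

```python
import numpy as np

# Hypothetical samples: 2001 draws around a mean of 300 with spread 5.
rng = np.random.default_rng(0)
samples = rng.normal(loc=300.0, scale=5.0, size=2001)
n = len(samples)

# Finite-sample estimates written as in the formulas above
# (population convention: division by the number of samples).
mu_hat = np.sum(samples) / n
sigma_hat = np.sqrt(np.sum((samples - mu_hat) ** 2) / n)

# They coincide with NumPy's built-in estimators.
assert np.isclose(mu_hat, samples.mean())
assert np.isclose(sigma_hat, samples.std())
```

With a few thousand samples, the estimates are already close to the true values (300 and 5), but they only converge exactly in the infinite-sample limit.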
If we now divide the signal f(t) into time windows of length Tw, the absolute statistical error on the estimate ϕ̂Tw (either μ̂Tw or σ̂Tw) for the window length Tw, compared to the estimate ϕ̂T (respectively μ̂T or σ̂T) for the full signal length T, is given by
$$\epsilon[\hat{\phi}_{T_w}](T_w) = \sqrt{\left\langle \left(\hat{\phi}_{T_w} - \hat{\phi}_T\right)^2 \right\rangle},$$
where ⟨·⟩ denotes the average over all the time windows. With this equation, the error as a function of the sample length can be estimated. Note that the accuracy of this error estimate decreases as Tw approaches T, since fewer windows are then available to compute it. For bandwidth-limited Gaussian white noise, the error on the estimates μ̂T and σ̂T as a function of the sample length T can be obtained directly through:
$$\epsilon[\hat{\mu}_T](T) \approx \frac{1}{\sqrt{2BT}}\,\sigma_f,$$
$$\epsilon[\hat{\sigma}_T](T) \approx \frac{1}{\sqrt{4BT}}\,\sigma_f,$$
where B is the bandwidth. These approximations are valid for BT ≥ 5. Since σf is not known, it is replaced by the best available standard deviation estimate, here σ̂T. Moreover, to later detect the transient times, the authors of the article use two separate bandwidth parameters, Bμ and Bσ. The errors on the estimates thus become:
$$\epsilon[\hat{\mu}_T](T) \approx \frac{1}{\sqrt{2 B_\mu T}}\,\hat{\sigma}_T,$$
$$\epsilon[\hat{\sigma}_T](T) \approx \frac{1}{\sqrt{4 B_\sigma T}}\,\hat{\sigma}_T.$$
The parameters Bμ and Bσ are obtained as the lowest bandwidth values such that the curves of ϵ[ϕ̂T](T) intersect those of ϵ[ϕ̂Tw](Tw). Here is the Python program to compute the errors estimated with the time windows and with the bandwidth-limited Gaussian white noise formulas:
import numpy as np
from scipy import optimize
from matplotlib import pyplot as plt
def eps_mean_windows(window_size: float, t: np.ndarray, samples: np.ndarray) -> float:
    """Computes the average error on the mean over all the time windows
    of length window_size of the signal.
    """
    mean = samples.mean()
    n_windows = int((t[-1] - t[0]) / window_size)
    t_end = t[0] + n_windows * window_size
    tw = np.linspace(t[0], t_end, n_windows + 1)
    diff = [
        (samples[(tw[i] <= t) & (t < tw[i + 1])].mean() - mean) ** 2
        for i in range(len(tw) - 1)
    ]
    return np.sqrt(np.mean(diff))
def eps_std_windows(window_size: float, t: np.ndarray, samples: np.ndarray) -> float:
    """Computes the average error on the standard deviation over all the
    time windows of length window_size of the signal.
    """
    std = samples.std()
    n_windows = int((t[-1] - t[0]) / window_size)
    t_end = t[0] + n_windows * window_size
    tw = np.linspace(t[0], t_end, n_windows + 1)
    diff = [
        (samples[(tw[i] <= t) & (t < tw[i + 1])].std() - std) ** 2
        for i in range(len(tw) - 1)
    ]
    return np.sqrt(np.mean(diff))
def compute_B_mean(window_size: float, t: np.ndarray, samples: np.ndarray) -> float:
    """Computes the B_mean parameter such that the errors on the mean given
    by the time-window and Gaussian white noise formulas are the same for the
    window size considered. This function is needed to fit the B_mean parameter.
    """
    std = samples.std()
    eps = eps_mean_windows(window_size, t, samples)
    return 1 / (2 * window_size * eps**2) * std**2
def B_mean_fitting(t: np.ndarray, samples: np.ndarray) -> float:
    """Finds the lowest B_mean parameter such that the curve of the error
    on the mean with the Gaussian white noise formula intersects the curve of the
    error on the mean given by the time windows.
    """
    # Bounds for the optimization. The lower bound is chosen so that a window
    # contains some samples yet is not too small for long signals.
    time_step = np.max(np.diff(t))
    lb = max(10 * time_step, (t[-1] - t[0]) / 1000)
    ub = (t[-1] - t[0]) / 2
    bounds = ((lb, ub),)
    # Ten starting points are considered for the optimization.
    window_size_ini = np.linspace(lb, ub, 10)
    # The Nelder-Mead method is chosen since gradient-based methods are
    # inefficient on the noisy time-window error curve.
    B_mean = [
        optimize.minimize(
            compute_B_mean, window_size_ini[i], args=(t, samples),
            bounds=bounds, method="Nelder-Mead",
        ).fun
        for i in range(10)
    ]
    return np.min(B_mean)
def compute_B_std(window_size: float, t: np.ndarray, samples: np.ndarray) -> float:
    """Computes the B_std parameter such that the errors on the standard deviation
    given by the time-window and Gaussian white noise formulas are the same for the
    window size considered. This function is needed to fit the B_std parameter.
    """
    std = samples.std()
    eps = eps_std_windows(window_size, t, samples)
    return 1 / (4 * window_size * eps**2) * std**2
def B_std_fitting(t: np.ndarray, samples: np.ndarray) -> float:
    """Finds the lowest B_std parameter such that the curve of the error
    on the standard deviation with the Gaussian white noise formula intersects
    the curve of the error on the standard deviation given by the time windows.
    """
    time_step = np.max(np.diff(t))
    lb = max(10 * time_step, (t[-1] - t[0]) / 1000)
    ub = (t[-1] - t[0]) / 2
    bounds = ((lb, ub),)
    window_size_ini = np.linspace(lb, ub, 10)
    B_std = [
        optimize.minimize(
            compute_B_std, window_size_ini[i], args=(t, samples),
            bounds=bounds, method="Nelder-Mead",
        ).fun
        for i in range(10)
    ]
    return np.min(B_std)
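Before applying the fit to the CFD signal, a quick sanity check on synthetic data can be reassuring. For pure Gaussian white noise sampled at a step Δt, a natural bandwidth guess is the Nyquist frequency B = 1/(2Δt) (an assumption of this sketch, not a value from the paper), and the time-window error on the mean should then roughly match the formula ϵ[μ̂T](T) ≈ σ̂T/√(2BT). The snippet below is self-contained and re-implements a minimal windowed error for that purpose:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(20_000) * dt
samples = rng.normal(0.0, 1.0, size=t.size)

def eps_mean_windows(window_size, t, samples):
    # RMS deviation of per-window means from the full-record mean.
    mean = samples.mean()
    n_windows = int((t[-1] - t[0]) / window_size)
    edges = np.linspace(t[0], t[0] + n_windows * window_size, n_windows + 1)
    diff = [
        (samples[(edges[i] <= t) & (t < edges[i + 1])].mean() - mean) ** 2
        for i in range(n_windows)
    ]
    return np.sqrt(np.mean(diff))

B = 1.0 / (2.0 * dt)   # Nyquist bandwidth of the sampled white noise
Tw = 0.5               # window length: 500 samples per window
eps_windows = eps_mean_windows(Tw, t, samples)
eps_formula = samples.std() / np.sqrt(2.0 * B * Tw)
```

With this choice of B, the formula reduces to σ/√(samples per window), and the two estimates agree to within the sampling noise of the window average.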
If we go back to the minimal temperature signal presented earlier, we can now run the following program:
import pandas as pd

data = pd.read_csv("avbp_mmm.csv", sep=",")
t = data['atime'].to_numpy()
samples = data['T_min'].to_numpy()
std = samples.std()
B_mean = B_mean_fitting(t=t, samples=samples)
B_std = B_std_fitting(t=t, samples=samples)
windows_test = np.linspace(np.log(0.0005), np.log(0.99), 100)
windows_test = np.exp(windows_test)
eps_tw_mu_total = [eps_mean_windows(window_size, t, samples) for window_size in windows_test]
eps_tw_sigma_total = [eps_std_windows(window_size, t, samples) for window_size in windows_test]
We obtain the following figure:
On the left, you can observe the original signal; on the right, the errors on the mean and standard deviation estimates, computed both with the bandwidth-limited Gaussian white noise formulas and with the time windows. Note that the error predicted with the bandwidth-limited Gaussian white noise formulas is overestimated compared to the error computed with the time-window method. Still, since it is usually preferable to overestimate an error rather than underestimate it, this discrepancy does not significantly affect the methodology.
Method to detect the transient times
To detect the transient time, the data will successively be shortened, starting from a time t ≥ t0, to obtain a signal of length T′ = tn − t. If we call tt the time at which the warm-up period ends, and Tt = tn − tt the length of the signal in the statistically steady state, the following hypotheses are considered:
- for t < tt, the total error ϵ[μ̂T′] × ϵ[σ̂T′] (computed with the bandwidth-limited Gaussian white noise formulas) is higher than ϵ[μ̂Tt] × ϵ[σ̂Tt]. Indeed, including the transient period in the computation of the mean and standard deviation is expected to distort at least one of these quantities and therefore to increase the errors on these estimates;
- for t > tt, as t increases, the total error ϵ[μ̂T′] × ϵ[σ̂T′] rises again, since the error is inversely proportional to the time period considered.
For these reasons, the transient time is determined by finding the T′ that minimizes the total error ϵ[μ̂T′] × ϵ[σ̂T′]. As t approaches tn, the statistical estimates become more inaccurate as fewer samples are provided, which produces various local optima in the total error. For this reason, as in the original implementation, we restrict the search to the first half of the signal (t − t0 < T/2). The following program finds the transient time and computes the error as a function of the time t.
def compute_transient_and_errors(
    t: np.ndarray, samples: np.ndarray, time_step: float = None
) -> tuple:
    """Computes the transient time to an accuracy of +/- time_step / 2
    and the errors for all transient times considered.
    """
    t0 = t[0]
    if time_step is None:
        time_step = (t[-1] - t[0]) / 100
    eps_list = []
    B_mean = B_mean_fitting(t=t, samples=samples)
    B_std = B_std_fitting(t=t, samples=samples)
    while (t0 - t[0]) < (t[-1] - t[0]) / 2:
        t_eps = t[t > t0]
        samples_eps = samples[t > t0]
        T = t_eps[-1] - t_eps[0]
        eps_mean = 1 / np.sqrt(2 * B_mean * T) * samples_eps.std()
        eps_std = 1 / np.sqrt(4 * B_std * T) * samples_eps.std()
        eps_list.append(eps_mean * eps_std)
        t0 += time_step
    t_t = t[0] + np.argmin(eps_list) * time_step
    return t_t, eps_list
import pandas as pd
data = pd.read_csv("avbp_mmm.csv", sep=",")
t = data["atime"].to_numpy()
samples = data["T_min"].to_numpy()
t_t, eps_list = compute_transient_and_errors(t=t, samples=samples)
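Since the avbp_mmm.csv file is specific to our test case, here is a self-contained, simplified sketch of the same scan on a synthetic signal with a known warm-up. To keep it standalone, the bandwidth fitting is skipped and both parameters are set to the Nyquist bandwidth, which is only a rough assumption for this white-noise-like plateau:

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
t = np.arange(20_000) * dt  # record of length 20
# Exponential warm-up (time constant 0.5) towards a noisy plateau at 1.0:
# the transient has visually died out after a few time constants, t ~ 2.
samples = 1.0 - np.exp(-t / 0.5) + 0.05 * rng.normal(size=t.size)

def detect_transient(t, samples, B_mean, B_std, n_scan=100):
    # Scan candidate cut times over the first half of the record and keep
    # the one minimising the product of the two error estimates.
    candidates = np.linspace(t[0], t[0] + (t[-1] - t[0]) / 2, n_scan)
    errors = []
    for t0 in candidates:
        part = samples[t >= t0]
        T = t[-1] - t0
        eps_mean = part.std() / np.sqrt(2.0 * B_mean * T)
        eps_std = part.std() / np.sqrt(4.0 * B_std * T)
        errors.append(eps_mean * eps_std)
    return candidates[int(np.argmin(errors))]

B = 1.0 / (2.0 * dt)  # assumed bandwidth (no fitting in this sketch)
t_t = detect_transient(t, samples, B, B)
```

On this synthetic case, the minimum of the total error lands near the end of the exponential transient (t of order 1 to 2): cutting earlier inflates the standard deviation, cutting later only shrinks the record.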
The output of this program can then be used to build the following plot:
The transient time tt detected by the method is consistent with a visual inspection of the signal (left of the figure). On the right of the figure, you can observe the total error as a function of time; the minimum of this curve gives the transient time.
Reverse method
Note that this method can easily be extended to the case where you want to find the end of an initially steady state that becomes unsteady (e.g. in an extinction simulation). The method is the same, except that this time we shorten the signal from its end:
def compute_reverse_transient_and_errors(
    t: np.ndarray, samples: np.ndarray, time_step: float = None
) -> tuple:
    """Computes the transient time from the end of the signal to an accuracy
    of +/- time_step / 2 and the errors for all transient times considered.
    """
    t0 = t[-1]
    if time_step is None:
        time_step = (t[-1] - t[0]) / 100
    eps_list = []
    B_mean = B_mean_fitting(t=t, samples=samples)
    B_std = B_std_fitting(t=t, samples=samples)
    while (t0 - t[0]) > (t[-1] - t[0]) / 2:
        t_eps = t[t < t0]
        samples_eps = samples[t < t0]
        T = t_eps[-1] - t_eps[0]
        eps_mean = 1 / np.sqrt(2 * B_mean * T) * samples_eps.std()
        eps_std = 1 / np.sqrt(4 * B_std * T) * samples_eps.std()
        eps_list.append(eps_mean * eps_std)
        t0 -= time_step
    t_t = t[-1] - np.argmin(eps_list) * time_step
    return t_t, eps_list
If we flip the original signal and apply this function to the result, we obtain the following:
reverse_samples = np.flip(samples)
t_t, eps_list = compute_reverse_transient_and_errors(t=t, samples=reverse_samples)
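As above, a self-contained sketch on a synthetic signal can illustrate the reverse scan: a noisy steady state that starts ramping up near t ≈ 8. The bandwidths are again fixed to the Nyquist value instead of being fitted, an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1e-3
t = np.arange(10_000) * dt  # record of length 10
# Steady noisy signal that becomes unsteady: a ramp starts at t = 8.
samples = 0.05 * rng.normal(size=t.size) + np.where(t > 8.0, t - 8.0, 0.0)

def detect_reverse_transient(t, samples, B_mean, B_std, n_scan=100):
    # Scan candidate end times over the second half of the record and keep
    # the one minimising the product of the two error estimates.
    candidates = np.linspace(t[-1], t[0] + (t[-1] - t[0]) / 2, n_scan)
    errors = []
    for t0 in candidates:
        part = samples[t <= t0]
        T = t0 - t[0]
        eps_mean = part.std() / np.sqrt(2.0 * B_mean * T)
        eps_std = part.std() / np.sqrt(4.0 * B_std * T)
        errors.append(eps_mean * eps_std)
    return candidates[int(np.argmin(errors))]

B = 1.0 / (2.0 * dt)  # assumed bandwidth (no fitting in this sketch)
t_t = detect_reverse_transient(t, samples, B, B)
```

The minimum lands near the onset of the ramp: keeping samples beyond t ≈ 8 inflates the standard deviation, stopping much earlier only shortens the record.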
Limitations
We observed, however, a limitation of this algorithm. As said earlier, the warm-up period is supposed to distort the mean or the standard deviation compared to the statistically steady state. However, if the signal during the warm-up period stays within the range of amplitudes of the statistically steady signal, the algorithm will not detect the change of regime. The following examples show this limitation.
Takeaways
In this post, we were interested in detecting the warm-up period of a signal. Despite the limitation shown previously, the Python scripts presented here may help you get rid of this period and automate your post-processing for reliable statistics.
The error formulas introduced may also be used to build a workflow that runs until the error on the statistical estimates reaches a certain threshold, or until an adequate confidence interval on the desired quantity is obtained.
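Such a workflow could be sketched as follows. Everything here is our own illustration, not from the paper: the chunked loop stands in for "run the simulation a bit longer", the signal is plain white noise, and the bandwidth is assumed to be the Nyquist frequency:

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 1e-3
B_mean = 1.0 / (2.0 * dt)  # assumed bandwidth of the monitored signal
threshold = 0.02           # target error on the mean estimate
chunk = 1000               # samples produced per "simulation restart"

t_parts, s_parts = [], []
n_chunks, eps = 0, np.inf
while eps > threshold:
    # Stand-in for running the simulation a bit longer: append a chunk
    # of fresh white-noise samples to the record.
    start = n_chunks * chunk
    t_parts.append((start + np.arange(chunk)) * dt)
    s_parts.append(rng.normal(0.0, 1.0, size=chunk))
    n_chunks += 1
    t = np.concatenate(t_parts)
    samples = np.concatenate(s_parts)
    T = t[-1] - t[0]
    # White-noise formula for the error on the mean estimate.
    eps = samples.std() / np.sqrt(2.0 * B_mean * T)
```

The loop stops as soon as the record is long enough for the target accuracy; in a real workflow, each iteration would restart the solver and re-read its output instead of drawing random samples.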
—Antoine Dauptain, Anthony Larroque, Cerfacs
Acknowledgements
Many thanks to Francesco Salvadore and Antonio Memmolo from CINECA for bringing this article to our attention through their initial Fortran implementation, convdetect. This work is part of the centre of excellence EXCELLERAT Phase 2, funded by the European Union. It has received funding from the European High Performance Computing Joint Undertaking (JU) and Germany, Italy, Slovenia, Spain, Sweden, and France under grant agreement No 101092621.