It is known that the Jarzynski equality can be derived for a stochastic process of a classical system by assuming local detailed balance. We study how the equality is modified in a linear feedback system. There, the state follows a linear Langevin equation, measurement is performed on the state, the measured variable is linear in the state variable with additive white sensor noise, a Kalman filter estimates the state from the past measured values, and a linear regulator controls the state dynamics. Although the stochastic process produced by this dynamics is non-Markovian because of the feedback loop, it is known in control theory that a Markov process for the estimation can be separated from the whole process. We find an additional term in the exponent of the Jarzynski equality, whose average gives the mutual information between the state variables and the measured variables in the Markov process for the estimation. The resulting equality holds whether or not the feedback gain is optimal.
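The feedback loop described in the abstract (linear Langevin dynamics, a measurement linear in the state with white sensor noise, Kalman-filter estimation from past measurements, and a linear regulator acting on the estimate) can be sketched numerically. This is a minimal illustrative simulation in discrete time; all parameter values (`a`, `b`, `c`, `q`, `r`, the gain `K_fb`) are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

# Illustrative scalar parameters (assumed, not from the paper)
rng = np.random.default_rng(0)
dt = 0.01
a = -1.0   # drift coefficient of the linear Langevin equation
b = 1.0    # coupling of the control input to the state
c = 1.0    # measurement is linear in the state: y = c*x + sensor noise
q = 0.1    # intensity of the thermal (process) noise
r = 0.05   # intensity of the white sensor noise

# Discrete-time (Euler) approximation of the continuous dynamics
A = 1.0 + a * dt
B = b * dt
Q = q * dt    # process-noise variance per step
R = r / dt    # variance of the discretized white sensor noise

# Linear regulator gain; per the abstract, it need not be optimal
K_fb = 0.5

x = 1.0     # true state
xhat = 0.0  # Kalman estimate of the state
P = 1.0     # error variance of the estimate

errs = []
for _ in range(5000):
    # Control input computed from the estimate (closes the feedback loop)
    u = -K_fb * xhat
    # True Langevin dynamics driven by thermal noise
    x = A * x + B * u + rng.normal(0.0, np.sqrt(Q))
    # Noisy linear measurement of the state
    y = c * x + rng.normal(0.0, np.sqrt(R))
    # Kalman filter, prediction step
    xhat = A * xhat + B * u
    P = A * P * A + Q
    # Kalman filter, update step using the new measurement
    G = P * c / (c * P * c + R)
    xhat = xhat + G * (y - c * xhat)
    P = (1.0 - G * c) * P
    errs.append((x - xhat) ** 2)

# Empirical steady-state estimation error vs. the filter's own variance P
mse = float(np.mean(errs[-1000:]))
print(mse, P)
```

The estimation process (the pair of the estimate and its error variance) is Markovian even though the controlled state trajectory is not, which is the separation the abstract invokes; the empirical squared error settles near the filter's steady-state variance `P`.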