If two variables tend to vary together (that is, when one of them is above its expected value, the other tends to be above its expected value as well), then the covariance between the two variables will be positive. Conversely, if when one of them is above its expected value the other tends to be below its expected value, then the covariance between the two variables will be negative.
For real-valued random variables X and Y with finite second moments, the covariance is defined as

$$\operatorname{Cov}(X, Y) = \operatorname{E}\big[(X - \operatorname{E}[X])\,(Y - \operatorname{E}[Y])\big],$$

where E is the expected value operator. This can also be written:

$$\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y].$$
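As a quick numerical sketch (an illustration only; the toy data and variable names are arbitrary choices, not part of the text above), both formulas can be evaluated with NumPy and compared, using sample means in place of expectations:

```python
import numpy as np

# Check that E[(X - E[X])(Y - E[Y])] and E[XY] - E[X]E[Y] agree,
# using sample means as stand-ins for expectations.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)   # correlated with x by construction

cov_centered = np.mean((x - x.mean()) * (y - y.mean()))
cov_product = np.mean(x * y) - x.mean() * y.mean()
print(cov_centered, cov_product)         # both approximate Cov(X, Y) = 0.5
```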
Random variables whose covariance is zero are called uncorrelated.
If X and Y are independent, then their covariance is zero. This follows because under independence,

$$\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y].$$

Recalling the final form of the covariance derivation given above, and substituting, we get

$$\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] = \operatorname{E}[X]\operatorname{E}[Y] - \operatorname{E}[X]\operatorname{E}[Y] = 0.$$
The converse, however, is generally not true: some pairs of random variables have covariance zero even though they are not independent (see the sketch below). Under some additional assumptions, zero covariance does entail independence, as for example in the case of multivariate normal distributions.
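A minimal sketch of such a pair (the specific distributions here are one conventional choice, not the only one): Y = X² is completely determined by X, yet their covariance vanishes by symmetry.

```python
import numpy as np

# X uniform on [-1, 1] and Y = X**2 are dependent, yet
# Cov(X, Y) = E[X**3] - E[X] * E[X**2] = 0 - 0 = 0 by symmetry.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2

print(np.mean(x * y) - x.mean() * y.mean())  # close to 0 despite full dependence
```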
If X, Y are real-valued random variables and a, b, c, d are constants ("constant" in this context means non-random), then the following facts are a consequence of the definition of covariance:

$$\operatorname{Cov}(X, a) = 0$$
$$\operatorname{Cov}(X, X) = \operatorname{Var}(X)$$
$$\operatorname{Cov}(X, Y) = \operatorname{Cov}(Y, X)$$
$$\operatorname{Cov}(aX, bY) = ab\,\operatorname{Cov}(X, Y)$$
$$\operatorname{Cov}(X + a, Y + b) = \operatorname{Cov}(X, Y)$$
$$\operatorname{Cov}(aX + bY, cX + dY) = ac\,\operatorname{Var}(X) + bd\,\operatorname{Var}(Y) + (ad + bc)\,\operatorname{Cov}(X, Y)$$
For sequences X1, ..., Xn and Y1, ..., Ym of random variables, we have

$$\operatorname{Cov}\!\left(\sum_{i=1}^{n} X_i,\ \sum_{j=1}^{m} Y_j\right) = \sum_{i=1}^{n} \sum_{j=1}^{m} \operatorname{Cov}(X_i, Y_j).$$

For a sequence X1, ..., Xn of random variables, we have

$$\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \operatorname{Var}(X_i) + 2\sum_{i < j} \operatorname{Cov}(X_i, X_j).$$
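The variance-of-a-sum identity can be checked numerically; in the sketch below (the toy covariance matrix is chosen arbitrarily), the sum of all entries of the sample covariance matrix is compared against the variance of the row sums:

```python
import numpy as np

# Verify Var(X1 + ... + Xn) = sum_i Var(X_i) + 2 * sum_{i<j} Cov(X_i, X_j),
# i.e. the sum of every entry of the covariance matrix.
rng = np.random.default_rng(2)
samples = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0, 0.3, 0.1],
         [0.3, 2.0, 0.4],
         [0.1, 0.4, 1.5]],
    size=500_000,
)

lhs = samples.sum(axis=1).var()            # variance of the summed variables
rhs = np.cov(samples, rowvar=False).sum()  # sum over all Cov(X_i, X_j)
print(lhs, rhs)                            # equal up to sampling error
```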
It can be shown that the covariance is an inner product on the subspace of the vector space of random variables that have finite second moment and zero mean, once random variables that are equal almost surely are identified.
For column-vector valued random variables X and Y with respective expected values μ and ν, and with m and n scalar components respectively, the covariance is defined to be the m×n matrix called the covariance matrix:

$$\operatorname{Cov}(X, Y) = \operatorname{E}\big[(X - \mu)(Y - \nu)^{\mathsf{T}}\big].$$
For vector-valued random variables, Cov(X, Y) and Cov(Y, X) are each other's transposes.
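A short sketch of both facts (the dimensions m = 3 and n = 2 and the linear link between X and Y are arbitrary choices for illustration):

```python
import numpy as np

# Estimate the m x n cross-covariance matrix E[(X - mu)(Y - nu)^T] from
# samples and confirm that Cov(X, Y) and Cov(Y, X) are transposes.
rng = np.random.default_rng(3)
X = rng.normal(size=(100_000, 3))                                 # m = 3
Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(100_000, 2))   # n = 2

Xc = X - X.mean(axis=0)                  # center each component
Yc = Y - Y.mean(axis=0)
cov_xy = Xc.T @ Yc / len(X)              # 3 x 2 matrix
cov_yx = Yc.T @ Xc / len(Y)              # 2 x 3 matrix
print(np.allclose(cov_xy, cov_yx.T))     # True
```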
More generally, for a probability measure P on a Hilbert space H with inner product ⟨·,·⟩ (taken here to have zero mean), the covariance of P is the bilinear form on H given by

$$\operatorname{Cov}(x, y) = \int_H \langle x, z\rangle \langle y, z\rangle \, \mathrm{d}P(z)$$

for all x and y in H. The covariance operator C is then defined by

$$\operatorname{Cov}(x, y) = \langle Cx, y\rangle$$
(by the Riesz representation theorem, such an operator exists if Cov is bounded). Since Cov is symmetric in its arguments, the covariance operator is self-adjoint (the infinite-dimensional analogue of the transposition symmetry in the finite-dimensional case). When P is a centred Gaussian measure, C is also a nuclear operator. In particular, it is a compact operator of trace class; that is, it has finite trace.
Even more generally, for a probability measure P on a Banach space B, the covariance of P is the bilinear form on the dual space given by

$$\operatorname{Cov}(x, y) = \int_B \langle x, z\rangle \langle y, z\rangle \, \mathrm{d}P(z),$$

where ⟨x, z⟩ is now the value of the linear functional x on the element z.
Quite similarly, the covariance function of a function-valued random element z (in special cases called a random process or random field) is

$$\operatorname{Cov}(x, y) = \int z(x)\, z(y)\, \mathrm{d}P(z) = \operatorname{E}\big[z(x)\, z(y)\big],$$

where z(x) is now the value of the function z at the point x, i.e., the value of the evaluation functional u ↦ u(x) applied to z.
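As a concrete, hedged illustration of a covariance function (discretized Brownian motion is chosen here purely as an example; the process itself is not discussed above), the empirical covariance of the path values at two time points can be compared with the known closed form min(s, t):

```python
import numpy as np

# Estimate Cov(z(s), z(t)) = E[z(s) z(t)] for zero-mean Brownian paths;
# the theoretical covariance function is min(s, t).
rng = np.random.default_rng(4)
n_paths, n_steps = 200_000, 100
dt = 1.0 / n_steps
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

s_idx, t_idx = 29, 79                    # time points s = 0.3 and t = 0.8
emp = np.mean(paths[:, s_idx] * paths[:, t_idx])
print(emp, min(0.3, 0.8))                # empirical vs. theoretical value
```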
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That phrase does not mean the same thing as in the context of linear algebra (see linear dependence). When the covariance is normalized by the product of the two standard deviations, one obtains the Pearson correlation coefficient (applied entry-wise to a covariance matrix, this normalization yields the correlation matrix):

$$\rho_{X,Y} = \frac{\operatorname{Cov}(X, Y)}{\sigma_X\,\sigma_Y},$$

which gives the goodness of fit for the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.
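A brief sketch of this normalization, comparing the hand-computed coefficient against NumPy's built-in np.corrcoef (the noisy linear data is an arbitrary toy example):

```python
import numpy as np

# Pearson correlation = Cov(X, Y) / (sigma_X * sigma_Y).
rng = np.random.default_rng(5)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)   # a noisy linear relation

r = np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(r, np.corrcoef(x, y)[0, 1])        # the two values agree
```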