Vector fields

Let \(\require{physics} \vb{f}(\vb{x})\) be a vector-valued function, i.e. a vector field: \[\begin{equation} \vb{f}: S \rightarrow \mathbb{R}^m: \vb{x} \mapsto \vb{f}(\vb{x}) \thinspace , \end{equation}\] in which \(S\) is an open subset of \(\mathbb{R}^n\). We will also write: \[\begin{align} \label{eq:vector_field} &\vb{f}(\vb{x}) = (f_1(\vb{x}), f_2(\vb{x}), \dots, f_m(\vb{x})) \\ &\forall f_i: \mathbb{R}^n \rightarrow \mathbb{R}: \vb{x} \mapsto f_i(\vb{x}) \thinspace , \end{align}\] in which the functions \(f_i\) are sometimes called coordinate functions (Burden and Faires 2011). In a sense, the vector field associates to every vector \(\vb{x} \in S\) a vector \(\vb{f}(\vb{x}) \in \mathbb{R}^m\). Since this generalizes the codomain from \(\mathbb{R}\) to \(\mathbb{R}^m\), we can easily generalize the previous formulas for scalar functions to vector fields.
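As a concrete illustration, here is a minimal numerical sketch in Python with JAX; the particular field \(\vb{f}: \mathbb{R}^2 \to \mathbb{R}^3\) is an arbitrary choice of ours, not one from the text. A vector field is simply a function returning the vector of coordinate functions evaluated at \(\vb{x}\):

```python
import jax.numpy as jnp

# An arbitrary example field f: R^2 -> R^3 with coordinate functions
# f_1(x) = x_1 x_2, f_2(x) = sin(x_1), f_3(x) = x_2^2.
def f(x):
    return jnp.array([x[0] * x[1], jnp.sin(x[0]), x[1] ** 2])

x = jnp.array([1.0, 2.0])
print(f(x))  # the vector in R^3 that f associates with x in R^2
```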

We can now write the derivative of \(\vb{f}\) at \(\vb{a}\) with respect to the vector \(\vb{y}\) as \[\begin{equation} \vb{f}'(\vb{a}; \vb{y}) = \lim_{h \to 0} \qty( \frac{\vb{f}(\vb{a} + h \vb{y}) - \vb{f}(\vb{a})}{h} ) \thinspace . \end{equation}\] The vector function \(\vb{f}\) is now called differentiable at a point \(\vb{a}\) if there exists a linear map, called the total derivative of \(\vb{f}\) at \(\vb{a}\), \[\begin{equation} \vb{T}_{\vb{a}}: \mathbb{R}^n \to \mathbb{R}^m \thinspace , \end{equation}\] such that \(\vb{f}\) admits a Taylor formula: \[\begin{equation} \vb{f}(\vb{a} + \vb{v}) = \vb{f}(\vb{a}) + \vb{T}_{\vb{a}}(\vb{v}) + ||\vb{v}|| \vb{E}(\vb{a}, \vb{v}) \thinspace , \end{equation}\] in which \(\vb{E}(\vb{a}, \vb{v}) \to \vb{0}\) as \(||\vb{v}|| \to 0\).
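Numerically, the limit defining \(\vb{f}'(\vb{a}; \vb{y})\) can be approximated with a small but finite \(h\); a sketch reusing the example field from above, with an arbitrary point, direction, and step size:

```python
import jax.numpy as jnp

# Same arbitrary example field f: R^2 -> R^3 as above.
def f(x):
    return jnp.array([x[0] * x[1], jnp.sin(x[0]), x[1] ** 2])

a = jnp.array([1.0, 2.0])
y = jnp.array([0.5, -1.0])

# Finite-difference estimate of f'(a; y); the limit h -> 0 is only
# approximated, so the step size trades truncation against round-off.
h = 1e-4
print((f(a + h * y) - f(a)) / h)
```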

We can write this total derivative also as \[\begin{equation} \vb{T}_{\vb{a}} (\vb{y}) = \vb{f}'(\vb{a}; \vb{y}) = \sum_{i=1}^{m} \qty( \grad{f_i(\vb{a})} \cdot \vb{y} ) \vb{e}_i = \vb{J}(\vb{a}) \vb{y} \thinspace , \end{equation}\] which leads to the conclusion that the matrix \(\vb{J}\), which we will call the Jacobian matrix, is the matrix representation of the total derivative for vector fields.
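This identity is easy to check numerically: a Jacobian-vector product routine evaluates \(\vb{T}_{\vb{a}}(\vb{y})\) directly, and it should agree with explicitly forming \(\vb{J}(\vb{a})\) and multiplying by \(\vb{y}\). A sketch with JAX, using the same example field as above:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.array([x[0] * x[1], jnp.sin(x[0]), x[1] ** 2])

a = jnp.array([1.0, 2.0])
y = jnp.array([0.5, -1.0])

# jax.jvp applies the total derivative T_a to y without building J;
# jax.jacobian forms the full matrix J(a), here of shape (3, 2).
_, Tay = jax.jvp(f, (a,), (y,))
J = jax.jacobian(f)(a)
print(jnp.allclose(Tay, J @ y))  # True: T_a(y) = J(a) y
```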

We can also say that the first-order derivative of a vector field is the matrix \[\begin{equation} \vb{J} \equiv \pdv{\vb{f}}{\vb{x}} \thinspace , \end{equation}\] which is called the Jacobian and has entries \[\begin{equation} \label{eq:jacobian} \vb{J}(\vb{x})_{ij} = \pdv{f_i(\vb{x})}{x_j} \thinspace . \end{equation}\]
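In code, these entries can be read off directly; for the example field above, the entry in row 1, column 2 is \(\partial (x_1 x_2) / \partial x_2 = x_1\):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.array([x[0] * x[1], jnp.sin(x[0]), x[1] ** 2])

x = jnp.array([1.0, 2.0])
J = jax.jacobian(f)(x)
# J[i, j] is the partial derivative of f_i with respect to x_j;
# e.g. J[0, 1] = d(x_1 x_2)/dx_2 = x_1 = 1.0 at this point.
print(J)
```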

Since the gradient of a scalar function is itself a vector field, we can take the Jacobian of this gradient, leading to \[\begin{equation} \pdv{\vb{x}} \bigg( \grad{f(\vb{x})} \bigg) = \vb{H}(\vb{x})^\text{T} \thinspace , \end{equation}\] which means that the Jacobian of the gradient is the transpose of the Hessian. So, for twice continuously differentiable functions, whose Hessian is symmetric, the Hessian is equal to the Jacobian of the gradient.
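This relation can likewise be verified numerically; a sketch for a smooth scalar function of our own choosing:

```python
import jax
import jax.numpy as jnp

# An arbitrary smooth scalar function g: R^2 -> R.
def g(x):
    return x[0] ** 2 * x[1] + jnp.sin(x[1])

x = jnp.array([1.0, 2.0])
jac_of_grad = jax.jacobian(jax.grad(g))(x)  # Jacobian of the gradient
H = jax.hessian(g)(x)                       # Hessian of g
# g is smooth, so H is symmetric and equals the Jacobian of its gradient.
print(jnp.allclose(jac_of_grad, H.T))  # True
```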

A useful formula concerning the derivative of a vector with respect to itself, i.e. the derivative of the identity vector field, is \[\begin{equation} \pdv{\vb{x}}{\vb{x}} = \vb{I} \thinspace . \end{equation}\]
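A one-line check with JAX confirms this:

```python
import jax
import jax.numpy as jnp

# The Jacobian of the identity map x -> x is the identity matrix I.
x = jnp.array([1.0, 2.0, 3.0])
print(jax.jacobian(lambda v: v)(x))  # 3x3 identity matrix
```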

References

Burden, Richard L., and J. Douglas Faires. 2011. Numerical Analysis. 9th ed. Brooks/Cole.