(This article serves as a transition from basic vector algebra to tensor algebra.)
In geometry and linear algebra, a Cartesian tensor is a tensor in Euclidean space which transforms from one rectangular coordinate system to another rectangular coordinate system by means of an orthogonal transformation.
The most familiar rectangular coordinate system is the three-dimensional Cartesian coordinate system. Higher-dimensional generalizations are readily available by means of the standard basis in any finite-dimensional Euclidean space (a vector space over the field of real numbers).
Dyadic tensors were historically the first approach to formulating second-order tensors, similarly triadics for third-order tensors, and so on. Cartesian tensors use the powerful tensor index notation and make the type of a tensor explicit, whereas dyadics could have any type as suggested by the notation.
Basis vectors in 3d
In 3d the standard basis is (e_x, e_y, e_z) = (e_1, e_2, e_3). Each basis vector points along the x-, y-, and z-axis respectively, and the vectors are all unit vectors (or "normalized"), so the basis is orthonormal. The basis vectors point in a direction, while the coordinates are real numbers.
For Cartesian tensors of order 1, a Cartesian vector a can be written algebraically as a linear combination of the basis vectors e_x, e_y, e_z:
$$\mathbf{a} = a_x\mathbf{e}_x + a_y\mathbf{e}_y + a_z\mathbf{e}_z$$
where the coordinates of the vector with respect to the Cartesian basis are denoted a_x, a_y, a_z. Representing the basis vectors as column vectors
$$\mathbf{e}_x = \begin{pmatrix}1\\0\\0\end{pmatrix}\,,\quad \mathbf{e}_y = \begin{pmatrix}0\\1\\0\end{pmatrix}\,,\quad \mathbf{e}_z = \begin{pmatrix}0\\0\\1\end{pmatrix}$$
we equivalently have a coordinate vector in a column vector representation:
$$\mathbf{a} = \begin{pmatrix}a_x\\a_y\\a_z\end{pmatrix}$$
A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used for specific reasons; see Einstein notation and covariance and contravariance of vectors for why.
The term "component" of a vector is ambiguous: it could refer to:
a specific coordinate of the vector such as a_z (a scalar), and similarly for x and y, or
the coordinate scalar-multiplying the corresponding basis vector, in which case the "y-component" of a is a_y e_y (a vector), and similarly for x and z.
Dot product, Kronecker delta, and metric tensor
The dot product · of each basis vector with every other basis vector is simple to calculate, both pictorially and algebraically, since the basis is orthonormal. In cyclic permutations of perpendicular directions we have:
$$\begin{aligned}\mathbf{e}_x\cdot\mathbf{e}_y = \mathbf{e}_y\cdot\mathbf{e}_z = \mathbf{e}_z\cdot\mathbf{e}_x = 0\\ \mathbf{e}_y\cdot\mathbf{e}_x = \mathbf{e}_z\cdot\mathbf{e}_y = \mathbf{e}_x\cdot\mathbf{e}_z = 0\end{aligned}$$
while for parallel directions:
$$\mathbf{e}_x\cdot\mathbf{e}_x = \mathbf{e}_y\cdot\mathbf{e}_y = \mathbf{e}_z\cdot\mathbf{e}_z = 1$$
and all results can be summarized by the rule:
$$\mathbf{e}_i\cdot\mathbf{e}_j = \begin{cases}1 & i = j\\ 0 & i \neq j\end{cases}$$
where i and j are placeholders for x, y, z, or rather 1, 2, and 3; the latter system of numerical labels generalizes to any dimension, as shown next. This rule corresponds to the definition of the Kronecker delta. The Cartesian basis can be used to represent δ:
$$\delta_{ij} = \mathbf{e}_i\cdot\mathbf{e}_j$$
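This orthonormality rule is easy to verify numerically; the following is a minimal NumPy sketch (the array `e` holding the basis vectors as rows is an illustrative choice, not notation from the text):

```python
import numpy as np

# The standard basis in 3d, stored as rows of the identity matrix.
e = np.eye(3)  # e[0], e[1], e[2] play the roles of e_x, e_y, e_z

# Dot every basis vector with every other; the result is the Kronecker delta.
delta = np.array([[np.dot(e[i], e[j]) for j in range(3)] for i in range(3)])
assert np.array_equal(delta, np.eye(3))  # delta_ij = 1 if i == j else 0
```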
In addition, the metric tensor components g_ij with respect to a coordinate system are simply the dot products of each pair of basis vectors:
$$g_{ij} = \mathbf{e}_i\cdot\mathbf{e}_j$$
and by this definition the metric is always symmetric. For the Cartesian basis the components are very simple, arranged into a matrix:
$$\mathbf{g} = \begin{pmatrix}g_{xx}&g_{xy}&g_{xz}\\ g_{yx}&g_{yy}&g_{yz}\\ g_{zx}&g_{zy}&g_{zz}\end{pmatrix} = \begin{pmatrix}\mathbf{e}_x\cdot\mathbf{e}_x&\mathbf{e}_x\cdot\mathbf{e}_y&\mathbf{e}_x\cdot\mathbf{e}_z\\ \mathbf{e}_y\cdot\mathbf{e}_x&\mathbf{e}_y\cdot\mathbf{e}_y&\mathbf{e}_y\cdot\mathbf{e}_z\\ \mathbf{e}_z\cdot\mathbf{e}_x&\mathbf{e}_z\cdot\mathbf{e}_y&\mathbf{e}_z\cdot\mathbf{e}_z\end{pmatrix} = \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$$
so Cartesian coordinates have the simplest possible metric tensor, the δ itself:
$$g_{ij} = \delta_{ij}$$
This is not true in general for other curvilinear coordinate systems; orthogonal coordinates have diagonal metrics containing various scale factors, while general coordinates could have any entries.
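The metric as a table of dot products can be computed as a Gram matrix; a minimal NumPy sketch, with a hypothetical skewed basis chosen purely for contrast:

```python
import numpy as np

def metric(basis):
    """Gram matrix g_ij = e_i . e_j for basis vectors stored as rows."""
    return basis @ basis.T

cartesian = np.eye(3)
print(metric(cartesian))           # the identity: g_ij = delta_ij

# A non-orthonormal basis (illustrative values) gives a different metric:
skewed = np.array([[1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 2.0]])
print(metric(skewed))              # still symmetric, but not the identity
```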
Cross product and the Levi-Civita symbol
For the cross product × of two vectors, the results are (almost) the other way round: cyclic permutations in perpendicular directions yield the next vector in the cyclic collection of vectors:
$$\begin{aligned}\mathbf{e}_x\times\mathbf{e}_y = -\,\mathbf{e}_y\times\mathbf{e}_x = \mathbf{e}_z\\ \mathbf{e}_y\times\mathbf{e}_z = -\,\mathbf{e}_z\times\mathbf{e}_y = \mathbf{e}_x\\ \mathbf{e}_z\times\mathbf{e}_x = -\,\mathbf{e}_x\times\mathbf{e}_z = \mathbf{e}_y\end{aligned}$$
while cross products of parallel vectors clearly vanish:
$$\mathbf{e}_x\times\mathbf{e}_x = \mathbf{e}_y\times\mathbf{e}_y = \mathbf{e}_z\times\mathbf{e}_z = \mathbf{0}$$
and these can be summarized by
$$\mathbf{e}_i\times\mathbf{e}_j = \begin{cases}+\mathbf{e}_k & \text{cyclic permutations: } (i,j) = (x,y),\,(y,z),\,(z,x)\\ -\mathbf{e}_k & \text{anticyclic permutations: } (i,j) = (y,x),\,(z,y),\,(x,z)\\ \mathbf{0} & i = j\end{cases}$$
where again i and j are placeholders for x, y, z or 1, 2, 3, and k denotes the remaining third index. Similarly
$$\mathbf{e}_k\cdot(\mathbf{e}_i\times\mathbf{e}_j) = \begin{cases}+1 & \text{cyclic permutations: } (i,j,k) = (x,y,z),\,(y,z,x),\,(z,x,y)\\ -1 & \text{anticyclic permutations: } (i,j,k) = (y,x,z),\,(z,y,x),\,(x,z,y)\\ 0 & i = j \text{ or } j = k \text{ or } k = i\end{cases}$$
These permutation relations and their corresponding values are important, and there is an object which captures exactly this property: the Levi-Civita symbol. Its entries can be represented by the Cartesian basis:
$$\varepsilon_{ijk} = \mathbf{e}_i\cdot(\mathbf{e}_j\times\mathbf{e}_k)$$
so we can write the cross product of two vectors as:
$$\mathbf{a}\times\mathbf{b} = a^j\mathbf{e}_j\times b^k\mathbf{e}_k = \varepsilon_{ijk}a^j b^k\,\mathbf{e}_i$$
and the scalar triple product as:
$$\mathbf{c}\cdot(\mathbf{a}\times\mathbf{b}) = c^i\mathbf{e}_i\cdot\left(a^j\mathbf{e}_j\times b^k\mathbf{e}_k\right) = \varepsilon_{ijk}c^i a^j b^k$$
These forms of the dot and cross products, as well as various other identities, most notably:
$$\delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp} = \varepsilon_{ijk}\varepsilon_{pqk}$$
greatly facilitate the manipulation and derivation of other identities in vector calculus .
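These component forms lend themselves to a direct numerical check. The sketch below (illustrative NumPy code with arbitrary vectors) builds the Levi-Civita symbol as a 3×3×3 array, recovers the cross and scalar triple products from it, and verifies the contraction identity above:

```python
import numpy as np

# Build the Levi-Civita symbol eps[i, j, k]; indices run 0..2 here.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = +1   # cyclic permutations
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1   # anticyclic permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 10.0])

# Cross product: (a x b)^i = eps_ijk a^j b^k
cross = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross, np.cross(a, b))

# Scalar triple product: c . (a x b) = eps_ijk c^i a^j b^k
triple = np.einsum('ijk,i,j,k->', eps, c, a, b)
assert np.allclose(triple, np.dot(c, np.cross(a, b)))

# Contraction identity: eps_ijk eps_pqk = delta_ip delta_jq - delta_iq delta_jp
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
d = np.eye(3)
rhs = np.einsum('ip,jq->ijpq', d, d) - np.einsum('iq,jp->ijpq', d, d)
assert np.allclose(lhs, rhs)
```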
Despite its appearance, the Levi-Civita symbol is not a tensor, but a tensor density. The tensor index notation applies to any object which has entities that form multidimensional arrays; not everything with indices is a tensor by default. Instead, tensors are defined by how their coordinates and basis elements change under a transformation from one coordinate system to another.
Basis vectors in n dimensions
In n dimensions the standard basis is e_1, e_2, …, e_n. Each basis vector e_i points along the x^i axis, and the basis is still orthonormal.
It is standard to use Einstein notation and tensor index notation for writing vectors as linear combinations of basis vectors in the following way:
$$\mathbf{a} = \sum_i a^i\mathbf{e}_i \equiv a^i\mathbf{e}_i$$
where the upper indices on the coordinates refer to the contravariant components of the vector a, and lower indices on the basis vectors refer to the covariance of these vectors. When an index is repeated twice, the summation sign is suppressed for notational efficiency and clarity. We can also write:
$$\mathbf{a} = \sum_i a_i\mathbf{e}^i \equiv a_i\mathbf{e}^i$$
where the lower indices on the coordinates refer to the covariant components of the vector a , and upper indices on the basis vectors refer to the contravariance of these vectors.
In the row and column vector representations, the j-th component of e_i is the Kronecker delta:
$$(\mathbf{e}_i)_j = \delta_{ij}$$
A powerful advantage of the index notation over coordinate-specific notations is its independence of the dimension of the underlying vector space. Previously, the Cartesian labels x, y, z were just labels and not indices.
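The summation convention maps directly onto np.einsum, where a repeated subscript letter is summed over; a small illustrative sketch:

```python
import numpy as np

e = np.eye(3)                        # basis vectors as rows: e[i] plays e_i
coeffs = np.array([2.0, -1.0, 4.0])  # the components a^i (illustrative values)

# a = a^i e_i  -- the repeated index i is contracted away by einsum.
a = np.einsum('i,ij->j', coeffs, e)
assert np.allclose(a, coeffs)        # trivially the components again, in this basis
```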
The position vector x should take the same form in any coordinate system. Consider the case of rectangular coordinate systems only.
In one rectangular coordinate system, x as a contravector has coordinates x^i and basis vectors e_i, while as a covector it has coordinates x_i and basis vectors e^i, and we have:
$$\mathbf{x} = x^i\mathbf{e}_i\,,\quad \mathbf{x} = x_i\mathbf{e}^i$$
In another rectangular coordinate system, x as a contravector has coordinates x̄^i and basis vectors ē_i, while as a covector it has coordinates x̄_i and basis vectors ē^i, and we have:
$$\mathbf{x} = {\bar{x}}^i{\bar{\mathbf{e}}}_i\,,\quad \mathbf{x} = {\bar{x}}_i{\bar{\mathbf{e}}}^i$$
A vector is invariant under changes of basis and coordinate system, so if the coordinates transform according to a matrix L, the bases must transform according to L⁻¹, and conversely: if the coordinates transform according to L⁻¹, the bases transform according to L. The difference between these transformations is indicated through the indices, as superscripts for contravariance and subscripts for covariance, and the components and bases are linearly transformed according to the following rules:
In each case below, the contravariant transformation law is given first and the covariant one second.

Coordinates:
$${\bar{x}}^{j} = x^{i}L_{i}{}^{j}\,,\qquad {\bar{x}}_{j} = x_{k}\left(L_{j}{}^{k}\right)^{-1}$$
Basis:
$${\bar{\mathbf{e}}}_{j} = \left(L_{j}{}^{k}\right)^{-1}\mathbf{e}_{k}\,,\qquad {\bar{\mathbf{e}}}^{j} = L_{i}{}^{j}\,\mathbf{e}^{i}$$
Full vector:
$${\bar{x}}^{j}{\bar{\mathbf{e}}}_{j} = x^{i}L_{i}{}^{j}\left(L_{j}{}^{k}\right)^{-1}\mathbf{e}_{k} = x^{i}\delta_{i}{}^{k}\mathbf{e}_{k} = x^{i}\mathbf{e}_{i}\,,\qquad {\bar{x}}_{j}{\bar{\mathbf{e}}}^{j} = x_{i}\left(L_{j}{}^{i}\right)^{-1}L_{k}{}^{j}\,\mathbf{e}^{k} = x_{i}\delta^{i}{}_{k}\,\mathbf{e}^{k} = x_{i}\mathbf{e}^{i}$$
Norm (squared):
$$|\mathbf{x}|^{2} = {\bar{x}}^{j}{\bar{x}}_{j} = x^{i}L_{i}{}^{j}\,x_{k}\left(L_{j}{}^{k}\right)^{-1} = x^{i}\delta_{i}{}^{k}x_{k} = x^{i}x_{i}$$
where $L_i{}^j$ represents the entries of the transformation matrix (row number i and column number j), and $(L_i{}^k)^{-1}$ denotes the corresponding entry of the inverse of the matrix $L_i{}^k$.
Each new coordinate is a function of all the old ones, and vice versa for the inverse function:
$${\bar{x}}^i = {\bar{x}}^i\left(x^1, x^2, \cdots\right) \quad\rightleftharpoons\quad x^i = x^i\left({\bar{x}}^1, {\bar{x}}^2, \cdots\right)$$
$${\bar{x}}_i = {\bar{x}}_i\left(x_1, x_2, \cdots\right) \quad\rightleftharpoons\quad x_i = x_i\left({\bar{x}}_1, {\bar{x}}_2, \cdots\right)$$
and similarly each new basis vector is a function of the old ones, and vice versa for the inverse function:
$${\bar{\mathbf{e}}}_j = {\bar{\mathbf{e}}}_j\left(\mathbf{e}_1, \mathbf{e}_2, \cdots\right) \quad\rightleftharpoons\quad \mathbf{e}_j = \mathbf{e}_j\left({\bar{\mathbf{e}}}_1, {\bar{\mathbf{e}}}_2, \cdots\right)$$
$${\bar{\mathbf{e}}}^j = {\bar{\mathbf{e}}}^j\left(\mathbf{e}^1, \mathbf{e}^2, \cdots\right) \quad\rightleftharpoons\quad \mathbf{e}^j = \mathbf{e}^j\left({\bar{\mathbf{e}}}^1, {\bar{\mathbf{e}}}^2, \cdots\right)$$
for all i , j .
Exactly the same transformation rules apply to any vector a : if a does not transform according to this rule, it is not a vector.
If L is orthogonal, the objects transforming by it are defined as Cartesian tensors. Geometrically, this has the interpretation that a rectangular coordinate system is mapped to another rectangular coordinate system, in which the norm of the vector x is preserved (and distances are preserved). If L is an orthogonal transformation (orthogonal matrix), then there are considerable simplifications: the matrix transpose (which is trivial to determine for any matrix) is, by definition, the inverse (usually not as trivial to calculate):
$$\mathbf{L}^{\mathrm{T}} = \mathbf{L}^{-1} \quad\Rightarrow\quad \left(L_i{}^j\right)^{-1} = \left(L_i{}^j\right)^{\mathrm{T}} = L^j{}_i$$
and moreover, the determinant is det(L ) = ±1, which corresponds to two types of orthogonal transformation: (+1) for rotations and (−1) for reflections .
Applying the inverse matrix to each side of each equation in the previous table yields the inverse transformations of coordinates and bases. One observes from the previous table that orthogonal transformations of covectors and contravectors are identical.
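As a concrete sketch (a rotation about the z axis, an arbitrary illustrative choice), one can verify these orthogonality properties numerically:

```python
import numpy as np

theta = 0.3  # an arbitrary angle
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

assert np.allclose(L.T @ L, np.eye(3))     # L^T = L^{-1}
assert np.isclose(np.linalg.det(L), 1.0)   # det = +1: a rotation, not a reflection

x = np.array([1.0, 2.0, 3.0])
x_bar = L @ x                              # transformed coordinates
assert np.isclose(np.linalg.norm(x_bar), np.linalg.norm(x))  # norm preserved
```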
Derivatives and Jacobian matrix elements
The components of L are partial derivatives of the new or old coordinates with respect to the old or new coordinates, respectively.
Differentiating x̄^i with respect to x^k:
$$\frac{\partial{\bar{x}}^i}{\partial x^k} = \frac{\partial}{\partial x^k}\left(L_j{}^i x^j\right) = L_j{}^i\frac{\partial x^j}{\partial x^k} = L_j{}^i\,\delta_k{}^j = L_k{}^i$$
so
$$L_i{}^j = \frac{\partial{\bar{x}}^j}{\partial x^i}$$
is an element of the Jacobian matrix .
There is a (partly mnemonic) correspondence between the index positions attached to L and in the partial derivative: i at the top and j at the bottom, in each case. Many sources state transformations in terms of contractions with this partial derivative.
Conversely, differentiating x^j with respect to x̄^k:
$$\frac{\partial x^j}{\partial{\bar{x}}^k} = \frac{\partial}{\partial{\bar{x}}^k}\left(\left(L_i{}^j\right)^{-1}{\bar{x}}^i\right) = \left(L_i{}^j\right)^{-1}\frac{\partial{\bar{x}}^i}{\partial{\bar{x}}^k} = \left(L_i{}^j\right)^{-1}\delta_k{}^i = \left(L_k{}^j\right)^{-1}$$
so
$$\left(L_i{}^j\right)^{-1} = \frac{\partial x^j}{\partial{\bar{x}}^i}$$
is an element of the inverse Jacobian matrix. Again, there is an index correspondence.
Contracting partial derivatives gives the Kronecker delta:
$$\frac{\partial{\bar{x}}^i}{\partial{\bar{x}}^k} = \frac{\partial{\bar{x}}^i}{\partial x^j}\frac{\partial x^j}{\partial{\bar{x}}^k} = L_j{}^i\left(L_k{}^j\right)^{-1} = L_j{}^i\,L^j{}_k = \delta^i{}_k$$
also
$$\frac{\partial x^i}{\partial x^k} = \frac{\partial x^i}{\partial{\bar{x}}^j}\frac{\partial{\bar{x}}^j}{\partial x^k} = \left(L_j{}^i\right)^{-1}L_k{}^j = L^i{}_j\,L_k{}^j = \delta^i{}_k$$
which parallels the matrix multiplication of the Jacobian and its inverse (in fact, it is the same equation in this case):
$$\mathbf{J}\mathbf{J}^{-1} = \mathbf{J}^{-1}\mathbf{J} = \mathbf{I}$$
As a special case,
$$L_j{}^i\left(L_i{}^j\right)^{-1} = L_j{}^i\,L^j{}_i = \frac{\partial{\bar{x}}^i}{\partial x^j}\frac{\partial x^j}{\partial{\bar{x}}^i} = 1$$
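These relations between L and the partial derivatives can be checked numerically; the sketch below (reusing the illustrative z-axis rotation from above) recovers L as a finite-difference Jacobian of the coordinate map and confirms J J⁻¹ = I:

```python
import numpy as np

theta = 0.3
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

def new_coords(x):
    """The linear coordinate map x_bar = L x."""
    return L @ x

# Central finite differences recover the matrix entries d x_bar^i / d x^j.
x0, h = np.array([1.0, -2.0, 0.5]), 1e-6
J = np.empty((3, 3))
for j in range(3):
    dx = np.zeros(3); dx[j] = h
    J[:, j] = (new_coords(x0 + dx) - new_coords(x0 - dx)) / (2 * h)

assert np.allclose(J, L, atol=1e-6)                    # Jacobian of a linear map is L
assert np.allclose(J @ np.linalg.inv(J), np.eye(3))    # J J^{-1} = I
```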
Projections along coordinate axes
As with all linear transformations, L depends on the basis chosen. Since
$${\bar{\mathbf{x}}} = {\bar{x}}^j{\bar{\mathbf{e}}}_j\,,\quad \mathbf{x} = x^j\mathbf{e}_j$$
for two orthonormal bases
$${\bar{\mathbf{e}}}_i\cdot{\bar{\mathbf{e}}}_j = \mathbf{e}_i\cdot\mathbf{e}_j = \delta_{ij}\,,\quad \left|\mathbf{e}_i\right| = \left|{\bar{\mathbf{e}}}_i\right| = 1\,,$$
projecting x onto the x̄ axes gives:
$${\bar{x}}^i = {\bar{\mathbf{e}}}_i\cdot\mathbf{x} = {\bar{\mathbf{e}}}_i\cdot x^j\mathbf{e}_j = x^j L_j{}^i\,,$$
and projecting x onto the x axes gives:
$$x^i = \mathbf{e}_i\cdot\mathbf{x} = \mathbf{e}_i\cdot{\bar{x}}^j{\bar{\mathbf{e}}}_j = {\bar{x}}^j\left(L_j{}^i\right)^{-1}\,.$$
Hence the components reduce to direction cosines between the x̄^i and x^j axes:
$$L_i{}^j = {\bar{\mathbf{e}}}_i\cdot\mathbf{e}_j = \cos\theta_{ij}$$
$$\left(L_i{}^j\right)^{-1} = \mathbf{e}_i\cdot{\bar{\mathbf{e}}}_j = \cos\theta_{ji}$$
where θ_ij and θ_ji are the angles between the x̄^i and x^j axes; they are not matrix elements (although they could be organized into an array, this is not required). While the numbers e_i · e_j arranged into a matrix would form a symmetric matrix (a matrix equal to its own transpose) due to the symmetry in the dot products, by contrast ē_i · e_j and e_i · ē_j are not symmetric in general. Therefore, while the L matrices are still orthogonal, they are not symmetric.
In addition,
$$\mathbf{e}_i\cdot{\bar{\mathbf{e}}}_j \neq \delta_{ij}$$
unless the basis vectors are identical, ē_i = e_i (in which case there is only the trivial transformation to begin with: no transformation at all).
Collecting all the results obtained:
The transformation components are
$$L_i{}^j = \frac{\partial{\bar{x}}^j}{\partial x^i} = {\bar{\mathbf{e}}}_i\cdot\mathbf{e}_j = \cos\theta_{ij}$$
$$\left(L_i{}^j\right)^{-1} = L^j{}_i = \frac{\partial x^j}{\partial{\bar{x}}^i} = \mathbf{e}_i\cdot{\bar{\mathbf{e}}}_j = \cos\theta_{ji}$$
in matrix form
$$\mathbf{L} = \begin{pmatrix}\frac{\partial{\bar{x}}^1}{\partial x^1}&\frac{\partial{\bar{x}}^1}{\partial x^2}&\frac{\partial{\bar{x}}^1}{\partial x^3}\\ \frac{\partial{\bar{x}}^2}{\partial x^1}&\frac{\partial{\bar{x}}^2}{\partial x^2}&\frac{\partial{\bar{x}}^2}{\partial x^3}\\ \frac{\partial{\bar{x}}^3}{\partial x^1}&\frac{\partial{\bar{x}}^3}{\partial x^2}&\frac{\partial{\bar{x}}^3}{\partial x^3}\end{pmatrix} = \begin{pmatrix}{\bar{\mathbf{e}}}_1\cdot\mathbf{e}_1&{\bar{\mathbf{e}}}_1\cdot\mathbf{e}_2&{\bar{\mathbf{e}}}_1\cdot\mathbf{e}_3\\ {\bar{\mathbf{e}}}_2\cdot\mathbf{e}_1&{\bar{\mathbf{e}}}_2\cdot\mathbf{e}_2&{\bar{\mathbf{e}}}_2\cdot\mathbf{e}_3\\ {\bar{\mathbf{e}}}_3\cdot\mathbf{e}_1&{\bar{\mathbf{e}}}_3\cdot\mathbf{e}_2&{\bar{\mathbf{e}}}_3\cdot\mathbf{e}_3\end{pmatrix} = \begin{pmatrix}\cos\theta_{11}&\cos\theta_{12}&\cos\theta_{13}\\ \cos\theta_{21}&\cos\theta_{22}&\cos\theta_{23}\\ \cos\theta_{31}&\cos\theta_{32}&\cos\theta_{33}\end{pmatrix}$$
$$\mathbf{L}^{-1} = \mathbf{L}^{\mathrm{T}}$$
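A short sketch of this construction (with a hypothetical rotated basis standing in for the barred one): stacking both bases as rows, the matrix of direction cosines is a single matrix product, and it comes out orthogonal but not symmetric:

```python
import numpy as np

e = np.eye(3)                                 # old basis as rows
theta = 0.3                                   # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
e_bar = e @ R.T                               # new basis: each row is R applied to e[i]

L = e_bar @ e.T                               # entry (i, j) = e_bar_i . e_j
assert np.allclose(L @ L.T, np.eye(3))        # orthogonal
assert not np.allclose(L, L.T)                # but not symmetric in general
```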
The transformation of components can be fully written:
$${\bar{x}}^j = x^i L_i{}^j = x^i\frac{\partial{\bar{x}}^j}{\partial x^i} = x^i\,{\bar{\mathbf{e}}}_i\cdot\mathbf{e}_j = x^i\cos\theta_{ij}$$
$$x^i = {\bar{x}}^j\left(L_j{}^i\right)^{-1} = {\bar{x}}^j L^i{}_j = {\bar{x}}^j\frac{\partial x^i}{\partial{\bar{x}}^j} = {\bar{x}}^j\,\mathbf{e}_i\cdot{\bar{\mathbf{e}}}_j = {\bar{x}}^j\cos\theta_{ji}$$
in matrix form
$${\bar{\mathbf{x}}} = \mathbf{L}\mathbf{x}$$
$$\begin{pmatrix}{\bar{x}}^1\\{\bar{x}}^2\\{\bar{x}}^3\end{pmatrix} = \begin{pmatrix}\frac{\partial{\bar{x}}^1}{\partial x^1}&\frac{\partial{\bar{x}}^1}{\partial x^2}&\frac{\partial{\bar{x}}^1}{\partial x^3}\\ \frac{\partial{\bar{x}}^2}{\partial x^1}&\frac{\partial{\bar{x}}^2}{\partial x^2}&\frac{\partial{\bar{x}}^2}{\partial x^3}\\ \frac{\partial{\bar{x}}^3}{\partial x^1}&\frac{\partial{\bar{x}}^3}{\partial x^2}&\frac{\partial{\bar{x}}^3}{\partial x^3}\end{pmatrix}\begin{pmatrix}x^1\\x^2\\x^3\end{pmatrix} = \begin{pmatrix}{\bar{\mathbf{e}}}_1\cdot\mathbf{e}_1&{\bar{\mathbf{e}}}_1\cdot\mathbf{e}_2&{\bar{\mathbf{e}}}_1\cdot\mathbf{e}_3\\ {\bar{\mathbf{e}}}_2\cdot\mathbf{e}_1&{\bar{\mathbf{e}}}_2\cdot\mathbf{e}_2&{\bar{\mathbf{e}}}_2\cdot\mathbf{e}_3\\ {\bar{\mathbf{e}}}_3\cdot\mathbf{e}_1&{\bar{\mathbf{e}}}_3\cdot\mathbf{e}_2&{\bar{\mathbf{e}}}_3\cdot\mathbf{e}_3\end{pmatrix}\begin{pmatrix}x^1\\x^2\\x^3\end{pmatrix} = \begin{pmatrix}\cos\theta_{11}&\cos\theta_{12}&\cos\theta_{13}\\ \cos\theta_{21}&\cos\theta_{22}&\cos\theta_{23}\\ \cos\theta_{31}&\cos\theta_{32}&\cos\theta_{33}\end{pmatrix}\begin{pmatrix}x^1\\x^2\\x^3\end{pmatrix}$$
similarly for
$$\mathbf{x} = \mathbf{L}^{-1}{\bar{\mathbf{x}}} = \mathbf{L}^{\mathrm{T}}{\bar{\mathbf{x}}}$$
The geometric interpretation is that each x̄^i component equals the sum of the x^j components projected onto the x̄^i axis.
Tensors are defined as quantities which transform in a certain way under linear transformations of coordinates.
Let a = a^i e_i and b = b^i e_i be two vectors, so that they transform according to ā^j = a^i L_i{}^j, b̄^j = b^i L_i{}^j.
Taking the tensor product gives:
$$\mathbf{a}\otimes\mathbf{b} = a^i\mathbf{e}_i\otimes b^j\mathbf{e}_j = a^i b^j\,\mathbf{e}_i\otimes\mathbf{e}_j$$
then applying the transformation to the components
$${\bar{a}}^p{\bar{b}}^q = a^i L_i{}^p\, b^j L_j{}^q = L_i{}^p L_j{}^q\, a^i b^j$$
and to the bases
$${\bar{\mathbf{e}}}_p\otimes{\bar{\mathbf{e}}}_q = \left(L_p{}^i\right)^{-1}\mathbf{e}_i\otimes\left(L_q{}^j\right)^{-1}\mathbf{e}_j = \left(L_p{}^i\right)^{-1}\left(L_q{}^j\right)^{-1}\,\mathbf{e}_i\otimes\mathbf{e}_j = L^i{}_p L^j{}_q\,\mathbf{e}_i\otimes\mathbf{e}_j$$
gives the transformation law of an order-2 tensor. The tensor a ⊗ b is invariant under this transformation:
$$\begin{aligned}{\bar{a}}^p{\bar{b}}^q\,{\bar{\mathbf{e}}}_p\otimes{\bar{\mathbf{e}}}_q &= L_k{}^p L_\ell{}^q\, a^k b^\ell\,\left(L_p{}^i\right)^{-1}\left(L_q{}^j\right)^{-1}\mathbf{e}_i\otimes\mathbf{e}_j\\ &= L_k{}^p\left(L_p{}^i\right)^{-1} L_\ell{}^q\left(L_q{}^j\right)^{-1}\, a^k b^\ell\,\mathbf{e}_i\otimes\mathbf{e}_j\\ &= \delta_k{}^i\,\delta_\ell{}^j\, a^k b^\ell\,\mathbf{e}_i\otimes\mathbf{e}_j\\ &= a^i b^j\,\mathbf{e}_i\otimes\mathbf{e}_j\end{aligned}$$
More generally, for any order-2 tensor
$$\mathbf{R} = R^{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j\,,$$
the components transform according to:
$${\bar{R}}^{pq} = L_i{}^p L_j{}^q\, R^{ij}\,,$$
and the basis transforms by:
$${\bar{\mathbf{e}}}_p\otimes{\bar{\mathbf{e}}}_q = \left(L_p{}^i\right)^{-1}\mathbf{e}_i\otimes\left(L_q{}^j\right)^{-1}\mathbf{e}_j$$
If R does not transform according to this rule, then whatever quantity R may be, it is not an order-2 tensor.
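In code, the order-2 law is one einsum contraction per index. A minimal sketch (the matrix entry L[p, i] stands in for L_i{}^p here, and orthogonality makes the inverse a transpose):

```python
import numpy as np

theta = 0.3
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])  # orthogonal: inverse = transpose

R = np.arange(9.0).reshape(3, 3)   # arbitrary order-2 components R^{ij}

# Components: R_bar^{pq} = L_i^p L_j^q R^{ij} -- one factor of L per index.
R_bar = np.einsum('pi,qj,ij->pq', L, L, R)

# Transforming back with the inverse (the transpose) recovers R:
assert np.allclose(np.einsum('pi,qj,pq->ij', L, L, R_bar), R)
```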
Now suppose we have an additional vector c = c^i e_i which transforms according to c̄^j = c^i L_i{}^j.
Taking the tensor product with the other two vectors a and b above gives:
$$\mathbf{a}\otimes\mathbf{b}\otimes\mathbf{c} = a^i\mathbf{e}_i\otimes b^j\mathbf{e}_j\otimes c^k\mathbf{e}_k = a^i b^j c^k\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k$$
then applying the transformation to the components:
$${\bar{a}}^p{\bar{b}}^q{\bar{c}}^r = a^i L_i{}^p\, b^j L_j{}^q\, c^k L_k{}^r = L_i{}^p L_j{}^q L_k{}^r\, a^i b^j c^k$$
and to the bases
$$\begin{aligned}{\bar{\mathbf{e}}}_p\otimes{\bar{\mathbf{e}}}_q\otimes{\bar{\mathbf{e}}}_r &= \left(L_p{}^i\right)^{-1}\mathbf{e}_i\otimes\left(L_q{}^j\right)^{-1}\mathbf{e}_j\otimes\left(L_r{}^k\right)^{-1}\mathbf{e}_k\\ &= \left(L_p{}^i\right)^{-1}\left(L_q{}^j\right)^{-1}\left(L_r{}^k\right)^{-1}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\\ &= L^i{}_p L^j{}_q L^k{}_r\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\end{aligned}$$
gives the transformation law of an order-3 tensor. The tensor a ⊗ b ⊗ c is invariant under this transformation:
$$\begin{aligned}{\bar{a}}^p{\bar{b}}^q{\bar{c}}^r\,{\bar{\mathbf{e}}}_p\otimes{\bar{\mathbf{e}}}_q\otimes{\bar{\mathbf{e}}}_r &= L_\ell{}^p L_m{}^q L_n{}^r\, a^\ell b^m c^n\,\left(L_p{}^i\right)^{-1}\left(L_q{}^j\right)^{-1}\left(L_r{}^k\right)^{-1}\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\\ &= L_\ell{}^p\left(L_p{}^i\right)^{-1} L_m{}^q\left(L_q{}^j\right)^{-1} L_n{}^r\left(L_r{}^k\right)^{-1}\, a^\ell b^m c^n\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\\ &= \delta_\ell{}^i\,\delta_m{}^j\,\delta_n{}^k\, a^\ell b^m c^n\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\\ &= a^i b^j c^k\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\end{aligned}$$
For any order-3 tensor
$$\mathbf{S} = S^{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\,,$$
the components transform according to:
$${\bar{S}}^{pqr} = L_i{}^p L_j{}^q L_k{}^r\, S^{ijk}$$
and the basis transforms by:
$${\bar{\mathbf{e}}}_p\otimes{\bar{\mathbf{e}}}_q\otimes{\bar{\mathbf{e}}}_r = \left(L_p{}^i\right)^{-1}\mathbf{e}_i\otimes\left(L_q{}^j\right)^{-1}\mathbf{e}_j\otimes\left(L_r{}^k\right)^{-1}\mathbf{e}_k\,.$$
If S does not transform according to this rule, then whatever quantity S may be, it is not an order-3 tensor.
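The order-3 law is the same pattern with one more factor of L; an illustrative sketch:

```python
import numpy as np

theta = 0.3
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

S = np.random.default_rng(0).standard_normal((3, 3, 3))  # arbitrary S^{ijk}

# S_bar^{pqr} = L_i^p L_j^q L_k^r S^{ijk} -- one factor of L per index.
S_bar = np.einsum('pi,qj,rk,ijk->pqr', L, L, L, S)

# Transforming back with the transpose (= inverse) recovers S:
assert np.allclose(np.einsum('pi,qj,rk,pqr->ijk', L, L, L, S_bar), S)
```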
More generally, for any order-p tensor
$$\mathbf{T} = T^{i_1 i_2\cdots i_p}\,\mathbf{e}_{i_1}\otimes\mathbf{e}_{i_2}\otimes\cdots\otimes\mathbf{e}_{i_p}$$
the components transform according to:
$${\bar{T}}^{j_1 j_2\cdots j_p} = L_{i_1}{}^{j_1} L_{i_2}{}^{j_2}\cdots L_{i_p}{}^{j_p}\, T^{i_1 i_2\cdots i_p}$$
and the basis transforms by:
$${\bar{\mathbf{e}}}_{j_1}\otimes{\bar{\mathbf{e}}}_{j_2}\otimes\cdots\otimes{\bar{\mathbf{e}}}_{j_p} = \left(L_{j_1}{}^{i_1}\right)^{-1}\mathbf{e}_{i_1}\otimes\left(L_{j_2}{}^{i_2}\right)^{-1}\mathbf{e}_{i_2}\otimes\cdots\otimes\left(L_{j_p}{}^{i_p}\right)^{-1}\mathbf{e}_{i_p}$$
If T does not transform according to this rule, then whatever quantity T may be, it is not an order-p tensor.
Invariance, covariance, contravariance, and tensor type
The transformation law for a general mixed tensor of type (p, q) is as follows.
Tensor components:
$$T_{\underbrace{\scriptstyle{\bar{i}}{\bar{j}}{\bar{k}}\cdots}_{q\ \text{indices}}}^{\overbrace{\scriptstyle{\bar{a}}{\bar{b}}{\bar{c}}\cdots}^{p\ \text{indices}}} = T_{\underbrace{\scriptstyle ijk\cdots}_{q\ \text{indices}}}^{\overbrace{\scriptstyle abc\cdots}^{p\ \text{indices}}}\ \underbrace{L_a{}^{\bar{a}} L_b{}^{\bar{b}} L_c{}^{\bar{c}}\cdots}_{p\ \text{factors}}\ \underbrace{L_{\bar{i}}{}^{i} L_{\bar{j}}{}^{j} L_{\bar{k}}{}^{k}\cdots}_{q\ \text{factors}}$$
Tensor basis:
$$\mathbf{e}_{{\bar{a}}{\bar{b}}{\bar{c}}\cdots}^{{\bar{i}}{\bar{j}}{\bar{k}}\cdots} = \mathbf{e}_{abc\cdots}^{ijk\cdots}\ \underbrace{L_{\bar{a}}{}^{a} L_{\bar{b}}{}^{b} L_{\bar{c}}{}^{c}\cdots}_{p\ \text{factors}}\ \underbrace{L_i{}^{\bar{i}} L_j{}^{\bar{j}} L_k{}^{\bar{k}}\cdots}_{q\ \text{factors}}$$
where
$$\mathbf{e}_{{\bar{a}}{\bar{b}}{\bar{c}}\cdots}^{{\bar{i}}{\bar{j}}{\bar{k}}\cdots} = \underbrace{\mathbf{e}_{\bar{a}}\otimes\mathbf{e}_{\bar{b}}\otimes\mathbf{e}_{\bar{c}}\cdots}_{p\ \text{indices}}\otimes\overbrace{\mathbf{e}^{\bar{i}}\otimes\mathbf{e}^{\bar{j}}\otimes\mathbf{e}^{\bar{k}}\cdots}^{q\ \text{indices}}\,,\qquad \mathbf{e}_{abc\cdots}^{ijk\cdots} = \underbrace{\mathbf{e}_a\otimes\mathbf{e}_b\otimes\mathbf{e}_c\cdots}_{p\ \text{indices}}\otimes\overbrace{\mathbf{e}^i\otimes\mathbf{e}^j\otimes\mathbf{e}^k\cdots}^{q\ \text{indices}}$$
Full tensor (components coupled to basis):
$$T_{{\bar{i}}{\bar{j}}{\bar{k}}\cdots}^{{\bar{a}}{\bar{b}}{\bar{c}}\cdots}\,\mathbf{e}_{{\bar{a}}{\bar{b}}{\bar{c}}\cdots}^{{\bar{i}}{\bar{j}}{\bar{k}}\cdots} = T_{ijk\cdots}^{abc\cdots}\,\mathbf{e}_{abc\cdots}^{ijk\cdots}$$
since
$$L_a{}^{\bar{a}} L_{\bar{a}}{}^{a} = 1\,,\quad L_b{}^{\bar{b}} L_{\bar{b}}{}^{b} = 1\,,\quad\cdots$$
In the special case of Cartesian tensors, covariant and contravariant components and bases are identical. This is not true for general curvilinear coordinate systems.
Second order Cartesian tensors in 3d
Tensor product of Cartesian basis in 3d
A dyadic tensor T is an order-2 tensor formed by the tensor product (denoted by the symbol ⊗) of two Cartesian vectors a and b; again it can be written as a linear combination of the tensor basis e_x ⊗ e_x, e_x ⊗ e_y, …, e_z ⊗ e_y, e_z ⊗ e_z:
$$\begin{aligned}\mathbf{T} = \mathbf{a}\otimes\mathbf{b} &= \left(a_x\mathbf{e}_x + a_y\mathbf{e}_y + a_z\mathbf{e}_z\right)\otimes\left(b_x\mathbf{e}_x + b_y\mathbf{e}_y + b_z\mathbf{e}_z\right)\\ &= a_x b_x\,\mathbf{e}_x\otimes\mathbf{e}_x + a_x b_y\,\mathbf{e}_x\otimes\mathbf{e}_y + a_x b_z\,\mathbf{e}_x\otimes\mathbf{e}_z\\ &\quad + a_y b_x\,\mathbf{e}_y\otimes\mathbf{e}_x + a_y b_y\,\mathbf{e}_y\otimes\mathbf{e}_y + a_y b_z\,\mathbf{e}_y\otimes\mathbf{e}_z\\ &\quad + a_z b_x\,\mathbf{e}_z\otimes\mathbf{e}_x + a_z b_y\,\mathbf{e}_z\otimes\mathbf{e}_y + a_z b_z\,\mathbf{e}_z\otimes\mathbf{e}_z\end{aligned}$$
Representing each basis tensor as a matrix:
$$\mathbf{e}_x\otimes\mathbf{e}_x = \begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}\,,\quad \mathbf{e}_x\otimes\mathbf{e}_y = \begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}\,,\quad \mathbf{e}_x\otimes\mathbf{e}_z = \begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}$$
$$\mathbf{e}_y\otimes\mathbf{e}_x = \begin{pmatrix}0&0&0\\1&0&0\\0&0&0\end{pmatrix}\,,\quad \mathbf{e}_y\otimes\mathbf{e}_y = \begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix}\,,\quad \mathbf{e}_y\otimes\mathbf{e}_z = \begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}$$
$$\mathbf{e}_z\otimes\mathbf{e}_x = \begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}\,,\quad \mathbf{e}_z\otimes\mathbf{e}_y = \begin{pmatrix}0&0&0\\0&0&0\\0&1&0\end{pmatrix}\,,\quad \mathbf{e}_z\otimes\mathbf{e}_z = \begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}$$
so T can be represented more systematically as a matrix:
$$\mathbf{T} = \begin{pmatrix}a_x b_x & a_x b_y & a_x b_z\\ a_y b_x & a_y b_y & a_y b_z\\ a_z b_x & a_z b_y & a_z b_z\end{pmatrix}$$
See matrix multiplication for the notational correspondence between matrices and the dot and tensor products.
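This matrix of products a_i b_j is exactly the outer product; a one-line NumPy sketch with illustrative vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The dyadic a (x) b as a matrix: entry (i, j) is a_i b_j.
T = np.outer(a, b)
assert np.allclose(T, np.einsum('i,j->ij', a, b))
print(T)
```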
Second order reducible tensors in 3d
Cartesian dyadic tensors of second order are reducible, which means they can be re-expressed in terms of the two vectors as follows:
$$\mathbf{T} = \mathbf{T}^{(1)} + \mathbf{T}^{(2)} + \mathbf{T}^{(3)}$$
$$T_{ij}^{(1)} = \frac{a_k b_k}{3}\,\delta_{ij}$$
$$T_{ij}^{(2)} = \frac{1}{2}\left(a_i b_j - a_j b_i\right) = a_{[i}b_{j]}$$
$$T_{ij}^{(3)} = \frac{1}{2}\left(a_i b_j + a_j b_i\right) - \frac{a_k b_k}{3}\,\delta_{ij} = a_{(i}b_{j)} - T_{ij}^{(1)}$$
where δ_ij is the Kronecker delta, the components of the identity matrix. These three terms are irreducible representations, which means they cannot be decomposed further and still be tensors satisfying the defining transformation laws, under which they must be invariant. Each of the irreducible representations transforms like angular momentum according to its number of independent components.
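A short sketch of the decomposition (illustrative vectors; T1, T2, T3 are hypothetical names for the three parts):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
T = np.outer(a, b)

T1 = (np.dot(a, b) / 3.0) * np.eye(3)   # isotropic part: (a_k b_k / 3) delta_ij
T2 = 0.5 * (T - T.T)                    # antisymmetric part: a_[i b_j]
T3 = 0.5 * (T + T.T) - T1               # symmetric traceless part: a_(i b_j) - T1

assert np.allclose(T1 + T2 + T3, T)     # the three parts recover T
assert np.isclose(np.trace(T3), 0.0)    # T3 is indeed traceless
```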
Further reading and applications
S. Lipschutz, M. Lipson (2009). Linear Algebra. Schaum's Outlines (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.