Saturday, April 1, 2017

Basics of Tensors: An Attempt of Making Sense of Tensors

First post of April 2017.  No joke. 

Introduction

In trying to finally make some sense of tensor calculus, here is what I have learned so far.  I still consider myself a novice when it comes to tensor calculus theory.  In tensor notation, both subscripts and superscripts are used as indices.  To be honest, that throws me off, since I'm used to seeing superscripts as symbols for powers.

When I first learned about tensors, I was told tensors were just matrices.  It turns out that matrices are a subset of tensors, and whether we've known it or not, we use tensors every day in mathematics: scalars (numbers), vectors, and matrices.  In a way, scalars can be thought of as 0-dimensional tensors (points), vectors are 1-dimensional tensors, and matrices are 2-dimensional tensors.

To maintain clarity, I will strictly use subscripts, denoted by underscores ( _ ).

The General Definition of Tensors

Einstein Sums:  Note that a and x are not necessarily scalars; they may also be functions.
Σ a_i * x_i from i=1 to n = a_1*x_1 + a_2*x_2 + a_3*x_3 + … + a_n*x_n
In tensor calculus, the sum is simply expressed as a_i * x_i: whenever an index is repeated, summation over that index is implied.
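As a quick check in Python (using NumPy; the arrays here are made-up values), the Einstein sum a_i * x_i is just a dot product:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# Explicit sum: a_1*x_1 + a_2*x_2 + a_3*x_3
explicit = sum(a[i] * x[i] for i in range(3))

# Einstein convention: the repeated index i is summed automatically
einstein = np.einsum('i,i->', a, x)

print(explicit, einstein)   # both print 32.0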

Double Sums: 
a_i,j * x_i * y_j
= Σ_i Σ_j a_i,j * x_i * y_j   (summed over both repeated indices, i and j)
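The double sum works the same way in code; np.einsum sums over both repeated indices at once (again with made-up values):

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])   # a_i,j
x = np.array([5.0, 6.0])                 # x_i
y = np.array([7.0, 8.0])                 # y_j

# Double sum over both repeated indices i and j
double = np.einsum('ij,i,j->', a, x, y)

# The same thing with explicit loops
check = sum(a[i, j] * x[i] * y[j] for i in range(2) for j in range(2))

print(double, check)   # both print 433.0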

On a personal note, I prefer using the Σ symbol, if only for clarity, because without context a_i * x_i reads to me as a single term rather than a sum.

Kronecker Delta: a fundamental function in tensor calculus

δ_i,j = { 1 if i = j; 0 if i ≠ j }
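A minimal sketch in Python (NumPy assumed): collected into a matrix, the Kronecker delta is just the identity matrix:

import numpy as np

def kron_delta(i, j):
    # 1 when the indices match, 0 otherwise
    return 1 if i == j else 0

# As a matrix, the Kronecker delta is the identity
print(np.eye(3))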

Tensor:  An nth-rank tensor in m-dimensional space has n indices and m^n components.  The components can be numbers or functions.

Included in the family of tensors are:
Scalars:  0 indices
Vectors: 1 index
Matrices:  2 indices
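In NumPy terms, the number of indices is exactly the array's ndim; a quick illustration (values made up):

import numpy as np

scalar = np.array(5.0)                  # 0 indices
vector = np.array([1.0, 2.0, 3.0])      # 1 index
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])         # 2 indices

# ndim reports the number of indices (the rank of the tensor)
print(scalar.ndim, vector.ndim, matrix.ndim)   # 0 1 2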

Matrices in Tensor Notation

The notation [ a_i,j ]_m,n denotes a matrix with dimensions m * n, where i and j index each component.

Multiplication of Matrices A and B with:
A = [a_i,j]_m,n
B = [b_i,j]_n,k
AB = [ a_i,r * b_r,j ]_m,k   (summed over the repeated index r)
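A quick NumPy check that the index formula matches ordinary matrix multiplication (made-up entries):

import numpy as np

A = np.arange(6.0).reshape(2, 3)    # [a_i,j]_2,3
B = np.arange(12.0).reshape(3, 4)   # [b_i,j]_3,4

# AB = [ a_i,r * b_r,j ]: the repeated index r is summed
AB = np.einsum('ir,rj->ij', A, B)

print(np.allclose(AB, A @ B))   # True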

Identity Matrix:
I = [ δ_i,j ]_n,n

Transpose Matrix:
A^T = [ a_j,i ]_n,m

Determinant:
det A = e_i_1,i_2,…,i_n * a_1,i_1 * a_2,i_2 * a_3,i_3 * … * a_n,i_n   (summed over the indices i_1 through i_n)

where e is the Levi-Civita symbol, which represents the sign of a permutation of the numbers 1, 2, 3, …, n

In general,

e = { +1 for even permutations (an even number of two-number swaps);
-1 for odd permutations (an odd number of two-number swaps);
0 otherwise }
= Π sign(a_j – a_i) for 1 ≤ i < j ≤ n
= sign(a_2 – a_1) * sign(a_3 – a_1) * … * sign(a_n – a_1) * sign(a_3 – a_2) * sign(a_4 – a_2) * … * sign(a_n – a_2) * … * sign(a_n – a_(n-1))

For 2 dimensions:
e_i,j = { +1 for i=1, j=2; -1 for i=2, j=1; 0 for i=j }

For 3 dimensions:
e_i,j,k = { +1 for permutations (1,2,3), (2,3,1), (3,1,2);
-1 for permutations (3,2,1), (1,3,2), (2,1,3); 0 when any index is repeated }
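To tie these definitions together, here is a short Python sketch (NumPy assumed; the matrix entries are made up) that builds the determinant from permutation signs and compares it with np.linalg.det:

import numpy as np
from itertools import permutations

def sign(perm):
    # Sign of a permutation: product of sign(p_j - p_i) over i < j
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            s *= 1 if perm[j] > perm[i] else -1
    return s

def det(A):
    # det A = sum over permutations of e_{i1..in} * a_{1,i1} * ... * a_{n,in}
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        term = sign(perm)
        for row, col in enumerate(perm):
            term *= A[row][col]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det(A), np.linalg.det(A))   # both 8.0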

Contravariant and Covariant Transformations

Let T_i and W_i be vector fields in coordinate systems x_i and y_i respectively.  For the transformation from T to W:

Contravariant Vector Transformation:  W_i = T_r * (∂y_i / ∂x_r)

Covariant Transformation:  W_i = T_r * (∂x_r / ∂y_i)

The difference between the two transformations lies in the partial derivative: which variable gets differentiated with respect to which.  Note that W and T are assumed to contain functions.
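A small numeric sketch of the difference (Python with NumPy; the matrix A, giving a linear change of coordinates y = A x, and the components of T are made-up values). For a linear change of coordinates, ∂y_i/∂x_r is just A[i, r] and ∂x_r/∂y_i is inv(A)[r, i]:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
T = np.array([4.0, 5.0])   # components of T in the x system

# Contravariant: W_i = T_r * dy_i/dx_r
W_contra = np.einsum('r,ir->i', T, A)

# Covariant: W_i = T_r * dx_r/dy_i
W_co = np.einsum('r,ri->i', T, np.linalg.inv(A))

print(W_contra, W_co)   # the two rules give different components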

Cartesian and Affine Tensors

Cartesian Tensors:  A tensor in 3-dimensional Euclidean space ([x, y, z]).  With Cartesian tensors, there is no distinction between covariant and contravariant indices.

Affine Tensors:  Affine tensors are a specialized version of Cartesian tensors formed by:

T: y_i = a_i,j * x_j   (with the sum over j implied; that is, T: y_i = Σ a_i,j * x_j)

where det [a_i,j] ≠ 0.  In an affine tensor, x and y are considered functions.  An affine transformation preserves parallelism: lines that are parallel stay parallel.
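A quick Python check (NumPy; the matrix, direction, and points are made-up values) that such a map keeps parallel lines parallel:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
d = np.array([1.0, 2.0])            # common direction of two parallel lines
p, q = np.array([0.0, 0.0]), np.array([5.0, -1.0])   # a point on each line

t = np.linspace(0.0, 1.0, 3)
line1 = p + np.outer(t, d)          # points on line 1
line2 = q + np.outer(t, d)          # points on line 2

img1 = line1 @ A.T                  # images y = A x
img2 = line2 @ A.T

# Both image lines have direction A d, hence they are still parallel
print(img1[1] - img1[0], img2[1] - img2[0])   # same vector for both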

The Jacobian of an affine tensor is:

J = [ ∂y_1/∂x_1 … ∂y_1/∂x_n ]
    [     …      …     …    ]
    [ ∂y_n/∂x_1 … ∂y_n/∂x_n ]

And its inverse:

J^-1 = [ ∂x_1/∂y_1 … ∂x_1/∂y_n ]
       [     …      …     …    ]
       [ ∂x_n/∂y_1 … ∂x_n/∂y_n ]
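A short sketch with SymPy (the entries of [a_i,j] are made up) confirming that for an affine tensor the Jacobian is just the coefficient matrix [a_i,j]:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
a = sp.Matrix([[2, 1], [1, 3]])        # [a_i,j] with det != 0
x = sp.Matrix([x1, x2])
y = a * x                              # y_i = a_i,j * x_j

# Jacobian matrix [ dy_i/dx_j ]; for an affine tensor it equals [a_i,j]
J = y.jacobian(x)
print(J)          # Matrix([[2, 1], [1, 3]])
print(J.inv())    # the inverse Jacobian [ dx_i/dy_j ]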

Metric Tensors

Metric Tensor:  a type of function that takes a pair of tangent vectors V and W and returns a scalar, generalizing the dot product.

In the following, x_i is assumed to be a function of an independent variable, like t.

Arc Length:
ds^2 = δ_i,j * dx_i * dx_j, hence (ds/dt)^2 = Σ (dx_i/dt)^2
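As a numeric sanity check (Python with NumPy; the helix is my choice of example curve), integrating ds/dt = √( Σ (dx_i/dt)^2 ) recovers the known arc length:

import numpy as np

# Helix x(t) = (cos t, sin t, t); its exact length over [0, 2*pi] is 2*pi*sqrt(2)
t = np.linspace(0.0, 2.0 * np.pi, 10001)
x = np.stack([np.cos(t), np.sin(t), t])          # x_i as functions of t

dx_dt = np.gradient(x, t, axis=1)                # numerical dx_i/dt
ds_dt = np.sqrt(np.sum(dx_dt**2, axis=0))        # speed along the curve

# Trapezoid rule for the integral of ds/dt
length = np.sum(0.5 * (ds_dt[1:] + ds_dt[:-1]) * np.diff(t))
print(length, 2.0 * np.pi * np.sqrt(2.0))        # both about 8.8858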

Norm:
||V|| = √(V · V) = √(Σ v_i * v_i)

Angle Between Vectors:
cos θ = (U · V) / (||U|| * ||V||)
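A quick check of the norm and angle formulas in Python (NumPy; the vectors are made-up values):

import numpy as np

U = np.array([1.0, 0.0])
V = np.array([1.0, 1.0])

norm_V = np.sqrt(np.dot(V, V))       # ||V|| = sqrt( sum v_i * v_i )
cos_theta = np.dot(U, V) / (np.linalg.norm(U) * np.linalg.norm(V))

print(norm_V)                              # sqrt(2), about 1.4142
print(np.degrees(np.arccos(cos_theta)))    # 45.0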

Area:
∫ ∫ | R_u x R_v | du dv, where R_u and R_v are the partial derivatives of the surface R(u, v) with respect to u and v
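As one last sketch (Python with NumPy; the unit sphere is my choice of example surface), the area integral recovers the sphere's known area of 4π:

import numpy as np

# Unit sphere R(u, v) = (sin u cos v, sin u sin v, cos u); exact area is 4*pi
u = np.linspace(0.0, np.pi, 400)
v = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
U, V = np.meshgrid(u, v, indexing='ij')

R = np.stack([np.sin(U) * np.cos(V), np.sin(U) * np.sin(V), np.cos(U)])
R_u = np.gradient(R, u, axis=1)          # partial derivative of R with respect to u
R_v = np.gradient(R, v, axis=2)          # partial derivative of R with respect to v

integrand = np.linalg.norm(np.cross(R_u, R_v, axis=0), axis=0)   # | R_u x R_v |

# Simple Riemann sum over the grid; good enough for a sketch
du, dv = u[1] - u[0], v[1] - v[0]
area = integrand.sum() * du * dv
print(area, 4.0 * np.pi)                 # both about 12.566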

That is some of the basics of tensor calculus. 

Source:  Kay, David C., Ph.D.  Schaum's Outlines: Tensor Calculus.  McGraw-Hill, New York, 2011.


Eddie

This blog is property of Edward Shore, 2017