Tuesday, 19 May 2020

Unique Transformations, Matrix Diagonalization and Infinite Identical Transformations


Documenting today's learning from the BITS WILP webinar on MFDS.


Construct a linear transformation T : V --> W, where V and W are vector
spaces over F such that the dimension of the kernel space of T is 666. Is such a
transformation unique? Give reasons for your answer.

If we break the problem down, we can ask the following:
1. What do we mean by a unique transformation?
2. What is the kernel space?
3. Does T : V --> W, where V and W are vector spaces over F, carry any special meaning in F (i.e. the field)?
4. What is the significance of the dimension of the kernel space of T being 666?

We will not be able to address the above question without first knowing what vector spaces, linear independence and linear transformations are.

What is a Field, Vector Space and Subspace?
* closure -> * associativity -> * identity -> * inverse -> Group; add commutativity of * -> Abelian Group; a set with two operations (+, x) that is an abelian group under + and (excluding zero) under x, with x distributing over +, is a Field.

Here * denotes any binary operation, such as + or x.

In a vector space we have vectors (magnitude + direction) together with two operations: vector addition and scalar multiplication. A vector space is not itself a field; it is defined over a field F, from which the scalars come, and it must obey the axioms above for addition together with the scalar multiplication axioms.

A vector subspace is a non-empty subset of a vector space that is itself a vector space under the same two operations, vector addition and scalar multiplication (i.e. it is closed under both).

What is linear independence?

Set a linear combination of the vectors equal to zero. If the only coefficients that satisfy this are all zero (the trivial solution), the vectors are called linearly independent; otherwise, if some coefficients that are not all zero also make the combination zero, the homogeneous system has one or more solutions besides the trivial one and the vectors are linearly dependent.

c1 a(1) + c2 a(2) + … + cm a(m) = 0

Equivalently, if at least one of the vectors can be written as a linear combination of the other vectors, then the vectors are called linearly dependent.

The rank of a matrix A is the maximum number of linearly independent row vectors of A.
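As a quick numerical check of this (a minimal sketch in Python/numpy; the vectors are my own made-up example):

    import numpy as np

    # Rows of A are three vectors in R^3; the third row is the sum of the
    # first two, so only two rows are linearly independent.
    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 3.0]])

    print(np.linalg.matrix_rank(A))  # 2, not 3 -> the rows are linearly dependent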

The maximum number of linearly independent vectors in V is the dimension of V, and such a maximal set of linearly independent vectors is called a basis.

The set of all linear combinations of a collection of vectors is called their span. Fewer than the maximum number of linearly independent vectors span only a smaller subspace of the space.

The span is the whole space exactly when the linearly independent vectors form a basis, i.e. dimension of the span = dimension of V.

The standard basis vectors are usually the axes of the coordinate system, as in R^1, R^2, R^3 (1D, 2D, 3D). Apart from those we have the vector spaces of polynomials P1, P2, P3 (linear, quadratic, cubic) and the vector spaces of matrices M12, M13, M21, M22, M23, M31, M32. These extend likewise up to R^n and P^n.

The subspaces of R^3 look like R^2 (planes through the origin) and R^1 (lines through the origin); a reduction in dimension is observed.

It is always good to check that the axioms are satisfied before declaring a set a vector space.

Linear Transformation

Now we know what vector spaces and linear independence are; only with that in place can we pick up momentum and jump to linear transformations.

A transformation is a function mapping a domain to a co-domain. Here the domain is a vector space (a set of vectors), and the co-domain is also a vector space (a set of vectors).

T: V -> U

A transformation is linear only if it satisfies, for all vectors u, v in V and any scalar k:
T(u + v) = T(u) + T(v)
T(kv) = kT(v)
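As a concrete check (a small sketch with a made-up matrix; for T(v) = Av both conditions hold automatically):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])   # T(v) = A @ v is a linear map from R^2 to R^2
    u = np.array([1.0, 2.0])
    v = np.array([3.0, -1.0])
    k = 5.0

    print(np.allclose(A @ (u + v), A @ u + A @ v))  # T(u+v) = T(u) + T(v) -> True
    print(np.allclose(A @ (k * v), k * (A @ v)))    # T(kv) = k T(v)       -> True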

Remember that the matrix A is the actual linear transformer: in AX = B it maps the vector X to the vector B. Associated with it are three vector spaces: the row space, the column space and the null space.

Let T : V => W be a linear transformation.
Range(T) is a subspace of W  [the column space, read off from the RREF of the matrix]
Kernel(T) is a subspace of V  [the solution space of the homogeneous system, from the RREF]
Nullity(T) = dim(Kernel(T))
Rank(T) = dim(Range(T))

dim(Kernel(T)) + dim(Range(T)) = dim(V) -> the Rank-Nullity Theorem.
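A quick sketch verifying the theorem on a made-up matrix (scipy's null_space returns an orthonormal basis of the kernel):

    import numpy as np
    from scipy.linalg import null_space

    # T(v) = A @ v maps R^4 (the domain V) into R^3, so dim V = 4 columns.
    A = np.array([[1.0, 2.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0, 1.0]])   # third row = first row + second row

    rank = np.linalg.matrix_rank(A)        # dim(Range(T))
    nullity = null_space(A).shape[1]       # dim(Kernel(T))
    print(rank, nullity, rank + nullity)   # 2 2 4 -> rank + nullity = dim V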

Now let us try to answer the sub-questions.

1. When do we call a transformation unique?

A linear transformation is uniquely determined once we fix where each basis vector of V maps in the target space; two linear transformations that agree on a basis agree everywhere. This is my understanding, based on an answer on Quora.

2. What is a Kernel space?
In Transformation T: V => U
The kernel space is the set of vectors of V that map to the zero vector in U.

3. Does T : V --> W, where V and W are vector spaces over F, carry any special meaning in F (i.e. the field)?
There is no special meaning in F beyond being the field the scalars come from. A vector space is not a field itself; instead of multiplying vectors by vectors, we have scalar multiplication from F, which is what keeps the transformation linear.

4. What is the significance of the dimension of the kernel space of T being 666?

Any number other than 666 would work just as well. The dimension of the kernel space is the nullity of T, and the kernel is a subspace of V. For the transformation to be determined uniquely, we would need to fix its action on a basis; the basis can be identified from the RREF of the matrix, or the transformation would have to be one-to-one and onto. To explore this we can take a smaller matrix, analogous to the 666-dimensional kernel case, and work through diagonalization and power reduction.

The actual answer to the whole problem is said to be that it is NOT POSSIBLE to have a UNIQUE TRANSFORMATION, because the ---. I still did not understand why.

UPDATE 1:
Post clarification with the lecturer: the question seems to be testing whether the given constraints are complete enough to determine the transformation. The given constraint, the dimension of the kernel space of T being 666, is alone not enough to determine how T maps to elements of the CO-DOMAIN vector space. To me, even the IMAGE or RANGE looks insufficient to determine a UNIQUE TRANSFORMATION. Working with the BASIS seems to be a good approach for reducing the number of mappings to track from DOMAIN to CO-DOMAIN while checking whether a transformation is unique, especially when dealing with hundreds of dimensions.
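To convince myself, here is a minimal sketch (scaling the kernel dimension from 666 down to 1, with two made-up maps): two clearly different transformations can share the same kernel dimension, so that constraint alone cannot make T unique.

    import numpy as np
    from scipy.linalg import null_space

    # Two different linear maps R^3 -> R^3, each with a 1-dimensional kernel.
    T1 = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0]])   # sends the z-axis to zero
    T2 = np.array([[0.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [0.0, 0.0, 5.0]])   # sends the x-axis to zero instead

    print(null_space(T1).shape[1], null_space(T2).shape[1])  # 1 1 -> same nullity
    print(np.array_equal(T1, T2))                            # False -> not unique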



The complete solution is present in the blog below:
https://medium.com/@andrew.chamberlain/the-linear-algebra-view-of-the-fibonacci-sequence-4e81f78935a3

It was an application of the diagonalization property of similar matrices.

D (diagonal matrix) = X^-1 A X, where X is the matrix constructed from the eigenvectors placed column-wise.

The eigenvalues are obtained by solving det(A - lambda I) = 0 (factorizing the characteristic polynomial), and the eigenvectors by then solving (A - lambda I)v = 0 for each eigenvalue.
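A sketch of this in Python/numpy, using the Fibonacci step matrix from the linked post (np.linalg.eig returns the eigenvalues and the eigenvector matrix X with eigenvectors as columns):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 0.0]])       # Fibonacci step matrix from the linked post

    lam, X = np.linalg.eig(A)        # eigenvalues and column-wise eigenvectors
    D = np.linalg.inv(X) @ A @ X     # D = X^-1 A X
    print(np.round(D, 10))           # diagonal matrix holding the eigenvalues

    # Power reduction: A^k = X D^k X^-1, so only diagonal entries get powered.
    k = 10
    Ak = X @ np.diag(lam ** k) @ np.linalg.inv(X)
    print(np.round(Ak))              # matches np.linalg.matrix_power(A, 10)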


The above is a series convergence problem, again an application of diagonalization. Diagonalization is applicable only when the matrix is diagonalizable, i.e. when the eigenvector matrix X is invertible; A is then similar to the diagonal matrix D (similar matrices represent the same transformation expressed in different bases).

Here lambda denotes the eigenvalues. Since A^k = X D^k X^-1, A^k tends to zero as k tends to infinity exactly when every eigenvalue satisfies |lambda| < 1; the eigenvalue should be less than 1 and greater than -1 (negative eigenvalues give an alternating series), hence the modulus is less than 1. Note also that lambda1 * lambda2 * ... * lambdan is the determinant of A, and det(A^k) = det(A)^k tends to zero along with A^k.
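A small numeric sketch of the convergence condition (the matrix is my own example, chosen so that all eigenvalue moduli are below 1):

    import numpy as np

    A = np.array([[0.5, 0.2],
                  [0.1, -0.4]])              # made-up matrix

    lam = np.linalg.eigvals(A)
    print(np.abs(lam))                       # both moduli are < 1
    print(np.prod(lam), np.linalg.det(A))    # product of eigenvalues = det(A)
    print(np.linalg.matrix_power(A, 100))    # effectively the zero matrix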

There was a question from an exam paper related to rotation which also required the application of matrix diagonalization and Euler's formula.
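I don't have that exam question at hand, but here is a hedged sketch of the idea: a 2D rotation matrix has the complex eigenvalues e^(+-i*theta), which is exactly Euler's formula, and its powers simply accumulate the angle.

    import numpy as np

    theta = np.pi / 6                        # a 30-degree rotation
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    lam = np.linalg.eigvals(R)
    print(lam)                               # cos(theta) +/- i*sin(theta)
    print(np.allclose(np.sort_complex(lam),
                      np.sort_complex([np.exp(1j * theta), np.exp(-1j * theta)])))

    # Twelve 30-degree rotations make 360 degrees: R^12 is the identity.
    print(np.round(np.linalg.matrix_power(R, 12), 10))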

Markov Process, Information Theory and Entropy

I often remembered the Markov process during this session because of the application of diagonalization. But a Markov process is a bit more than a linear system: the Markov chain settles down only when it attains a stationary (equilibrium) state, and the relations between the parts of the system can then be determined from the probabilities of occurrence identified at equilibrium. I also realised I had forgotten how closely the Gaussian distribution is related to the Binomial distribution, which is built on binary random variables via the binomial theorem and approaches the Gaussian for a large number of trials. I came to know that one of the Bernoullis arrived at expectation as an equilibrium state, to find the ratio between two given kinds of items, via the Weak Law of Large Numbers. I was also prompted to look into the Central Limit Theorem.
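A hedged sketch of that equilibrium idea (a made-up 2-state chain; the stationary distribution is just the eigenvector of the transition matrix for eigenvalue 1, another place diagonalization shows up):

    import numpy as np

    # Column-stochastic transition matrix of a made-up 2-state Markov chain.
    P = np.array([[0.9, 0.5],
                  [0.1, 0.5]])

    lam, X = np.linalg.eig(P)
    v = X[:, np.argmax(np.isclose(lam, 1.0))]   # eigenvector for eigenvalue 1
    pi = v / v.sum()                            # normalise into probabilities
    print(pi)                                   # stationary distribution ~ [0.833, 0.167]

    # From any starting state, repeated steps converge to the same equilibrium.
    print(np.linalg.matrix_power(P, 50) @ np.array([1.0, 0.0]))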



Formation of Information Theory from the Markov Process.


A reduction in entropy takes place when the data sequence is predictable. This can be related to principal component analysis: just as we reduce the dimensions of data based on principal components, the reduction found in entropy lets us reduce the data sequence / amount of data; that is where we apply encoding techniques to raise the entropy per symbol again without losing information. This is the conclusion I could arrive at from the last paragraph of that blog.
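A small sketch of that entropy reduction (empirical Shannon entropy in bits per symbol; the two toy sequences are my own):

    import math
    from collections import Counter

    def entropy(seq):
        """Empirical Shannon entropy of a sequence, in bits per symbol."""
        counts = Counter(seq)
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    predictable = "aaaaaaaaab"   # highly predictable -> low entropy
    varied = "abcdefghij"        # all symbols distinct -> maximal entropy

    print(entropy(predictable))  # ~0.47 bits/symbol
    print(entropy(varied))       # ~3.32 bits/symbol (log2 of 10)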

