Linear Algebra and Optimization
Chapter Two
Problem 2.1.
1. To prove that $(H,\cdot)$ is a group we must show that the following properties hold:
- Closure: if $A,B \in H$ then $A \cdot B \in H$. To show this, note that any $A \in H$ can be written as $$ \begin{bmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & a & b \\ d & 1 & c \\ e & f & 1 \end{bmatrix} \odot \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} $$ where $\odot$ denotes the Hadamard (entrywise) product of two matrices and $d,e,f$ are arbitrary. In index notation we write this as $$ A_{ij} = a_{ij} \mathbb{1}_{j \geq i}, $$ with $a_{ii}=1$, where $\mathbb{1}_{\text{statement}}=1$ if the statement is true and zero otherwise. Then $$ \begin{align*} (A \cdot B)_{ij}&=\sum_{k=1}^n A_{ik}B_{kj}=\sum_{k=1}^n a_{ik}b_{kj} \mathbb{1}_{k \geq i}\mathbb{1}_{j \geq k} \\ & =\mathbb{1}_{j \geq i}\sum_{k=i}^{j} a_{ik}b_{kj}, \end{align*} $$ since the indicators vanish unless $i \leq k \leq j$, which requires $j \geq i$. Hence $A \cdot B$ is upper triangular, and on the diagonal $(A \cdot B)_{ii}=a_{ii}b_{ii}=1$, and so $A \cdot B \in H$.
- Associativity is inherited from matrix multiplication, which is associative on all $3 \times 3$ matrices, and the identity matrix $I$ (the element with $a=b=c=0$) lies in $H$, so $H$ contains an identity element.
- The inverse $A^{-1}$ of $A \in H$ must have the property $$ A \cdot A^{-1} = A^{-1} \cdot A = I, \quad A^{-1} \in H, $$ i.e., writing $A^{-1}_{kj}=a'_{kj}\mathbb{1}_{j \geq k}$, $$ (A \cdot A^{-1})_{ij}=\sum_{k=1}^n a_{ik}a'_{kj} \mathbb{1}_{k \geq i} \mathbb{1}_{j \geq k} =\delta_{ij}, $$ where $\delta_{ij}=1$ if $i=j$ and zero otherwise. So we look for an inverse of the explicit form $$ \begin{bmatrix} 1 & a' & b' \\ 0 & 1 & c' \\ 0 & 0 & 1 \end{bmatrix}, $$ which automatically satisfies the diagonal and lower triangular conditions. For the upper triangular elements we must have $$ \begin{align*} a'+a & = 0 \\ b'+ac'+b & = 0 \\ c'+c & = 0 \end{align*} $$ and so $$ A^{-1}=\left[\begin{matrix}1 & - a & a c - b\\0 & 1 & - c\\0 & 0 & 1\end{matrix}\right], $$ which is again an element of $H$. If $({H},\cdot)$ were abelian then $A \cdot B = B \cdot A$ for all $A,B \in H$, i.e. $$ \begin{align*} (A\cdot B)_{ij} & = \sum_{k=1}^n a_{ik} b_{kj} \mathbb{1}_{k \geq i} \mathbb{1}_{j \geq k} = \mathbb{1}_{j \geq i}\sum_{k=i}^{j} a_{ik} b_{kj},\\ (B \cdot A)_{ij} & = \sum_{k=1}^n b_{ik} a_{kj} \mathbb{1}_{k \geq i} \mathbb{1}_{j \geq k} = \mathbb{1}_{j \geq i}\sum_{k=i}^{j} b_{ik} a_{kj}, \end{align*} $$ and these sums need not be equal. For example, taking $A$ with $(a,b,c)=(1,0,0)$ and $B$ with $(a,b,c)=(0,0,1)$ gives $(A\cdot B)_{13}=1$ while $(B\cdot A)_{13}=0$, so $H$ is not abelian. (A quick symbolic check of these computations is sketched below.)
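A minimal sympy sketch of the closure, inverse, and non-commutativity claims above (the helper `heis` is an ad hoc name for the generic element of $H$):

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols("a b c d e f")

def heis(x, y, z):
    """The element of H with upper-triangular entries x, y, z."""
    return sp.Matrix([[1, x, y], [0, 1, z], [0, 0, 1]])

A, B = heis(a, b, c), heis(d, e, f)

print(A * B)                                     # upper unitriangular, entries (a + d, a*f + b + e, c + f)
print(A * heis(-a, a*c - b, -c) == sp.eye(3))    # True: the explicit inverse formula works
print(heis(1, 0, 0) * heis(0, 0, 1) == heis(0, 0, 1) * heis(1, 0, 0))  # False: H is not abelian
```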
2. If $\phi$ is a homomorphism between groups with identity elements $e_1$ and $e_2$, then $\phi(e_1)=\phi(e_1 e_1)=\phi(e_1)\phi(e_1)$, and multiplying both sides by $\phi(e_1)^{-1}$ gives $\phi(e_1)=e_2$. Since $a a^{-1} = e_1$, $$ \phi(a a^{-1})=\phi(a) \phi(a^{-1})=\phi(e_1)=e_2. $$ Therefore $\phi(a^{-1})=\phi(a)^{-1}$.
3. For any $A \in H$ the entries $a, b, c$ range over all of $\mathbb{R}$. Since every element of $S^1$ can be written as $e^{ib}$ for some $b \in \mathbb{R}$, we conclude that $\phi$ is surjective.
To prove that $(G,\cdot)$ is a group, first note that closure is a consequence of the closure of $\mathbb{R}$ w.r.t. addition and multiplication and the fact that if $u_1,u_2 \in S^1$ then, since $e^{ix_1y_2} \in S^1$, $e^{ix_1y_2}u_1 u_2 \in S^1$ (closure of $S^1$ under multiplication). To prove associativity write $$ \begin{align*} & ((x_1,y_1,u_1)\cdot(x_2,y_2,u_2))\cdot(x_3,y_3,u_3)=\\ & \kern3ex= (x_1+x_2,y_1+y_2,e^{ix_1y_2}u_1u_2)\cdot(x_3,y_3,u_3)=\\ & \kern3ex= (x_1+x_2+x_3,y_1+y_2+y_3,e^{i(x_1+x_2)y_3}e^{ix_1y_2}u_1u_2u_3)=\\ & \kern3ex= (x_1+(x_2+x_3),y_1+(y_2+y_3),e^{ix_1(y_2+y_3)}u_1(e^{ix_2y_3}u_2u_3))=\\ & \kern3ex= (x_1,y_1,u_1)\cdot(x_2+x_3,y_2+y_3,e^{ix_2y_3}u_2u_3)=\\ & \kern3ex= (x_1,y_1,u_1)\cdot((x_2,y_2,u_2)\cdot(x_3,y_3,u_3)), \end{align*} $$ where the third and fourth lines agree because $(x_1+x_2)y_3+x_1y_2=x_1(y_2+y_3)+x_2y_3$. The identity element of $G$ is $(0,0,1)$, built from the identity elements of $(\mathbb{R},+)$ and $(S^1,\cdot)$. The inverse of $(x,y,u)$ must have the property $$ (x,y,u)\cdot(x',y',u')=(x+x',y+y',e^{ixy'}uu')=(0,0,1) $$ and so $x'=-x,\ y'=-y, \ u'=e^{ixy}u^{-1}$; one checks that this is also a left inverse. $\phi$ is a homomorphism if for $A,B \in H$ $$ \phi(A \cdot B) = \phi(A) \cdot \phi(B). $$ Since $$ \begin{bmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & d & e \\ 0 & 1 & f \\ 0 & 0 & 1 \end{bmatrix}= \begin{bmatrix} 1& a+ d & af +b + e \\ 0 & 1 & c+ f \\ 0 & 0 & 1 \end{bmatrix}, $$ we have $$ \begin{align*} \phi(A \cdot B) & =(a+d, c+f,e^{i(af+b+e)}) \\ & =(a+d, c+f,e^{iaf}e^{ib}e^{ie}) \\ & =(a,c,e^{ib}) \cdot (d,f,e^{ie}) \\ & = \phi(A) \cdot \phi(B). \end{align*} $$
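A quick numeric sanity check of the homomorphism property (a minimal sketch; the helper names `g_mul` and `phi` are ad hoc, not from the text):

```python
import cmath

def g_mul(p, q):
    """Group law on G: (x1, y1, u1) . (x2, y2, u2) = (x1 + x2, y1 + y2, exp(i*x1*y2) * u1 * u2)."""
    (x1, y1, u1), (x2, y2, u2) = p, q
    return (x1 + x2, y1 + y2, cmath.exp(1j * x1 * y2) * u1 * u2)

def phi(a, b, c):
    """phi sends the element of H with entries (a, b, c) to (a, c, exp(i*b))."""
    return (a, c, cmath.exp(1j * b))

a, b, c = 0.3, -1.2, 2.0     # entries of A
d, e, f = 1.7, 0.5, -0.4     # entries of B
lhs = phi(a + d, a*f + b + e, c + f)      # phi(A . B), using the matrix product computed above
rhs = g_mul(phi(a, b, c), phi(d, e, f))   # phi(A) . phi(B)
print(all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs)))  # True
```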
4. We are looking for the set of matrices $A \in H$ such that $$ \phi(A)=(0,0,1), $$ the identity element of $G$. From the definition of $\phi$ we must have $$ a=0, \quad c=0, \quad b =2\pi n $$ where $n \in \mathbb{Z}$. Hence any $A \in \ker\phi$ can be written as $A=I + 2\pi n E_{13}$, where $E_{13}$ denotes the matrix with a $1$ in position $(1,3)$ and zeros elsewhere. Closure follows since if $A=I+2\pi n E_{13}$ and $B=I+2\pi m E_{13}$ are in $\ker\phi$ then $$ \begin{align*} A \cdot B & = (I + 2 \pi n E_{13})\cdot(I + 2\pi m E_{13})\\ & = I +2 \pi (n+m) E_{13} + (2\pi)^2nm E_{13}^2 \\ & =I +2 \pi (n+m) E_{13} \in \ker\phi, \end{align*} $$ since $E_{13}^2=0$. Associativity is inherited; the identity element is $I$ ($n=0$) while the inverse of $A=I + 2\pi n E_{13}$ is $A^{-1}=I - 2\pi n E_{13}$. Hence $\ker\phi$ is a subgroup of $H$.
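A small symbolic check of the kernel computation (again a sketch using sympy; `E13` and `I3` are just local names):

```python
import sympy as sp

n, m = sp.symbols("n m", integer=True)
E13 = sp.Matrix([[0, 0, 1], [0, 0, 0], [0, 0, 0]])   # matrix unit with a 1 in position (1,3)
I3 = sp.eye(3)

A = I3 + 2 * sp.pi * n * E13
B = I3 + 2 * sp.pi * m * E13

print(E13 * E13)    # zero matrix, so the cross term in A*B drops
print(A * B == I3 + 2 * sp.pi * n * E13 + 2 * sp.pi * m * E13)   # True: A*B = I + 2*pi*(n+m)*E13
# phi maps A to (0, 0, exp(2*pi*i*n)), which equals (0, 0, 1) for integer n, e.g. n = 5:
print(sp.exp(2 * sp.pi * sp.I * n).subs(n, 5))   # 1
```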
Problem 2.2.
To check that $m\mathbb{Z}=\{mk \mid k\in \mathbb{Z}\}$ is an abelian subgroup of $\mathbb{Z}$, first note that $mk_1+mk_2=m(k_1+k_2)$ (closure). Addition is commutative and associative in $\mathbb{Z}$ and $m\mathbb{Z}$ inherits these properties. It also inherits the identity element ($k=0$), and the inverse of $mk$ is $m(-k)=-mk$.
1. Define $\phi(mk):=k$; this is a homomorphism since $$ \phi(mk_1+mk_2)=\phi(m(k_1+k_2))=k_1+k_2=\phi(mk_1)+\phi(mk_2). $$ Since $\ker\phi=\{0\}$ it is injective, and it is clearly surjective, so it is an isomorphism.
2. If $i(mk):=mk$ then $$ i(mk_1+mk_2)=i(m(k_1+k_2))=m(k_1+k_2)=mk_1+mk_2= i(mk_1)+i(mk_2), $$ hence the inclusion map is a group homomorphism. If $p:\mathbb{Z} \to m\mathbb{Z}$ is a homomorphism then $$ p (l_1+l_2)=p(l_1)+p(l_2)=mk_1+m k_2 $$ where $l_1,l_2 \in \mathbb{Z}$ and $p(l_1)=mk_1$, $p(l_2)=mk_2$ for some $k_1,k_2 \in \mathbb{Z}$. If $p \circ i={\rm id}$ then $p$ is a left inverse of $i$. By Proposition 2.16 $i$ must then be an isomorphism. However, unless $m=1$, $i$ is not an isomorphism since it is not surjective, so no such $p$ exists for $m>1$.
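Independently of Proposition 2.16, one can also argue directly that no such $p$ exists when $m>1$: a homomorphism $p:\mathbb{Z}\to m\mathbb{Z}$ satisfies $p(l)=l\,p(1)$ with $p(1)=ms$ for some $s\in\mathbb{Z}$, so $$ (p\circ i)(mk)=p(mk)=mk\,p(1)=m^2 s k, $$ and $p\circ i={\rm id}$ would require $m^2 s k=mk$ for all $k$, i.e. $ms=1$, which has no integer solution $s$ when $m>1$.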
Problem 2.3.
As defined $E$ is an abelian group w.r.t. $+$. From the definition of $\cdot$ we have $$ \begin{align*} \lambda\cdot((x_1,y_1)+(x_2,y_2))&=\lambda\cdot(x_1+x_2,y_1+y_2) =(\lambda(x_1+x_2),y_1+y_2)\\ &=(\lambda x_1+\lambda x_2,y_1+y_2) =(\lambda x_1,y_1)+(\lambda x_2,y_2)\\ &=\lambda\cdot(x_1,y_1)+\lambda\cdot(x_2,y_2), \end{align*} $$ so axiom $\text{(V1)}$ holds. Next, $$ \begin{align*} (\lambda + \mu) \cdot (x,y)&=((\lambda+\mu)x,y)=(\lambda x + \mu x,y) =(\lambda x,y)+(\mu x,0)\\ &=\lambda\cdot(x,y)+\mu\cdot(x,0)\neq\lambda\cdot(x,y)+\mu\cdot(x,y)=((\lambda+\mu)x,2y) \end{align*} $$ whenever $y \neq 0$, so axiom $\text{(V2)}$ fails. Further, $$ \begin{align*} (\lambda \mu)\cdot(x,y)&=((\lambda\mu)x,y)=(\lambda(\mu x),y)\\ &=\lambda\cdot(\mu x,y)=\lambda\cdot(\mu\cdot(x,y)), \end{align*} $$ so axiom $\text{(V3)}$ holds. Finally, $$ 1\cdot(x,y)=(1*x,y)=(x,y), $$ so axiom $\text{(V4)}$ holds.
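A minimal numeric illustration of the failing axiom, with ad hoc helpers `smul` and `add` standing for the modified scalar multiplication and the usual addition on $E$:

```python
def smul(lam, v):
    """Modified scalar multiplication on E: lam . (x, y) = (lam * x, y)."""
    x, y = v
    return (lam * x, y)

def add(v, w):
    """Ordinary componentwise addition on E."""
    return (v[0] + w[0], v[1] + w[1])

lam, mu, v = 2.0, 3.0, (1.0, 1.0)
print(smul(lam + mu, v))               # (5.0, 1.0)
print(add(smul(lam, v), smul(mu, v)))  # (5.0, 2.0) -- so (V2) fails whenever y != 0
```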
Problem 2.4.
1. Below we (mostly) omit steps that use the properties of a field. $$ \begin{align*} \alpha\cdot 0&=\alpha\cdot(u-u), \quad \text{any vector $u$ has an additive inverse} \\ &=\alpha \cdot u - \alpha \cdot u, \quad \text{ (V1)} \\ & =0, \quad \text{the additive inverse of $\alpha \cdot u$ is $-\alpha \cdot u$} \end{align*} $$ $$ \begin{align*} 0 \cdot v &= (\alpha - \alpha) \cdot v, \quad \text{$K$ is a field} \\ & = \alpha \cdot v - \alpha \cdot v, \quad \text{(V2)} \\ & =0, \quad \text{the additive inverse of $\alpha \cdot v$ is $-\alpha \cdot v$} \end{align*} $$ $$ \begin{align*} \alpha \cdot (-v) & = \alpha \cdot (-1 \cdot v) \\ & = (\alpha * -1) \cdot v, \quad \text{ (V3)} \\ & = - (\alpha \cdot v) \end{align*} $$ $$ \begin{align*} (-\alpha) \cdot v & = (-1 *\alpha) \cdot v \\ & = -1\cdot (\alpha\cdot v), \quad \text{(V3)} \\ & = - (\alpha \cdot v), \quad \text{ the additive inverse of $\alpha \cdot v$ is $-1\cdot(\alpha \cdot v)$} \end{align*} $$
2. Axiom $\text{(V1)}$ states that $$ \alpha\cdot(u+v)=\alpha\cdot u + \alpha \cdot v $$ where $u,v$ are vectors and $\alpha$ is a scalar. Then $$ \begin{align*} \alpha \cdot x &= \alpha \cdot (x_1 e_1 + x_2 e_2 + \cdots + x_n e_n) \\ & = \alpha \cdot (x_1 e_1 + ( x_2 e_2 + \cdots + x_n e_n)) \\ & = \alpha \cdot (x_1 e_1)+ \alpha \cdot (x_2 e_2 + \cdots + x_n e_n) \\ & \kern15ex \vdots \\ & = \alpha \cdot (x_1 e_1)+\cdots + \alpha \cdot (x_{n-2} e_{n-2})+ \alpha \cdot(x_{n-1}e_{n-1}+ x_n e_n) \\ & = \alpha\cdot (x_1 e_1)+ \cdots + \alpha\cdot(x_n e_n). \end{align*} $$ Since for any $i=1,2,\ldots,n$, $$ \alpha\cdot(x_i e_i)=\alpha\cdot(x_i\cdot e_i)=(\alpha * x_i) \cdot e_i \\ =(x_i * \alpha) \cdot e_i=x_i\cdot(\alpha\cdot e_i), $$ given $\{\alpha \cdot e_i\}_{i=1}^n$ we can determine the result of scalar multiplication on any vector from its action on the basis vectors.
3. Let's work backwards. We want to define scalar multiplication so that $$1 \cdot u \neq u;$$ if we denote $1\cdot u=\nu(u)$ then, if axiom $\text{(V3)}$ holds, $$ (1*1)\cdot u=1\cdot(1 \cdot u)=1\cdot \nu(u)=\nu^2(u). $$ Since $1*1=1$ we conclude that $\nu(u)=\nu^2(u)$ and so, in this setting, multiplication by the scalar $1$ must act as a projection. One example is $$ \alpha \cdot x=(\alpha x_1,\ldots,\alpha x_{n-1},0), $$ which for $\alpha=1$ gives the desired result whenever $x_n \neq 0$. All other axioms follow; for example $$ \begin{align*} \alpha\cdot(x+y)&=(\alpha(x_1+y_1),\ldots,\alpha(x_{n-1}+y_{n-1}),0) \\ & =(\alpha x_1 ,\ldots,\alpha x_{n-1},0)+(\alpha y_1,\ldots,\alpha y_{n-1},0) \\ & = \alpha \cdot x + \alpha \cdot y. \end{align*} $$
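Here is a small numeric check, for $n=3$, that the projection example satisfies (V1)–(V3) while (V4) fails (the helpers `smul` and `vadd` are ad hoc names):

```python
def smul(alpha, x):
    """Modified scalar multiplication on R^n: scale the first n-1 coordinates, zero the last."""
    return [alpha * xi for xi in x[:-1]] + [0.0]

def vadd(x, y):
    return [xi + yi for xi, yi in zip(x, y)]

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
lam, mu = 2.0, 3.0

print(smul(lam, vadd(x, y)) == vadd(smul(lam, x), smul(lam, y)))  # (V1) holds: True
print(smul(lam + mu, x) == vadd(smul(lam, x), smul(mu, x)))       # (V2) holds: True
print(smul(lam * mu, x) == smul(lam, smul(mu, x)))                # (V3) holds: True
print(smul(1.0, x) == x)                                          # (V4) fails: False
```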
4. If $n \in \mathbb{N}$ then, writing $1 \cdot x=(y_1,\ldots,y_n)$ and using $\text{(V2)}$, $$ \begin{align*} n\cdot x & = (1+ \cdots + 1) \cdot x \\ & = 1 \cdot x + \cdots + 1 \cdot x \\ & = (y_1,\ldots,y_n)+ \cdots+(y_1,\ldots,y_n)\\ & = (n y_1,\ldots, n y_n)=n(y_1,\ldots,y_n)\\ &= n(1 \cdot x). \end{align*} $$ Next use $\text{(V3)}$: $$ 1\cdot x =\left( \frac{1}{n}* n\right)\cdot x=n\cdot\left(\frac{1}{n} \cdot x\right)=(1+\cdots+1)\cdot \left(\frac{1}{n}\cdot x\right). $$ If $\frac{1}{n}\cdot x=(y_1',\ldots,y_n')$ then, applying the previous result to the vector $\frac{1}{n}\cdot x$ and noting that $1\cdot\left(\frac{1}{n}\cdot x\right)=\left(1*\frac{1}{n}\right)\cdot x=\frac{1}{n}\cdot x$ by $\text{(V3)}$, since $1\cdot x=(y_1,\ldots,y_n)$ we must have $$ (y_1,\ldots,y_n)=(ny_1',\ldots,n y_n'). $$ This is possible only if $y_i'=y_i/n$ for $i=1,\ldots,n$ and so we find that $$ \frac{1}{n}\cdot x=\frac{1}{n}(1 \cdot x). $$
Let $r=m/n$ be a rational number with $m,n \in \mathbb{N}$; then $$ \begin{align*} \frac{m}{n}\cdot x &= \left( m * \frac{1}{n} \right) \cdot x =m \cdot \left( \frac{1}{n} \cdot x\right)=m \cdot \left(\frac{1}{n} (1\cdot x) \right) \\ & =\frac{1}{n}\left(m \cdot (1 \cdot x)\right) =\frac{1}{n}\left(1 \cdot (m \cdot x)\right)=\frac{1}{n}\left( 1 \cdot m (1\cdot x) \right) \\ &=\frac{m}{n}(1\cdot(1\cdot x))=r((1*1)\cdot x)=r(1 \cdot x). \end{align*} $$ Any vector $x$ has the following expansion given a basis $\{e_1,\ldots,e_n\}$: $$ x= x_1 e_1 + \cdots + x_n e_n. $$ Given $1\cdot x =y$ and $$ y=y_1 e_1 + \cdots + y_n e_n, $$ we have $$ 1\cdot x = x_1 (1\cdot e_1)+\cdots+x_n (1\cdot e_n). $$ Define $1 \cdot e_i = \sum_{j=1}^n \nu_{ji} e_j$ where the $\nu_{ji}$ are scalars. Then $$ y_j = \sum_{i=1}^n \nu_{ji}x_i. $$ So, given any vector $x$ and the coefficients $\nu$ that determine the action of $1$ on a basis of $\mathbb{R}^n$, we can determine $1 \cdot x$. Then, since $r \cdot x=r\,(1\cdot x)=r y$, we can determine the action of multiplication by any rational scalar.
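The bookkeeping in the last paragraph can be illustrated with a small numpy sketch; the matrix `nu` below is a hypothetical choice of the coefficients $\nu_{ji}$, used only to show how $1\cdot x$ and then $r\cdot x$ are reconstructed:

```python
import numpy as np

# Hypothetical coefficients nu_{ji}: column i holds the coordinates of 1 . e_i.
nu = np.array([[1.0, 0.5, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 2.0]])

def one_dot(x):
    """1 . x reconstructed from the basis images: y_j = sum_i nu_{ji} x_i."""
    return nu @ x

def r_dot(r, x):
    """r . x = r (1 . x) for a rational scalar r, as derived above."""
    return r * one_dot(x)

x = np.array([1.0, 2.0, 3.0])
print(one_dot(x))       # [2. 2. 6.]
print(r_dot(0.5, x))    # [1. 1. 3.]
```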
Problem 2.5.
Linear independence requires that $$ a\begin{pmatrix}2\\1\\-3 \end{pmatrix} + b\begin{pmatrix}3\\2\\-5 \end{pmatrix} + c\begin{pmatrix}1\\-1\\1 \end{pmatrix}=0 \Rightarrow a=b=c=0. $$ Starting from the first and third component equations $$ \begin{align*} 2a+3b+c & = 0 \\ -3a-5b+c &=0 \end{align*} $$ we obtain, subtracting the second from the first, $$ 5a+8b=0. $$ Then from the first and second component equations $$ \begin{align*} 2a+3b+c & = 0 \\ a+2b-c &=0 \end{align*} $$ we have, adding them, $$ 3a+5b=0. $$ The only solution to this system of equations is $a=b=0$ since $$ 3(5a+8b)-5(3a+5b)=-b=0 $$ and then $5a=0$. Substituting $a=b=0$ in any of the three equations gives $c=0$. The basis expansion of $x=(6,2,-7)$ in terms of these three independent vectors requires that $$ a\begin{pmatrix}2\\1\\-3 \end{pmatrix} + b\begin{pmatrix}3\\2\\-5 \end{pmatrix} + c\begin{pmatrix}1\\-1\\1 \end{pmatrix}= \begin{pmatrix}6\\2\\-7 \end{pmatrix}. $$ It is easy to check by substitution that $a=b=c=1$.
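A quick numerical confirmation with numpy (a sketch; the columns of `M` are the three vectors above, in the same order):

```python
import numpy as np

# Columns are the three candidate basis vectors.
M = np.array([[ 2.0,  3.0,  1.0],
              [ 1.0,  2.0, -1.0],
              [-3.0, -5.0,  1.0]])

print(np.linalg.det(M))          # 1.0 (up to rounding): nonzero, so the columns are independent
print(np.linalg.solve(M, np.array([6.0, 2.0, -7.0])))   # [1. 1. 1.], i.e. a = b = c = 1
```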