
### Orthonormal Vectors

A list \(e_{1},\ldots,e_{n}\) of vectors in \(V\) is called _orthonormal_ if

\[\langle e_{j},e_{k}\rangle=\begin{cases}1&\text{if}\;\;j=k,\\ 0&\text{if}\;\;j\neq k\end{cases}.\]

**Example 69**.: The standard basis \(\{e_{1},e_{2},\ldots,e_{n}\}\) of \(\mathbb{R}^{n}\) is an orthonormal list, where by \(e_{i}\) we mean the vector in \(\mathbb{R}^{n}\) whose \(i\)-th component equals \(1\) and whose other components are all zero. For example, \(\{e_{1}=(1,0,0),e_{2}=(0,1,0),e_{3}=(0,0,1)\}\) is an orthonormal list in \(\mathbb{R}^{3}\). Another example of an orthonormal list in \(\mathbb{R}^{3}\) is

\[\left\{\left(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right),\;\left(\frac{-1}{\sqrt{2}},\frac{1}{\sqrt{2}},0\right),\;\left(\frac{-1}{\sqrt{6}},\frac{-1}{\sqrt{6}},\frac{2}{\sqrt{6}}\right)\right\}.\]

Orthonormal lists are easy to work with: since an orthonormal list is linearly independent (as we verify next), an orthonormal list whose length equals \(\dim V\) is automatically an orthonormal basis of \(V\). Indeed, suppose \(a_{1}e_{1}+\cdots+a_{m}e_{m}=0\). Then the fact that

\[||a_{1}e_{1}+\cdots+a_{m}e_{m}||^{2}=|a_{1}|^{2}+\cdots+|a_{m}|^{2}\]

for all scalars \(a_{1},\ldots,a_{m}\) forces every \(a_{j}\) to be zero. Thus \(\{e_{1},e_{2},\ldots,e_{m}\}\) is linearly independent. Another important property of an orthonormal basis is that every vector \(v\in V\) can be written explicitly as a linear combination of the basis elements:

**Theorem 73**.: _Let \(v\in V\) and \(e_{1},\ldots,e_{n}\) be an orthonormal basis for \(V\). Then_

\[v=\langle v,e_{1}\rangle e_{1}+\cdots+\langle v,e_{n}\rangle e_{n}\]

_and_

\[||v||^{2}=|\langle v,e_{1}\rangle|^{2}+\cdots+|\langle v,e_{n}\rangle|^{2}.\]

Proof

: Because \(e_{1},\ldots,e_{n}\) is an orthonormal basis for \(V\), there exist scalars \(a_{1},\ldots,a_{n}\) such that

\[v=a_{1}e_{1}+\cdots+a_{n}e_{n}.\]

Taking the inner product of both sides of this equation with \(e_{j}\) yields \(\langle v,e_{j}\rangle=a_{j}\) which proves the first equation in the above theorem. To show

\[||v||^{2}=|\langle v,e_{1}\rangle|^{2}+\cdots+|\langle v,e_{n}\rangle|^{2}\]

set \(v=a_{1}e_{1}+\cdots+a_{n}e_{n}\) and use the fact that \(||a_{1}e_{1}+\cdots+a_{n}e_{n}||^{2}=|a_{1}|^{2}+\cdots+|a_{n}|^{2}\).
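As a quick numerical sanity check, both identities of Theorem 73 can be verified for the second orthonormal basis of \(\mathbb{R}^{3}\) from Example 69; a minimal sketch in Python (the test vector \(v\) is an arbitrary choice of ours):

```python
import math

# The orthonormal basis of R^3 from Example 69
e1 = (1 / math.sqrt(3), 1 / math.sqrt(3), 1 / math.sqrt(3))
e2 = (-1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)
e3 = (-1 / math.sqrt(6), -1 / math.sqrt(6), 2 / math.sqrt(6))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = (2.0, -1.0, 3.0)  # an arbitrary test vector
coeffs = [dot(v, e) for e in (e1, e2, e3)]

# v = <v,e1>e1 + <v,e2>e2 + <v,e3>e3
recon = tuple(sum(c * e[i] for c, e in zip(coeffs, (e1, e2, e3)))
              for i in range(3))
assert all(abs(a - b) < 1e-12 for a, b in zip(v, recon))

# ||v||^2 = |<v,e1>|^2 + |<v,e2>|^2 + |<v,e3>|^2
assert abs(dot(v, v) - sum(c * c for c in coeffs)) < 1e-12
```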

Now that we have some idea about how useful it is to have orthonormal bases, how do we go about finding them? The answer to this question is given by the Gram-Schmidt Procedure.

**Theorem 74** (Gram-Schmidt Procedure).: _Let \(\{v_{1},v_{2},\ldots,v_{m}\}\) be a linearly independent set of vectors in an inner product space. Set_

* \(e_{1}=\frac{v_{1}}{||v_{1}||}\)_._
* _For_ \(j=2,3,\ldots,m\) _define:_ \[e_{j}=\frac{v_{j}-\langle v_{j},e_{1}\rangle e_{1}-\cdots-\langle v_{j},e_{j- 1}\rangle e_{j-1}}{||v_{j}-\langle v_{j},e_{1}\rangle e_{1}-\cdots-\langle v_ {j},e_{j-1}\rangle e_{j-1}||}.\] _Then_ \(e_{1},e_{2},\ldots,e_{m}\) _is an orthonormal set such that for_ \(j=1,\ldots,m\) _we have_ \[\text{span}(e_{1},\ldots,e_{j})=\text{span}(v_{1},\ldots,v_{j}).\]

The proof of the above theorem can be found in any standard textbook on linear algebra. In the following we give a couple of examples.
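The procedure of Theorem 74 translates directly into code. Here is a minimal sketch in Python for vectors in \(\mathbb{R}^{n}\) with the standard inner product (the function name `gram_schmidt` is our own):

```python
import math

def gram_schmidt(vectors):
    """Return an orthonormal list spanning the same subspaces as
    `vectors`, following the procedure of Theorem 74."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    es = []
    for v in vectors:
        # subtract the projections onto the already-built e_1, ..., e_{j-1}
        w = list(v)
        for e in es:
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))
        if norm < 1e-12:
            raise ValueError("input vectors are linearly dependent")
        es.append([wi / norm for wi in w])
    return es
```

Applied to the vectors \(v_{1},v_{2},v_{3}\) of Example 70 below, this reproduces \(e_{1},e_{2},e_{3}\) up to floating-point error.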

**Example 70**.: Let \(\{v_{1},v_{2},v_{3}\}\) be a list of linearly independent vectors in \(\mathbb{R}^{4}\), where \(v_{1}=(1,0,1,0)\), \(v_{2}=(1,1,1,1)\) and \(v_{3}=(0,1,2,1)\). To find the orthonormal list we set

\[e_{1}=\frac{v_{1}}{||v_{1}||}=\frac{(1,0,1,0)}{\sqrt{2}},\]

\[e_{2}=\frac{v_{2}-\langle v_{2},e_{1}\rangle e_{1}}{||v_{2}-\langle v_{2},e_{ 1}\rangle e_{1}||}=\frac{(0,1,0,1)}{\sqrt{2}},\]

and finally

\[e_{3}=\frac{v_{3}-\langle v_{3},e_{1}\rangle e_{1}-\langle v_{3},e_{2}\rangle e _{2}}{||v_{3}-\langle v_{3},e_{1}\rangle e_{1}-\langle v_{3},e_{2}\rangle e_{ 2}||}=\frac{(-1,0,1,0)}{\sqrt{2}}.\]

Observe that each \(e_{j}\) for \(j=1,2,3\) has length \(1\) and

\[\langle e_{1},e_{2}\rangle=\langle e_{1},e_{3}\rangle=\langle e_{2},e_{3} \rangle=0,\]

i.e., \(\{e_{1},e_{2},e_{3}\}\) is an orthonormal list.
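These hand computations are easy to double-check numerically; a small Python sketch confirming the unit lengths and pairwise orthogonality of the list just obtained:

```python
import math

s = 1 / math.sqrt(2)
es = [(s, 0.0, s, 0.0),    # e1 = (1,0,1,0)/sqrt(2)
      (0.0, s, 0.0, s),    # e2 = (0,1,0,1)/sqrt(2)
      (-s, 0.0, s, 0.0)]   # e3 = (-1,0,1,0)/sqrt(2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# each e_j has length 1, and distinct e_j, e_k are orthogonal
for j in range(3):
    for k in range(3):
        expected = 1.0 if j == k else 0.0
        assert abs(dot(es[j], es[k]) - expected) < 1e-12
```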

**Example 71**.: Consider the finite-dimensional vector space \(\mathcal{P}_{2}(\mathbb{R})\), the set of all polynomials with coefficients in \(\mathbb{R}\) and degree at most \(2\), equipped with the inner product

\[\langle p,q\rangle=\int_{-1}^{1}p(x)\,q(x)\,dx.\]

We know that the set \(\{1,x,x^{2}\}\) is a linearly independent list; in fact it is a basis for \(\mathcal{P}_{2}(\mathbb{R})\). To find an orthonormal basis for this space we apply the Gram-Schmidt process and because \(||1||^{2}=\int_{-1}^{1}1\cdot 1\,dx=2\) we have

\[e_{1}=\frac{v_{1}}{||v_{1}||}=\frac{1}{||1||}=\frac{1}{\sqrt{2}}.\]

To find \(e_{2}\), note that \(\langle v_{2},e_{1}\rangle=\int_{-1}^{1}x\,\frac{1}{\sqrt{2}}\,dx=0\) and \(||x||^{2}=\int_{-1}^{1}x^{2}\,dx=\frac{2}{3}\), and thus

\[e_{2}=\frac{v_{2}-\langle v_{2},e_{1}\rangle e_{1}}{||v_{2}-\langle v_{2},e_{1}\rangle e_{1}||}=\frac{x}{\sqrt{\frac{2}{3}}}=\sqrt{\frac{3}{2}}\,x.\]

To find \(e_{3}\) we first compute \(\langle v_{3},e_{1}\rangle=\int_{-1}^{1}x^{2}\,\frac{1}{\sqrt{2}}\,dx=\frac{\sqrt{2}}{3}\) and \(\langle v_{3},e_{2}\rangle=\int_{-1}^{1}x^{2}\,\sqrt{\frac{3}{2}}\,x\,dx=0\), so that

\[v_{3}-\langle v_{3},e_{1}\rangle e_{1}-\langle v_{3},e_{2}\rangle e_{2}=x^{2}-\langle x^{2},e_{1}\rangle e_{1}-\langle x^{2},e_{2}\rangle e_{2}=x^{2}-\frac{1}{3}.\]

Since \(||x^{2}-\frac{1}{3}||^{2}=\int_{-1}^{1}\left(x^{2}-\frac{1}{3}\right)^{2}dx=\frac{8}{45}\), we get \(e_{3}=\frac{x^{2}-\frac{1}{3}}{||x^{2}-\frac{1}{3}||}=\sqrt{\frac{45}{8}}\left(x^{2}-\frac{1}{3}\right)\). Note that the orthonormal basis obtained is \(\{e_{1},e_{2},e_{3}\}\), where

\[e_{1} = \sqrt{\frac{1}{2}}\] \[e_{2} = \sqrt{\frac{3}{2}}\,x\] \[e_{3} = \sqrt{\frac{45}{8}}\left(x^{2}-\frac{1}{3}\right).\]
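This computation can be automated by representing a polynomial as its list of coefficients and using the exact moments \(\int_{-1}^{1}x^{k}\,dx\), which are \(\frac{2}{k+1}\) for even \(k\) and \(0\) for odd \(k\). A minimal sketch (the function names are ours):

```python
import math

def inner(p, q):
    """L^2[-1,1] inner product of polynomials given as coefficient
    lists (p[k] is the coefficient of x^k), computed via the exact
    moments: integral of x^k over [-1,1] is 2/(k+1) for even k, 0 for odd k."""
    return sum(a * b * 2.0 / (i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q)
               if (i + j) % 2 == 0)

def gram_schmidt_poly(basis):
    """Gram-Schmidt procedure on coefficient lists."""
    es = []
    for v in basis:
        w = list(map(float, v))
        for e in es:
            c = inner(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(inner(w, w))
        es.append([wi / norm for wi in w])
    return es

# the basis {1, x, x^2} of P_2(R), as coefficient lists
e1, e2, e3 = gram_schmidt_poly([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

The three returned coefficient lists match \(e_{1}=\frac{1}{\sqrt{2}}\), \(e_{2}=\sqrt{\frac{3}{2}}\,x\), and \(e_{3}=\sqrt{\frac{45}{8}}\left(x^{2}-\frac{1}{3}\right)\) up to floating-point error.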

**Remark 40**.: Many orthonormal sets of great interest and importance in analysis can be obtained by applying the Gram-Schmidt process. One such orthonormal set lives in \(L^{2}[-1,1]\), the space of Lebesgue square-integrable functions on \([-1,1]\) (see Definition 98 for Lebesgue integrability later in the text).

If we want to obtain an orthonormal set in \(L^{2}[-1,1]\), we start with polynomials

\[x_{0}(t)=1,\;x_{1}(t)=t,\;\ldots,\;x_{j}(t)=t^{j},\ldots\qquad t\in[-1,1].\]

This sequence of functions is linearly independent (why?) and thus we can apply the Gram-Schmidt process to obtain an orthonormal set. It can be shown that

\[e_{n}(t)=\sqrt{\frac{2n+1}{2}}\,P_{n}(t)\qquad n=0,1,\ldots\]

where \(P_{n}\) is called the _Legendre polynomial_ of order \(n\) given by the formula

\[P_{n}(t)=\frac{1}{2^{n}n!}\frac{d^{n}}{dt^{n}}\left[(t^{2}-1)^{n}\right].\]

By applying the binomial theorem to \((t^{2}-1)^{n}\) and differentiating the result \(n\) times term by term, one can obtain an equivalent form

\[P_{n}(t)=\sum_{j=0}^{N}\frac{(-1)^{j}\,(2n-2j)!}{2^{n}\,j!\,(n-j)!\,(n-2j)!}\,t^{n-2j},\]

where \(N=n/2\) if \(n\) is even and \(N=(n-1)/2\) if \(n\) is odd. If we list the first few Legendre polynomials, we see that

\[P_{0}(t) = 1,\] \[P_{1}(t) = t,\] \[P_{2}(t) = \frac{1}{2}(3t^{2}-1),\] \[P_{3}(t) = \frac{1}{2}(5t^{3}-3t),\] \[P_{4}(t) = \frac{1}{8}(35t^{4}-30t^{2}+3).\]

Note that \(P_{n}(t)\) is a polynomial of degree \(n\). The Legendre polynomials are orthogonal in \(L^{2}[-1,1]\) but not orthonormal; it is the normalized functions \(e_{n}\) above that form an orthonormal set. Legendre polynomials are used in analysis and approximation theory; they also have applications in physics, where they appear in the determination of wave functions.
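The table above is easy to reproduce with the standard three-term recurrence \((j+1)P_{j+1}(t)=(2j+1)\,t\,P_{j}(t)-j\,P_{j-1}(t)\); a short sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def legendre(n):
    """Coefficient list of P_n (index k holds the coefficient of t^k),
    built from the recurrence (j+1) P_{j+1} = (2j+1) t P_j - j P_{j-1}."""
    p_prev = [Fraction(1)]                      # P_0(t) = 1
    if n == 0:
        return p_prev
    p = [Fraction(0), Fraction(1)]              # P_1(t) = t
    for j in range(1, n):
        # multiply P_j by t: shift the coefficients up one degree
        shifted = [Fraction(0)] + p
        nxt = [(2 * j + 1) * c for c in shifted]
        for k, c in enumerate(p_prev):
            nxt[k] -= j * c
        p_prev, p = p, [c / (j + 1) for c in nxt]
    return p
```

For instance, `legendre(4)` returns the coefficients \(\left[\frac{3}{8},\,0,\,-\frac{15}{4},\,0,\,\frac{35}{8}\right]\), matching \(P_{4}(t)=\frac{1}{8}(35t^{4}-30t^{2}+3)\) above.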

Another important application of the Gram-Schmidt process is the following result, known as the Riesz representation theorem.

