
The dot product (also called the inner product or scalar product) of two vectors \( \mathbf{a} \) and \( \mathbf{b} \) quantifies their directional similarity. Geometrically, it is defined by

\[\boxed{\; \mathbf{a}\cdot\mathbf{b} \;=\; \lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\,\cos\theta, \;}\]

where \( \theta \) is the angle between \( \mathbf{a} \) and \( \mathbf{b} \).

From the Law of Cosines to the Coordinate Formula

Figure: Vectors \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbf{c}=\mathbf{b}-\mathbf{a}\) forming a triangle.

Drawing the triangle with sides \( \mathbf{a} \), \( \mathbf{b} \), and \( \mathbf{c}=\mathbf{b}-\mathbf{a} \), we can read off the Law of Cosines:

\[ \begin{array}{rrl} & \lVert \mathbf{b}-\mathbf{a} \rVert^{2} &= \lVert \mathbf{a} \rVert^{2} + \lVert \mathbf{b} \rVert^{2} - 2\lVert \mathbf{a} \rVert\,\lVert \mathbf{b} \rVert \cos\theta \\ \Leftrightarrow & \lVert \mathbf{c} \rVert^{2} &= \lVert \mathbf{a} \rVert^{2} + \lVert \mathbf{b} \rVert^{2} - 2(\mathbf{a}\cdot\mathbf{b}) \\ \Leftrightarrow & \mathbf{c}\cdot\mathbf{c} &= \mathbf{a}\cdot\mathbf{a} + \mathbf{b}\cdot\mathbf{b} - 2\,\mathbf{a}\cdot\mathbf{b} \\ &&= (\mathbf{a}-\mathbf{b})\cdot(\mathbf{a}-\mathbf{b}). \end{array} \]

Rearranging the first line gives

\[ -2\,\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\,\cos\theta \;=\; \lVert \mathbf{b}-\mathbf{a} \rVert^{2} - \lVert \mathbf{a} \rVert^{2} - \lVert \mathbf{b} \rVert^{2}. \]

Substituting coordinate expressions for lengths yields

\[ \begin{array}{rl} -2\,\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\,\cos\theta =& (a_1-b_1)^2 + (a_2-b_2)^2 + \dots \\ &\;-\;(a_1^2+a_2^2+\dots) - (b_1^2+b_2^2+\dots)\\[4pt] =& -2a_1b_1 - 2a_2b_2 - \dots \end{array} \]

Dividing by \(-2\) gives the algebraic (coordinate) expression of the dot product:

\[\boxed{\; \mathbf{a}\cdot\mathbf{b} \;=\; \sum_{i=1}^{d} a_i b_i \;=\; \mathbf{a}^{\mathsf{T}}\mathbf{b}. \;}\]
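
As a quick numerical illustration, here is a minimal sketch of the coordinate formula in plain Python (the helper name `dot` is my own, not from the text):

```python
def dot(a, b):
    # Coordinate formula: sum of the componentwise products a_i * b_i.
    assert len(a) == len(b), "vectors must have the same dimension"
    return sum(x * y for x, y in zip(a, b))

# Example: (1, 2, 3) . (4, -5, 6) = 4 - 10 + 18 = 12
print(dot((1, 2, 3), (4, -5, 6)))  # 12
```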

Coordinate Proof via an Orthonormal Basis

Proof. Let \( \{\mathbf e_i\}_{i=1}^d \) be an orthonormal basis and write \( \mathbf{a}=\sum_{i=1}^d a_i\,\mathbf e_i \) and \( \mathbf{b}=\sum_{i=1}^d b_i\,\mathbf e_i \). Then

\[ \mathbf{a}\cdot\mathbf{b} \;=\; \Big(\sum_{i=1}^d a_i\,\mathbf e_i\Big)\cdot\Big(\sum_{j=1}^d b_j\,\mathbf e_j\Big) \;=\; \sum_{i=1}^d\sum_{j=1}^d a_i b_j\,(\mathbf e_i\cdot \mathbf e_j) \;=\; \sum_{i=1}^d a_i b_i, \]

because \( \mathbf e_i\cdot \mathbf e_j=\delta_{ij} \) by orthonormality. \(\square\)

Properties of the Dot Product

Orthogonality and Sign

If the two vectors are orthogonal (i.e. \( \theta=90^\circ \)), then

\[ \mathbf{a}\cdot\mathbf{b} = \lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\cos\frac{\pi}{2} = 0. \]

We prefer orthogonal over perpendicular to emphasize the dot-product criterion \( \mathbf{a}\cdot\mathbf{b}=0 \), valid even when one vector is the zero vector (while perpendicularity usually excludes zero vectors).

If \( |\theta|<90^\circ \), then \( \mathbf{a}\cdot\mathbf{b}> 0 \).
If \( |\theta|>90^\circ \), then \( \mathbf{a}\cdot\mathbf{b} < 0 \).

If the vectors are contradirectional and parallel (\( \theta=180^\circ \)), then

\[ \mathbf{a}\cdot\mathbf{b} = -\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert \]

If they are codirectional and parallel (\( \theta=0^\circ \)), then

\[ \mathbf{a}\cdot\mathbf{b} = \lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert \]

Norm from the Dot Product

Taking the dot product of a vector with itself gives its squared length:

\[ \mathbf{a}\cdot\mathbf{a}=\lVert\mathbf{a}\rVert\cdot\lVert\mathbf{a}\rVert=\lVert\mathbf{a}\rVert^2 \]

It follows that the vector length, or norm, can be defined as

\[\lVert\mathbf{a}\rVert=\sqrt{\mathbf{a}\cdot\mathbf{a}}\]

Length of a Sum (Parallelogram Expansion)

The length of the sum \( \mathbf{a}+\mathbf{b} \) expands as

\[ \begin{array}{rl} \lVert\mathbf{a}+\mathbf{b}\rVert^2 &= (\mathbf{a}+\mathbf{b})\cdot(\mathbf{a}+\mathbf{b})\\ &= \mathbf{a}\cdot\mathbf{a} + \mathbf{a}\cdot\mathbf{b} + \mathbf{b}\cdot\mathbf{a} + \mathbf{b}\cdot\mathbf{b}\\ &= \lVert\mathbf{a}\rVert^2 + 2\,\mathbf{a}\cdot\mathbf{b} + \lVert\mathbf{b}\rVert^2. \end{array} \]

Algebraic Laws

Commutative Law

The dot product is symmetric:

\[ \mathbf{a}\cdot\mathbf{b}=\mathbf{b}\cdot\mathbf{a}. \]

Proof. \(\mathbf{a}\cdot\mathbf{b}=\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\cos\theta =\lVert\mathbf{b}\rVert\,\lVert\mathbf{a}\rVert\cos\theta=\mathbf{b}\cdot\mathbf{a}.\) \(\square\)

Distributive Law

The dot product is distributive over vector addition:

\[ \mathbf{a}\cdot(\mathbf{b}\pm\mathbf{c}) = (\mathbf{a}\cdot\mathbf{b}) \pm (\mathbf{a}\cdot\mathbf{c}). \]

Proof.

The dot product is defined coordinatewise by \(\mathbf{a}\cdot\mathbf{b}=\sum_{i=1}^d a_i b_i\). Thus:

\[ \begin{array}{rl} \mathbf{a} \cdot (\mathbf{b} \pm \mathbf{c}) &= \sum\limits_{i=1}^d a_i (b_i \pm c_i)\\ &= \sum\limits_{i=1}^d (a_i b_i \pm a_i c_i)\\ &= \sum\limits_{i=1}^d a_i b_i \;\pm\; \sum\limits_{i=1}^d a_i c_i\\ &= \mathbf{a} \cdot \mathbf{b} \;\pm\; \mathbf{a} \cdot \mathbf{c}. \;\square \end{array} \]

As an example of expansion by distributivity:

\[ (\mathbf{a}+\mathbf{b})\cdot(\mathbf{c}+\mathbf{d}) = \mathbf{a}\cdot\mathbf{c}+\mathbf{a}\cdot\mathbf{d} +\mathbf{b}\cdot\mathbf{c}+\mathbf{b}\cdot\mathbf{d}. \]

Homogeneity (Mixed Associative Law)

Scaling can be moved between factors:

\[ \alpha(\mathbf{a}\cdot\mathbf{b})=(\alpha\mathbf{a})\cdot\mathbf{b} = \mathbf{a}\cdot(\alpha\mathbf{b}). \]

Bilinearity

Often the mixed associative law and distributive law are combined to show that the dot product is bilinear:

\[ \mathbf{a}\cdot(\alpha\mathbf{b}+\mathbf{c}) = \alpha(\mathbf{a}\cdot\mathbf{b}) + (\mathbf{a}\cdot\mathbf{c}). \]

Scalar Multiplication

\[ (\alpha_1\mathbf{a})\cdot(\alpha_2\mathbf{b}) = \alpha_1\alpha_2\,(\mathbf{a}\cdot\mathbf{b}). \]

Equivalently, this follows by applying the homogeneity law above to each argument in turn.

Cauchy–Schwarz Inequality

\[ \big|\mathbf{a}\cdot\mathbf{b}\big| \;\le\; \lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert, \] with equality iff \( \mathbf{a} \) and \( \mathbf{b} \) are linearly dependent.

Triangle Inequality: \(\lVert\mathbf{a}+\mathbf{b}\rVert \le \lVert\mathbf{a}\rVert+\lVert\mathbf{b}\rVert\), obtained by expanding \( \lVert\mathbf{a}+\mathbf{b}\rVert^2 \) and applying Cauchy–Schwarz.

No Cancellation

If \( \mathbf{a}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{c} \) and \( \mathbf{a}\neq\mathbf{0} \), then by distributivity \( \mathbf{a}\cdot(\mathbf{b}-\mathbf{c})=0 \), i.e. \( \mathbf{a}\perp (\mathbf{b}-\mathbf{c}) \). This allows \( \mathbf{b}\neq\mathbf{c} \); hence equality of dot products with the same nonzero vector does not force \( \mathbf{b}=\mathbf{c} \).

Applications

Fast Calculation of Cosine

The geometric definition avoids expensive trig calls in graphics: for nonzero vectors,

\[ \cos\theta \;=\; \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert} \;=\; \hat{\mathbf{a}}\cdot\hat{\mathbf{b}}, \]

where \( \hat{\mathbf{a}}=\mathbf{a}/\lVert\mathbf{a}\rVert \) is the unit direction. From this it is clear that, for normalized vectors, the dot product directly yields the cosine of the angle between them.
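
A minimal sketch in plain Python (the helper names `dot`, `norm`, and `cos_angle` are my own):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def cos_angle(a, b):
    # cos(theta) = (a . b) / (|a| |b|); assumes both vectors are nonzero.
    return dot(a, b) / (norm(a) * norm(b))

a, b = (1.0, 0.0), (1.0, 1.0)
print(cos_angle(a, b))        # ~0.7071, i.e. cos(45 degrees)

# With pre-normalized vectors, the dot product alone already is the cosine.
b_hat = tuple(x / norm(b) for x in b)
print(dot(a, b_hat))          # same value, no extra division needed
```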

Making Perpendicular Vectors

Since \( \mathbf{a}\cdot\mathbf{b}=0 \) characterizes orthogonality, we can construct simple perpendiculars by swapping two components and negating one of them. For \( \mathbf{a}=(a_1,a_2,\dots,a_n) \), the vectors \( (-a_2,a_1,0,\dots,0) \) or \( (0,-a_3,a_2,0,\dots,0) \), etc., are all perpendicular to \( \mathbf{a} \) (for \(n\ge2\)). Indeed, each has zero dot product with \( \mathbf{a} \).
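
A small sketch of this construction in plain Python (the function name `perp_in_plane` is my own):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def perp_in_plane(a, i, j):
    # All components zero except positions i and j, which hold -a_j and a_i,
    # so dot(a, result) = a_i * (-a_j) + a_j * a_i = 0.
    out = [0.0] * len(a)
    out[i], out[j] = -a[j], a[i]
    return out

a = (3.0, 4.0, 5.0)
print(perp_in_plane(a, 0, 1), dot(a, perp_in_plane(a, 0, 1)))  # [-4, 3, 0]  0.0
print(perp_in_plane(a, 1, 2), dot(a, perp_in_plane(a, 1, 2)))  # [0, -5, 4]  0.0
```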

Vector Projection

The dot product can be used to determine the projection of one vector onto another, for example when computing shadows. Given two vectors \(\mathbf{a}\) and \(\mathbf{b}\), we want to determine the projection of \(\mathbf{a}\) onto \(\mathbf{b}\), which is the vector \(\mathbf{a}'\) of length \(a'\) parallel to the vector \(\mathbf{b}\).

Figure: Projection of \(\mathbf{a}\) onto \(\mathbf{b}\).

The length of the projection of \(\mathbf{a}\) onto \(\mathbf{b}\), here denoted \(a'\), is called the Scalar Projection or Vector Component and is determined by

\[a' = |\mathbf{a}|\cos\theta = |\mathbf{a}|\underbrace{\frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{a}|\cdot|\mathbf{b}|}}_{\cos\theta}=\frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{b}|}=\mathbf{a}\cdot\frac{\mathbf{b}}{|\mathbf{b}|}=\mathbf{a}\cdot\hat{\mathbf{b}}=:\text{comp}_{\mathbf{b}}(\mathbf{a})\]

Having the Scalar Projection, we can then calculate the Vector Projection of \(\mathbf{a}\) onto \(\mathbf{b}\):

\[\mathbf{a}' = a'\cdot\hat{\mathbf{b}} = (\mathbf{a}\cdot\hat{\mathbf{b}})\cdot\hat{\mathbf{b}}=\frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{b}|}\cdot\frac{\mathbf{b}}{|\mathbf{b}|}=\frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{b}|^2}\mathbf{b}=\frac{\mathbf{a}\cdot\mathbf{b}}{\mathbf{b}\cdot\mathbf{b}}\mathbf{b}=:\text{proj}_{\mathbf{b}}(\mathbf{a})\]

Intuitively, the term \((\mathbf{a}\cdot\hat{\mathbf{b}})\,\hat{\mathbf{b}}\) represents how much of \(\mathbf{a}\) lies in the direction of \(\mathbf{b}\). Scalar Projection Percentage: the ratio of the length \(a'\) to the total length of \(\mathbf{b}\) can be expressed as a percentage as follows:

\[\frac{a'}{|\mathbf{b}|} = \frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{b}|\cdot |\mathbf{b}|} =\frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{b}|^2}=\frac{\mathbf{a}\cdot\mathbf{b}}{\mathbf{b}\cdot\mathbf{b}}\]

This is interesting: it says that the Vector Projection is just a scaling of the vector \(\mathbf{b}\) by the proportion of \(a'\) to \(|\mathbf{b}|\).

Even more interesting: when we multiply \(a'=|\mathbf{a}|\cos\theta\) by the length of \(\mathbf{b}\), we find that the length of the projection times the length of the vector projected onto is exactly the dot product:

\[{a}'|\mathbf{b}| = |\mathbf{a}||\mathbf{b}|\cos\theta = \mathbf{a}\cdot\mathbf{b}\]
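
Both projection quantities are cheap to compute; a minimal sketch in plain Python (the names `scalar_projection` and `vector_projection` are my own, not from the text):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    # comp_b(a) = (a . b) / |b|
    return dot(a, b) / math.sqrt(dot(b, b))

def vector_projection(a, b):
    # proj_b(a) = ((a . b) / (b . b)) * b  -- no square root needed
    s = dot(a, b) / dot(b, b)
    return tuple(s * x for x in b)

a, b = (2.0, 3.0), (4.0, 0.0)
print(scalar_projection(a, b))   # 2.0
print(vector_projection(a, b))   # (2.0, 0.0)
```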

Vector Rejection

The rejection of \( \mathbf{a} \) from \( \mathbf{b} \), denoted \( \operatorname{rej}_{\mathbf{b}}(\mathbf{a}) \), is the component of \( \mathbf{a} \) perpendicular to \( \mathbf{b} \).

In 2D, using the 2D Perp Operator \( \mathbf{v}^\perp \), the length of the rejection, known as the Scalar Rejection, is

\[ a' \;=\; \lVert \mathbf{a} \rVert \sin\theta \;=\; \lVert\mathbf{a}\rVert\frac{\mathbf{a}\cdot\mathbf{b}^{\perp}}{\lVert\mathbf{a}\rVert\lVert\mathbf{b}\rVert} \;=\; \frac{\mathbf{a}\cdot\mathbf{b}^{\perp}}{\lVert\mathbf{b}\rVert}. \]

With \(\mathbf{b}^\perp\) the perpendicular of \(\mathbf{b}\), the corresponding Vector Rejection of \(\mathbf{a}\) from \(\mathbf{b}\) is

\[ \mathbf{a}' = a' \cdot \hat{\mathbf{b}}^{\perp} = \frac{\mathbf{a} \cdot \mathbf{b}^{\perp}}{|\mathbf{b}|} \cdot \frac{\mathbf{b}^{\perp}}{|\mathbf{b}|} = \frac{\mathbf{a} \cdot \mathbf{b}^{\perp}}{|\mathbf{b}|^2}\,\mathbf{b}^{\perp} = \frac{\mathbf{a} \cdot \mathbf{b}^{\perp}}{\mathbf{b} \cdot \mathbf{b}}\,\mathbf{b}^{\perp} =: \operatorname{rej}_{\mathbf{b}}(\mathbf{a}) \]

In any dimension, every vector \(\mathbf{a}\) can be expressed as the sum of its projection vector \(\operatorname{proj}_{\mathbf{b}}(\mathbf{a})\) and its rejection vector \(\operatorname{rej}_{\mathbf{b}}(\mathbf{a})\) onto another vector \(\mathbf{b}\):

\[ \mathbf{a} \;=\; \operatorname{proj}_{\mathbf{b}}(\mathbf{a})\;+\;\operatorname{rej}_{\mathbf{b}}(\mathbf{a}), \]

so a general (dimension-independent) formula is

\[ \operatorname{rej}_{\mathbf{b}}(\mathbf{a}) \;=\; \mathbf{a}-\operatorname{proj}_{\mathbf{b}}(\mathbf{a}) \;=\; \mathbf{a} - \frac{\mathbf{a}\cdot\mathbf{b}}{\mathbf{b}\cdot\mathbf{b}}\,\mathbf{b} \;=\; \mathbf{a} - (\mathbf{a}\cdot\hat{\mathbf{b}})\,\hat{\mathbf{b}}. \]
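
A minimal sketch of this dimension-independent formula in plain Python (the name `rejection` is my own):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rejection(a, b):
    # rej_b(a) = a - proj_b(a), valid in any dimension.
    s = dot(a, b) / dot(b, b)
    return tuple(x - s * y for x, y in zip(a, b))

a, b = (2.0, 3.0), (4.0, 0.0)
r = rejection(a, b)
print(r)           # (0.0, 3.0)
print(dot(r, b))   # 0.0 -- perpendicular to b, as expected
```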

Vector Reflection

Given the vector projection \(\operatorname{proj}_{\mathbf{b}}(\mathbf{a})\) and the vector rejection \(\operatorname{rej}_{\mathbf{b}}(\mathbf{a})\), we can also define the Reflection Vector \(\mathbf{a}_{\text{reflected}}\), where vector \(\mathbf{b}\) acts like a mirror, reflecting \(\mathbf{a}\) symmetrically with respect to \(\mathbf{b}\).

The reflection vector is calculated as follows:

\[ \mathbf{a}_{\text{reflected}} = \operatorname{proj}_{\mathbf{b}}(\mathbf{a}) - \operatorname{rej}_{\mathbf{b}}(\mathbf{a}) \]

Expanding and simplifying this equation:

\[ \mathbf{a}_{\text{reflected}} = \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}} \mathbf{b} - \left( \mathbf{a} - \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}} \mathbf{b} \right) = 2 \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}} \mathbf{b} - \mathbf{a} = 2 (\mathbf{a} \cdot \hat{\mathbf{b}}) \hat{\mathbf{b}} - \mathbf{a} \]

The term \(2 (\mathbf{a} \cdot \hat{\mathbf{b}}) \hat{\mathbf{b}}\) doubles the projection of \(\mathbf{a}\) onto \(\mathbf{b}\), effectively flipping \(\mathbf{a}\) across \(\mathbf{b}\). Interestingly, reflecting across a vector is exactly the negation of reflecting a vector across the plane that has this vector as its normal.
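
A minimal sketch of the reflection formula in plain Python (the name `reflect_across` is my own; it mirrors \(\mathbf{a}\) across the direction of \(\mathbf{b}\)):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect_across(a, b):
    # a_reflected = 2 * ((a . b) / (b . b)) * b - a
    s = 2.0 * dot(a, b) / dot(b, b)
    return tuple(s * y - x for x, y in zip(a, b))

# Mirror (2, 3) across the x-axis direction (1, 0): result is (2, -3).
print(reflect_across((2.0, 3.0), (1.0, 0.0)))
```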

Vector Refraction

Refraction bends a ray at the boundary between two media (e.g., air \(\to\) water) so that Snell’s law holds.

Figure: Incident unit direction \(\hat{\mathbf{a}}\), unit normal \(\hat{\mathbf{n}}\), and transmitted unit direction \(\hat{\mathbf{t}}\) at the interface.

With unit incident direction \( \hat{\mathbf{a}} \) (pointing toward the interface) and unit surface normal \( \hat{\mathbf{n}} \), define the index ratio \[ \eta=\frac{\eta_{\text{in}}}{\eta_{\text{out}}} \]

(index of the medium the ray is leaving divided by the index of the medium it is entering). Decompose \( \hat{\mathbf{a}} \) into normal-parallel and tangential parts via projection:

\[ \hat{\mathbf{a}}_{\parallel}=(\hat{\mathbf{a}}\cdot \hat{\mathbf{n}})\,\hat{\mathbf{n}}, \qquad \hat{\mathbf{a}}_{\perp}=\hat{\mathbf{a}}-\hat{\mathbf{a}}_{\parallel}.\]

In the 2D plane of incidence, \(\|\hat{\mathbf{a}}_{\perp}\|=\sin\theta_i\) and \(|\hat{\mathbf{a}}\cdot\hat{\mathbf{n}}|=\cos\theta_i\), where \(\theta_i\) is the incident angle to the normal. Snell’s law says

\[ \eta_{\text{in}}\sin\theta_i = \eta_{\text{out}}\sin\theta_t \quad\Longrightarrow\quad \sin\theta_t=\eta\,\sin\theta_i. \]

Since tangential directions (perpendicular to \(\hat{\mathbf{n}}\)) are preserved across the interface, the transmitted tangential component scales by \(\eta\):

\[ \hat{\mathbf{t}}_{\perp}=\eta\,\hat{\mathbf{a}}_{\perp}. \]

The transmitted vector \(\hat{\mathbf{t}}\) must remain unit length, so its normal component is fixed (up to sign) by

\[ \|\hat{\mathbf{t}}\|^2 =\|\hat{\mathbf{t}}_{\perp}\|^2+\|\hat{\mathbf{t}}_{\parallel}\|^2 =\eta^2\|\hat{\mathbf{a}}_{\perp}\|^2+\|\hat{\mathbf{t}}_{\parallel}\|^2=1, \]

hence

\[ \|\hat{\mathbf{t}}_{\parallel}\|=\sqrt{\,1-\eta^2\|\hat{\mathbf{a}}_{\perp}\|^2\,}. \]

To point into the transmitted medium (across the interface), the normal component must be opposite \(\hat{\mathbf{n}}\), giving

\[ \boxed{\; \hat{\mathbf{t}} = \underbrace{\eta\,\hat{\mathbf{a}}_{\perp}}_{\text{Snell on tangential}} \;-\; \underbrace{\sqrt{\,1-\eta^2\|\hat{\mathbf{a}}_{\perp}\|^2\,}\;\hat{\mathbf{n}}}_{\text{unit-length normal component}} \;} \]

This is the clean “component view” formula. It is valid whenever the square root is real; otherwise total internal reflection (TIR) occurs and no refracted ray exists. This happens only when going from a higher to a lower index and the angle of incidence exceeds the critical angle, given by \(\sin\theta_c=\frac{\eta_{\text{out}}}{\eta_{\text{in}}}\).

Using \(\|\hat{\mathbf{a}}_{\perp}\|^2=1-(\hat{\mathbf{a}}\cdot\hat{\mathbf{n}})^2\), this becomes the widely used dot-product form.

Dot-Product Form

Define

\[ k \;=\; 1-\eta^{2}\Big(1-(\hat{\mathbf{a}}\cdot \hat{\mathbf{n}})^{2}\Big). \]

Then

\[ \boxed{\; \hat{\mathbf{t}} = \eta\,\hat{\mathbf{a}} -\Big(\eta(\hat{\mathbf{a}}\cdot \hat{\mathbf{n}}) +\sqrt{k}\Big)\hat{\mathbf{n}} \quad\text{if }k\ge 0,\; \text{else TIR.} \;} \]
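
A minimal sketch of this dot-product form in plain Python (the function name `refract` and the example values are my own; the conventions follow the text: \(\hat{\mathbf{a}}\) points toward the surface, \(\hat{\mathbf{n}}\) against it):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refract(a, n, eta):
    # a: unit incident direction (pointing toward the surface),
    # n: unit surface normal (pointing against the incident ray),
    # eta = eta_in / eta_out.  Returns None on total internal reflection.
    cos_i = dot(a, n)                            # negative with these conventions
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                              # TIR: no refracted ray exists
    return tuple(eta * ai - (eta * cos_i + math.sqrt(k)) * ni
                 for ai, ni in zip(a, n))

# Air (1.0) into water (1.33), incoming at 45 degrees from above.
a = (math.sqrt(0.5), -math.sqrt(0.5))
n = (0.0, 1.0)
print(refract(a, n, 1.0 / 1.33))   # ~(0.53, -0.85), a unit vector bent toward the normal
```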

Relation to Snell’s Law

Snell’s law equates the product of refractive index and sine of the angle to the normal across the interface:

\[ \eta_{\text{in}}\sin\theta_i=\eta_{\text{out}}\sin\theta_t. \]

With \(\cos\theta_i=-\hat{\mathbf{a}}\cdot\hat{\mathbf{n}}\) and \(\sin^2=1-\cos^2\), one finds

\[ \cos^2\theta_t =1-\eta^2(1-\cos^2\theta_i) =1-\eta^2\big(1-(\hat{\mathbf{a}}\cdot\hat{\mathbf{n}})^2\big)=k, \]

which is exactly the radicand in the dot-product formula.

Scaling a Point Along an Arbitrary Axis

In many computer graphics or geometric modeling scenarios, one often needs to scale a point \(\mathbf{p}\) only along a specific axis \(\hat{\mathbf{n}}\) (with \(\|\hat{\mathbf{n}}\|=1\)), leaving all directions orthogonal to \(\hat{\mathbf{n}}\) unchanged.

Any vector \(\mathbf{p}\) can be decomposed into a parallel and orthogonal component with respect to a vector \(\hat{\mathbf{n}}\) as:

\[ \mathbf{p} = \mathbf{p}_{\parallel} + \mathbf{p}_{\perp}. \]

The parallel part is the projection of \(\mathbf{p}\) onto \(\hat{\mathbf{n}}\):

\[ \mathbf{p}_{\parallel} = (\mathbf{p} \cdot \hat{\mathbf{n}})\,\hat{\mathbf{n}}, \]

and the orthogonal part is

\[ \mathbf{p}_{\perp} = \mathbf{p} - \mathbf{p}_{\parallel}. \]

When scaling in the direction of \(\hat{\mathbf{n}}\), only the parallel component is multiplied by the scaling factor \(k\), while the orthogonal component remains unchanged:

\[ \mathbf{p}'_{\parallel} = k\,\mathbf{p}_{\parallel}, \quad \mathbf{p}'_{\perp} = \mathbf{p}_{\perp}. \]

Recombining both parts, the scaled point \(\mathbf{p}'\) is

\[ \mathbf{p}' = \mathbf{p}'_{\parallel} + \mathbf{p}'_{\perp} = k\,\mathbf{p}_{\parallel} + \mathbf{p}_{\perp}. \]

Substituting the expressions for \(\mathbf{p}_{\parallel}\) and \(\mathbf{p}_{\perp}\) and collecting terms gives the formula for scaling a point along an arbitrary axis \(\hat{\mathbf{n}}\):

\[ \mathbf{p}' = \mathbf{p} + (k - 1)\,(\mathbf{p} \cdot \hat{\mathbf{n}})\,\hat{\mathbf{n}} \]
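
A minimal sketch of this formula in plain Python (the name `scale_along_axis` is my own; the axis is assumed to be a unit vector):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale_along_axis(p, n_hat, k):
    # p' = p + (k - 1) * (p . n_hat) * n_hat, with n_hat a unit vector.
    s = (k - 1.0) * dot(p, n_hat)
    return tuple(pi + s * ni for pi, ni in zip(p, n_hat))

# Scale by 3 along the x-axis only: (2, 5) -> (6, 5).
print(scale_along_axis((2.0, 5.0), (1.0, 0.0), 3.0))
```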

Distance between Line and Point

Given a line, either as \(y=mx+b\) or as a line segment between two points \({P}_1\) and \({P}_2\), as well as a point \({P}\), we only need to find a normal \(\mathbf{n}\) to that line and a point \({Q}\) on the line. Given a segment, a possible point is \({Q}={P}_1\); given the line formula, a possible point is \({Q}=\left(\begin{array}{c}0\\b\end{array}\right)\).

For a segment, the normal \(\mathbf{n}\) is simply \(\mathbf{b}^\perp\), the perpendicular of the vector \(\mathbf{b}={P}_2-{P}_1\) between \({P}_1\) and \({P}_2\). For the line formula, we know that the slopes of two perpendicular lines multiply to \(-1\), so the normal direction has slope \(-\frac{1}{m}\); if we want it to pass through the point \({Q}\), we have \(y = -\frac{1}{m}x+b\). Therefore \(\mathbf{n}=\left(\begin{array}{c}1\\-\frac{1}{m}\end{array}\right)\).

The distance between the line and the point \({P}\) is thus the scalar projection of the vector \(\mathbf{a}={P}-{Q}\) from \({Q}\) to \({P}\) onto \(\mathbf{n}\):

\[d = \frac{|\mathbf{n}\cdot \mathbf{a}|}{|\mathbf{n}|}\]

For the case of a line segment, the expression can then be further simplified using the 2D Perp product:

\[d=|\hat{\mathbf{b}}^\perp\cdot\mathbf{a}|\]
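
A minimal sketch for the segment form in plain Python (the name `point_line_distance` is my own):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_line_distance(p, p1, p2):
    # Line through p1 and p2; d = |a . b_perp| / |b| with a = p - p1.
    b = (p2[0] - p1[0], p2[1] - p1[1])
    b_perp = (-b[1], b[0])                 # 2D perp operator
    a = (p[0] - p1[0], p[1] - p1[1])
    return abs(dot(a, b_perp)) / math.sqrt(dot(b, b))

# Distance from (3, 4) to the x-axis (through (0,0) and (1,0)) is 4.
print(point_line_distance((3.0, 4.0), (0.0, 0.0), (1.0, 0.0)))
```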

Direction Cosines

In three-dimensional space, a vector \(\mathbf{a}\) forms the angle \(\alpha\) with the x-axis, \(\beta\) with the y-axis, and \(\gamma\) with the z-axis. These angles are called direction angles, and their cosines are called direction cosines.

Figure: Direction angles \(\alpha\), \(\beta\), and \(\gamma\) between the vector \(\mathbf{a}\) and the coordinate axes.

Using the standard basis vectors \(\mathbf{i}\), \(\mathbf{j}\), and \(\mathbf{k}\), the direction cosines are:

\[\begin{array}{rl} \cos\alpha =& \frac{\mathbf{a}\cdot\mathbf{i}}{|\mathbf{a}|\cdot|\mathbf{i}|} = \frac{a_1}{|\mathbf{a}|}\\ \cos\beta =& \frac{\mathbf{a}\cdot\mathbf{j}}{|\mathbf{a}|\cdot|\mathbf{j}|} = \frac{a_2}{|\mathbf{a}|}\\ \cos\gamma =& \frac{\mathbf{a}\cdot\mathbf{k}}{|\mathbf{a}|\cdot|\mathbf{k}|} = \frac{a_3}{|\mathbf{a}|}\\ \end{array}\]

As such, the following properties hold:

  1. The vector \(\mathbf{b}=[\cos\alpha, \cos\beta, \cos\gamma]^T\) is a unit vector.
  2. \(\cos^2\alpha+ \cos^2\beta+ \cos^2\gamma=1\)
  3. \(\mathbf{a} = |\mathbf{a}|[\cos\alpha, \cos\beta, \cos\gamma]^T\)
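
A small numerical check of these properties in plain Python (the name `direction_cosines` is my own):

```python
import math

def direction_cosines(a):
    # cos(alpha), cos(beta), cos(gamma) = a_i / |a| for the three axes.
    length = math.sqrt(sum(x * x for x in a))
    return tuple(x / length for x in a)

c = direction_cosines((1.0, 2.0, 2.0))
print(c)                          # (1/3, 2/3, 2/3) -- a unit vector (property 1)
print(sum(x * x for x in c))      # 1.0 (property 2)
```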

Linear Combination

A very common use case of the dot product is forming a linear combination, i.e. a weighted sum of the entries of two vectors \(\mathbf{a}\) and \(\mathbf{b}\). This idea carries over to matrix multiplication, where the dot product is the inner ingredient.

As an example, let \(\mathbf{a}=(20, 30, 40)\) be the number of products in our shop and \(\mathbf{b}=(\$5, \$6, \$7)\) the prices per product. How much is the inventory worth? Obviously \(\$5\cdot 20+\$6\cdot 30+\$7\cdot 40 = \mathbf{a}\cdot\mathbf{b} = \$560\).
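
A tiny sketch of this example in plain Python (variable names are my own):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

quantities = (20, 30, 40)       # units in stock per product
prices = (5, 6, 7)              # price per unit, in dollars
print(dot(quantities, prices))  # 560 -- total inventory value in dollars
```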

Equation of a Line or Plane

The equation of a plane in 3D space in the form \(ax+by+cz=d\) can be written as a dot product

\[\mathbf{a}\cdot\mathbf{x}=d\]

where \(\mathbf{a}=(a, b, c)\) and \(\mathbf{x}=(x, y, z)\). Similarly, \(ax+by=d\) describes a line in 2D, and the same dot-product form describes a hyperplane in any dimension.