Posts on Mathematics

The intersection area of two circles


Posted by Diego Assencio on 2017.07.12 under Mathematics (Geometry)

Let $C_1$ and $C_2$ be two circles of radii $r_1$ and $r_2$ respectively whose centers are at a distance $d$ from each other. Assume, without loss of generality, that $r_1 \geq r_2$. What is the intersection area of these two circles?

If $d \geq r_1 + r_2$, the circles intersect at most up to a point (when $d = r_1 + r_2$) and therefore the intersection area is zero. On the other extreme, if $d + r_2 \leq r_1$, circle $C_2$ is entirely contained within $C_1$ and the intersection area is the area of $C_2$ itself: $\pi r_2^2$. The challenging case happens when both $d \lt r_1 + r_2$ and $d + r_2 \gt r_1$ are satisfied, i.e., when the circles intersect only partially but the intersection area is more than simply a point. Rearranging the second inequality into $d \gt r_1 - r_2$ and combining it with the first, we obtain $r_1 - r_2 \lt d \lt r_1 + r_2$, so we will assume this to be the case from now on.

To solve this problem, we will make use of a Cartesian coordinate system with origin at the center of circle $C_1$ such that the center of $C_2$ is at $(d,0)$ as shown on figure 1.

Fig. 1: Two intersecting circles $C_1$ (blue) and $C_2$ (red) of radii $r_1$ and $r_2$ respectively. The distance between the centers of the circles is $d = d_1 + d_2$, where $d_1$ is the $x$ coordinate of the intersection points and $d_2 = d - d_1$. Notice that $d_1 \geq 0$ since these points are always located to the right of the center of $C_1$, but $d_2$ may be negative when $r_2 \lt r_1$ since, in this case, the intersection points will eventually fall to the right of the center of $C_2$ as we move $C_2$ to the left, making $d \lt d_1$ and therefore $d_2 \lt 0$.

The circles $C_1$ and $C_2$ are described by the following equations respectively: $$ \begin{eqnarray} x^2 + y^2 &=& r_1^2 \label{post_8d6ca3d82151bad815f78addf9b5c1c6_c1}\\[5pt] (x - d)^2 + y^2 &=& r_2^2 \end{eqnarray} $$ At the intersection points, we have $x = d_1$. To determine $d_1$, we can replace $x$ with $d_1$ and isolate $y^2$ on both equations above to get: $$ r_1^2 - d_1^2 = r_2^2 - (d_1 - d)^2 $$ Solving for $d_1$ is a simple task: $$ r_1^2 - d_1^2 = r_2^2 - d_1^2 + 2d_1d - d^2 \Longrightarrow d_1 = \displaystyle\frac{r_1^2 - r_2^2 + d^2}{2d} \label{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_d1} $$ From equation \eqref{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_d1}, we can see that $d_1 \geq 0$ since $r_1 \geq r_2$. The intersection area is the sum of the blue and red areas shown on figure 1, which we refer to as $A_1$ and $A_2$ respectively. We then have that: $$ \begin{eqnarray} A_1 &=& 2\int_{d_1}^{r_1} \sqrt{r_1^2 - x^2}dx \label{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A1_def} \\[5pt] A_2 &=& 2\int_{d - r_2}^{d_1} \sqrt{r_2^2 - (x - d)^2}dx \end{eqnarray} $$ where the factors of $2$ come from the fact that each integral above accounts for only half of the area of the associated region (only points on and above the $x$ axis are taken into account); the results must then be multiplied by two so that the areas below the $x$ axis are taken into account as well.
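As a quick sanity check, we can verify numerically that the point $(d_1, \sqrt{r_1^2 - d_1^2})$ indeed lies on both circles. The radii and distance below are arbitrary sample values satisfying $r_1 - r_2 \lt d \lt r_1 + r_2$, not values taken from the text:

```python
import math

# Hypothetical sample values with r1 >= r2 and r1 - r2 < d < r1 + r2,
# so the circles partially overlap.
r1, r2, d = 2.0, 1.5, 2.5

d1 = (r1**2 - r2**2 + d**2) / (2.0 * d)  # x coordinate of the intersection points
y = math.sqrt(r1**2 - d1**2)             # y coordinate of the upper intersection point

# The point (d1, y) must satisfy both circle equations.
print(d1**2 + y**2 - r1**2)        # ~0
print((d1 - d)**2 + y**2 - r2**2)  # ~0
```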

The computation of these integrals is straightforward. Before we proceed, notice first that: $$ \begin{eqnarray} A_2 &=& 2\int_{d - r_2}^{d_1} \sqrt{r_2^2 - (x - d)^2}dx \nonumber \\[5pt] &=& 2\int_{- r_2}^{d_1 - d} \sqrt{r_2^2 - x^2}dx \nonumber \\[5pt] &=& 2\int_{d - d_1}^{r_2} \sqrt{r_2^2 - x^2}dx \nonumber \\[5pt] &=& 2\int_{d_2}^{r_2} \sqrt{r_2^2 - x^2}dx \label{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A2} \end{eqnarray} $$ where above we used the fact that $d_2 = d - d_1$. This is the same as equation \eqref{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A1_def} if we apply the substitutions $d_1 \rightarrow d_2$ and $r_1 \rightarrow r_2$. Therefore, by computing $A_1$, we will immediately obtain $A_2$ as well. Let's then compute $A_1$ first: $$ \begin{eqnarray} A_1 &=& 2\int_{d_1}^{r_1} \sqrt{r_1^2 - x^2}dx \nonumber\\[5pt] &=& 2r_1 \int_{d_1}^{r_1} \sqrt{1 - \left(\frac{x}{r_1}\right)^2}dx \nonumber\\[5pt] &=& 2r_1^2 \int_{d_1/r_1}^{1} \sqrt{1 - x^2}dx \label{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A1} \end{eqnarray} $$ All we need to do now is to integrate $\sqrt{1 - x^2}$.
The process is straightforward if we use integration by parts: $$ \begin{eqnarray} \int \sqrt{1 - x^2}dx &=& x \sqrt{1 - x^2} - \int x \left(\frac{-x}{\sqrt{1 - x^2}}\right) dx \nonumber\\[5pt] &=& x \sqrt{1 - x^2} + \int \frac{x^2 - 1}{\sqrt{1 - x^2}} dx + \int \frac{1}{\sqrt{1 - x^2}} dx \nonumber\\[5pt] &=& x \sqrt{1 - x^2} - \int \sqrt{1 - x^2} dx + \sin^{-1}(x) \end{eqnarray} $$ Therefore: $$ \int \sqrt{1 - x^2}dx = \frac{1}{2}\left( x \sqrt{1 - x^2} + \sin^{-1}(x) \right) \label{post_8d6ca3d82151bad815f78addf9b5c1c6_int_for_A1_A2} $$ Using equation \eqref{post_8d6ca3d82151bad815f78addf9b5c1c6_int_for_A1_A2} on equation \eqref{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A1} yields: $$ \begin{eqnarray} A_1 &=& r_1^2 \left( \frac{\pi}{2} - \frac{d_1}{r_1}\sqrt{1 - \left(\frac{d_1}{r_1}\right)^2} - \sin^{-1}\left(\frac{d_1}{r_1}\right) \right) \nonumber\\[5pt] &=& r_1^2 \left( \cos^{-1}\left(\frac{d_1}{r_1}\right) - \frac{d_1}{r_1}\sqrt{1 - \left(\frac{d_1}{r_1}\right)^2} \right) \nonumber\\[5pt] &=& r_1^2 \cos^{-1}\left(\frac{d_1}{r_1}\right) - d_1 \sqrt{r_1^2 - d_1^2} \label{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A1_final} \end{eqnarray} $$ where above we used the fact that $\pi/2 - \sin^{-1}(\alpha) = \cos^{-1}(\alpha)$ for any $\alpha$ in $[-1,1]$. This fact is easy to prove: $$ \cos\left(\frac{\pi}{2} - \sin^{-1}(\alpha)\right) = \cos\left(\frac{\pi}{2}\right)\cos(\sin^{-1}(\alpha)) + \sin\left(\frac{\pi}{2}\right)\sin(\sin^{-1}(\alpha)) = \alpha $$ and therefore $\pi/2 - \sin^{-1}(\alpha) = \cos^{-1}(\alpha)$.
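The antiderivative above is easy to check numerically by comparing it against a midpoint Riemann sum; a minimal sketch (the integration bounds below are arbitrary values in $[-1,1]$, chosen only for illustration):

```python
import math

def F(x):
    # Closed-form antiderivative of sqrt(1 - x^2) derived above.
    return 0.5 * (x * math.sqrt(1.0 - x * x) + math.asin(x))

def riemann(a, b, n=100000):
    # Midpoint Riemann sum of sqrt(1 - x^2) over [a, b].
    h = (b - a) / n
    return sum(math.sqrt(1.0 - (a + (i + 0.5) * h) ** 2) * h for i in range(n))

# The two values should agree to high precision.
print(F(0.9) - F(0.2))
print(riemann(0.2, 0.9))
```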
As discussed above, we can now obtain $A_2$ directly by doing the substitutions $d_1 \rightarrow d_2$ and $r_1 \rightarrow r_2$ on the expression for $A_1$ on equation \eqref{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_A1_final}: $$ A_2 = r_2^2 \cos^{-1}\left(\frac{d_2}{r_2}\right) - d_2 \sqrt{r_2^2 - d_2^2} $$ The sum of $A_1$ and $A_2$ is the intersection area of the circles: $$ \boxed{ \begin{eqnarray} A_{\textrm{intersection}} &=& r_1^2 \cos^{-1}\left(\frac{d_1}{r_1}\right) - d_1\sqrt{r_1^2 - d_1^2} \nonumber \\[5pt] &+& r_2^2\cos^{-1}\left(\frac{d_2}{r_2}\right) - d_2\sqrt{r_2^2 - d_2^2} \nonumber \end{eqnarray} } \label{post_8d6ca3d82151bad815f78addf9b5c1c6_A_intersection} $$ where: $$ \boxed{ d_1 = \displaystyle\frac{r_1^2 - r_2^2 + d^2}{2d} } \quad \textrm{ and } \quad \boxed{ d_2 = d - d_1 = \displaystyle\frac{r_2^2 - r_1^2 + d^2}{2d} } \label{post_8d6ca3d82151bad815f78addf9b5c1c6_eq_d1_final} $$

Summary

Given two circles $C_1$ and $C_2$ of radii $r_1$ and $r_2$ respectively (with $r_1 \geq r_2$) whose center points are at a distance $d$ from each other, the intersection area of the circles is:

1. zero, if $d \geq r_1 + r_2$, since in this case the circles intersect at most up to a point.
2. $\pi r_2^2$, if $d \leq r_1 - r_2$, since in this case $C_2$ is entirely contained within $C_1$.
3. given by equation \eqref{post_8d6ca3d82151bad815f78addf9b5c1c6_A_intersection} in all other cases.
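The three cases above translate directly into a short function; here is a minimal sketch in Python (the function name is my own, not from the post):

```python
import math

def circle_intersection_area(r1, r2, d):
    """Intersection area of two circles with radii r1, r2 whose
    centers are a distance d apart (the three cases in the summary)."""
    if r1 < r2:
        r1, r2 = r2, r1          # enforce r1 >= r2
    if d >= r1 + r2:
        return 0.0               # circles intersect at most at a point
    if d <= r1 - r2:
        return math.pi * r2**2   # C2 entirely contained within C1
    d1 = (r1**2 - r2**2 + d**2) / (2.0 * d)
    d2 = d - d1
    return (r1**2 * math.acos(d1 / r1) - d1 * math.sqrt(r1**2 - d1**2)
            + r2**2 * math.acos(d2 / r2) - d2 * math.sqrt(r2**2 - d2**2))
```

For example, two unit circles at distance $d = 1$ have $d_1 = d_2 = 1/2$, and the formula gives $2\pi/3 - \sqrt{3}/2 \approx 1.2284$.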

An easy derivation of 3D rotation matrices


Posted by Diego Assencio on 2016.09.23 under Mathematics (Linear Algebra)

In this post, we will derive the components of a rotation matrix in three dimensions. Our derivation favors geometrical arguments over a purely algebraic approach and therefore requires only basic knowledge of Analytic Geometry.

Given a vector ${\bf x} = (x,y,z)$, our goal is to rotate it by an angle $\theta \gt 0$ around a fixed axis represented by a unit vector $\hat{\bf n} = (n_x, n_y, n_z)$; we call ${\bf x}'$ the result of rotating ${\bf x}$ around $\hat{\bf n}$. The rotation is such that if we look into $\hat{\bf n}$, the vector ${\bf x}$ will be rotated along the counter-clockwise direction (see figure 1).

Fig. 1: The vector ${\bf x}$ is rotated by an angle $\theta$ around $\hat{\bf n}$. Figure (a) shows the components ${\bf x}_{\parallel}$ and ${\bf x}_{\perp}$ of ${\bf x}$ which are parallel and perpendicular to $\hat{\bf n}$ respectively. Figure (b) shows the rotation as seen from above, i.e., from the perspective of an observer looking into the head of $\hat{\bf n}$: ${\bf x}_{\parallel}$ remains unchanged after the rotation; it is only ${\bf x}_{\perp}$ which changes. The unit vector $\hat{\bf q}$ is parallel to $\hat{\bf n} \times {\bf x} = \hat{\bf n} \times {\bf x}_{\perp}$. The rotation is in the counterclockwise direction for $\theta \gt 0$.

Even though we already anticipated the fact that the transformation which rotates ${\bf x}$ into ${\bf x}'$ can be represented as a matrix, we will prove this explicitly by showing that ${\bf x}' = R(\hat{\bf n},\theta){\bf x}$ for a $3 \times 3$ matrix $R(\hat{\bf n},\theta)$ whose components depend only on $\hat{\bf n}$ and $\theta$.

As a first step, notice that ${\bf x}$ can be decomposed into two components ${\bf x}_{\parallel}$ and ${\bf x}_{\perp}$ which are parallel and perpendicular to $\hat{\bf n}$ respectively as shown in figure 1a. This means: $$ {\bf x} = {\bf x}_{\parallel} + {\bf x}_{\perp} $$ Since $\hat{\bf n}$ is a unit vector, then: $$ \begin{eqnarray} {\bf x}_{\parallel} &=& (\hat{\bf n}\cdot{\bf x})\hat{\bf n} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_parallel} \\[5pt] {\bf x}_{\perp} &=& {\bf x} - {\bf x}_{\parallel} = {\bf x} - (\hat{\bf n}\cdot{\bf x})\hat{\bf n} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_perp} \end{eqnarray} $$ When we rotate ${\bf x}$ around $\hat{\bf n}$, its parallel component ${\bf x}_{\parallel}$ remains unchanged; it is only the perpendicular component ${\bf x}_{\perp}$ that actually rotates around $\hat{\bf n}$. This gives us: $$ {\bf x}_{\parallel}' = {\bf x}_{\parallel} = (\hat{\bf n}\cdot{\bf x})\hat{\bf n} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_parallel} $$

Let us now define a unit vector $\hat{\bf q}$ which is orthogonal to both $\hat{\bf n}$ and ${\bf x}$ as shown in figure 1b. We can do this by computing and normalizing the cross product of $\hat{\bf n}$ and ${\bf x}$ (below, we implicitly assume that $\hat{\bf n}$ and ${\bf x}$ are not parallel to each other, but if they are, we have trivially that ${\bf x}' = {\bf x} = {\bf x}_{\parallel}$; our final results will be compatible with this corner case as well): $$ \displaystyle\hat{\bf q} = \frac{\hat{\bf n} \times {\bf x}}{\|\hat{\bf n} \times {\bf x}\|} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_def_q} $$ Since a rotation does not change the length of a vector, we have that $\|{\bf x}'\| = \|{\bf x}\|$; in particular, $\|{\bf x}_{\perp}'\| = \|{\bf x}_{\perp}\|$ as shown in figure 1b. When we rotate ${\bf x}_{\perp}$ by an angle $\theta$ around $\hat{\bf n}$, a component proportional to $\|{\bf x}_{\perp}\|\cos\theta$ remains parallel to ${\bf x}_{\perp}$, and a component proportional to $\|{\bf x}_{\perp}\|\sin\theta$ which is parallel to $\hat{\bf q}$ is generated. Therefore: $$ {\bf x}_{\perp}' = \cos\theta\,{\bf x}_{\perp} + \|{\bf x}_{\perp}\|\sin\theta\,\hat{\bf q} = \cos\theta\,{\bf x}_{\perp} + \sin\theta\,(\hat{\bf n}\times{\bf x}) \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_perp} $$ where above we used the definition of $\hat{\bf q}$ from equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_def_q} as well as the fact that: $$ \|\hat{\bf n} \times {\bf x}\| = \|\hat{\bf n}\|\|{\bf x}\|\sin\alpha = \|{\bf x}_{\perp}\| $$ with $\alpha$ being the angle between $\hat{\bf n}$ and ${\bf x}$ as shown in figure 1a.

We can now obtain an expression relating ${\bf x}'$ and ${\bf x}$ in terms of $\hat{\bf n}$ and $\theta$. Since ${\bf x}' = {\bf x}_{\parallel}' + {\bf x}_{\perp}'$, and using equations \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_parallel} and \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_perp}, we obtain: $$ {\bf x}' = (\hat{\bf n}\cdot{\bf x})\hat{\bf n} + \cos\theta\,{\bf x}_{\perp} + \sin\theta\,(\hat{\bf n}\times{\bf x}) $$ and now using equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_perp}, we get: $$ {\bf x}' = (\hat{\bf n}\cdot{\bf x})\hat{\bf n} + \cos\theta\,({\bf x} - (\hat{\bf n}\cdot{\bf x})\hat{\bf n}) + \sin\theta\,(\hat{\bf n}\times{\bf x}) $$ Rearranging terms, we obtain a very useful expression for computing ${\bf x}'$: $$ \boxed{ {\bf x}' = \cos\theta\,{\bf x} + (1 - \cos\theta)(\hat{\bf n}\cdot{\bf x})\hat{\bf n} + \sin\theta\,(\hat{\bf n}\times{\bf x}) } \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_x} $$
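The boxed expression is convenient because it needs no matrices at all; a minimal sketch in Python (the function name is my own choice):

```python
import math

def rotate(x, n, theta):
    # x' = cos(theta) x + (1 - cos(theta)) (n . x) n + sin(theta) (n x x),
    # with n a unit vector along the rotation axis.
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(ni * xi for ni, xi in zip(n, x))
    cross = (n[1]*x[2] - n[2]*x[1],
             n[2]*x[0] - n[0]*x[2],
             n[0]*x[1] - n[1]*x[0])
    return tuple(c*xi + (1 - c)*dot*ni + s*ci
                 for xi, ni, ci in zip(x, n, cross))

# Rotating (1, 0, 0) by 90 degrees around the z axis:
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))  # approximately (0, 1, 0)
```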

As we claimed earlier, equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_x} can be expressed as ${\bf x}' = R(\hat{\bf n},\theta){\bf x}$, where $R(\hat{\bf n},\theta)$ is a $3 \times 3$ matrix. To see that this is true, notice that: $$ \cos\theta\,{\bf x} = \cos\theta \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) $$ and that: $$ (\hat{\bf n}\cdot{\bf x})\hat{\bf n} = \left( \begin{matrix} (\hat{\bf n}\cdot{\bf x})n_x \\ (\hat{\bf n}\cdot{\bf x})n_y \\ (\hat{\bf n}\cdot{\bf x})n_z \end{matrix} \right) = \left( \begin{matrix} n_x^2 & n_y n_x & n_z n_x \\ n_x n_y & n_y^2 & n_z n_y \\ n_x n_z & n_y n_z & n_z^2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_mat1} $$ and that: $$ \hat{\bf n}\times{\bf x} = \left( \begin{matrix} n_y z - n_z y \\ n_z x - n_x z \\ n_x y - n_y x \end{matrix} \right) = \left( \begin{matrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_mat2} $$ Therefore ${\bf x'} = R(\hat{\bf n},\theta){\bf x}$, with: $$ \boxed{ \begin{eqnarray} R(\hat{\bf n},\theta) &=& \cos\theta \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \right) + (1 - \cos\theta) \left( \begin{matrix} n_x^2 & n_y n_x & n_z n_x \\ n_x n_y & n_y^2 & n_z n_y \\ n_x n_z & n_y n_z & n_z^2 \end{matrix} \right) \nonumber \\[5pt] &+& \sin\theta \left( \begin{matrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{matrix} \right) \nonumber \end{eqnarray} } $$
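The boxed matrix can be assembled term by term in code as well; a minimal sketch (helper names are my own, and no external libraries are assumed):

```python
import math

def rotation_matrix(n, theta):
    # R = cos(theta) I + (1 - cos(theta)) n n^T + sin(theta) [n]_x,
    # exactly the three matrices in the boxed expression.
    c, s = math.cos(theta), math.sin(theta)
    nx, ny, nz = n
    return [
        [c + (1-c)*nx*nx,    (1-c)*ny*nx - s*nz, (1-c)*nz*nx + s*ny],
        [(1-c)*nx*ny + s*nz, c + (1-c)*ny*ny,    (1-c)*nz*ny - s*nx],
        [(1-c)*nx*nz - s*ny, (1-c)*ny*nz + s*nx, c + (1-c)*nz*nz],
    ]

def matvec(R, x):
    return tuple(sum(Rij * xj for Rij, xj in zip(row, x)) for row in R)

# A 90-degree rotation around the z axis takes (1, 0, 0) close to (0, 1, 0).
print(matvec(rotation_matrix((0.0, 0.0, 1.0), math.pi / 2), (1.0, 0.0, 0.0)))
```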

Whenever $\hat{\bf n}$ and ${\bf x}$ are parallel, we have ${\bf x} = {\bf x}_{\parallel}$ and $\hat{\bf n} \times {\bf x} = {\bf 0}$, so equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_x} together with equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_parallel} gives us ${\bf x}' = {\bf x}$, as expected. Additionally, our derivation did not actually rely on the assumption that $\theta \gt 0$, so it is valid for arbitrary values of $\theta$. Finally, notice that by changing $\hat{\bf n} \rightarrow -\hat{\bf n}$ and $\theta \rightarrow -\theta$, $R(\hat{\bf n},\theta)$ does not change, i.e., $R(\hat{\bf n},\theta) = R(-\hat{\bf n},-\theta)$, so we can always convert a rotation with $\theta \lt 0$ into an equivalent one having $\theta \gt 0$ by inverting the direction of $\hat{\bf n}$ and negating $\theta$.


Surface normals and linear transformations


Posted by Diego Assencio on 2016.01.15 under Mathematics (Linear Algebra)

Suppose we have a surface $S$ and that ${\bf n}$ is a unit vector normal to $S$ at a point ${\bf x} \in S$. There are many common transformations we can perform on $S$: rotation, scaling, shear etc. After we transform $S$ into a new surface $S'$, the point ${\bf x}$ from $S$ will end up at a new position ${\bf x}'$ in $S'$. This post answers the following question: when we apply an invertible linear transformation to all points of $S$, what is the relation between ${\bf n}$ and the corresponding ${\bf n}'$ which is a unit vector normal to $S'$ at ${\bf x}'$?

We will illustrate our points in this post using a one-dimensional surface on a two-dimensional plane, but everything here applies to any surface in an $n$-dimensional space which undergoes an invertible linear transformation $M$. A natural but incorrect assumption would be that ${\bf n}$ is converted to ${\bf n}' = M{\bf n}$, since every point ${\bf x}$ in $S$ is converted to ${\bf x}' = M{\bf x}$ in $S'$. In fact, as we will show, ${\bf n}'$ is actually parallel to $(M^{-1})^T{\bf n}$, which is in general not parallel to $M{\bf n}$.

Consider first what happens when we apply a shear transformation to a surface $S$ in a two-dimensional space, with the transformation matrix $M$ having the following form (here we assume $a \gt 0$): $$ M = \left(\begin{matrix} 1 & a \\ 0 & 1 \end{matrix}\right) \label{post_a2386030d8d82e457a8c7dc124d00564_shear_matrix} $$ A point ${\bf x}$ in $S$ is transformed into ${\bf x}' = M{\bf x}$: $$ {\bf x} \;\longrightarrow\; {\bf x}' = M{\bf x} = \left(\begin{matrix} x + ay \\ y \end{matrix}\right) $$ The transformation $M$ preserves the $y$ coordinates of all points in $S$, but moves the $x$ coordinates by $ay$. Figure 1 shows an example of how a unit square is transformed by $M$.

Fig. 1: A shear transformation $M$ given in equation \eqref{post_a2386030d8d82e457a8c7dc124d00564_shear_matrix} is applied to all points of a unit square $S$, yielding a new surface $S'$. Surface normals are shown as red arrows. Notice how the direction of a surface normal ${\bf n}_{\textrm{t}}$ on the top edge stays the same, but the direction of a surface normal ${\bf n}_{\textrm{r}}$ on the right edge changes under $M$.

A normal vector ${\bf n}_{\textrm{t}}$ on the top edge of $S$ has coordinates $(0,1)$, so $M{\bf n}_{\textrm{t}} = (a,1)$, which is not parallel to ${\bf n}_{\textrm{t}}'$ since ${\bf n}_{\textrm{t}}' = {\bf n}_{\textrm{t}}$ and $a \neq 0$. Similarly, a normal vector ${\bf n}_{\textrm{r}}$ on the right edge has coordinates $(1,0)$, and $M{\bf n}_{\textrm{r}} = (1,0) = {\bf n}_{\textrm{r}}$, which is not parallel to ${\bf n}_{\textrm{r}}'$ because the right edge changes direction under $M$ and therefore ${\bf n}_{\textrm{r}}' \neq {\bf n}_{\textrm{r}}$. In other words, surface normals do not in general transform like points from $S$, so we cannot just take a normal vector ${\bf n}$ and expect that $M{\bf n}$ has the same direction as the corresponding normal ${\bf n}'$ in $S'$.

Fortunately, finding an expression for ${\bf n}'$ in terms of ${\bf n}$ and $M$ is easy. All we need to do is use the fact that tangent vectors in $S$ are transformed into tangent vectors in $S'$. In fact, consider a tangent vector ${\bf t}$ connecting ${\bf x}$ and an infinitesimally close point $\tilde{\bf x} = {\bf x} + \Delta{\bf x}$ from $S$, i.e., ${\bf t} = \tilde{\bf x} - {\bf x} = \Delta{\bf x}$ (${\bf t}$ is not a unit vector, but we only care about its direction here). Since ${\bf x}$ and $\tilde{\bf x}$ are converted into ${\bf x}' = M{\bf x}$ and $\tilde{\bf x}' = M({\bf x} + \Delta{\bf x}) = {\bf x}' + M\Delta{\bf x}$ respectively, the infinitesimal tangent vector ${\bf t}'$ connecting ${\bf x}'$ and $\tilde{\bf x}'$ is then: $$ {\bf t}' = \tilde{\bf x}' - {\bf x}' = M{\Delta{\bf x}} = M{\bf t} $$ In other words, an infinitesimal vector ${\bf t}$ tangent to ${\bf x} \in S$ is transformed into an infinitesimal vector ${\bf t}' = M{\bf t}$ which is tangent to ${\bf x}' \in S'$. As a matter of fact, since $S$ undergoes a linear transformation, the argument applies even if ${\bf t}$ is not an infinitesimal vector, in which case ${\bf t}' = M{\bf t}$ is also not infinitesimal. Representing vectors as columns, we have that: $$ {\bf t}\cdot{\bf n} = {\bf t}^T{\bf n} = 0 \label{post_a2386030d8d82e457a8c7dc124d00564_dotp1} $$ because ${\bf t}$ and ${\bf n}$ are orthogonal. Equivalently, we also have that: $$ {\bf t}' \cdot {\bf n}' = (M{\bf t})^T {\bf n}' = 0 \label{post_a2386030d8d82e457a8c7dc124d00564_dotp2} $$ Equation \eqref{post_a2386030d8d82e457a8c7dc124d00564_dotp2} is satisfied if we take ${\bf n}' = (M^T)^{-1}{\bf n}$. Indeed: $$ (M{\bf t})^T {\bf n}' = (M{\bf t})^T (M^T)^{-1}{\bf n} = {\bf t}^T M^T (M^T)^{-1}{\bf n} = {\bf t}^T{\bf n} = 0 $$ Therefore, we have that $(M^T)^{-1}{\bf n}$ is normal to the surface $S'$ at ${\bf x}'$. 
All we need to do now is normalize it: $$ \boxed{ {\bf n}' = (M^T)^{-1}{\bf n} \; \big/ \; \big\| (M^T)^{-1}{\bf n} \big\| } $$ It is important to say, however, that if $M$ is an orthogonal matrix (e.g. a rotation), then $(M^T)^{-1} = M$ and $\|M{\bf n}\| = \|{\bf n}\| = 1$, so in this case surface normal vectors transform just like regular points from $S$.
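For the shear example from figure 1, $(M^T)^{-1}$ can be written down explicitly, which makes the result easy to check; a minimal sketch (the value of $a$ below is an arbitrary choice):

```python
import math

a = 0.5  # shear parameter (arbitrary sample value)

def transform_normal(n):
    # For M = [[1, a], [0, 1]] we have (M^T)^{-1} = [[1, 0], [-a, 1]].
    v = (n[0], -a * n[0] + n[1])   # (M^T)^{-1} n
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm)

print(transform_normal((0.0, 1.0)))  # top-edge normal: unchanged
print(transform_normal((1.0, 0.0)))  # right-edge normal: tilts toward (1, -a)
```

The top-edge normal $(0,1)$ is left unchanged, while the right-edge normal $(1,0)$ becomes parallel to $(1,-a)$, matching the behavior shown in figure 1.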
