10 3D Math Interview Questions and Answers
Prepare for technical interviews with this guide on 3D math concepts, covering vectors, matrices, transformations, and more.
3D math is a fundamental component in various fields such as computer graphics, game development, virtual reality, and robotics. Mastery of 3D math concepts like vectors, matrices, transformations, and quaternions is essential for creating realistic simulations and animations. Understanding these principles allows developers and engineers to manipulate objects in a three-dimensional space with precision and efficiency.
This article offers a curated selection of interview questions designed to test and enhance your knowledge of 3D math. By working through these questions, you will gain a deeper understanding of the mathematical foundations required for technical roles that involve 3D computations and spatial reasoning.
The dot product of two vectors is a scalar value obtained by multiplying corresponding components and summing the results. It is significant for calculating angles between vectors, projecting one vector onto another, and computing work in physics. For example, the dot product of vectors [1, 2, 3] and [4, 5, 6] is 1·4 + 2·5 + 3·6 = 32.
import numpy as np

def dot_product(vector_a, vector_b):
    return np.dot(vector_a, vector_b)

vector_a = np.array([1, 2, 3])
vector_b = np.array([4, 5, 6])
result = dot_product(vector_a, vector_b)
print(result)  # Output: 32
The cross product of two 3D vectors results in a vector perpendicular to the plane formed by the original vectors, with its direction given by the right-hand rule and its magnitude equal to the area of the parallelogram the vectors span. It is used in physics and engineering to find orthogonal vectors, such as surface normals. For vectors A = (1, 2, 3) and B = (4, 5, 6), the cross product is (-3, 6, -3).
def cross_product(A, B):
    return (
        A[1] * B[2] - A[2] * B[1],
        A[2] * B[0] - A[0] * B[2],
        A[0] * B[1] - A[1] * B[0]
    )

# Example usage:
A = (1, 2, 3)
B = (4, 5, 6)
result = cross_product(A, B)
print(result)  # Output: (-3, 6, -3)
Translation, rotation, and scaling matrices are essential in 3D transformations, manipulating an object’s position, orientation, and size. Translation matrices move objects, rotation matrices change orientation, and scaling matrices adjust size.
Translation:
| 1  0  0  Tx |
| 0  1  0  Ty |
| 0  0  1  Tz |
| 0  0  0  1  |

Rotation (z-axis):
| cos(θ)  -sin(θ)  0  0 |
| sin(θ)   cos(θ)  0  0 |
|   0        0     1  0 |
|   0        0     0  1 |

Scaling:
| Sx  0   0   0 |
| 0   Sy  0   0 |
| 0   0   Sz  0 |
| 0   0   0   1 |
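As a rough sketch of how these matrices might be built and composed with NumPy (the helper names and the example translation, angle, and scale values below are illustrative, not taken from any particular library):

import numpy as np

def translation(tx, ty, tz):
    # 4x4 homogeneous translation matrix
    return np.array([[1, 0, 0, tx],
                     [0, 1, 0, ty],
                     [0, 0, 1, tz],
                     [0, 0, 0, 1]], dtype=float)

def rotation_z(theta):
    # 4x4 rotation about the z-axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]], dtype=float)

def scaling(sx, sy, sz):
    # 4x4 scaling matrix
    return np.diag([sx, sy, sz, 1.0])

# Compose: scale first, then rotate 90° about z, then translate (applied right to left)
M = translation(1, 2, 3) @ rotation_z(np.pi / 2) @ scaling(2, 2, 2)
point = np.array([1, 0, 0, 1])
print(M @ point)  # Output: approximately [1. 4. 3. 1.]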
Homogeneous coordinates in graphics allow for complex transformations using matrix multiplication by adding an extra dimension to coordinates. This enables translation to be represented as a matrix operation, integrating it with other linear transformations like rotation and scaling.
import numpy as np

# Example translation offsets and point coordinates (placeholder values)
tx, ty, tz = 2.0, 3.0, 4.0
x, y, z = 1.0, 1.0, 1.0

# Translation matrix
T = np.array([
    [1, 0, 0, tx],
    [0, 1, 0, ty],
    [0, 0, 1, tz],
    [0, 0, 0, 1]
])

# Point in homogeneous coordinates
P = np.array([x, y, z, 1])

# Translated point
P_translated = np.dot(T, P)
print(P_translated)  # Output: [3. 4. 5. 1.]
Quaternions extend complex numbers and are used to represent rotations in 3D space. They consist of four components (one scalar part and three vector parts) and offer advantages like avoiding gimbal lock, being more compact than 3x3 rotation matrices, and allowing smooth interpolation between rotations (slerp).
import numpy as np
from scipy.spatial.transform import Rotation as R

# Define a quaternion in SciPy's scalar-last (x, y, z, w) order.
# This one represents a 180° rotation about the axis (1, 0, 1)/√2.
q = [0.707, 0.0, 0.707, 0.0]

# Create a rotation object from the quaternion
rotation = R.from_quat(q)

# Apply the rotation to a vector
vector = np.array([1, 0, 0])
rotated_vector = rotation.apply(vector)
print(rotated_vector)  # Output: approximately [0. 0. 1.]
Eigenvalues and eigenvectors are important in rigid body transformations. An eigenvector of a transformation matrix does not change direction during the transformation, and its eigenvalue indicates the scaling factor along that direction. For a 3D rotation matrix, the real eigenvector with eigenvalue 1 is the axis of rotation, and the remaining complex eigenvalue pair encodes the angle of rotation.
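As a minimal NumPy sketch of this idea (the 90° rotation about the z-axis is just an example chosen here), the eigenvector associated with eigenvalue 1 recovers the rotation axis, and the trace recovers the rotation angle:

import numpy as np

# Example: a 90° rotation about the z-axis (illustrative values)
theta = np.pi / 2
rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0],
                            [np.sin(theta),  np.cos(theta), 0],
                            [0,              0,             1]])

eigenvalues, eigenvectors = np.linalg.eig(rotation_matrix)

# The eigenvector whose eigenvalue is 1 is the rotation axis (here the z-axis);
# the other two eigenvalues form the complex pair e^(±iθ), which encodes the angle.
axis_index = np.argmin(np.abs(eigenvalues - 1))
axis = np.real(eigenvectors[:, axis_index])
angle = np.arccos((np.trace(rotation_matrix) - 1) / 2)

print(axis)               # Output: [0. 0. 1.] (up to sign)
print(np.degrees(angle))  # Output: approximately 90.0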
Principal Component Analysis (PCA) emphasizes variation and identifies patterns in data by transforming it into a new coordinate system. It is used for dimensionality reduction, noise reduction, and visualization. The process involves standardizing data, computing the covariance matrix, and transforming the dataset using selected eigenvectors.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import numpy as np

# Sample data
data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2, 1.6], [1, 1.1], [1.5, 1.6], [1.1, 0.9]])

# Standardize the data
scaler = StandardScaler()
data_standardized = scaler.fit_transform(data)

# Apply PCA
pca = PCA(n_components=2)
principal_components = pca.fit_transform(data_standardized)
print(principal_components)
A Bézier curve is defined by a set of control points and evaluated using a parameter t that ranges from 0 to 1. De Casteljau's algorithm is a recursive method for evaluating these curves by repeatedly interpolating between adjacent control points.
def bezier_curve(control_points, t):
    n = len(control_points) - 1
    points = control_points
    while n > 0:
        new_points = []
        for i in range(n):
            x = (1 - t) * points[i][0] + t * points[i + 1][0]
            y = (1 - t) * points[i][1] + t * points[i + 1][1]
            new_points.append((x, y))
        points = new_points
        n -= 1
    return points[0]

control_points = [(0, 0), (1, 2), (3, 3), (4, 0)]
t = 0.5
print(bezier_curve(control_points, t))  # Output: (2.0, 1.875)
Transforming a vector by a matrix involves multiplying the vector by a transformation matrix to produce a new vector. This operation can represent rotation, scaling, shearing, or, with homogeneous coordinates, translation and combinations of these.
import numpy as np

# Define a 3D vector
vector = np.array([1, 2, 3])

# Define a 3x3 transformation matrix (e.g., a rotation matrix)
matrix = np.array([
    [0, -1, 0],
    [1, 0, 0],
    [0, 0, 1]
])

# Transform the vector by the matrix
transformed_vector = np.dot(matrix, vector)
print(transformed_vector)  # Output: [-2  1  3]
Barycentric coordinates express a point within a triangle as a weighted average of the triangle’s vertices. They are used for interpolation, point-in-triangle tests, and rasterization in graphics.
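A minimal sketch of one common way to compute them, solving for the weights with dot products (the function name and the example triangle and point below are illustrative); if all three weights are non-negative, the point lies inside the triangle:

import numpy as np

def barycentric_coordinates(p, a, b, c):
    # Solve p = u*a + v*b + w*c with u + v + w = 1 via dot products
    v0, v1, v2 = b - a, c - a, p - a
    d00 = np.dot(v0, v0)
    d01 = np.dot(v0, v1)
    d11 = np.dot(v1, v1)
    d20 = np.dot(v2, v0)
    d21 = np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return u, v, w

a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 4.0])
p = np.array([1.0, 1.0])
u, v, w = barycentric_coordinates(p, a, b, c)
print(u, v, w)                       # Output: 0.5 0.25 0.25
print(u >= 0 and v >= 0 and w >= 0)  # True: the point lies inside the triangle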