Matrix Multiplication Calculator – Step-by-Step Solutions


Multiply matrices of any size instantly. Get step-by-step dot product solutions, determinants, and visual breakdowns.

[[a, b], [c, d]] × [[e, f], [g, h]] = [[ae+bg, af+bh], [ce+dg, cf+dh]]


What Is Matrix Multiplication?

Matrix multiplication is a binary operation that takes two matrices and produces a third matrix. Unlike ordinary multiplication of numbers, matrix multiplication is not commutative — the order of the matrices matters. In general, A × B ≠ B × A, and in many cases, if A × B is defined, B × A may not even be possible.

Matrix multiplication is one of the most important operations in mathematics, with applications spanning computer graphics (3D transformations), machine learning (neural network forward passes), physics (quantum mechanics operators), economics (input-output models), and engineering (structural analysis, signal processing).

📐 Compatibility Rule: Two matrices can be multiplied only if the number of columns in the first matrix equals the number of rows in the second. If A is m×n and B is n×p, then A×B is defined and produces an m×p matrix. If A is 3×4 and B is 4×2, then A×B is 3×2.

The Matrix Multiplication Formula

Given matrix A of size m×n and matrix B of size n×p, the product C = A × B is an m×p matrix where each element C[i][j] is the dot product of the i-th row of A with the j-th column of B:

C[i][j] = Σ(k=1 to n) A[i][k] × B[k][j]

In plain terms: to find any element in the result matrix, multiply each element of a row from the left matrix by the corresponding element of a column from the right matrix, then sum all those products. This is repeated for every row-column pair combination.
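In code, the formula becomes a loop over rows of the left matrix, columns of the right matrix, and the shared inner index. A minimal pure-Python sketch (the `matmul` name is our own):

```python
def matmul(A, B):
    """Multiply matrix A (m×n) by matrix B (n×p) via row-column dot products."""
    m, n, p = len(A), len(B), len(B[0])
    if len(A[0]) != n:
        raise ValueError("inner dimensions must match")
    # C[i][j] = sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

The dimension check at the top enforces the compatibility rule from the previous section.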

Worked 2×2 Example

For the multiplication of two 2×2 matrices:

A = [[1,2],[3,4]],  B = [[5,6],[7,8]]

C[1,1] = (1×5) + (2×7) = 5 + 14 = 19
C[1,2] = (1×6) + (2×8) = 6 + 16 = 22
C[2,1] = (3×5) + (4×7) = 15 + 28 = 43
C[2,2] = (3×6) + (4×8) = 18 + 32 = 50

Result C = [[19,22],[43,50]]
Matrix Multiplication — Key Properties Reference
Property | Formula | Notes
Not commutative | A×B ≠ B×A (generally) | Order matters in matrix multiplication
Associative | (A×B)×C = A×(B×C) | Grouping doesn’t change the result
Distributive | A×(B+C) = A×B + A×C | Over matrix addition
Identity matrix | A×I = I×A = A | I is the identity matrix
Transpose rule | (A×B)ᵀ = Bᵀ×Aᵀ | Order reverses under transpose
Determinant | det(A×B) = det(A)×det(B) | For square matrices
Dimensions | (m×n) × (n×p) = (m×p) | Inner dimensions must match
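All of these properties can be spot-checked numerically. A short NumPy sketch with illustrative matrices (the non-commutativity check only shows that these particular A and B do not commute):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [5., 2.]])
C = np.array([[2., 0.], [1., 3.]])
I = np.eye(2)

assert not np.array_equal(A @ B, B @ A)                  # not commutative
assert np.allclose((A @ B) @ C, A @ (B @ C))             # associative
assert np.allclose(A @ (B + C), A @ B + A @ C)           # distributive
assert np.allclose(A @ I, A) and np.allclose(I @ A, A)   # identity
assert np.allclose((A @ B).T, B.T @ A.T)                 # transpose rule
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))   # determinant rule
print("all properties verified")
```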

How to Use This Matrix Calculator

1. Choose Mode: Select Multiply, Matrix Power, Properties, or Scalar multiplication from the tabs.
2. Set Dimensions: Choose the row and column count for each matrix. Inner dimensions must match for multiplication.
3. Enter Values: Click each cell and type your values, or hit 🎲 Random to populate with random integers.
4. Calculate: Click Calculate to see the result matrix and the full step-by-step dot product breakdown.
5. Explore Steps: Expand each step accordion to see exactly how each result cell was computed.
6. Read Properties: Use Properties mode to get the trace, determinant, transpose, and rank of any square matrix.

⚡ Common Mistake: The most frequent error in matrix multiplication is multiplying element-by-element (Hadamard product) rather than performing the correct row-by-column dot product. Always ensure you’re summing the products across a full row-column pair, not just multiplying matching positions.
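In NumPy the two operations are different operators, which makes the distinction easy to see: @ is the matrix product, * is element-wise.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)   # matrix product: values [[19, 22], [43, 50]]
print(A * B)   # element-wise (Hadamard): values [[5, 12], [21, 32]]
```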

Applications of Matrix Multiplication

Computer Graphics and 3D Transformations

Every rotation, scaling, and translation in 3D graphics is represented as a matrix multiplication. When you rotate a 3D object in a video game or CAD software, the GPU multiplies millions of vertex coordinate vectors by transformation matrices per second. Composing multiple transformations (rotate, then scale, then translate) is done by multiplying the transformation matrices together first — a direct application of matrix associativity.

Machine Learning and Neural Networks

The forward pass of a neural network is fundamentally a sequence of matrix multiplications. Each layer’s operation is: output = activation(W × input + b), where W is the weight matrix, input is a column vector (or batch matrix), and b is the bias vector. Training on modern GPUs is fast precisely because these accelerators are optimized for massive matrix multiplication (GEMM — General Matrix Multiply).
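A minimal sketch of one such layer, with ReLU as the activation; the `dense_forward` name and the weight values are illustrative, not from any particular framework:

```python
import numpy as np

def dense_forward(W, x, b):
    """One linear layer: activation(W @ x + b), using ReLU as the activation."""
    z = W @ x + b            # matrix-vector (or matrix-batch) product plus bias
    return np.maximum(z, 0)  # ReLU zeroes out negative components

W = np.array([[0.5, -1.0], [2.0, 0.0]])  # 2 outputs, 2 inputs
x = np.array([1.0, 2.0])
b = np.array([0.1, -0.5])
print(dense_forward(W, x, b))  # z = [-1.4, 1.5], so ReLU gives [0, 1.5]
```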

Systems of Linear Equations

Any system of linear equations can be written as Ax = b, where A is the coefficient matrix, x is the unknown vector, and b is the right-hand side. Solving the system involves matrix inversion or decomposition — operations that rely on matrix multiplication throughout. Gaussian elimination, LU decomposition, and QR decomposition all center on matrix operations.
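In NumPy, such a system is solved with np.linalg.solve, which uses LU decomposition internally rather than forming the inverse explicitly. A small illustrative system:

```python
import numpy as np

# System: 2x + y = 5 and x + 3y = 10, written as A x = b
A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])
x = np.linalg.solve(A, b)  # LU decomposition under the hood

print(x)  # solution is x = 1, y = 3
assert np.allclose(A @ x, b)
```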

Economics — Input-Output Analysis

Wassily Leontief’s input-output model of an economy, for which he won the Nobel Prize in Economics, uses matrix multiplication to model interdependencies between industrial sectors. If A is the technology matrix and d is the final demand vector, total production x satisfies x = Ax + d, solved as x = (I – A)⁻¹d. In practice the Leontief inverse is often expanded as the Neumann series I + A + A² + A³ + …, which is computed by repeated matrix multiplication.


Computational Complexity of Matrix Multiplication

The naive algorithm for multiplying two n×n matrices requires O(n³) operations — for 1000×1000 matrices, that’s on the order of a billion arithmetic operations. This motivated decades of research into faster algorithms.

In 1969, Volker Strassen discovered an algorithm running in O(n^2.807) time by reducing the number of multiplications needed for 2×2 block matrices from 8 to 7. While constant factors make it impractical for small matrices, the algorithm demonstrated that the naive O(n³) bound was not fundamental.
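Strassen’s seven products can be written out directly for plain 2×2 matrices. A sketch (variable names are ours; in the full algorithm a, b, … are n/2 × n/2 blocks and the scheme is applied recursively):

```python
def strassen_2x2(A, B):
    """Multiply 2×2 matrices with 7 multiplications (Strassen, 1969) instead of 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the seven products into the four result cells
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

The saving of one multiplication per 2×2 block, applied recursively, is what drives the exponent below 3.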

Modern research continues — in 2022, DeepMind’s AlphaTensor discovered new matrix multiplication algorithms through reinforcement learning, including a scheme that multiplies 4×4 matrices in mod-2 arithmetic using 47 multiplications (the previous best, from Strassen’s construction, was 49). The theoretical minimum for general n×n matrix multiplication remains an open problem in computer science.

Determinant and Matrix Invertibility

The determinant of a square matrix is a scalar value that encodes important geometric and algebraic properties. For a 2×2 matrix [[a,b],[c,d]], the determinant is ad – bc. For larger matrices, the determinant is computed recursively via cofactor expansion.

A matrix is invertible (non-singular) if and only if its determinant is non-zero. An invertible matrix A has a unique inverse A⁻¹ such that A × A⁻¹ = A⁻¹ × A = I (the identity matrix). Non-invertible matrices (det = 0) are called singular and arise when rows or columns are linearly dependent.
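Cofactor expansion translates directly into a short recursive function. A pure-Python sketch (exponential-time, so only sensible for small matrices; production code uses LU decomposition instead):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; signs alternate across the row.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                     # ad - bc = -2
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # → -3
```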

🔬 Geometry Insight: The absolute value of the determinant represents the scale factor by which the matrix transformation changes areas (in 2D) or volumes (in 3D). A negative determinant indicates the transformation includes a reflection. A zero determinant means the transformation collapses the space into a lower dimension.

Special Matrices in Multiplication

Identity Matrix

The identity matrix I has 1s on the main diagonal and 0s everywhere else. It is the multiplicative identity: A × I = I × A = A for any compatible matrix A. It plays the same role in matrix multiplication that 1 plays in scalar multiplication.

Zero Matrix

A matrix of all zeros. Any matrix multiplied by the zero matrix gives the zero matrix — but unlike scalars, a product of non-zero matrices can be the zero matrix: [[1,0],[0,0]] × [[0,0],[0,1]] = [[0,0],[0,0]].

Orthogonal Matrices

A matrix Q is orthogonal if Q × Qᵀ = I, meaning Q⁻¹ = Qᵀ. Orthogonal matrices represent rotations and reflections, and they preserve lengths and angles. Their determinant is always +1 (pure rotation) or -1 (rotation + reflection).
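A 2D rotation matrix is the classic example; a quick NumPy check of the orthogonality properties:

```python
import numpy as np

theta = np.pi / 3  # a 60° rotation
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q @ Q.T, np.eye(2))    # Q Qᵀ = I, hence Q⁻¹ = Qᵀ
assert np.isclose(np.linalg.det(Q), 1.0)  # pure rotation: det = +1
v = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))  # length preserved
print("orthogonality verified")
```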

Frequently Asked Questions

Questions from students, engineers, and programmers working with matrix multiplication.

What are the rules of matrix multiplication?

The fundamental rules for matrix multiplication are:

  • Compatibility: To multiply A×B, the number of columns in A must equal the number of rows in B. If A is m×n and B is n×p, then A×B is valid and produces an m×p matrix.
  • Not commutative: In general, A×B ≠ B×A. Even when both products are defined, they may produce different results.
  • Associative: (A×B)×C = A×(B×C), so the grouping of factors can be changed freely (though their order cannot).
  • Distributive: A×(B+C) = A×B + A×C
  • Identity: For any matrix A, A×I = A and I×A = A, where I is the appropriately sized identity matrix.

Why is matrix multiplication not commutative?

Matrix multiplication is not commutative because each element of the result depends on an entire row from the first matrix and an entire column from the second — swapping the matrices changes which rows and columns are combined.

A geometric intuition: rotate 90° then reflect is not the same transformation as reflect then rotate 90°. Since each geometric transformation corresponds to a matrix, and composing transformations corresponds to matrix multiplication, the non-commutativity of transformations directly gives the non-commutativity of matrix multiplication.

Simple counterexample: A = [[1,2],[0,0]], B = [[0,0],[3,4]]. Then A×B = [[6,8],[0,0]] but B×A = [[0,0],[3,6]]. Clearly not equal.

How do you multiply two 3×3 matrices?

To multiply two 3×3 matrices A and B to get result C:

  • C[1,1] = A[1,1]×B[1,1] + A[1,2]×B[2,1] + A[1,3]×B[3,1]
  • C[1,2] = A[1,1]×B[1,2] + A[1,2]×B[2,2] + A[1,3]×B[3,2]
  • C[1,3] = A[1,1]×B[1,3] + A[1,2]×B[2,3] + A[1,3]×B[3,3]
  • … (repeat for rows 2 and 3 of C)

In total, a 3×3 × 3×3 multiplication requires 27 multiplications and 18 additions to compute the 9 result elements. Use the calculator above with step-by-step mode to see each calculation shown explicitly.

What is the difference between matrix multiplication and element-wise multiplication?

Matrix multiplication (A×B) — the standard mathematical operation described above. Element C[i,j] is the dot product of row i of A with column j of B. Requires inner dimensions to match. The result dimensions are m×p when A is m×n and B is n×p.

Element-wise multiplication (Hadamard product, A⊙B) — simply multiplies corresponding elements. Requires A and B to have identical dimensions. Result C[i,j] = A[i,j] × B[i,j]. Used in neural networks (applying activation masks), image processing, and signal processing.

In Python/NumPy: A @ B or np.matmul(A,B) gives matrix multiplication; A * B gives element-wise multiplication. This distinction trips up many beginners.

How do you calculate the determinant of a matrix?

For a 2×2 matrix [[a,b],[c,d]], determinant = ad – bc.

For a 3×3 matrix, use cofactor expansion along the first row:

det = a(ei-fh) – b(di-fg) + c(dh-eg), where the matrix elements are [[a,b,c],[d,e,f],[g,h,i]].

For larger matrices, determinant is calculated recursively using cofactor expansion, or more efficiently via LU decomposition (O(n³)). The Properties mode in the calculator above computes determinants for 2×2 through 4×4 matrices automatically.

Can you multiply non-square matrices?

Yes — and in practice, most matrix multiplications in real applications involve non-square matrices. The only requirement is that the inner dimensions match: if A is m×n and B is n×p (any values for m, n, p), then A×B is defined and produces an m×p matrix.

Common real-world examples:

  • Neural network layer: weight matrix W is (neurons_out × neurons_in), input is (neurons_in × batch_size), output is (neurons_out × batch_size)
  • 3D projection: 4×4 transformation matrix times 4×1 vertex vector gives 4×1 result
  • Image convolution (as matrix mult): can involve very non-square Toeplitz matrices
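The shape arithmetic is easy to confirm in NumPy:

```python
import numpy as np

A = np.ones((3, 4))   # m×n
B = np.ones((4, 2))   # n×p — inner dimensions (4) match
print((A @ B).shape)  # outer dimensions survive: (3, 2)
```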

What is the identity matrix and why does it matter?

The identity matrix I_n is the n×n square matrix with 1s on the main diagonal and 0s elsewhere. For n=3: I = [[1,0,0],[0,1,0],[0,0,1]].

It matters because it is the multiplicative identity for matrix multiplication: A × I = I × A = A for any square matrix A (of compatible size). This mirrors the role of 1 in scalar multiplication.

The identity matrix also defines the concept of matrix invertibility: A⁻¹ is defined as the matrix such that A × A⁻¹ = I. Finding A⁻¹ requires that det(A) ≠ 0. The inverse is computed via Gauss-Jordan elimination or the adjugate formula.
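NumPy’s np.linalg.inv computes the inverse; multiplying back gives the identity up to floating-point error. A small check with an invertible 2×2 matrix:

```python
import numpy as np

A = np.array([[4., 7.], [2., 6.]])  # det = 24 - 14 = 10 ≠ 0, so A is invertible
A_inv = np.linalg.inv(A)

assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
print(A_inv)  # equals (1/10) · [[6, -7], [-2, 4]]
```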

How is matrix multiplication used in machine learning?

Matrix multiplication is the computational core of virtually all modern machine learning:

  • Linear layers: output = W × input + b (W is weight matrix, input is feature vector or batch matrix)
  • Attention mechanism (Transformers): Attention = softmax(Q × Kᵀ / √d) × V — three matrix multiplications per attention head
  • Backpropagation: Gradient computation involves transposed matrix multiplications
  • Embeddings: Looking up embedding vectors is equivalent to multiplying a one-hot vector by the embedding matrix
  • Convolutional layers: Can be expressed as matrix multiplication via the im2col transformation (though specialized algorithms are faster in practice)

GPU hardware is optimized specifically for large matrix multiplications, which is why GPUs (and TPUs) are the dominant hardware for deep learning training and inference.
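As a concrete illustration, scaled dot-product attention is only a few lines of NumPy. The `attention` function and the random shapes below are our own illustrative choices, not code from any framework:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q Kᵀ / √d) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # first matrix multiply
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # second matrix multiply

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # one output row per query: (4, 8)
```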

What is the transpose rule for matrix products?

The transpose rule for matrix products states: (A × B)ᵀ = Bᵀ × Aᵀ. Note that the order reverses — this is analogous to how (AB)⁻¹ = B⁻¹A⁻¹ for invertible matrices.

For longer products: (A × B × C)ᵀ = Cᵀ × Bᵀ × Aᵀ. This result is important in backpropagation, where gradients are computed by transposing the forward-pass weight matrices.

The transpose of a matrix A (written Aᵀ) is formed by reflecting A over its main diagonal — swapping rows and columns so that Aᵀ[i,j] = A[j,i]. The transpose of an m×n matrix is an n×m matrix.
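A quick NumPy check of the order reversal, including non-square factors:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])       # 2×3
B = np.array([[7, 8], [9, 10], [11, 12]])  # 3×2
C = np.array([[1, 0], [2, 1]])             # 2×2

assert np.array_equal((A @ B).T, B.T @ A.T)           # order reverses
assert np.array_equal((A @ B @ C).T, C.T @ B.T @ A.T) # and for longer products
print("transpose rule verified")
```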

What is the trace of a matrix?

The trace of a square matrix is the sum of its main diagonal elements: tr(A) = Σ A[i,i]. For a 3×3 matrix [[a,b,c],[d,e,f],[g,h,i]], tr = a + e + i.

Key trace properties related to matrix multiplication:

  • Cyclic property: tr(ABC) = tr(CAB) = tr(BCA) — the trace is invariant under cyclic permutations
  • Trace and transpose: tr(A) = tr(Aᵀ)
  • Trace and eigenvalues: tr(A) equals the sum of all eigenvalues of A (with multiplicity)
  • Frobenius inner product: tr(Aᵀ × B) = Σᵢⱼ A[i,j]B[i,j] — this is the element-wise dot product of two matrices

The trace appears extensively in physics (quantum mechanics trace formalism), statistics (trace of the hat matrix in regression), and optimization (nuclear norm minimization).
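Each of these identities can be verified numerically on random matrices; a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))  # cyclic
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
assert np.isclose(np.trace(A), np.trace(A.T))                # transpose-invariant
assert np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)  # eigenvalue sum
assert np.isclose(np.trace(A.T @ B), np.sum(A * B))          # Frobenius inner product
print("trace identities verified")
```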

Conclusion: Matrices as the Language of Transformation

Matrix multiplication is more than a computational procedure — it is the language in which linear transformations speak. Every time a 3D game renders a frame, every time a machine learning model makes a prediction, every time a structural engineer computes stress distributions, matrices are multiplied. Developing fluency with matrix operations, starting with the fundamental row-column dot product and building toward an intuition for what these operations do geometrically, is one of the highest-leverage mathematical skills for anyone working with quantitative systems.

Use the calculator above to explore — try different matrix sizes, check the step-by-step breakdowns, and use the Properties mode to build intuition for how determinants and traces behave under different matrix constructions.
