M is the matrix we want to orthonormalize, and R is the rotation matrix closest to M.
Analytic Solution
// Here sqrt() denotes the matrix square root of the symmetric
// positive-definite matrix transpose(M)*M, not an element-wise sqrt.
Matrix R = M*inverse(sqrt(transpose(M)*M));
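In practice the matrix square root is usually handled through an SVD: writing M = U S V^T gives (M^T M)^-(1/2) = V S^-1 V^T, so R = U V^T. Here is a minimal sketch using the Eigen library (Eigen and the sample matrix are assumptions for illustration; the Matrix API above is pseudocode):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // Example matrix: a rotation with some accumulated numerical drift.
    Eigen::Matrix3d M;
    M << 0.99, -0.11,  0.02,
         0.10,  0.98, -0.07,
         0.01,  0.08,  1.03;

    // With the SVD M = U * S * V^T, we get (M^T M)^-(1/2) = V * S^-1 * V^T,
    // so R = M * (M^T M)^-(1/2) = U * V^T.
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(M, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();

    // Caveat: if det(M) < 0, U * V^T is the nearest orthogonal matrix,
    // but it is a reflection rather than a rotation.
    std::cout << R << "\n";
}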
Iterative Solution
// To re-orthonormalize matrix M, repeat:
M = 0.5f*(inverse(transpose(M)) + M);
// until M converges
M converges to R, the nearest rotation matrix. The number of digits of accuracy will approximately double with each iteration.
To know when (M + M^-T)/2 meets your accuracy threshold, check whether the sum of the squares of the elements of (M - M^-T)/2 is less than the square of your error goal. Here M^-T denotes the inverse transpose of M.
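Putting the update and the convergence test together, here is a self-contained C++ sketch for 3x3 matrices. The Mat3 type, the adjugate-based inverse, and the sample matrix are assumptions made for this illustration; any matrix library with transpose and inverse will do.

#include <cstdio>

// Minimal 3x3 matrix type for this sketch.
struct Mat3 { double m[3][3]; };

Mat3 transpose(const Mat3& a) {
    Mat3 t;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t.m[i][j] = a.m[j][i];
    return t;
}

// Inverse via the adjugate; assumes a is non-singular.
Mat3 inverse(const Mat3& a) {
    const double (*x)[3] = a.m;
    Mat3 inv;
    inv.m[0][0] =   x[1][1]*x[2][2] - x[1][2]*x[2][1];
    inv.m[0][1] = -(x[0][1]*x[2][2] - x[0][2]*x[2][1]);
    inv.m[0][2] =   x[0][1]*x[1][2] - x[0][2]*x[1][1];
    inv.m[1][0] = -(x[1][0]*x[2][2] - x[1][2]*x[2][0]);
    inv.m[1][1] =   x[0][0]*x[2][2] - x[0][2]*x[2][0];
    inv.m[1][2] = -(x[0][0]*x[1][2] - x[0][2]*x[1][0]);
    inv.m[2][0] =   x[1][0]*x[2][1] - x[1][1]*x[2][0];
    inv.m[2][1] = -(x[0][0]*x[2][1] - x[0][1]*x[2][0]);
    inv.m[2][2] =   x[0][0]*x[1][1] - x[0][1]*x[1][0];
    double det = x[0][0]*inv.m[0][0] + x[0][1]*inv.m[1][0] + x[0][2]*inv.m[2][0];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv.m[i][j] /= det;
    return inv;
}

// One step of M <- (M + M^-T)/2. Returns the sum of squares of the
// elements of (M - M^-T)/2, used for the convergence test above.
double orthonormalizeStep(Mat3& m) {
    Mat3 invT = inverse(transpose(m));
    double err2 = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double d = 0.5 * (m.m[i][j] - invT.m[i][j]);
            err2 += d * d;
            m.m[i][j] = 0.5 * (m.m[i][j] + invT.m[i][j]);
        }
    return err2;
}

int main() {
    // A rotation matrix with some accumulated drift (arbitrary example).
    Mat3 m = {{{0.99, -0.11,  0.02},
               {0.10,  0.98, -0.07},
               {0.01,  0.08,  1.03}}};
    const double goal = 1e-12;  // error goal
    int steps = 0;
    while (orthonormalizeStep(m) > goal * goal)
        ++steps;
    std::printf("converged after %d steps\n", steps + 1);
}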
Why It Works
We want to find the rotation matrix R which is closest to M. We will define the error as the sum of squared differences of the elements. That is, minimize trace((M - R)^T (M - R)).
The analytic solution is R = M (M^T M)^-(1/2), as given above.
The problem is that this requires computing the matrix square root of M^T M. Notice, however, that there are many matrices whose closest rotation matrix is R. One of these is M (M^T M)^-1, which simplifies to the inverse transpose M^-T, since M (M^T M)^-1 = M M^-1 M^-T = M^-T. The nice thing is that M and M^-T lie on opposite sides of R (intuitively, like a and 1/a lie on opposite sides of 1).
We recognize that the average (M + M^-T)/2 will be closer to R than M is, and, because it is a linear combination of two matrices that share R as their closest rotation, it keeps R as its closest rotation matrix: writing the polar decomposition M = R S with S symmetric positive definite, the average is R (S + S^-1)/2, whose symmetric factor is again positive definite. Iterating this, we converge to R.
Worst Case Convergence (Speculative)
Because the iteration is the matrix analogue of the Babylonian square root algorithm, it converges quadratically: each singular value s of M is mapped to (s + 1/s)/2, which converges quadratically to 1.
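A tiny sketch of that scalar iteration (the starting value 2.0 is arbitrary) makes the digit doubling visible:

#include <cstdio>

int main() {
    // Scalar analogue of M <- (M + M^-T)/2: a singular value s of M
    // is mapped to (s + 1/s)/2, which converges quadratically to 1.
    double s = 2.0;
    for (int i = 0; i < 6; ++i) {
        std::printf("step %d: s = %.15f\n", i, s);
        s = 0.5 * (s + 1.0 / s);
    }
}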
The exact worst case error nextE after one iteration on a matrix M with error e is:
nextE = e^2/(2 e + 2)
e = sqrt(trace((M - R)^T (M - R)))
R = M (M^T M)^-(1/2)
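A few steps of this recurrence, starting from an arbitrary error of 0.5, illustrate the claimed doubling of accurate digits:

#include <cstdio>

int main() {
    // Iterate the speculated worst-case bound nextE = e^2 / (2e + 2).
    double e = 0.5;
    for (int i = 0; i < 6; ++i) {
        std::printf("after %d iterations: e = %.3g\n", i, e);
        e = e * e / (2.0 * e + 2.0);
    }
}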