Updating the inverse of a matrix


The Math Overflow link provided by @amoeba on "efficient rank-two updates of an eigenvalue decomposition" is a great first step if you want to start looking deeper into the matter; the first paper provides an explicit solution to your specific question.

Just to clarify what rank-one and rank-two mean so you do not get confused: if your new $A^*$ is such that
$$A^* = A - uv^T$$
where $u$ and $v$ are vectors, then you refer to this as a rank-one update (or rank-one modification); a rank-two update changes the matrix by a term of rank two, e.g. $A^* = A - u_1v_1^T - u_2v_2^T$.

I would argue that we should try to make the information provided on this site as self-contained as possible.
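For concreteness, here is a minimal NumPy sketch of how such a rank-one change propagates to the inverse via the Sherman-Morrison formula; the function and variable names are my own illustration, not taken from the linked paper.

```python
import numpy as np

def sherman_morrison_downdate(A_inv, u, v):
    """Return (A - u v^T)^{-1} given A^{-1}, via the Sherman-Morrison formula.

    A_inv : (n, n) inverse of the original matrix A
    u, v  : (n,) vectors defining the rank-one update A* = A - u v^T
    """
    Au = A_inv @ u           # A^{-1} u
    vA = v @ A_inv           # v^T A^{-1}
    denom = 1.0 - v @ Au     # 1 - v^T A^{-1} u; must be nonzero for A* to be invertible
    return A_inv + np.outer(Au, vA) / denom

# quick check against a direct inversion
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
u, v = rng.standard_normal(n), rng.standard_normal(n)
updated = sherman_morrison_downdate(np.linalg.inv(A), u, v)
assert np.allclose(updated, np.linalg.inv(A - np.outer(u, v)))
```

The update costs $O(n^2)$ operations instead of the $O(n^3)$ of inverting $A^*$ from scratch, which is the whole point of these update formulas.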


" from the Computational Science SE gives a number of MATLAB and C implementations that you may want to consider. implementation are wrappers around C, C or FORTRAN implementations.

+1 to both you and @whuber (and I don't think that "duplicating" any information provided on another SE site is to be avoided!)


Otherwise, if I am correct, the formula gives you only a generalized inverse, and a correction using the null space is required to make it the desired pseudo-inverse. This might be useful: "Similarly, it is possible to update the Cholesky factor when a row or column is added, without creating the inverse of the correlation matrix explicitly."
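As an illustration of that last point (a sketch under my own naming, not code from the quoted source): if $A = LL^T$ and you append a new row and column $(b, c)$, the extended Cholesky factor requires only one triangular solve, so neither $A^{-1}$ nor a full refactorization is needed.

```python
import numpy as np
from scipy.linalg import solve_triangular

def cholesky_append(L, b, c):
    """Extend the lower Cholesky factor L of A to the factor of
    [[A, b], [b^T, c]] without refactorizing or inverting A.

    L : (n, n) lower-triangular factor with A = L L^T
    b : (n,) new column (e.g. correlations with the new variable)
    c : scalar diagonal entry (e.g. 1.0 for a correlation matrix)
    """
    w = solve_triangular(L, b, lower=True)   # solve L w = b
    d = np.sqrt(c - w @ w)                   # new diagonal entry; needs c > w^T w (positive definiteness)
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = w
    L_new[n, n] = d
    return L_new

# quick check: appending a row/column reproduces the full factorization
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
R = np.corrcoef(X, rowvar=False)             # 4x4 correlation matrix
L = np.linalg.cholesky(R[:3, :3])
L_new = cholesky_append(L, R[:3, 3], R[3, 3])
assert np.allclose(L_new, np.linalg.cholesky(R))
```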