Suppose you’ve got a simple matrix equation, $\boldsymbol{y} = \boldsymbol{\mathsf{A}x}$. Now switch some elements of $\boldsymbol{y}$ with some elements of $\boldsymbol{x}$. How does the matrix change?

This problem seems like it should be neat: if we switch *all* the elements of $\boldsymbol{y}$ with all the elements of $\boldsymbol{x}$, then our new matrix is just $\boldsymbol{\mathsf{A}}^{-1}$. Since we have a full description of how the elements of $\boldsymbol{y}$ depend on those of $\boldsymbol{x}$ (and vice versa), switching only *some* elements should involve some sort of neat part-inverse of $\boldsymbol{\mathsf{A}}$. But I’ve yet to find a neater description of the new matrix than what I’ve worked out below. Surely linear algebra has a method for this? Comment below if you can beat it.

Let me make the problem clearer with an example. Consider the matrix equation $$\begin{pmatrix}y_1 \\ y_2 \\ y_3 \\ y_4\end{pmatrix} =
\boldsymbol{\mathsf{A}}
\begin{pmatrix}x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}.$$ Now if I switch $y_3$ and $x_3$, $$\begin{pmatrix}y_1 \\ y_2 \\ x_3 \\ y_4\end{pmatrix} =
\widetilde{\boldsymbol{\mathsf{A}}}
\begin{pmatrix}x_1 \\ x_2 \\ y_3 \\ x_4 \end{pmatrix},$$ what is the new matrix $\widetilde{\boldsymbol{\mathsf{A}}}$ in terms of $\boldsymbol{\mathsf{A}}$?
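
Before hunting for a closed form, the new matrix can at least be found by brute force. A minimal numerical sketch (my own, not from the post, assuming a generic real $\boldsymbol{\mathsf{A}}$): push random vectors through $\boldsymbol{\mathsf{A}}$, rearrange the components, and fit $\widetilde{\boldsymbol{\mathsf{A}}}$ by least squares.

```python
# Brute-force recovery of the swapped matrix (illustration only).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

X = rng.standard_normal((4, 8))   # 8 random input vectors as columns
Y = A @ X

Z = X.copy(); Z[2] = Y[2]         # new right-hand sides: (x1, x2, y3, x4)
W = Y.copy(); W[2] = X[2]         # new left-hand sides:  (y1, y2, x3, y4)

# Solve A_tilde @ Z = W by least squares (exact here, since the relation
# is linear and Z generically has full row rank).
A_tilde = np.linalg.lstsq(Z.T, W.T, rcond=None)[0].T

# Sanity check on a fresh input
x = rng.standard_normal(4)
y = A @ x
w = np.array([y[0], y[1], x[2], y[3]])
z = np.array([x[0], x[1], y[2], x[3]])
assert np.allclose(A_tilde @ z, w)
```

This pins down what $\widetilde{\boldsymbol{\mathsf{A}}}$ *is*; the question is whether it has a tidy algebraic expression.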

What I’ve found is the following. If your existing matrix equation is $$\begin{pmatrix}y_1 \\ y_2 \\ y_3 \\ y_4\end{pmatrix} =
\begin{pmatrix}a_{11} & a_{12} & \color{red}{a_{13}} & a_{14}\\
a_{21} & a_{22} & \color{red}{a_{23}} & a_{24}\\
\color{blue}{a_{31}}&\color{blue}{a_{32}}&\color{purple}{a_{33}}&\color{blue}{a_{34}}\\
a_{41}&a_{42}&\color{red}{a_{43}}&a_{44}\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix},$$ then your new matrix $\widetilde{\boldsymbol{\mathsf{A}}}$, with the corresponding pair of elements switched (here, the third of each), is of the form $$
\widetilde{\boldsymbol{\mathsf{A}}} = \boldsymbol{\mathsf{A}} -
\begin{pmatrix}\color{red}{a_{13}} \\ \color{red}{a_{23}} \\ \color{purple}{a_{33}} \\ \color{red}{a_{43}} \end{pmatrix}
\color{purple}{a_{33}^{-1}}
\begin{pmatrix} \color{blue}{a_{31}} & \color{blue}{a_{32}} & \color{purple}{a_{33}} & \color{blue}{a_{34}}\end{pmatrix}
- \begin{pmatrix}
0 & 0 & -\color{red}{a_{13}} \color{purple}{a_{33}^{-1}} & 0\\
0 & 0 & -\color{red}{a_{23}} \color{purple}{a_{33}^{-1}} & 0\\
\color{purple}{a_{33}^{-1}}\color{blue}{a_{31}} & \color{purple}{a_{33}^{-1}}\color{blue}{a_{32}} & -\color{purple}{a_{33}^{-1}} & \color{purple}{a_{33}^{-1}}\color{blue}{a_{34}}\\
0 & 0 & -\color{red}{a_{43}}\color{purple}{a_{33}^{-1}} & 0
\end{pmatrix}.
$$

Multiplication here is standard matrix multiplication. The slightly odd notation in the first product means that you perform ordinary matrix multiplication of the 4×1 and 1×4 matrices, but insert an $a_{33}^{-1}$ between them. So the first element of the resulting 4×4 matrix is $a_{13}a_{33}^{-1}a_{31}$. If you’re wondering why I haven’t factored the $a_{33}^{-1}$ out everywhere, it’s because the order of multiplication can matter (if $a_{33}^{-1}$ is itself a matrix, for example).
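
For scalar entries, the formula checks out numerically. A minimal sketch (my own, assuming real entries with $a_{33} \neq 0$), building each piece exactly as written above:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
a33 = A[2, 2]                      # the pivot (a scalar here)

col = A[:, [2]]                    # column 3 of A, shape (4, 1)
row = A[[2], :]                    # row 3 of A, shape (1, 4)

# The "superimposing" correction matrix from the post
C = np.zeros((4, 4))
C[:, 2] = -A[:, 2] / a33           # -a_i3 * a33^{-1} down column 3
C[2, :] = A[2, :] / a33            #  a33^{-1} * a_3j along row 3
C[2, 2] = -1 / a33                 # -a33^{-1} at the pivot itself

A_tilde = A - col @ row / a33 - C

# Check: swapping y3 and x3 in y = A x should give A_tilde z = w
x = rng.standard_normal(4)
y = A @ x
w = np.array([y[0], y[1], x[2], y[3]])
z = np.array([x[0], x[1], y[2], x[3]])
assert np.allclose(A_tilde @ z, w)
```

Note in particular that $\widetilde{\boldsymbol{\mathsf{A}}}$ ends up with $a_{33}^{-1}$ in the pivot position, as you’d hope from the full-inverse case.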

This form has a nice symmetry to it: you subtract off a sort of outer product, and then you subtract off a sort of superimposing product.

But what is this? This method is neat and works, but I can’t find a neat algebraic way of expressing it. I look forward to reading your comments below.

One trick could be to define a matrix that picks out a given row, i.e. E = diag([0,0,1,0]) for your case, and E* = I - E. Then you could define new vectors w = E*y + Ex (the left-hand side) and z = E*x + Ey (the right-hand side). Equivalently, x = Ew + E*z and y = E*w + Ez.

Substituting in and rearranging, I get a new matrix something like

tilde A = (E* - AE)^{-1}(AE* - E)

Of course, then you could interpret postmultiplication by E as picking out the given column, and (presumably) E*-AE will have a simple inverse. But I’m not sure what it is 🙂
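
For what it’s worth, this expression does check out numerically. A quick sketch (my own, with `Es` standing in for E*):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

E = np.diag([0., 0., 1., 0.])      # picks out the third component
Es = np.eye(4) - E                 # E* = I - E

# tilde A = (E* - AE)^{-1} (AE* - E), computed via a linear solve
A_tilde = np.linalg.solve(Es - A @ E, A @ Es - E)

# Verify against the defining relation w = tilde A z
x = rng.standard_normal(4)
y = A @ x
w = Es @ y + E @ x                 # (y1, y2, x3, y4)
z = Es @ x + E @ y                 # (x1, x2, y3, x4)
assert np.allclose(A_tilde @ z, w)
```

(E* - AE) is invertible here precisely when the pivot a_33 is non-zero, which is the same condition the explicit formula needs.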

According to mathematician Andrew Stacy’s comment on G+:

“This doesn’t work in general. Try with a cyclic permutation and you’ll see what I mean. Basically, it’s mixing the domain and codomain in a non-canonical way. Full inversion swaps the domain and codomain without mixing them which is why it’s a reasonable thing to consider.”
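
To see the point concretely, here is a small check (my own) with the 4×4 cyclic shift: the pivot $a_{33}$ vanishes, so neither the explicit formula above nor the E-trick from the earlier comment can get off the ground.

```python
import numpy as np

# The 4x4 cyclic shift: y1 = x4, y2 = x1, y3 = x2, y4 = x3.
A = np.roll(np.eye(4), 1, axis=0)

# The pivot a33 is zero, so a33^{-1} in the explicit formula does not exist.
assert A[2, 2] == 0

# The E-trick fails for the same reason: E* - AE is singular.
E = np.diag([0., 0., 1., 0.])
Es = np.eye(4) - E
assert np.linalg.matrix_rank(Es - A @ E) < 4

# And indeed no answer exists: the new left-hand side contains y4 = x3,
# but x3 appears nowhere in the new right-hand side (x1, x2, y3 = x2, x4).
```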

To expand a little on the above comment, try to do this with a *linear transformation* T: R^4 -> R^4 rather than a matrix. Whenever you think you have some neat thing with matrices, the crucial test is to try to do it for linear transformations.

Note that there are two subtly distinct possibilities. Are the two R^4s the same? Namely, is the general case T:U -> U or T:U -> V?

If the latter, then you could do something with SVD, so you need inner products. Assume T is non-zero. Pick an eigenvector of T^*T with non-zero eigenvalue, say u. Then we have orthogonal decompositions of U and V, say U’ + U” and V’ + V”, where U’ is the span of u and V’ of Tu. SVD says that T preserves this decomposition, so we get induced maps T’: U’ -> V’ and T”: U” -> V”. By construction, T’ is invertible so we could construct the map T’^{-1} + T” : V’ + U” -> U’ + V”. Moreover, as U’ and V’ are one dimensional with inner products, they are naturally isomorphic up to choice of orientation. If we choose that so that T’ is orientation preserving, we can identify V’ + U” with U and similarly on the codomain, giving T^# : U -> V as “partially” inverted.
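
One concrete numpy reading of this construction (my own and possibly over-literal, so treat it as an assumption-laden sketch): with top singular triple T v1 = s1 u1, the identifications amount to T# = (1/s1) u1 v1^T + T (I - v1 v1^T).

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 4))

# SVD: T = U diag(s) Vh. Take the top singular triple (s1, u1, v1);
# U' = span(v1) in the domain, V' = span(u1) in the codomain.
U, s, Vh = np.linalg.svd(T)
u1, v1, s1 = U[:, 0], Vh[0], s[0]

# Identify V' with U' (u1 <-> v1), invert T on that line, and keep T
# unchanged on the orthogonal complement U''.
P = np.eye(4) - np.outer(v1, v1)   # orthogonal projector onto U''
T_sharp = np.outer(u1, v1) / s1 + T @ P

# On the distinguished line the map is inverted...
assert np.allclose(T_sharp @ v1, u1 / s1)
# ...and on the orthogonal complement it agrees with T.
x = P @ rng.standard_normal(4)
assert np.allclose(T_sharp @ x, T @ x)
```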

But is this worth it? It needs SVD to work, and once you have SVD you have all the info about T that you could possibly want, so this partial inversion seems a little like too much hard work for no real gain.

(We could avoid SVD by using QR or LU, but it’s a little messier as we have to choose complements rather than being given them. The end result is similar.)

The second case, of an endomorphism, is even less pretty. As we’re over R, we can’t assume the existence of a line that is mapped anywhere near itself (think of generalised rotations). If we prescribe the line and its complement, as your matrix example would have us do, we can easily avoid it – that was the point of the permutation example. If we don’t prescribe anything, our best bet is to look at the Jordan form, whereby we can find a 2-dimensional invariant subspace, but there need not be an invariant complement so we can’t just swap bits of the domain and codomain at will. There might be something to look at with quotients, but then you’re potentially losing information.

As a general rule, you can only mix the domain and codomain of a function if you are very, very careful. Matrices can seem like magic pools wherein many strange beasties lie. But when viewed in the cold, hard light of abstract vector spaces, these chimera have a tendency to softly and silently vanish away.