Answer: a = -5, b = -13, and the inverse of A is B/(-50).
Explanation:
The inverse of a matrix A is its adjugate (classical adjoint) divided by the determinant of A. That means the product of the adjugate and the original matrix, in either order, is the determinant multiplied by the identity matrix. We can do some useful things with that fact.
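To see that identity in action, here is a small Python sketch (the example matrix and helper names are mine, not from the problem) that builds the adjugate of a 3×3 matrix from cofactors and checks that M·adj(M) = det(M)·I:

```python
# Sketch: for a 3x3 matrix M, M . adj(M) = det(M) . I,
# where adj(M) is the transpose of the cofactor matrix.
def det2(a, b, c, d):
    # determinant of [[a, b], [c, d]]
    return a * d - b * c

def det3(M):
    # cofactor expansion along row 1
    return (M[0][0] * det2(M[1][1], M[1][2], M[2][1], M[2][2])
          - M[0][1] * det2(M[1][0], M[1][2], M[2][0], M[2][2])
          + M[0][2] * det2(M[1][0], M[1][1], M[2][0], M[2][1]))

def adjugate(M):
    # cofactor matrix C, then transpose
    C = [[(-1) ** (i + j)
          * det2(*[M[r][c] for r in range(3) if r != i
                           for c in range(3) if c != j])
          for j in range(3)] for i in range(3)]
    return [[C[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

M = [[2, 0, 1], [1, 3, -1], [0, 2, 4]]   # arbitrary invertible example
d = det3(M)
assert matmul(M, adjugate(M)) == [[d, 0, 0], [0, d, 0], [0, 0, d]]
```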
__
We can look at the off-diagonal elements of A·B and B·A. They should be zero in every case.
Element (1, 2) of A·B will be the dot product of row 1 of A and column 2 of B. That is ...
[7+a, -1, 3]·[5, a, -5] = 35 +5a -a -15 = 20 +4a = 4(a +5)
For this to be zero, we need a = -5.
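As a quick numeric check (a Python sketch; the function name is mine), plugging a = -5 into that dot product does give zero:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def ab_entry_12(a):
    # row 1 of A and column 2 of B, as given in the problem
    return dot([7 + a, -1, 3], [5, a, -5])

assert ab_entry_12(-5) == 0   # 4(a + 5) vanishes at a = -5
assert ab_entry_12(0) == 20   # and is nonzero otherwise, e.g. a = 0
```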
Element (1, 2) of B·A will be the dot product of row 1 of B and column 2 of A. That is ...
[-7, 5, -11]·[-1, 16+b, 2] = 7 +5(16 +b) -22 = -15 +80 +5b = 5(b +13)
For this to be zero, we need b = -13.
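The same kind of check works for b (again a Python sketch with a made-up function name):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def ba_entry_12(b):
    # row 1 of B and column 2 of A, as given in the problem
    return dot([-7, 5, -11], [-1, 16 + b, 2])

assert ba_entry_12(-13) == 0   # 5(b + 13) vanishes at b = -13
```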
__
We can find the determinant from element (1, 1) of B·A. That is the dot product of row 1 of B and column 1 of A:
[-7, 5, -11]·[7+a, a, 1] = -7(7 +a) +5a -11
For a = -5, this is ...
-7(7 -5) +5(-5) -11 = -14 -25 -11 = -50
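That arithmetic can be confirmed with a short Python sketch using only the given row and column:

```python
a = -5
row1_B = [-7, 5, -11]
col1_A = [7 + a, a, 1]

# element (1, 1) of B.A equals det(A), since B.A = det(A).I
det_A = sum(x * y for x, y in zip(row1_B, col1_A))
assert det_A == -50
```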
So, the inverse of A is B/(-50).
_____
Additional comment
It looks pretty simple here, but it took some playing around to find this route. Choosing other elements can easily lead to 2nd-degree equations that are harder to solve and produce extraneous answers.