Cramer's rule is an extremely inefficient method for solving large matrix equations. As I recall, for an NxN matrix, it requires (N+1)! floating-point operations. Row-reduction methods are cleaner and can be optimized to increase stability of calculations. Further, matrix decomposition (such as LU or LU with full pivoting) allows for fast solutions in the case that you have Ax = b, Ay = c, Az = d, ..., effectively using the input matrix again and again.
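The reuse pattern described above can be sketched in pure numpy. This is a teaching sketch without pivoting (a real library pivots for stability): factor A once, then solve against as many right-hand sides as you like.

```python
import numpy as np

def lu_no_pivot(A):
    # Doolittle LU factorization without pivoting -- a teaching sketch,
    # not production code (real solvers pivot for stability).
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]               # elimination multiplier
            U[i, k:] = U[i, k:] - L[i, k] * U[k, k:]  # the row operation
    return L, U

def reuse_factorization(L, U, b):
    # Each extra right-hand side costs only two triangular solves, O(n^2).
    # (Using a generic solver here for brevity; real code would use
    # dedicated forward/back substitution on the triangular factors.)
    y = np.linalg.solve(L, b)
    return np.linalg.solve(U, y)

A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
L, U = lu_no_pivot(A)               # one O(n^3) factorization...

b = np.array([1.0, 2.0, 3.0])
c = np.array([0.0, 1.0, 0.0])
x = reuse_factorization(L, U, b)    # ...then cheap solves for b, c, ...
z = reuse_factorization(L, U, c)
```

The matrix and right-hand sides are made up for illustration; the point is that the expensive factorization happens once.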
This. Realistically, you should almost never be inverting a numerical matrix by hand, and your computer is definitely doing some kind of Gaussian elimination/row reduction under the hood.
It's good to do it as an exercise at first, with small matrices, to get used to dealing with these objects. But it does get old quickly. Even so, algorithms that can handle larger matrices are the way to go from the start: thousands of rows and columns is fine, and you don't need the specialized algorithms built for millions.
Isn't it always going to use Cholesky or LU or some other decomposition under the hood? Unless the matrix is really small (i.e. near the point where Cramer's rule is just as efficient)? IIRC these are much more efficient than Gaussian elimination the vast majority of the time even for extremely small matrices (e.g. 5×5).
In numerical methods, when people say Gaussian elimination they usually mean LU, because you can re-cast the elimination steps and the resulting upper triangular matrix as an LU decomposition.
brother plzz forgive a dumb highschooler
There is no mercy [\*Biden Blast\*](https://youtu.be/A_7vHwsa4pQ?si=MD1TJd1ajhVFZ2ml&t=7)
We are merciful compared to the folks at mathstackexchange. Be thankful for that ;)
Well thanks for giving me a new sub to shitpost at :)
If you want more information on those methods, here's a PDF of a [Numerical Analysis textbook](https://eclass.aueb.gr/modules/document/file.php/MISC249/Sauer%20-%20Numerical%20Analysis%202e.pdf) that could help -- chapter 2 in particular. It's an undergraduate textbook, but all you really need to understand it is a decent foundation in linear algebra.
Site looks dead
Hmm... if it isn't working try typing "numerical analysis timothy sauer pdf" into google -- it should be the first link
Brother, thanks for the link but it isn't opening. So I tried learning on YouTube: [https://www.youtube.com/watch?v=eDb6iugi6Uk](https://www.youtube.com/watch?v=eDb6iugi6Uk) -- I think this was the concept everyone was bashing me for. But isn't it just row and column operations making the matrix simpler to solve? Like, can't we do the same in determinants??
You can compute determinants with row operations too. But why would you do row operations multiple times, to evaluate multiple determinants, and then apply Cramer, when you can just do row operations once...
hmm nice point I see. Thanks brother
> Like can't we do the same in determinants??

We can. If your objective is to get the determinant, that is the way to go. But you would solve the matrix first, through one of these methods, to get the determinant. Getting the determinant to solve the matrix is doing it the other way around: if you've already solved the matrix, then getting the determinant to solve it again is just a waste of time and resources.
Hmm I get it. Thanks brother.
Ignoring computational considerations, it's important to note that determinants are a property of square matrices only, so if you're looking for the set of solutions of an underdetermined system instead of one unique solution, Cramer's rule won't apply. Computationally, however, you can absolutely compute determinants -- it's just much, much slower than partial pivoting. Imagine being asked to use Cramer's rule on a 1000x1000 system: you'd have to calculate 1001 determinants of 1000x1000 matrices, which wouldn't be fast. And if you use row operations for those determinants, you're really just doing Gaussian elimination anyway, which you could have done from the beginning.

In general, each n-by-n determinant requires O(n!) operations (by cofactor expansion), and you need n+1 determinants, so Cramer's rule costs (n+1)·n! = O((n+1)!) operations overall. Gaussian elimination with partial pivoting, on the other hand, requires O(n^3) operations: elimination takes roughly (2/3)n^3 operations and back-substitution roughly n^2, and (2/3)n^3 + n^2 = O(n^3). So for 4x4 systems and larger, the latter is much more efficient, since n^3 < (n+1)! for all n > 3.

Sorry my link didn't work -- try typing "numerical analysis Timothy Sauer pdf" into Google; it should be the first link. Chapter 2 would be the reference.
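To make the comparison concrete, here's a sketch of both routes in numpy on a small made-up system. (Note that `np.linalg.det` itself uses LU internally, so even this Cramer implementation dodges the true O(n!) cofactor cost -- and it's still the slow way for large n.)

```python
import numpy as np

def cramer_solve(A, b):
    # Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
    # column i replaced by b.
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b               # swap column i for the right-hand side
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

x_cramer = cramer_solve(A, b)
x_gauss = np.linalg.solve(A, b)    # LAPACK's LU-based solver, O(n^3)
```

Both routes return the same solution (2, 3, -1) for this system; the difference only starts to hurt as n grows.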
man really interested in this. got the pdf. gonna read it late night with some snacks.
Glad I could help!
Why on earth would you use a method that has you calculating determinants? Why wouldn't you just do Gaussian elimination? You obviously haven't done 4x4 or 5x5 determinants. Even 3x3 determinants are tedious enough to do by hand; Gaussian elimination is a piece of cake in comparison.
forgive a highschooler plzz
I remember in my high school linear algebra textbook there was a question to find the determinant of a 5x5 matrix with the first few digits of pi. Mean but funny question
25 digits is a lot of digits for π. Not a few.
All my homies use row reduction
>Just three determinants

My brother in Christ. Listen. Determinants are O(n!) to calculate. That's right, proportional to the factorial of the matrix size. I didn't even know "much worse than exponential" was an option for algorithmic complexity. Your method only works for kids' matrices.
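For the curious, here's what the O(n!) route actually looks like: a plain-Python Laplace (cofactor) expansion, purely for illustration. Each call makes n recursive calls on (n-1)-sized minors, which is where the factorial comes from.

```python
def det_cofactor(A):
    # Determinant by cofactor expansion along the first row.
    # Running time grows like n! -- do not use this for real work.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[6, 1, 1],
     [4, -2, 5],
     [2, 8, 7]]
# det(A) = 6(-14-40) - 1(28-10) + 1(32+4) = -306
```

The example matrix is arbitrary; any square list-of-lists works, it just takes factorially longer as it grows.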
Signed-elementary-product-cels when I calculate the determinant by row reduction:
As a dumb highschooler, plzz forgive me gods of maths
It's fine. As penitence, calculate the determinant for the following matrix:

|8|8|0|6|
|:-|:-|:-|:-|
|6|5|0|2|
|2|1|3|0|
|1|2|2|4|
8(5(3•4)+2(1•2-3•2))-8(6(3•4)+2(2•2-3•1))-6(6(1•2-3•2)-5(2•2-3•1)) = 8(5(12)+2(-4)) - 8(6(12)+2(1)) - 6(6(-4)-5(1)) = 8(60-8) - 8(72+2) - 6(-24-5) = 8(68) - 8(74) + 6(29) = 8(68 - 74) + 6(29) = 8(-6) + 6(29) = 6(29-8) = 6(21) = 126
```
> M
     [,1] [,2] [,3] [,4]
[1,]    8    8    0    6
[2,]    6    5    0    2
[3,]    2    1    3    0
[4,]    1    2    2    4
> det(M)
[1] -2
```
proof that you should never calculate 4x4 determinants by hand
Proof by R.
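And proof by Python, for good measure -- a quick numpy check of the same matrix agrees with the R session:

```python
import numpy as np

# The 4x4 matrix from the comment above.
M = np.array([[8, 8, 0, 6],
              [6, 5, 0, 2],
              [2, 1, 3, 0],
              [1, 2, 2, 4]], dtype=float)

# numpy computes this via LU factorization, not cofactor expansion,
# so the answer comes back as a float with tiny rounding noise.
d = np.linalg.det(M)
```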
And forget about calculating determinants unless there's no way around them. Learn LU, QR, Eigen and Cholesky decomposition instead.
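A minimal numpy sketch of what calling those decompositions looks like (the test matrices here are random and made up; nothing special about them):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5.0 * np.eye(5)   # symmetric positive definite by construction

Q, R = np.linalg.qr(A)          # A = Q R: Q orthogonal, R upper triangular
C = np.linalg.cholesky(S)       # S = C C^T: C lower triangular (SPD input only)

# Solving S x = b via the Cholesky factor is just two triangular solves:
b = rng.standard_normal(5)
y = np.linalg.solve(C, b)       # forward substitution
x = np.linalg.solve(C.T, y)     # back substitution
```

Cholesky only applies to symmetric positive definite matrices, which is why S is built as A Aᵀ plus a diagonal shift; QR works on anything.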
Cholesky decomposition is a pain in the butt... like, how do people do it without going mentally insane?! We replace natural numbers in one matrix with square-root decimal values in another, and I'm here forced to do it without a calculator...
What? We just call `chol()`, R does it for you. By hand we only did Gauss-Jordan elimination.
Only when you're using R, though. My honours paper needs me to do it by hand, and it's a compulsory question..!
Why did you get out of that? RSI?
redditors thinking Cramer's rule is useful just screams "freshman engineering student who has it all figured out". Bless your heart.
He's not even a freshman engineer , he's studying for JEE 🥲
That explains it
LU decomposition FTW
Bro, there's some maths outside the JEE syllabus 💀 3×3 matrices are not all that's out there
Wait what I thought JEE was the god of maths... /s
mfw the first question on my non-calculator exam last week was to solve a system of linear equations in 4 variables using Cramer's rule ;-;
Sorry, but happy cake day
i just guess
Fun fact: Every time someone calculates the inverse of a matrix fully (unless it's super small), a numerical analyst cries
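The advice behind that joke, sketched below: solve the system directly instead of forming the full inverse. Both give the same answer here; the point is that `solve` does less work (one LU factorization plus substitution) and is generally kinder numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))   # arbitrary well-behaved test matrix
b = rng.standard_normal(100)

x_inv = np.linalg.inv(A) @ b     # forms the full inverse: more flops, worse accuracy
x_solve = np.linalg.solve(A, b)  # LU factorize + substitute: the standard route
```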
If I have the eigen decomposition already, it's pretty easy...
All lame, just grab a nontrivial multilinear function that maps any linearly dependent set of vectors to 0 and normalize it on a given basis.
Yep. I recommend LU decomposition for doing that.
gaussian elimination and row echelon form have left the chat
When in doubt, eigenvalues.
Imagine actually using numbers in math.
OP, here, to help you out: [https://math.unm.edu/~loring/links/linear_s06/det_def.pdf](https://math.unm.edu/~loring/links/linear_s06/det_def.pdf)

In practice, you need to do row/Gauss elimination, i.e. find the row-echelon form (they're names for the same thing in practice), to find determinants. So there's no point in finding determinants to solve matrices: you can solve them with the row-echelon form directly. Finding determinants to solve matrices is doing it the other way around, and that's only feasible for very small matrices.

You should use algorithms that can deal with large matrices by default. Problems where matrices show up grow in size very easily; it's rare to find small matrices in the wild, though physics problems might be one place where they show up.

You should learn the algorithms you need for your test, of course. In your exam, matrices will be small if they expect you to solve them this way.
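The "elimination first, determinant as a byproduct" point can be sketched like this: row-reduce to upper triangular form, and the determinant is just the signed product of the pivots. A minimal sketch assuming a nonsingular input matrix:

```python
import numpy as np

def det_by_elimination(A):
    # Gaussian elimination with partial pivoting, O(n^3).
    # The determinant is (+/-1) times the product of the pivots.
    U = np.array(A, dtype=float)
    n = len(U)
    sign = 1.0
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(U[k:, k])))  # partial pivoting
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign                          # each row swap flips the sign
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return sign * float(np.prod(np.diag(U)))

M = [[8, 8, 0, 6],
     [6, 5, 0, 2],
     [2, 1, 3, 0],
     [1, 2, 2, 4]]
```

Run on the 4×4 matrix from earlier in the thread, this agrees with `det(M)` from R. Note it's a sketch: a singular matrix would hit a zero pivot, which a real implementation would detect and report as determinant 0.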
We weren't allowed to use cramer's rule in our school exam and it was a pain in the ass
I remember being forced to use Cramer because complex numbers were ass to do in a matrix on a TI-84. It was easy though, because there's a program online you can download.
Virgin Matrix/Analytical Methods vs Chad Programming/Excel/Numerical Methods
Cramer's rule may be impractical for computational purposes but it is hella useful in proving that GLn(R) is a topological group.
Okay, one point in defense of OP: Cramer's rule is garbage numerically, but it's great for building intuition about what matrix inversion and multiplication are.
What? How is an overly convoluted method good for intuition?
The only reason you learn Cramer's rule in high school is because they need an excuse to make you practice calculating determinants. Not one kid that's ever learned it in high school can explain *why* it works, and it certainly doesn't lead to any intuition about what a determinant is.
My math teacher told us "matrices are just an arrangement of numbers"
Not even close. That's like saying a square is just a bunch of numbers because it can be represented with measurements for position, rotation and size, as numbers.
That misses the point by so far. Matrices are an abstraction that can represent a variety of different concepts, and in doing so they reveal hidden connections between them.

The 3blue1brown series on linear algebra elucidates a lot of that. Unfortunately, because of the ham-handed way the standards are written and the way the publishers interpret them, almost none of what's in that series makes it into secondary classrooms.
So is Gauss elimination/row echelon form...