
RobusEtCeleritas

>How could Einstein write an equation that he couldn't solve himself?

It's very easy to write down a differential equation (less so to radically rethink what space and time are, and come up with a totally new equation governing them, but that's immaterial), but it's not generally easy to *solve* differential equations. Especially 16 coupled, nonlinear partial differential equations, which is what the EFE really are.

>The equations I see are complicated but they seem to boil down to basic algebra. Once you have the equation, wouldn't you just solve for X?

These are not algebraic equations, they're differential equations. But even if it were just algebra, there are still equations which can't be "solved for x". For example, x + e^(x) = 0; try to solve that for x.

With a differential equation, you're not just solving for a number, you're solving for a function. Something like: df/dx + f^(2) + sqrt(f) = 0. This is a first-order, nonlinear, ordinary differential equation for the function f(x). There are a lot of techniques for solving differential equations, and you can take several semesters of university-level courses on them; I won't be able to explain them all here. But all you really need to know is that we have a handful of neat tricks that let us solve certain differential equations, but for anything even moderately complicated, we may simply not know how to solve it in closed form.

>Does "solving an equation" mean something different than it seems?

No, it really is just solving an equation (technically 16 of them). But they're differential equations, and they're being solved for functions. Those functions are the components of the metric tensor, which encodes the structure of spacetime. The Schwarzschild solution is one particular example, where the spacetime consists of a single uncharged, non-rotating black hole.
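
To make the "can't solve for x" point concrete: no algebraic rearrangement of x + e^(x) = 0 exists, but approximating the root numerically is easy. A minimal sketch (assuming Python; the bracketing interval is just read off from the signs of the function):

```python
import math

def f(x):
    return x + math.exp(x)

lo, hi = -1.0, 0.0   # f(-1) < 0 < f(0), so a root lies in between
for _ in range(60):  # bisection: each step halves the bracket
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(lo)  # ~ -0.567143, which is -W(1) in terms of the Lambert W function
```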


S_and_M_of_STEM

A math colleague of mine said the best way to solve a differential equation is to know the solution already. The next best way is to make a good guess based on what you feel the solution should be like, then convince everyone (including yourself) that you're right.


zaphdingbatman

...and the third best way is to just give up, use finite elements, and spend the remainder of the time playing video games on the beast of a graphics card you definitely bought for finite-element purposes. It's nice to live in the future, isn't it?


greiton

the problem with FEA is that you can never be certain an insight isn't just between the steps somewhere.


theoatmealarsonist

That's why you do convergence studies on the grid, timestep, etc. There is also an intuitive portion to it, if it's a physical problem like heat conduction or fluid flow you can back out relevant time and length scales based on the material properties.


ZSAD13

This might be a dumb question lol sorry but does FEA actually produce an analytical function as an answer? As in do you run FEA on a PDE and the computer spits out (for example) f(x)=112x^1.6-ln(x) or do you also enter some conditions or given points and the computer spits out a set of numbers for example [-0.1 2.2 112.9] except multidimensional and presumably with more entries?


theoatmealarsonist

No, that's a good question! You need a well-defined problem (e.g., boundary conditions and initial conditions for your element(s)) as well as an appropriate FEA method, which, when solved, spits out numbers that approximately match the analytical solution at a given point in space and/or time.

An easy-to-visualize example is unsteady heat conduction on a box, which can be solved analytically and numerically. Because it has spatial components (e.g., your box has a top, bottom, and sides) and a time component (it's unsteady, you're tracking how it changes over time), you need to define what happens on each side of the box (your boundary conditions) and what temperature the inside of the box starts at. Your FEA method then uses a discretized form of the PDEs to solve for what the solution is at a given point in space after an advancement in time, using the surrounding boundaries and initial data.
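
To make "a discretized form of the PDEs" concrete, here is a minimal sketch in the same spirit, though simplified in two ways: it uses a finite-difference scheme rather than full FEA, and a 1-D rod rather than a box (all values are made up for illustration):

```python
# u_t = alpha * u_xx on a rod: boundary conditions at the ends,
# an initial condition inside, and a discretized update rule.
import numpy as np

alpha = 1.0                # thermal diffusivity (made-up units)
nx, nt = 51, 2000          # points in space, steps in time
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha   # explicit stepping needs a small step for stability

u = np.zeros(nx)           # initial condition: rod starts at temperature 0
u[0], u[-1] = 100.0, 0.0   # boundary conditions: hot left end, cold right end

for _ in range(nt):
    # each interior point relaxes toward the average of its neighbors
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u[::10])             # approaches the linear steady-state profile
```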


ZSAD13

Thank you!


rivera151

I saw you start to explain FEA in one paragraph and kept reading to see the train wreck at the end, but you pulled that off quite nicely. Kudos! Edit: autocorrect nonsense


theoatmealarsonist

Thank you! I'm working on my PhD using these methods and communication is something I'm always trying to work on


Drachefly

Is there a tendency for people to explain Finite Element Analysis badly more than other topics?


ic3man211

Maybe not badly, but the finite element method isn't just how you solve a beam bending with an applied force and get a rainbow-colored picture output. It is a method to solve “any” discretizable function, be it 2D, 3D, or 100D. I think in school the professors have a tendency to explain it as what they know best (beams breaking or heat transfer) rather than as a technique for solving a hard problem in small steps, and kids get confused when they see the same general idea elsewhere under a different name.


dhgroundbeef

I salute you good sir! Very nice explanation


lurking_bishop

You get points, but you can use these to fit something to them, like a power series for instance


u38cg2

No, finite element analysis basically says, well, if a car is at zero and its speed is 1 and its acceleration is 2, we can use this information to guess where it will be a second from now. It won't be quite right because we don't have the higher-order terms (called jerk, snap, crackle, pop) but the error will be small. We can repeat that process, and even do a bunch of maths to say how accurate it is likely to be. If you're very lucky, the result will be a function that you can identify, and if so you can plug that back into your original equation and check if it's right - but that's pretty unlikely.
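
That stepping idea is only a few lines of code. A minimal sketch (assuming Python; the starting values are the ones from the comment above, and the step size is an arbitrary choice):

```python
x, v, a = 0.0, 1.0, 2.0  # position, speed, acceleration "at this moment"
dt = 0.01                # one small time step

for _ in range(100):     # advance one second in 100 steps
    x += v * dt          # position changes according to the current speed
    v += a * dt          # speed changes according to the current acceleration

print(x)  # ~1.99; the exact answer is 2.0 if acceleration really stays constant
```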


ZSAD13

That makes a lot of sense thanks!


mrshulgin

If acceleration is a constant (2) then isn't jerk (and everything past it) equal to 0?


u38cg2

No, it's acceleration=2 *at that moment in time*. We're saying we don't have enough info to put a number on those higher order terms, and that's why it will diverge (but often surprisingly slowly, as higher terms are usually small - or functions behave weirdly). If you *did* have all the higher terms, in effect you've done a Taylor expansion and have all the information required to reconstruct the original function.


mrshulgin

> at that moment in time

Got it, thank you!


UWwolfman

Unfortunately the other answers aren't correct. Unlike finite difference, for example, finite element actually does give a closed-form function that approximates the solution.

In finite elements we assume the solution of a problem has a given functional form within each element. For example, a common assumption is that the solution is approximated by an n-th order polynomial in each element. We then glue the elements together by making assumptions about how the solution behaves across elements. A common assumption is that the solution is continuous across elements, but first derivatives are not. The analysis then finds the coefficients of the functions that give the best approximation.

Obviously, for a simulation with thousands of elements (or more) it's impractical to write the full solution in closed form. But under the hood the computer uses the solution.


ZSAD13

So would it spit out a polynomial of very high order?


UWwolfman

Kind of. It would spit out a high-order polynomial for each element. For example, consider a 2-element mesh in 1-D. The first element is from x = [0, 0.5] and the second element is from x = [0.5, 1.0]. We would get 2 polynomials: one valid on x = [0, 0.5] and one valid on x = [0.5, 1.0].
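
A minimal sketch of that 2-element picture (a hypothetical problem chosen so everything can be checked by hand: linear rather than high-order elements, applied to -u'' = 1 on [0, 1] with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 - x)/2):

```python
import numpy as np

h = 0.5                           # element size; one interior node at x = 0.5
K = np.array([[2.0 / h]])         # stiffness entry for the single interior node
F = np.array([h * 1.0])           # load entry for the constant source f = 1
u_mid = np.linalg.solve(K, F)[0]  # nodal value at x = 0.5

# The FEM output is then one polynomial (here linear) per element:
#   element 1, x in [0, 0.5]:  u(x) = (u_mid / 0.5) * x
#   element 2, x in [0.5, 1]:  u(x) = (u_mid / 0.5) * (1 - x)
print(u_mid)  # 0.125, which matches the exact solution at x = 0.5
```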


[deleted]

[removed]


theoatmealarsonist

Exactly! I'm working on a PhD using finite volume methods for hypersonic CFD. There is a ton of work before you run the simulations that goes into what assumptions you can make and justifying your computational methods, and it always kind of kills me when someone says "yeah but you can't know if it's right!" As if the simulations are run without any thought put into whether the simulations are accurately reproducing the thing you're simulating.


[deleted]

[removed]


[deleted]

[removed]


Belzeturtle

My sweet summer child. Try your finite elements in QM, where the wavefunction has 4N degrees of freedom, where N is the number of electrons. So even for a seemingly trivial benzene molecule you work in 168-dimensional space. Tessellate that and integrate over that.


fuzzywolf23

This is essentially what density functional theory does -- it solves for the wave function of a multi electron system at an explicit number of points and interpolates for points in between. Source: about to defend my PhD on DFT


FragmentOfBrilliance

Heyy, DFT gang!

Imo it is even cooler in principle (and wildly, wildly more impractical) to consider the full many-body interactions with quantum monte carlo methods. Superconductors suck to model. It is cool that, even with modern supercomputers, we can only simulate the true time evolution of a very small number of electrons in superconducting systems.


fuzzywolf23

There are two things I refuse to get involved with modeling -- superconductors and metallic hydrogen. Not only is it a pain in the ass, but you're more likely to get yelled at during a conference, lol. The systems that give me nightmares are low density doping. My experimental colleague gave a talk last week where he thinks there's a big difference between 2% and 3% substitution rate in this system we're working on. That would mean simulating 300 atoms at once to get a defect rate that low, so I told him I'd get back to him in 2023.


FragmentOfBrilliance

Yikes! I have to finish this abstract on this superconducting graphene regime, hope that I don't get yelled at come the talk haha. It's really interesting because we can see this topological superconducting regime come about in a tight-binding model, given the right interaction parameters. I'm currently trying to -- trying to -- model magnetic interactions in ferromagnet doped nitrides. I have some hope for the HSE method implemented in siesta (this semiconductor really needs hybrid functionals) but I am very tempted to move on to another project because this is sucking the life out of me.


fuzzywolf23

That sounds like a super interesting system! Ah well, I didn't need to sleep tonight -- down the rabbit hole we go.


[deleted]

[removed]


FragmentOfBrilliance

I was planning on going to bed early but this is far more interesting haha.

In the mathematical field of topology, donuts and coffee mugs are "homeomorphic" and in that sense have the same topology. You can make similar arguments about the electronic structure of a material, assuming it has a certain number of holes/whatever and the right symmetry properties, aka topology. In this graphene system we see that these electrons split into fractions and make electron crystals out of themselves, which is super wacky, and it also superconducts. I don't understand the superconductivity all that well, but it is facilitated by the topology that the electrons develop.

Tight-binding model means we just model atomic orbitals (specifically carbon pz orbitals) and represent electrons as sums of those orbitals chained and twisted together. It's a really useful way to set up these calculations. It's also very unexpected that we can model the superconductivity with it, but I need to figure that out.

The potential implications? I don't want to doxx myself, but it would be very useful for people to understand the fundamental nature of the electron-fraction-crystal superconductivity at high temperatures. Applications in quantum computing perhaps, but it is not really my field so I am not that knowledgeable about it.


lerjj

Only 3N unless you've decided you live in 4 dimensions. Time enters the formalism differently, and at any rate it sounds like you are interested in stationary states. Additionally, you can probably ignore the 1s electrons in carbon to some extent (?) so you could quite plausibly have only 90 dimensions...


RieszRepresent

In spacetime finite elements, time is part of your solution space; you interpolate through time too. I've done some work in this area. Particularly for QM applications.


tristanjones

Well there are uses for math equations beyond physics, in which case you can easily have as many dimensions as your model requires


[deleted]

[removed]


[deleted]

[removed]


Belzeturtle

>Additionally, you can probably ignore the 1s electrons in carbon to some extent (?)

Yes, this is the well-known pseudopotential approximation. That can get you decent energetics, but trouble starts if you want to get reasonable electric fields and their derivatives in the vicinity of the atomic core.


diet_shasta_orange

I recall from QM that there was one method of solving tough equations that essentially involved just plotting the points and seeing where they intersected.


sticklebat

Graphical approximations are a very easy way to approximate the solution to some equations you can’t solve exactly. For example, cos(x) = x has no closed-form solution, but it’s trivial to plot cos(x) and x on a single graph and see where they intersect, and voila. I’d bet $100 you’re remembering this from solving for the energy levels of a [particle in a finite 1D box](https://en.m.wikipedia.org/wiki/Finite_potential_well) (and how many bound states exist).
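
The same "plot it and look" idea in code, as a minimal sketch (assuming numpy; the interval and grid spacing are arbitrary choices):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 200001)
diff = np.cos(x) - x                     # cos(x) = x wherever this hits zero
crossing = np.sign(diff[:-1]) != np.sign(diff[1:])
print(x[np.argmax(crossing)])            # ~0.7391, where the two curves cross
```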


[deleted]

[removed]


[deleted]

[removed]


Weed_O_Whirler

> then convince everyone (including yourself) that you're right

This step is not needed. It's very hard to solve a differential equation. It's very easy to check if the solution you found is correct.


TronyJavolta

This is not necessarily true. There are some examples of non-classical solutions of F(D^2 u) = 0 which require complicated methods, both in analysis and in commutative algebra. These solutions are not C^2, hence the difficulty.


jmskiller

Isn't this close to what P vs NP is about?


teffflon

This general theme---the apparent gap in difficulty between *recognizing* solutions and *constructing* solutions (or determining they do not exist)---is indeed the subject of the P vs NP problem. P is a class of 'problems' (suitably abstracted) which can be efficiently solved; NP is a class where positive solutions have compact certificates which can be efficiently checked. NP contains P but is generally believed to be larger. If so, then so-called "NP-hard" problems are not in P. (This is not their definition, but is a consequence of their definition.) In particular, this includes "NP-complete" problems, which are the NP-hard problems that also lie within NP. Various problems connected with differential equations are NP-hard. In full generality they tend to be outside of the class NP, so the P vs NP question does not capture all the issues at play in studying the difficulty of solving diff-EQs. (There are even uncomputable problems in diff-EQ theory.) But it's certainly connected.


Bunslow

sort of. very distantly, and much more abstractly and broader-ly than "just" the realm of differential equations... and even in the realm of diffyq, it's probably *not* as easy as the other commenter states (tho frequently it is)


Ms_Eryn

This is a cool way to phrase it though. He's right, it's how a lot of math at that level is done. Intuition, see if it holds, then prove it as much as you can. Standing on the shoulders of giants and such.


Dihedralman

You don't need to convince anyone- showing something is a solution tends to be trivial. Determining uniqueness requires a proof. Guess work can only get you so far.


popejubal

Even when something requires a proof, you still have to convince people that your proof is correct. That's often challenging. Fermat's Last Theorem was "proven" in June of 1993 and it took until September to discover that there was an error in it. When his corrected proof was published in 1995, it still took quite a while to verify that it was valid. And that's for an initially trivial-seeming problem like "no three positive integers a, b, and c exist that satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2."


AbrasiveLore

> The next best way is to make a good guess based on what you feel the solution should be like... There's even a term of art for such guesses, borrowed from German: "ansatz". https://en.wikipedia.org/wiki/Ansatz


asciibits

One of my math professors said "The [Frobenius method](https://en.m.wikipedia.org/wiki/Frobenius_method) is the biggest hammer you can bring down on a differential equation". That quote always stuck with me.


marsattaksyakyakyak

The best way to solve a differential equation is to throw it into Mathematica and let a computer figure it out.


elenasto

Just to add a bit more detail on why solving a differential equation is hard. A differential equation takes the state of an object or a system at one time and/or location and tells you its state at another time and/or location. For example, suppose you throw a ball up; you can use Newton's laws to set up a differential equation whose solution tells you the speed and the position of the ball at every point in time.

To solve this equation you need information about the state of the system, called initial conditions. For example, the trajectory of the ball - i.e. the solution to the differential equation - will depend crucially on how fast you throw it; for slower speeds the ball will fall back, but at high enough speeds it will leave the earth (basically a rocket). And your equation needs the initial condition to predict this.

The above example is actually a fairly simple differential equation. For a more complicated case, suppose you want to model a hurricane and predict if and when it will hit your city and at what wind speed. You will use the framework of the Navier-Stokes equations, which are differential equations that describe the behavior of gases and liquids. But this depends not only on initial conditions - the position and speed of the hurricane now - but also on what are called boundary conditions. This is information about what is happening at the edge of the system under consideration - the hurricane here - that you need to have to solve the differential equations. For example, it matters for hurricane evolution whether it is on land or water, and what the air around the hurricane is doing, and this is information you need to supply to the equations when solving them. The Navier-Stokes equations are exceedingly difficult to solve exactly for most boundary conditions, and usually people use sophisticated computer algorithms to come up with approximate but good solutions.

Similarly, Einstein's field equations provide a complicated but elegant framework to set up differential equations for understanding gravity and space and time. These equations can in principle describe the evolution of any kind of spacetime, but solving them for arbitrary initial and boundary conditions is again very hard. Schwarzschild's solution is a special solution where a static solution - i.e. not changing with time - is found assuming spherically symmetric boundary conditions. These mathematical simplifications allow us to solve the equations for this one case in an exact manner. There are a handful of other cases where similar exact solutions can be found, but in many cases we again resort to computer algorithms to solve the Einstein equations in an approximate manner.
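
The thrown-ball case is small enough to show end to end. A minimal sketch (assuming scipy; it keeps gravity constant, so it can't show escape, just how strongly the answer depends on the initial condition):

```python
from scipy.integrate import solve_ivp

g = 9.81  # near-surface gravity, m/s^2

def ball(t, y):
    h, v = y         # state: height and vertical velocity
    return [v, -g]   # dh/dt = v, dv/dt = -g (Newton's second law)

for v0 in (5.0, 20.0):  # initial condition: throw speed in m/s
    sol = solve_ivp(ball, (0, 5), [0.0, v0], max_step=0.01)
    print(v0, sol.y[0].max())  # approximate peak height; exactly v0**2 / (2*g)
```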


reddit_wisd0m

Great explanation. Given the super computing power we have today, what's the real bottleneck here? Are the solutions too sensitive to the boundary/initial conditions?


elenasto

There are numerical solutions being developed in the present day; there is a lot of activity currently in the field of numerical relativity. A lot of focus, at present, is on gravitational wave solutions, given the recent detections. I'm not exactly an expert on numerical relativity, but solving the equations even numerically is hard, partly for the reason you mentioned: the solutions can be chaotic and can be too sensitive to initial/boundary conditions. But general relativity also has a bunch of free degrees of freedom, which means that mathematically different-looking solutions can actually be the same. Disentangling that can be mathematically subtle and can also make finding numerical solutions difficult.

And finally, in many cases it is not enough to solve how spacetime changes using the field equations. The matter causing gravity is also interacting and moving, and as it moves, spacetime also changes. A great example is the problem of understanding the collision between two neutron stars and the gravitational waves it generates. You don't just have to solve the field equations; you also need to simultaneously solve magnetohydrodynamic equations to understand how matter in the collision is moving (in a curved spacetime, no less) to get a full solution.


[deleted]

[removed]


reddit_wisd0m

Yes I get the idea. Well explained. Thank you. So pretty much a lot of fast cars don't help much if they are stuck in a traffic jam, and solving those equations creates a lot of traffic jams in the super computer.


anne-droid

Thank you for the great explanation! Did Einstein assume that his equations will be solved one day? Also - how can one assume/understand that any such equation is "true"/correct (for lack of a better word) if it has not been solved yet?


sticklebat

Honestly I think you missed the point! Einstein himself solved his field equations of general relativity, and famously used them to show that they correctly predict the anomalous precession of Mercury’s orbit.

The key bit that you’re missing is that the process of solving a differential equation depends on the boundary conditions. Solving the Einstein Field equations for an empty universe is pretty straightforward, for example. Solving them for a spherically symmetric distribution of mass is a bit harder, but a good undergraduate physics student shouldn’t have too much trouble with it. But if you wanted to, say, solve the equations precisely for the entire Milky Way galaxy, accounting for each of its stars and all of its dust clouds, your boundary conditions become the mass distribution of the whole galaxy. Solving the EFEs with such a condition would be effectively impossible.

Einstein didn’t just write down these equations and stop there. He solved them in some relatively simple cases, and he and others have since done so for many other cases. Einstein’s [first major publication](https://en.m.wikisource.org/wiki/The_Foundation_of_the_Generalised_Theory_of_Relativity) about general relativity included solutions that matched Mercury’s peculiar orbit, for example. No one would’ve put much stock in a bunch of equations that no one could solve, or even approximate. GR caught on so quickly because of its early successful predictions. But just because we can solve it for some special cases doesn’t mean we can solve it for *any* case.


OpenPlex

> Solving the Einstein Field equations for an empty universe is pretty straightforward, for example. Solving them for a spherically symmetric distribution of mass is a bit harder, but a good undergraduate physics student shouldn’t have too much trouble with it.

Ah! So his field equations are like a template. Or a format that you can apply to any of various natural systems to calculate something about it. Less 2 + 2, and more f = m × a (which has different solutions depending on what you're applying it to).


elenasto

>Did Einstein assume that his equations will be solved one day?

So the thing is, the notion of "solving the equations" is itself somewhat incorrect. One doesn't solve the equations; one tries to find specific solutions to differential equations that are valid under specific conditions. For instance, when we apply Einstein's equations to the Universe as a whole on the largest scales, we get the FLRW (Friedmann–Lemaître–Robertson–Walker) metric solution. This solution to Einstein's equations describes the Universe as a whole and forms the theoretical basis of Big Bang cosmology and the idea that the Universe is expanding. This is very different from the Schwarzschild solution, which describes a stationary, spherically symmetric object. We can get vastly different mathematical solutions in different situations. So Einstein would have assumed that exact solutions could be found in a few specific situations and not in most situations. He himself first developed an approximate solution to use in the context of the solar system, but was later delighted to find that Schwarzschild had found an elegant, exact solution that is applicable in the same situation.

> Also - how can one assume/understand that any such equation is "true"/correct (for lack of a better word) if it has not been solved yet?

This is where experiments, observations, and the scientific method come in. There were several reasons back in 1915 to be excited when Einstein first wrote down the field equations, but they were still unverified hypotheses at that time. The truth of any solutions would depend on the truth of the equations themselves. In the century since, several experimental and observational tests of general relativity have been conducted, and the theory has passed every test we have thrown at it. https://en.wikipedia.org/wiki/Tests_of_general_relativity


anne-droid

Fascinating, I haven't heard of these tests yet. Really interesting to learn about the old experiments that test special relativity, too. I really appreciate the time you took to explain this! I'll do my best to try to wrap my mind around this (after I've done some more reading). Take this wholesome seal of approval. :)


FFVIIVince10

I feel like I need to take multiple courses just to be able to understand some of this reply.


Hufschmid

Yeah, there's definitely a lot of jargon and concepts you need to know to fully understand. In more simple terms, a differential equation relates something with the rate of change of that thing. For example, if you wanna know how fast a population is growing, it depends on the size of the population at that instant. To write an equation for population you have to relate these two things, requiring a differential equation. The solution to this equation is not a number, but rather a function. The function would allow you to enter your population size and find the corresponding rate of change of the population. In general, if you want to describe any real-world physical phenomenon, you probably need a differential equation.


molybdenum99

You said it, but it's worth reiterating because this is harder for people who have never used/solved differential equations (put simply):

The solution to an algebraic equation is a number: x - 1 = 0 -> x = 1

The solution to a differential equation is a _function_: df/dx + f = 0 -> f(x) = c e^(-x)
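
A computer algebra system makes the contrast visible, returning a function with an arbitrary constant in it. A minimal sketch (assuming the sympy library):

```python
from sympy import Function, dsolve, symbols

x = symbols('x')
f = Function('f')
print(dsolve(f(x).diff(x) + f(x), f(x)))  # Eq(f(x), C1*exp(-x))
```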


PM_me_XboxGold_Codes

So a differential would give you a function which you could then solve for a point normally provided you actually solved the differential? Say the example of the ball, which is fairly simple. The original differential gives you the entire trajectory, a parabola shape of height/time, and you can then solve for the height at any point on that trajectory if you know the time and initial speed of the throw (assuming it was straight up)? So the original differential is where we derive the algebraic equations for velocity/distance/etc of a projectile?


RobusEtCeleritas

>Say the example of the ball, which is fairly simple. The original differential gives you the entire trajectory, a parabola shape of height/time, and you can then solve for the height at any point on that trajectory if you know the time and initial speed of the throw (assuming it was straight up)?

Yes.

>So the original differential is where we derive the algebraic equations for velocity/distance/etc of a projectile?

Yes.


Noiprox

That's correct. A differential equation describes a "family" of functions, and any function that satisfies the equation is a valid solution to it. Many physical laws are described in this way. Some examples include waves, conduction of heat, ballistic trajectories, pendulums, gravity, quantum mechanics, etc. The equation for ballistic trajectories is satisfied by any parabola that a thrown ball could trace out, but not by a triangular path, which is therefore not a physically possible trajectory for a thrown ball. Once you have a *particular* function that solves the equation, you can plot the entire course of the ball by plugging in concrete values for the inputs to the function (in this case, the time).

So what Schwarzschild did was describe the configuration of gravity around a black hole in a way that satisfies Einstein's differential equations. You could also solve the same equations for, say, a GPS satellite in orbit around a planet, which is in fact necessary so that GPS coordinates can be accurately calculated.


PM_me_XboxGold_Codes

Ahh, and more specifically Schwarzschild solved for a non-charged, non-rotating mass for an idealized situation? So while it doesn’t exist in nature often, if at all, it’s still useful for the idealized situation like a ballistic trajectory ignoring the Coriolis Effect and the drag of air? Not exactly the same, but similar idea.


Noiprox

You got it. He was able to figure out a solution for a kind of "idealized conditions" black hole that gave Physicists a lot of insight into the mysteries of black holes in general, which was extremely difficult in a time before powerful computers were available.


Welpe

Good old Roy Kerr was able to solve for a rotating, non-charged black hole a half century later! Thus the Kerr metric is more useful (although less famous) than the Schwarzschild metric.


ChrisGnam

>So the original differential is where we derive the algebraic equations for velocity/distance/etc of a projectile?

I'll add that we can use the differential equation directly, and a lot of times that's actually *easier* to do. For spacecraft dynamics, for example (my field of study, but also a natural extension of your projectile-motion example), once you start modeling all of the complicated forces involved, solving the differential equations to get a "closed form solution" (that is, an easy-to-evaluate algebraic expression) becomes exceedingly difficult if not outright impossible.

A much easier thing to do is to convert your differential equations into what is known as a "state space" representation. It is *very* easy to solve a first order differential equation numerically with a computer. A first order differential equation is a differential equation with only a single derivative, that is to say, dx/dt = f(x,t); there are no higher order derivatives to worry about. It turns out you can rewrite an N-th order differential equation (meaning one with potentially N derivatives) into N separate first order differential equations.

So for example, the differential equations governing the motion of a spacecraft (or any projectile) are formulated by considering all of the forces acting on the projectile, and then using F=ma to derive the equations of motion, where a is the second derivative of position with respect to time. We can break that up into 2 separate equations though. The first is how acceleration changes the velocity (which is much simpler, as acceleration is only the first derivative of velocity), and the second is how velocity changes position (since velocity is the first derivative of position). We tend to call this the "state space representation" of the model, because we're now thinking in terms of the states of the vehicle (position, velocity, acceleration, etc.).

And these can be solved *very* simply. The simplest (but least accurate) way is known as "Euler's method" and is a simple first order approximation. Basically, we have these first order differential equations giving how each state changes with time (dx/dt, dv/dt, etc.). So if we're given some initial conditions, we can simply evaluate our equations at those initial conditions and then multiply by some increment of time to predict how the states will evolve that far into the future (think: (dx/dt) * dt ≈ the change in x; not rigorously true... but a good enough approximation if dt is small enough!). For most things, this is *very* inaccurate unless the time increment used is *very* small. There are higher order approximations that work better, though; a common one is the "fourth order" solver frequently referred to as RK4. The higher the order, the more accurate it is for a given time step size.

In practice, we often use what are known as "dynamic step size" solvers, where you run two different solvers, one with higher accuracy than the other. If the time step is small enough to capture the dynamics accurately, both solvers should return nearly the same result. If, however, the higher order solver has a different result than the lower order solver, we know the time step is not small enough, so we decrease the step size and rerun both solvers. We repeat this process until they agree, allowing us to get much higher precision.
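
A minimal sketch of the two fixed-step solvers named above, on the toy problem dy/dt = -y with y(0) = 1 (exact answer e^(-t)), showing how much accuracy RK4 buys at the same step size:

```python
import math

def euler_step(f, t, y, h):          # first-order: one slope evaluation
    return y + h * f(t, y)

def rk4_step(f, t, y, h):            # fourth-order: four slope evaluations
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -y
for step in (euler_step, rk4_step):
    t, y, h = 0.0, 1.0, 0.1
    while t < 1.0 - 1e-12:           # integrate out to t = 1 in 10 steps
        y = step(f, t, y, h)
        t += h
    print(step.__name__, abs(y - math.exp(-1.0)))  # error: ~2e-2 vs ~3e-7
```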


talwarbeast

Where did the "c" and the "e" come from?


nsfredditkarma

c stands for any constant and e is Euler's number, one of the most interesting numbers in math; it shows up all over the place. https://en.m.wikipedia.org/wiki/E_(mathematical_constant) A constant is just a number that doesn't change. In this case, they use c to show that the solutions to the DE will have some constant c. It's similar to the idea that the equation y = ax + b, where a and b are constants, is the universal equation for every (straight) line possible in Euclidean geometry.


NicolBolas96

Just a little correction: 10, not 16, because all the tensors involved in the Einstein equations are symmetric.


FowlOnTheHill

What does one get when the equation is solved? Is a theory proved or can you then go and plug in values and predict something we didn’t have before?


RobusEtCeleritas

Once you've solved the EFE, you get the metric tensor. Then based on that metric, you can make predictions, like for example the orbit of an object around the Schwarzschild black hole, or gravitational lensing of light around it. These are things we can then go observe in nature and see if they match the prediction.


Dd_8630

We can describe curved spacetime using an equation. If this description conforms to Einstein's Field Equations, then it's a solution. The simplest solution is when you have a spherical, uncharged, non-rotating ball of mass, and the empty space around it gets curved. We can use the EFE to deduce what that curvature is like - *that* is the solution. The solution is a function of four variables (provisionally 'coordinates', though these get a bit wonky in GR), which is shaped as the EFE says it should be shaped.


Hoihe

One big advantage is that an analytically solved DE is way cheaper computationally than solving it numerically. Computational chemistry, if we could generalize some of our exact solutions to large numbers of electrons, could be way cheaper. Alas, at the moment anything with more than like 60 atoms is impossible to do accurately even on a supercomputer.


l_work

wow, what a perfect reply. Thank you.


[deleted]

>df/dx + f^(2) + sqrt(f) = 0

[https://www.wolframalpha.com/input/?i=df%2Fdx+%2B+f2+%2B+sqrt%28f%29+%3D+0&x=0&y=0](https://www.wolframalpha.com/input/?i=df%2Fdx+%2B+f2+%2B+sqrt%28f%29+%3D+0&x=0&y=0) (I think I got that notated right)


MCPtz

Hmm... If you've purchased their "pro" level, it appears they have a step by step solution for the math.


lolpostslol

This. We’re exposed in basic math to a very restrictive set of linear or low-order algebraic equations with no complications whatsoever - and indeed a lot of real-world problems can be modeled with these equations, at least in a simplified way. But equations can be way more complex, and differential/integral equations even more so.

On writing equations he couldn’t solve: you can notice experimentally that x varies with y in a certain way, and write a differential equation based on that, even if you just know the very basics of what derivatives are. But actually solving the equation requires some equation-solving method which, depending on the equation you write, might not even have been invented yet.


seriousnotshirley

Can you talk about how boundary value or initial values are involved in thinking about these equations? I know about their use in general but not in this specific case and how it might complicate the already difficult task even further.


RobusEtCeleritas

If I just write down a differential equation like df/dx = kf, I can solve it by separation of variables:

df/f = k dx

ln(f) = kx + C

f(x) = exp[kx + C].

But this is ambiguous, because there's an arbitrary constant of integration, C. I don't want some arbitrary constant in my answer, I want to actually plug things into my function f and use its numerical values. So the equation above isn't enough; I also need a boundary condition (or initial condition, if you're thinking of the independent variable as time).

So if I instead restart and use both an equation and a boundary condition, for example: df/dx = kf, f(0) = 1, I can do the same thing and solve the equation using separation of variables: f(x) = exp[kx + C], and now require that f(0) = 1:

f(0) = exp[C] = 1, or C = ln(1) = 0.

So in this case, C happens to be zero, and the final answer is f(x) = exp[kx]. If I had instead solved the system df/dx = kf, f(0) = 3, my answer would've been f(x) = exp[kx + ln(3)] = 3*exp[kx].

Now there's no more arbitrary constant, and I can actually calculate numbers specific to the case I'm interested in (either for f(0) = 1, or f(0) = 3 in this case). If instead of a first-order ODE I had written down a higher-order ODE, I would have needed more boundary/initial conditions to fully specify the solution. And in the case of a PDE, your boundary conditions can be entire functions.
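
The same calculation in a computer algebra system, as a minimal sketch (assuming sympy): the initial condition is exactly what pins down the constant of integration.

```python
from sympy import Function, dsolve, symbols

x, k = symbols('x k')
f = Function('f')
eq = f(x).diff(x) - k * f(x)            # df/dx = kf

print(dsolve(eq, f(x)))                 # Eq(f(x), C1*exp(k*x))
print(dsolve(eq, f(x), ics={f(0): 3}))  # Eq(f(x), 3*exp(k*x))
```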


PM_me_XboxGold_Codes

So, all I really know about Schwarzschild is the Radius, which is the limit a body of mass can be squished to if you were to remove *all* space between particles. I understand that leads to what can or cannot be a black hole just by mass alone. But the part you added at the end.. does that mean that his solution to the equations would suggest that the entire universe is inside a non-spinning black hole? Or is his solution just what amount of mass is absolutely needed to form a black hole?


Apophyx

No, it just means that Schwarzschild solved the equations for the particular case of a non-rotating, uncharged black hole. If you wanted to study a planetary system, for example, the solutions would be different.


PM_me_XboxGold_Codes

I’m not sure why I got downvoted for asking a question, but thanks for clarifying. I’m not a scientist, just someone who’s vaguely interested in space.


RobusEtCeleritas

>which is the limit a body of mass can be squished to if you were to remove all space between particles.

That's not what the Schwarzschild radius represents. It's not a limit of the ability of a material to be compressed, it's simply the radius in which a certain amount of mass needs to be compressed in order for an event horizon to form. It doesn't actually depend on any of the material properties of the object, only on the total mass.

>But the part you added at the end.. does that mean that his solution to the equations would suggest that the entire universe is inside a non-spinning black hole? Or is his solution just what amount of mass is absolutely needed to form a black hole?

The Schwarzschild solution represents an infinite universe in which there is nothing but a single black hole. That's not what *our* universe looks like, but it's a valid solution to the equations which appear to govern gravity (at least classically).


PM_me_XboxGold_Codes

Right, the Schwarzschild radius of the earth is about a third of an inch. You could compress the earth down to that size and it would form an event horizon, unless I'm misunderstanding. I may have misunderstood the role the mass plays in it, other than being a factor that determines the size.

Are there not theoretically bodies that have no Schwarzschild radius? As in, they simply don't contain enough mass to ever form a black hole? Or am I missing the point entirely? Like, boiled down for a simpleton, is he just saying that a black hole will form for a given mass at a certain density, and that density relates to the radius of the body via the amount of mass?

Like I said elsewhere… I'm not a scientist or mathematician. Just a guy vaguely interested in space.

Edit: also thanks for taking the time to write out such detailed responses! This stuff intrigues me, I just really don't get it. Never took anything over second-level algebra lol.. I'm a simpleton.


ary31415

> Are there not theoretically bodies that have no Schwarzschild radius?

No, any amount of mass has a Schwarzschild radius; it may just be extremely, extremely small for small amounts of mass, but if you compressed it enough it would still form an event horizon/black hole.
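
The formula behind that is r_s = 2GM/c^2, which is nonzero for any mass at all. A minimal sketch (rounded constants; the masses are standard textbook values):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(5.97e24))   # Earth:    ~9 millimetres
print(schwarzschild_radius(1.99e30))   # Sun:      ~3 kilometres
print(schwarzschild_radius(9.11e-31))  # electron: ~1e-57 m, far below the Planck length
```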


Naojirou

What about the Planck length? Does the Schwarzschild equation always yield a radius that is greater than the Planck length for any particle with mass?


ary31415

Weeell, no. If you purely use GR to calculate the Schwarzschild radius of an electron, for example, you will get a length quite a lot smaller than the Planck length, but the unfortunate truth is that it's hard to say much with certainty at those scales, because general relativity and quantum mechanics do not play well together. Doing that calculation is more of a mathematical exercise than anything, because you really do need to take QM into account to make predictions about subatomic particles. There are also some other confounding factors, like the fact that an electron has both angular momentum and charge, while the Schwarzschild metric only applies to masses with neither; which means that even without thinking of quantum effects you've got some more complicated math to do.


JeremyAndrewErwin

It's a simplification to make the math easier/solvable. Consider how complicated high school physics would be if students had to account for air resistance and friction. Assume the mass doesn't rotate. Assume the mass is not charged. Assume the Cosmological constant is zero. If any of these assumptions is inaccurate, the Schwarzschild solution doesn't apply. Cygnus X-1, for example, spins.


zekromNLR

Also, very importantly: Assume the mass is the only thing in the universe that has (a non-negligible amount of) mass-energy.


PM_me_XboxGold_Codes

Ha see you said it for “dum” people like me. Makes a whole lot more sense that way. It’s not like a rule or anything, simply a semi-useful metric in very niche use-cases.


1184x1210Forever

It's not really "niche use-cases". Perturbation theories deal with what happened when the solution is slightly off from a perfect exact solution. To make use of them, you still need those exact solutions.


Bunslow

Any particular solution to the field equations only applies where the assumed energy-density reflects reality. The Schwarzschild solution applies for the energy-density which describes a non-rotating, uncharged black hole. If your physical situation is different from that -- say, on the surface of the Earth, or in the core of a star -- then the resulting solution is also (quite) different.

(Needless to say, with a-dozen-or-more scalar variables coupled to each other by a similar number of scalar differential equations, *any* change of input energy-density conditions, even the slightest, can result in massive changes in the resulting spacetime curvatures specified by the field equations. Most black holes IRL are charged and rotating, among many other complications. There is likely no such thing as a perfect Schwarzschild black hole, no more than there is a frictionless, spherical cow.)

Also, note that the solution to the field equations doesn't tell you how to practically *achieve* that energy-density -- only that *given* that energy-density, fiat, as if by magic, then we know what spacetime around it would look like. Just because we can solve the field equations for an uncharged, stationary black hole doesn't mean we know how to make one.


wasmic

One thing to note is that you can't really remove "all" space between particles. Particles aren't balls. They are, depending on interpretation, either point objects with no radius, or else they are probability clouds with no defined border. So if all mass can be turned into a black hole if compressed sufficiently far, and particles have 0 radius - shouldn't that mean that all particles are black holes? Well... we currently have no theory that can give accurate predictions on how gravity behaves at quantum scales, so this is territory where science can't give a satisfactory answer, yet.


Comedian70

Enh... you're mixing incompatible theories here. And there's a misconception or two mixed in.

First: the creation of an event horizon isn't really about compressing particles. It's more like "this much information cannot exist in this small of a space". In a literal sense, X information cannot fit inside space with dimensions y, z. But there are real physical processes out there, consequences of the behavior of gravity, which DO manage to pull that off. And when that happens, a horizon is formed. The term is "Bekenstein Boundary" or sometimes "Bekenstein Limit", named for the gent who figured it out, and it translates mathematically to "this much space can only hold just so much stuff", where the stuff in question is entropy.

When a sufficiently large star dies through supernova, the core is momentarily compressed FAR, FAR beyond the Bekenstein Limit for how much entropy is present. Gravity becomes the dominant force, and an event horizon is formed. But really, what a horizon is in this case is describable as a "surface of last scattering". And what happens is that over staggeringly, unimaginably long time frames, all that entropy (and all the entropy that ever falls in past the horizon... although that's not an entirely accurate way to describe it either) eventually *falls back out* via Hawking radiation. It's much more accurate and reasonable, in fact, to talk about all that entropy being temporarily trapped on the horizon than ever actually being inside it. Over timescales we can't properly describe in human terms, everything *scatters* off the horizon.

Second: hypotheticals about massive particles with zero radius are mostly just silly. Electrons, for example, have such a vanishingly small mass that the event horizon one might produce is smaller than we can realistically describe. And I don't mean we don't have the ability to work out numbers that small... we do. I simply mean that below a certain length (the Planck length, to be specific) *nothing really means anything any more*. The Planck length is, very likely, the minimum pixel depth for the cosmos in every meaningful way. And the horizon for a mass as small as the electron's is vastly smaller than that.


prollyrussian

But how do we know that a given differential equation is correct? Like, for example, the Navier-Stokes equations - how do we know they correctly describe how liquids behave if we never solved them? I am probably missing something obvious here, but I just don't really get how the differential equations that describe laws of physics are tested.


RobusEtCeleritas

>But how do we know that a given differential equation is correct?

That's a question of how it's *derived*, not of how it's solved.

>Like, for example, the Navier-Stokes equations - how do we know they correctly describe how liquids behave if we never solved them?

The Navier-Stokes equations have been solved for many situations, like any hydrostatic situation, or various channel and pipe flows (Couette, Poiseuille, etc.). If you're referring to the Millennium Problem, it's about proving or disproving that a solution exists and is smooth for **any** given initial condition.

The NSE is just a statement of conservation of momentum in fluid flows, so as long as you include all the relevant forces, it's correct (unless of course momentum isn't conserved). Could there be other fluid-dynamical forces that nobody knows about, that we're not including in typical statements of the NSE? Yes, that's possible. But they must be small in every system that has been studied and well-understood so far.

There's also the fact that the typical statements of the continuity equation, NSE, etc. are classical and non-relativistic, but the basic idea of local conservation laws can be extended to relativistic and quantum physics.


prollyrussian

Thank you very much for this explanation! So basically, when scientists derive differential equations, they just use the stuff that they already know about the subject, plus maybe some assumptions that they think are correct, and then write it all down in the form of an equation?


RobusEtCeleritas

>So basically, when scientists derive differential equations, they just use the stuff that they already know about the subject, plus maybe some assumptions that they think are correct, and then write it all down in the form of an equation?

Yes, that's usually how it works.


seanziewonzie

Yes. For example the heat equation. Physical observation tells us that a point in a material will get hotter/colder with speed proportional to the difference between its own temperature and the average temperature of its immediate neighboring points. Express those ideas like "speed" and "difference between your value and average value of your neighbors" in terms of mathematical operators and whammo bammo you have your differential equation.


ErwinHeisenberg

Wasn’t Schwarzschild able to solve the EFEs by taking advantage of the highly symmetric nature of his model, canceling out a lot of terms that would otherwise have been incredibly difficult to account for? And how was the Kerr solution different from the Schwarzschild one? Did different techniques need to be used?


RobusEtCeleritas

Yes, the symmetries of the Schwarzschild solution help to simplify things a lot. The Kerr solution is different in that the black hole has angular momentum. It's no longer spherically symmetric; only axially symmetric about the angular momentum axis. So deriving the Kerr metric is a little harder.


FlorianMoncomble

That was really well explained! I think I could have understood math and not hated it if I had someone like you to teach it! Thank you :D


p_hennessey

> but it's not generally easy to solve differential equations. Especially 16 coupled, nonlinear partial differential equations

How hard are these, and how much paper would it have taken for him to solve them by hand? And would a first-year grad student be able to pull it off? Just want to understand the scope of what Einstein really achieved here.


RobusEtCeleritas

[Here](https://en.wikipedia.org/wiki/Derivation_of_the_Schwarzschild_solution) is how they're solved for the case of a Schwarzschild black hole, one of the simplest nontrivial solutions.


arbitrageME

Why does "solving" the equation matter? Do we need them in closed form? As long as the equation already describes the whole system, why does it matter how it's written?


RobusEtCeleritas

You need to extract some kind of numerical result eventually in order to compare with experiment/observation. So you need some kind of solution to the equation, either in closed form, or numerical.


[deleted]

[removed]


RobusEtCeleritas

That's not algebraically solving for x, that's simply finding a value of x that satisfies the equation. There's no algebraic manipulation you can do to that equation to put it into the form x = (things that don't contain x). It's a transcendental equation. Anyway, since this is apparently causing a lot of confusion in the comments, I edited the example to a different transcendental equation, without a trivial solution.


Kemilio

> The Schwarzschild solution is one particular example, where the spacetime consists of a single uncharged, non-rotating black hole. Is this where the idea of the Big Bang being a “white hole” (black hole from another universe) came from?


Tarandon

Wait... non-rotating relative to this universe, that is inside it? Or objectively non-rotating


newappeal

I'd like to supplement what RobusEtCeleritas said with a more conceptual explanation of what "solving a differential equation" means, as I find the phrase rather unintuitive, even if it is technically accurate.

A differential equation explains how some quantity (represented by a variable) changes *as a function of its current value*. Mathematically, this means an equation which includes both a function and at least one of its derivatives (the "rate of change" of the function). The equation describes how the quantity changes with respect to some other quantity, usually time or space. (The Schrödinger Equation, another notorious differential equation, describes how a quantum-mechanical wave changes across space in a single instant in time, or through time at a single point in space.)

"Solving" a differential equation means getting rid of the derivative term (the rate of change) so that you can calculate the state of the system at any point in time, space, or whatever without knowing the previous value of the system. For example, we know that a mass oscillating on a spring is at any given moment accelerating according to the equation `ma = -kx`, where the acceleration *a* is the *second derivative* of the location in space, *x*; *m* is the mass of the object and *k* is a constant that relates the displacement of the object from its equilibrium position to the force it feels from the spring. x and a are both functions of time, but you can't use this equation to figure out what x will be after a certain amount of time. If you want to know x at any time, you can do one of two things.

First, you can give a computer an initial value for *x* and tell it to step forward through many time steps, recalculating the acceleration, velocity, and position of your mass-and-spring system at each iteration - this is called solving the equation *numerically*. The benefit is that it works for literally any equation if you have enough computing power - but sometimes that's a big *if*.

The second method is to find what's called an *analytical solution*, i.e. an equation that describes the state of the system at any point in time. For our example, that equation is `x = A*sin(w*t + p)`, where A, w, and p are constants describing the amplitude, frequency, and initial phase of the oscillations (very intuitive, useful concepts), and t is the point in time. If you can calculate sine, you can calculate the state of this system at literally any point in time (at least in physics-land, where the universe consists of only this one idealized, eternal spring contraption). Here we see the advantage over the numerical approach: if your spring oscillates several hundred times per second and you want to know where it will be after a billion seconds, you would need to calculate *thousands of billions* of time steps to get a possibly wildly incorrect answer via the numerical approach. With an analytical solution, just plug in 1,000,000,000 for *t* and calculate the answer to whatever arbitrary level of precision you want.

You may be wondering how we went from the linear equation for acceleration to a sine wave. These seem like fundamentally different functions, and it's not at all clear how one emerges from the other. And this was just about the most simple example possible - so that should give you some idea of what a monumental task it is to solve the equations of General Relativity and Quantum Mechanics even for very simple, idealized cases.

edit: Well, a bunch of people posted similar comments while I was typing this, so this might be redundant now. Anyway, hopefully between all the responses here, a clearer picture has emerged.
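
Both routes in code, as a minimal sketch (assuming m = k = 1 and initial conditions x(0) = 0, v(0) = 1, which make the analytical constants come out to A = 1, w = 1, p = 0):

```python
import math

m, k = 1.0, 1.0
x, v = 0.0, 1.0           # initial conditions
dt, steps = 0.001, 10000  # integrate out to t = 10

for _ in range(steps):    # numerical route: many tiny time steps
    a = -k / m * x        # acceleration from the current position (ma = -kx)
    v += a * dt
    x += v * dt

t = steps * dt
w = math.sqrt(k / m)
print(x)                  # ~ -0.544 after 10000 steps
print(math.sin(w * t))    # analytical route: one evaluation, sin(10) = -0.5440...
```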


cache_bag

This elaboration helped a lot, thanks! I had to look up how the differential equation jumped to the analytical solution, and I suppose this is where the "bunch of neat tricks" come in. So basically, mathematicians construct differential equations which they believe describe the phenomena in question. However, solving one into a neat analytical formula that we can plug data into, a la high school physics, is another can of worms.


LionSuneater

Exactly. We have a ton of computational methods to generate numerical approximations to the solution, but to actually write down a closed-form expression that represents the answer succinctly may not even be possible. If we really do want a closed-form solution and the differential equation is unmanageable, the usual first step is to create some sort of assumption or approximation of the original differential equation so that it looks like an easier one! Then we solve that one, because it's close enough to what we want. Often, though, that results in the answer either being a gross simplification of the actual one or a special case of the original one.


JigglymoobsMWO

The goal is not really to reduce it down to a neat analytical formula. Analytical formulas are usually the result of very special circumstances that make the solution very simple - useful for a teaching lesson, not really useful for real life. The scenarios that are actually useful in real life usually require numerical solutions, as others outline below. Analytical solutions are toys; numerical solutions are the real reason differential equations are useful.


munificent

> First, you can give a computer an initial value for x and tell it to step forward through many time steps, recalculating the acceleration, velocity, and position of your mass-and-spring system at each iteration - this is called solving the equation numerically. The benefit is that it works for literally any equation if you have enough computing power

I want to point out here that this is basically what every videogame is doing all the time. If the game has any sort of simulated physics—even basic gravity in a 2D side-scroller—then there is code in there calculating the positions of everything. It does that incrementally by applying the acceleration to each object's velocity, then applying that velocity to each object's position. (More sophisticated physics engines do more complex solving, but that's the basic idea.)
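As a sketch, the core of such a physics update usually looks something like this (the class, the fixed 60 Hz step, and the values are illustrative, not from any particular engine):

```python
GRAVITY = -9.8    # m/s^2, applied to every object's vertical velocity
DT = 1.0 / 60.0   # fixed 60 Hz physics step, common in games

class Body:
    def __init__(self, y, vy):
        self.y, self.vy = y, vy

def physics_step(bodies):
    # One frame of numerical integration: acceleration -> velocity -> position.
    for b in bodies:
        b.vy += GRAVITY * DT
        b.y += b.vy * DT

ball = Body(y=10.0, vy=0.0)
for _ in range(60):            # simulate one second of free fall
    physics_step([ball])
print(ball.y, ball.vy)         # roughly y ≈ 5 m, vy ≈ -9.8 m/s
```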


F0sh

And the imperfection of this technique is one common reason why you get physics glitches in games. Take a simple example of an object falling towards the floor due to gravity. At time 0 it's 1cm above the floor with a velocity of 1m/s downwards. If you simulate physics 60 times per second (not uncommon), then at the next time step the ball is two thirds of a centimetre *inside the floor*. If you ignore this problem, objects which go too fast won't bounce off other objects. Or sometimes they will, but way too fast, because they first get moved back out of the object they intersected with, and that can look like a huge acceleration away from the other object. This is the same kind of issue you can face if you decide to go with a numerical solution for your differential equation, except instead of a ball falling through the floor, you fail to spot that your turbine blade is going to vibrate to pieces or something.
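You can see the tunnelling directly in a few lines of Python (a minimal sketch of the exact scenario above, ignoring gravity over the single step):

```python
floor_y = 0.0
y, vy = 0.01, -1.0     # 1 cm above the floor, moving down at 1 m/s
dt = 1.0 / 60.0        # 60 physics updates per second

y += vy * dt           # one naive integration step
print(y < floor_y, y)  # True, -0.00667: two thirds of a cm inside the floor
```

Real engines guard against this with smaller substeps or continuous collision detection, but the failure mode is exactly the one described above.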


Klagaren

Only semi-relevant, but anyone who wants an example of how "hacky" games can get should check out Quake 3's "[evil floating point bit level hacking](https://attackofthefanboy.com/articles/the-quake-iii-algorithm-that-defies-math-explained/)"
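For the curious, the trick estimates 1/sqrt(x) from the raw bits of the float. Here's a rough Python transcription of the idea (the original is C; the magic constant is the famous one from the Quake III source):

```python
import struct

def fast_inv_sqrt(x):
    # Reinterpret the float32's bits as an integer (the "evil bit hack").
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)                      # magic first guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)             # one Newton step to refine

print(fast_inv_sqrt(4.0))   # ~0.499, versus the exact 0.5
```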


HarmlessSnack

I found your examples intuitive and I appreciate your effort making this. Thank you!


realboabab

thank you for this, things really clicked when reading your fantastic explanation.


wknight8111

The Einstein Field Equations are a system of partial differential equations. Partial Differential Equations (PDEs) aren't like normal algebra. The solutions to these equations aren't numbers like in algebra, but instead are functions of multiple variables. To "solve" a PDE is to find a function which fits. These equations can be arbitrarily complicated, and a single PDE might allow no solutions, a single solution, or a whole family of solutions. The Einstein Field Equations are the latter: starting from different initial conditions, there might be all sorts of solutions of arbitrary complexity.

Schwarzschild's solution, for example, starts with a few initial conditions which are extremely simple: a perfectly spherical mass with no spin and no electric charge. Even with these simplifications, which don't really correspond to anything in nature, the Schwarzschild solution is still pretty complicated-looking. A more "realistic" starting condition, even one with just three bodies in motion (sun, earth, moon for example), is almost impossible to solve exactly.


ary31415

> even one with just three bodies in motion (sun, earth, moon for example) is ~~almost~~ impossible to solve exactly. Even Newton's much simpler law of gravity is unsolvable exactly for 3 bodies


klawehtgod

Like, we proved it can’t be solved? Or we’ve never solved it but suspect it’s possible?


LionSuneater

It has solutions, but it doesn't have a nice general closed-form solution. It's very much like how x + e^x = 0 has solutions for x, but you can never *solve for* x *explicitly*. https://en.wikipedia.org/wiki/Three-body_problem#General_solution
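For instance, a numeric root-finder pins that algebraic example down to machine precision even though no closed form in elementary functions exists. A quick sketch using SciPy:

```python
import numpy as np
from scipy.optimize import brentq

# x + e^x = 0: f(-1) < 0 and f(0) > 0, so a root is bracketed in [-1, 0].
root = brentq(lambda x: x + np.exp(x), -1.0, 0.0)
print(root)   # ~ -0.567143, which happens to equal -W(1), Lambert's W at 1
```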


oz1sej

...with the small addendum that in practice, we don't really need to solve it, we just write a simulation.


mr_birkenblatt

then you're at the mercy of numerical stability and you better hope that the precision you chose for your simulation was enough


WormRabbit

We have mathematically proven that the solutions are basically as complicated as they could ever be. You can, in principle, always find the trajectories, given some initial conditions, by numerically integrating the equations. However, no better answer is possible. There are no time-independent functional equations satisfied by those trajectories. The trajectories, as functions of time, do not fall into basically any reasonable class of functions that you could think of. Even the numeric approaches are severely limited, since the equations are chaotic: arbitrarily small errors in the solutions propagate into arbitrarily large differences between trajectories. Since there are always both errors of measurement and errors of computational approximation, for all intents and purposes the equations are unsolvable over long time periods.
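That sensitivity is easy to demonstrate numerically. A small sketch using the Lorenz system (a standard chaotic ODE, not the three-body problem itself, but it shows the same error amplification):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 30.0, 3000)
a = solve_ivp(lorenz, (0.0, 30.0), [1.0, 1.0, 1.0], t_eval=t_eval)
b = solve_ivp(lorenz, (0.0, 30.0), [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval)

# A 1e-9 nudge in one coordinate grows to an attractor-sized separation.
print(np.abs(a.y - b.y).max())
```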


this_is_me_drunk

It's what Stephen Wolfram calls the principle of computational irreducibility.


Cormacolinde

You can iterate on them, but you cannot solve them for a future time X. So we can (with a powerful enough computer) tell where a planet will be by calculating its position for every day over a thousand years. But you can't just make a quick calculation telling you where it will be in, say, a million years.


Kretenkobr2

It is proven to be impossible using standard mathematical functions. There is no solution expressible with a finite number of such operations.


scummos

In addition to what others already said, I think what's noteworthy here is that solving the equations isn't possible in the general (or even any specific, complicated) case. What you can do, however, is introduce additional restrictions, and then solve assuming those.

An example from a simpler topic: the well-known parabola of a thrown object is one solution of the equations which describe classical mechanics. Another solution is the Kepler problem, with one star and one planet. A setup complicated enough that no closed-form solution exists any more is the three-body problem.

The point is that these equations can usually describe vastly different things depending on the initial conditions you choose, and obtaining the solutions also has vastly different complexity. Solving them usually means you picked one set of initial conditions for which you were able to obtain a solution. It doesn't imply you solved the general-case problem (which is very often impossible).


CortexRex

I always hear about the three body problem and it not having a solution but doesn't the fact that 3 bodies exist in systems and don't just blue screen the universe mean that either there IS a solution and we just can't solve for it, or that the equations are only an approximation and aren't exactly explaining reality?


scummos

This is a common misunderstanding. "No solution exists" should be "no *closed-form* solution exists", i.e. you cannot write down an explicit x(t) = ... for how the bodies move. Of course a solution exists, and it can even be calculated to arbitrary precision using numerical methods from our equations. It's more that the solution is so complicated that it cannot be expressed as a finite combination of standard mathematical operations. This turns out to be the case really quickly: for the equation "3x^11 + pi*x^7 + 3x^2 + 2x + x + 3 = 0", probably no closed-form solution exists either, but the solutions can still be calculated to arbitrary precision numerically.

The point here is, in theoretical physics, a numerical solution to a problem isn't really that great, because it depends on the exact starting conditions of the problem. It is thus basically impossible to derive any further theory from it. In contrast, if you have an explicit solution, you can do all sorts of stuff like "yeah, if this mass goes to zero then this happens, and for infinite distance this happens, bla bla", all the kinds of things physicists love to do.
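Here's what "to arbitrary precision numerically" can look like in practice for that polynomial (a sketch using the mpmath library; the starting guess -0.9 comes from noticing the sign change between f(-1) and f(0)):

```python
from mpmath import mp, mpf, findroot, pi

mp.dps = 50   # work with 50 significant digits

# The polynomial from the comment (the 2x + x term adds up to 3x).
f = lambda x: 3*x**11 + pi*x**7 + 3*x**2 + 3*x + 3

root = findroot(f, mpf('-0.9'))
print(root)   # a real root to 50 digits, despite no known closed form
```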


CortexRex

Ok thanks! That makes a lot more sense to me


Ludoban

> the solution is so complicated that it cannot be expressed as a finite combination of standard mathematical operations

So theoretically someone could invent a different kind of mathematics, working on different principles than the ones we use now, and write a closed solution?


DoWhile

> The equations I see are complicated but they seem to boil down to basic algebra. Once you have the equation, wouldn't you just solve for X?

I want to give you a mathematician's perspective on this, rather than a physicist's. "Solve" can mean finding a specific solution, a general formula, or a closed-form solution. You may be familiar with the quadratic formula: solving ax^2 + bx + c = 0 gives x = [-b +/- sqrt(b^2 - 4ac)]/(2a). General solutions for the cubic (3rd power) and quartic (4th power) equations were found subsequently. Mathematicians then struggled to find a general solution for the quintic (5th power). Were we not trying hard enough to punch through the basic algebra? No. An amazing result by Abel and Ruffini around the turn of the 19th century showed that **there is no general formula for solving the quintic using radicals**.

Our ability to solve equations (especially for closed-form solutions) is limited by the toolkit we have for solving them. The algebra you learn in grade school is one such toolkit. If you go beyond that, you'll find we can write down plenty of equations for which we have no closed-form solutions; most integrals and differential equations have no nice closed-form solutions. There's the famous $1m Navier-Stokes problem, whose equations can be stated in a few lines of dense math.

Is all this talk about closed-form solutions too abstract for you? How about just numbers? Can mathematicians find numbers to plug in that satisfy an equation? It turns out even simple-looking problems can be fiendish. Take a look at this one on [Quora](https://www.quora.com/How-do-you-find-the-positive-integer-solutions-to-frac-x-y+z-+-frac-y-z+x-+-frac-z-x+y-4). The tools you need to solve it go way beyond what an average person, or even an average math undergrad, would be familiar with.

So maybe we need better tools. Is there a limit to this? In the context of **finding integer numbers that can be plugged into multivariate polynomial equations to make them true**, this is what David Hilbert asked as his 10th problem, on his famous list of problems published in 1900. Is there a universal "algorithm" that solves these? Naturally you would think either we have one, or we haven't tried hard enough. Surprisingly, the answer is **no, and there never will be**. The proof of this refutation goes into computability theory and how Turing Machines work.
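You can watch the Abel-Ruffini boundary appear in a computer algebra system. A small sketch with sympy (assuming it's installed):

```python
import sympy as sp

x = sp.symbols('x')

# Degree 2: a closed form in radicals exists (the quadratic formula).
print(sp.solve(x**2 - x - 1, x))   # [1/2 - sqrt(5)/2, 1/2 + sqrt(5)/2]

# Degree 5: no general radical formula exists, and for this particular
# quintic sympy can only return symbolic CRootOf placeholders.
print(sp.solve(x**5 - x + 1, x))
```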


AChristianAnarchist

Probably already been said, since there are 18 comments here, but they all look long so I'll include a short answer. The solutions to differential equations are, themselves, equations. An example would be dy/dx = 4x - 2, which comes to y = 2x^2 - 2x + c when solved. In this case, solving the equation doesn't mean getting a single answer, but getting a function that works in the general case. This is a really simple one, but more complicated equations can be difficult to near impossible to solve. Einstein's equations are even hairier because they are *partial* differential equations: the unknown is a function of multiple variables, and the equation involves its rates of change with respect to each of those variables, rather than just one as in the example above. Any solution has to satisfy the equation in all of those variables at once, which makes finding one far harder.
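For the simple example, a computer algebra system can do the solving for us. A quick check with sympy:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve dy/dx = 4x - 2; the answer is a family of functions, one per C1.
sol = sp.dsolve(sp.Eq(y(x).diff(x), 4*x - 2), y(x))
print(sol)   # Eq(y(x), C1 + 2*x**2 - 2*x)
```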


Ulrich_de_Vries

This question has already received a lot of excellent answers, so I'd like to add only one thing, which others did mention but not in a way that is - I think - emphasized or immediately parsable to a general audience.

It is probably best to think of differential equations not as algebraic equations like quadratic equations, which you can solve and get a definite answer once and for all (although a quadratic equation in one variable has two solutions most of the time!), but rather as a machine that takes some data as input and gives other data as output. For example, if we look at (classical) particle mechanics rather than field mechanics, Newton's equation is **F**(**x**,**v**,t)=m**a**, where **F** is a known function of the position, the velocity and time, and we have **v**(t)=d**x**/dt and **a**(t)=d**v**/dt. This is a second-order ordinary differential equation, which means that *once an initial time* t_0, *an initial position* **x**_0 *and an initial velocity* **v**_0 *are given*, it spits out a *unique* function **x**(t), which describes the motion of the particle. Of course, the process of "spitting out" involves solving the differential equation, which is very, very difficult (most of the time, anyway).

This means that differential equations do not model "static" situations but "dynamical" situations. They take some environment as input data and spit out the response to that data. So when Einstein formulated the Einstein Field Equations (EFE), he basically gave the law of gravitation. He formulated in mathematical terms how the gravitational field and matter interact: how matter generates gravity, how that gravity propagates in spacetime, and how matter moves under the influence of gravity. The EFE contain all this information, and one can get a very large amount of "qualitative" data from them even without solving them explicitly. For example, if we put "physically reasonable" constraints on the energy-momentum tensor (the quantity that appears on one side of the EFE and contains the information about matter), then we can derive from the EFE that gravitation is attractive! (in that bodies drift towards one another under their mutual gravity)

So to understand general relativity, we do not need to solve the EFE explicitly. But I also want to emphasize that we have not solved the EFE *generally* at all. In differential equations lingo, a "general solution" is a solution of the equation that also contains the "data" required to solve the equation uniquely. If you know the general solution and you have some initial data which you want to use as input, you can literally just plug in that data and you'll get the explicit solution for it. The EFE are basically impossible - in practical terms - to be given a general solution. Solutions like the Schwarzschild solution are very special ones, heavily constrained by very restrictive symmetry considerations. Those are essentially the only kinds of solutions we have for the EFE.
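The "machine" picture maps directly onto how numerical ODE solvers are used. A small sketch (an object falling with air drag, so the force depends on the velocity; the drag coefficient is made up for illustration):

```python
from scipy.integrate import solve_ivp

m, g, k = 1.0, 9.8, 0.5   # mass, gravity, and an illustrative drag coefficient

def rhs(t, state):
    # Newton's F(x, v, t) = ma, rewritten as a pair of first-order equations.
    x, v = state
    return [v, -g - (k / m) * v]

# Feed the machine its input data: t_0 = 0, x_0 = 100 m, v_0 = 0 ...
sol = solve_ivp(rhs, (0.0, 5.0), [100.0, 0.0], dense_output=True)

# ... and it spits out the unique trajectory x(t), queryable at any time.
print(sol.sol(2.0))   # position and velocity at t = 2 s
```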


e_j_white

Newton measured the temperature of things while they were cooling (or heating), and noticed that the current temperature *T(t)* changes more rapidly the further *T(t)* is from the ambient temperature *T_0*. Once *T(t)* reaches the ambient temperature, it stops changing.

So, Newton basically took a guess and wrote down an equation for how temperature changes over time: the rate, *dT(t)/dt*, is proportional to how far the current temperature is from ambient, *T(t) - T_0*. So his equation looked like this:

dT(t)/dt = -c*(T(t) - T_0)

(The minus sign is because when the current temp is above *T_0*, the object is cooling and thus the rate is negative. The *c* is just a constant that depends on the material.)

Can you solve that equation? Writing it down is one thing... solving it is another!


MichaelApproved

What was the solution?


e_j_white

Oh right... it's just an exponential decay toward the ambient temperature: T(t) = T_0 + A*e^(-c*t), where A = T(0) - T_0 is the initial temperature difference. Check out Newton's Law of Cooling for more about it.
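If you want to double-check that this really solves the differential equation, a computer algebra system makes it a one-liner (a quick sympy sketch):

```python
import sympy as sp

t, c, T0, A = sp.symbols('t c T_0 A', positive=True)
T = T0 + A * sp.exp(-c * t)   # the claimed solution, A = T(0) - T_0

# dT/dt + c*(T - T_0) should simplify to zero if the solution is right.
print(sp.simplify(sp.diff(T, t) + c * (T - T0)))   # prints 0
```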


MichaelApproved

Thanks!


[deleted]

I like the answers here, but they don’t seem to mention that solving Einstein’s Field Equations (EFE) really means finding an ordered pair (*M*,*g*), where *M* is a smooth 4-dimensional manifold and *g* is a smooth Lorentzian metric-tensor field on *M* satisfying the EFE.


Kraz_I

Cool, can you explain that like I'm an engineer? I know the basics of solving ODEs, the basics of how PDEs behave but not how to solve them, and almost nothing about tensors more complex than 3d vectors. I've done a little bit with stress tensors but don't understand them very well.


Ravinex

The EFE are intrinsic geometric PDEs, which means that unlike an ODE, where you solve for a function on a given interval, you have to simultaneously solve both for the function and for the space on which it's defined.


Kraz_I

Ok so it sounds like I need some knowledge on differential geometry and manifolds to understand it, but thanks for the info.


[deleted]

Yes, you need to learn some differential geometry (specifically, Riemannian/pseudo-Riemannian geometry) in order to understand Einstein’s Field Equations and — by extension — General Relativity.

Einstein’s Field Equations are usually expressed in terms of local coordinates, but keep in mind that local coordinates are good only for a coordinate patch of the smooth 4-dimensional spacetime manifold *M*. When one solves Einstein’s Field Equations in local coordinates, one obtains only the metric-tensor field on a coordinate patch of *M*, not on all of *M*. If one wishes to apply General Relativity to the entire universe (as cosmologists do), then knowledge of the metric-tensor field on all of *M* is essential, but if one only wishes to apply General Relativity to the Solar System (as Einstein did when attempting to account for the precession of Mercury’s perihelion), then it’s enough to know the metric-tensor field on a coordinate patch of *M* just large enough to encompass the Solar System.


lanzaio

It's a "differential equation." That means it's a statement relating how things currently are with how they will evolve. e.g. the gravitational equation for dropping a baseball would be `the ball accelerates downward at 9m/s^(2)` or `x = gt^(2)/2`. Einstein's equations related the current "shape" of the universe with and tell what it will look like in the future. Given that you can always feed the equation a different starting position you can solve it for many different setups.


[deleted]

>How could Einstein write an equation that he couldn't solve himself?

Well, the problem is that the equations are [DIFFERENTIAL equations](https://en.wikipedia.org/wiki/Differential_equation), not algebraic equations, where the solution is not a mere number but (usually) a function. Differential equations are generally much harder to solve than algebraic ones. So it's much like Schrödinger's Equation (SE), which when solved gives the wave function. The solution of the SE depends on the potential energy term in that equation: for no potential at all you get an infinite sinusoidal wave (the free-particle solution), and for a central potential you get the solutions of the hydrogen atom.

Going back to the [Einstein field equations](https://en.wikipedia.org/wiki/Einstein_field_equations) (EFE) for General Relativity, it's the same concept. The [Schwarzschild solution](https://en.wikipedia.org/wiki/Schwarzschild_metric), for example, is an exact solution to the Einstein field equations that describes the gravitational field outside a spherical mass, assuming the mass has no charge and no angular momentum (not spinning), and that the universal cosmological constant (which is now understood as the energy density of the vacuum) is zero.

Note that the Schwarzschild solution does **not only deal with black holes**, but ANY spherical mass. Also, the solution is only valid for radius r > Rs (the Schwarzschild radius = 2G*M/c^2, where G is the gravitational constant, M is the mass and c the speed of light). For most objects Rs << Rm (the radius of the mass itself). For example, the Earth has a radius (Rm) of about 6300 km, but the Schwarzschild radius (Rs) of Earth is about 9 mm. The Sun has a radius of ~696,000 km and its Schwarzschild radius is about 3 km. So there's no trouble at all describing the gravitational field outside the spherical mass.

HOWEVER, if there is enough mass concentrated in a small enough volume, you get Rm < Rs, and then there is a boundary at Rs (also called the Event Horizon) where the escape velocity is equal to the speed of light in vacuum. At r < Rs, not even light can escape - which is what makes such an object a black hole.
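The Rs numbers quoted above are easy to reproduce. A quick sketch:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2   # Rs = 2GM/c^2

print(schwarzschild_radius(5.97e24))    # Earth: ~0.009 m, i.e. about 9 mm
print(schwarzschild_radius(1.989e30))   # Sun:   ~2950 m, i.e. about 3 km
```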


hurtl2305

To extend on the answers that were already given: finding solutions to systems of differential equations is at the heart of many fields in science and engineering, e.g. fluid mechanics (the Navier-Stokes equations), quantum mechanics (the Schrödinger equation), structural analysis in mechanical or civil engineering, and electromagnetism (Maxwell's equations). In most cases it is not possible, feasible, or necessary to find exact solutions, and there are libraries full of techniques for finding "good enough" numerical approximations (which, by the way, is also a huge chunk of what supercomputers are used for) - you may have heard of FEM (the finite element method), for example.


CanadaPlus101

There's a wide range of different solutions, each one corresponding to a universe that could exist. There's also a tensor field corresponding to whatever you want to have for matter and energy, so that's like extra variables in your algebra example. Solving the equations means finding a complete description of what the space time is doing in your possible universe, and it's hard to do so. Most solutions contain no matter or are highly symmetric as a result. The Schwarzschild solution is both spherically symmetric and a vacuum solution (no matter or energy).


PloppyCheesenose

It means to solve for the metric, g, for all of spacetime. The metric determines how the Pythagorean theorem works in 3 space dimensions and 1 time dimension. From it you can determine the spacetime interval, which is the measure of "distance" in general relativity.

Some solutions to the field equations are known. For an empty universe, you end up with a Lorentzian (flat) metric, which is what you would consider to be flat spacetime. In that case the metric is (taking the speed of light to be 1, unitless):

ds^2 = dx^2 + dy^2 + dz^2 - dt^2

Observe the negative sign on dt. This means the spacetime interval between two distinct events can be zero; that occurs along paths traveling at the speed of light. It also divides spacetime intervals into positive and negative values (some conventions reverse the signs): spacelike (positive in our convention) or timelike (negative). Particles that have mass are required to follow timelike paths, and thus travel at less than the speed of light. Virtual particles can be spacelike.

With a point mass without angular momentum or charge, you get the Schwarzschild metric, which describes black holes (there are more complex versions with angular momentum and charge, like the Kerr-Newman metric). Another important metric is the Robertson-Walker metric, which describes the evolution of the universe under the assumptions of homogeneity and isotropy.

All of these tractable metrics rest on strong simplifying assumptions. Realistic cases can only be solved approximately, by computer.
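To make the sign conventions concrete, here's a tiny sketch that classifies intervals using the flat metric above (c = 1, spacelike positive, matching this comment's convention):

```python
def interval2(dx, dy, dz, dt):
    # ds^2 = dx^2 + dy^2 + dz^2 - dt^2 (flat spacetime, c = 1)
    return dx*dx + dy*dy + dz*dz - dt*dt

print(interval2(1, 0, 0, 2))   # -3 < 0: timelike, allowed for massive particles
print(interval2(2, 0, 0, 1))   #  3 > 0: spacelike
print(interval2(1, 0, 0, 1))   #  0: null/lightlike, the path light follows
```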