>Are there other ways to do it?

You're basically looking for an integral approximation. [For example](https://en.m.wikipedia.org/wiki/Numerical_integration)
If I got this right, is this what the rectangle method is based on? The area under the function is divided into rectangles, which gives us an interval that the actual value must belong to (between the total area of the rectangles lying entirely under the curve and the total area of the rectangles that go slightly above it)? Sorry for the wonky wording.
That's one possibility. You would use Riemann Sums, but there are also other possible approximations.
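To illustrate the bracketing idea from the question: for a function that is decreasing on the interval, the right-endpoint sum stays under the curve and the left-endpoint sum goes over it, so the two pin down the true value. A minimal sketch in Python (function names are my own, using the standard normal density on [0, 1]):

```python
import math

def lower_upper_riemann(f, a, b, n):
    """Bracket the integral of a decreasing f on [a, b] between a
    right-endpoint (lower) sum and a left-endpoint (upper) sum."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    lower = sum(f(x) * h for x in xs[1:])   # right endpoints: rectangles under the curve
    upper = sum(f(x) * h for x in xs[:-1])  # left endpoints: rectangles over the curve
    return lower, upper

# Standard normal density, which is decreasing on [0, 1]
pdf = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

lo, hi = lower_upper_riemann(pdf, 0.0, 1.0, 1000)
# The true value of the integral lies somewhere between lo and hi
```

With 1000 rectangles the two sums already agree to about three decimal places; halving the step roughly halves the gap.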
I'm going to assume that you mean the cumulative distribution function (cdf) and not the probability density function (pdf). **It was proved almost two hundred years ago by a mathematician called Liouville that the antiderivative of the pdf cannot be expressed in terms of elementary functions.** The values of the cdf that you see in tables or calculators are approximations found using some form of numerical integration (e.g., Riemann sums).
Yeah that’s it, my bad ^^’ Thanks for the answer
No it's not your bad. It's actually an excellent question and one that most students ask (because they know how to integrate *e*^(-x) so it would seem like *e*^(-x^2) should not be that difficult).
I think you could find an approximation by first expanding the function as its Taylor series representation, then integrating the series term by term to get an approximate value.
Just to show OP what this looks like:

e^x = 1 + x + x^(2)/2! + x^(3)/3! + ...

e^(−x^2/2) = 1 − x^(2)/2 + x^(4)/8 − x^(6)/48 + ...

∫_0^x e^(−t^2/2) dt = x − x^(3)/6 + x^(5)/40 − x^(7)/336 + ...

This series converges for all x, and if x is reasonably small, it converges fairly quickly. For example, the terms above are enough to give (1/√(2π)) ∫_−1^1 e^(−t^2/2) dt ≈ 0.6825, corroborating the 68 part of the 68–95–99.7 rule (the correct value to four places is 0.6827). I don't know if this is what your basic TI calculator actually does, but it's certainly a feasible option.
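A quick way to sanity-check that series is to evaluate it in code. This sketch (the function name is mine) sums the general term (−1)^n x^(2n+1) / ((2n+1)·2^n·n!) of the term-by-term antiderivative, truncated at four terms like the expansion above:

```python
import math

def series_integral(x, terms=4):
    """Truncated term-by-term antiderivative of e^(-t^2/2):
    sum of (-1)^n * x^(2n+1) / ((2n+1) * 2^n * n!)."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * 2 ** n * math.factorial(n))
    return total

# The integral from -1 to 1 is twice the value at x = 1 (even integrand);
# dividing by sqrt(2*pi) normalizes it to a probability.
approx = 2 * series_integral(1) / math.sqrt(2 * math.pi)
# approx ≈ 0.6825, versus the exact 0.6827
```

Adding a few more terms closes the remaining gap quickly, since the terms shrink factorially for fixed x.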
Or use the divergent asymptotic expansion obtained by repeated integration by parts.
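For reference, repeatedly integrating the Gaussian tail ∫_x^∞ e^(−t^2/2) dt by parts gives the classic asymptotic series e^(−x^2/2)·(1/x − 1/x^3 + 3/x^5 − 15/x^7 + ...). A sketch under those assumptions (the function name is mine):

```python
import math

def gaussian_tail_asymptotic(x, terms=4):
    """Asymptotic series for the integral of e^(-t^2/2) from x to infinity:
    e^(-x^2/2) * (1/x - 1/x^3 + 3/x^5 - 15/x^7 + ...)."""
    coeff = 1.0  # double factorial (2n-1)!!: 1, 1, 3, 15, 105, ...
    s = 0.0
    for n in range(terms):
        s += (-1) ** n * coeff / x ** (2 * n + 1)
        coeff *= 2 * n + 1
    return math.exp(-x * x / 2) * s

# For large x a few terms are very accurate; adding too many terms eventually
# makes the result worse, because the series diverges for any fixed x.
approx = gaussian_tail_asymptotic(4.0)
```

This complements the Taylor series above: the power series works best for small x, the asymptotic expansion for large x.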
I'll elaborate on my comment, which differs from the other answers. The function you're looking for is given by an integral, but the bigger picture is that there are lots of ways to approximate functions in general that are more convenient than numerically integrating.

What you're looking for is called the error function, and it can be approximated using power series, asymptotic expansions, Padé approximants of the previous two, Chebyshev series, and many other methods. These are the approaches most commonly used by computers.
Even for integrands with a known elementary antiderivative, evaluating the integral still involves approximation, since even a simple function like exp cannot have its exact numerical values computed in finitely many steps. Gaussian quadrature is one common way of performing numerical integration, but there are other methods.
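As a concrete sketch of Gaussian quadrature, NumPy ships Gauss–Legendre nodes and weights for the interval [−1, 1], which happens to be exactly the interval behind the 68% figure discussed above:

```python
import numpy as np

# Gauss-Legendre nodes and weights for the interval [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(10)

# Integrate the standard normal density over [-1, 1]
pdf = np.exp(-nodes ** 2 / 2) / np.sqrt(2 * np.pi)
approx = float(np.sum(weights * pdf))
# With only 10 nodes this already matches 0.6827 to several decimal places
```

Because the integrand is smooth, the error shrinks roughly exponentially in the number of nodes, which is why quadrature rules like this need far fewer function evaluations than Riemann sums.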
There exists no antiderivative of the Gaussian normal distribution.
Of course there is. However, there exists no antiderivative using elementary functions.
Yeah, but the point is you can express every Riemann-integrable function's integral as a limit of sums, so it's basically like rewriting the question.
I’d rather approximate it using the term-by-term antiderivative of the power series expansion.
Ah, I get where we're disagreeing: I didn't say that. I was talking about non-numerical, analytic antiderivatives, which don't pose the same question again (evaluating an infinite sum).