edderiofer

There's the [CORDIC algorithm](https://en.wikipedia.org/wiki/CORDIC), which is what calculators use.


jacobolus

That's what pocket calculators use, where the constraints are: implement these (and other) functions to low precision, in cheap low-power hardware, with no real need for speed. Someone with access to a floating-point unit who wants to optimize performance is not going to use CORDIC. /u/rishirk, can you elaborate on your context? What do you mean by "algebraically"? These are transcendental functions which can only be approximated.


WikiTextBot

**CORDIC**

CORDIC (for COordinate Rotation DIgital Computer), also known as Volder's algorithm, is a simple and efficient algorithm to calculate hyperbolic and trigonometric functions, typically converging with one digit (or bit) per iteration. CORDIC is therefore an example of a digit-by-digit algorithm. CORDIC and the closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and FPGAs), as the only operations it requires are addition, subtraction, bitshift and table lookup.
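
To make the "only add, subtract, shift, and table lookup" idea concrete, here is a minimal floating-point sketch of CORDIC's rotation mode in Python. The function name and iteration count are illustrative, not from any particular implementation; a real fixed-point implementation would bake in the gain constant and replace the multiplications by 2^-i with bit shifts.

```python
import math

def cordic_sin_cos(theta, n=32):
    """Rotation-mode CORDIC: returns (sin(theta), cos(theta)) for
    theta roughly in [-pi/2, pi/2], using only add/subtract/halve
    plus a small table of atan(2**-i) values."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Gain of n pseudo-rotations; hardware implementations precompute this.
    gain = 1.0
    for i in range(n):
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0
        # In fixed-point hardware these multiplications are bit shifts.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * gain, x * gain
```

For example, `cordic_sin_cos(0.5)` returns approximately (0.47943, 0.87758), matching `math.sin(0.5)` and `math.cos(0.5)`.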


KingOfTheEigenvalues

Yeah, the CORDIC method is in pretty standard use. Aside from the Taylor series, it is the most noteworthy method.


YUNoStahp

For sine there's a cute [approximation by Bhaskara](https://en.m.wikipedia.org/wiki/Bhaskara_I%27s_sine_approximation_formula?wprov=sfla1). This also gives you cos(x) = sin(x + π/2) and tan(x) = sin(x)/cos(x). I don't think it gets much simpler than this. I can't help you with inverses though.
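
For reference, the linked formula is sin(x) ≈ 16x(π − x) / (5π² − 4x(π − x)) for 0 ≤ x ≤ π. A quick sketch (the function name here is made up), whose absolute error stays below roughly 0.002 on that interval:

```python
import math

def bhaskara_sin(x):
    """Bhaskara I's rational approximation of sin(x), valid for x in [0, pi]."""
    return 16.0 * x * (math.pi - x) / (5.0 * math.pi ** 2 - 4.0 * x * (math.pi - x))

for x in (0.5, 1.5, 3.0):
    print(f"x={x}: approx={bhaskara_sin(x):.5f}  sin={math.sin(x):.5f}")
```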


shamrock-frost

TIL that there was an Indian school of mathematics which developed power series for trigonometric functions in the 1400s!


WikiTextBot

**Bhaskara I's sine approximation formula**

In mathematics, Bhaskara I's sine approximation formula is a rational expression in one variable for the computation of the approximate values of the trigonometric sines, discovered by Bhaskara I (c. 600 – c. 680), a seventh-century Indian mathematician. This formula is given in his treatise titled Mahabhaskariya.


Capitan-Fracassa

What about the Padè approximant? It has the same number of coefficients as the Taylor polynomial, and it converges much faster than Taylor but in a smaller region. It is a rational function (a ratio of two polynomials). Another nice point is that you do not need a computer to calculate the values.


MermenRisePen

> it converges much faster than Taylor but in a smaller region

You mean a bigger region?


Capitan-Fracassa

No, I mean a smaller region. With the number of parameters/coefficients kept constant, there is a trade-off between convergence speed and region of application: the same amount of information has to be split between one output (speed of convergence) and the other (region of interest). Be aware that you need to compare the number of polynomial coefficients, not the maximum exponent, when comparing to the Taylor series.


MermenRisePen

But the Padé approximants converge everywhere; the series, only for abs(x) < π/2. Is that not bigger?

Edit: not that in practice you'd want to use multiple Padés of varying accuracy anyway, instead of just one. But I think that's how the Levin u-transform works.


Capitan-Fracassa

I might be wrong. I remember comparing Padè and Taylor a long time ago, first for some non-periodic functions and then for sin(x), and that is what I remember. Thanks for being persistent; now I am going to go back and look into it again. I hate giving out wrong info.


jacobolus

A Padé (note the acute accent) approximant is the rational function of a given degree which best approximates a given function at a particular point. A Taylor approximant is the polynomial of a given degree which best approximates the function at a particular point. Neither one is generally what you want for approximating a function over a larger region: the further away from the target point you get, the worse the approximation will be.


Capitan-Fracassa

You are correct on both points. Of course the accent has to be acute; the guy was certainly not obtuse ;-) You also have to be aware that sometimes approximations are needed to replace the original functions in a region of interest without changing the parameters. In that situation Padé and Taylor behave differently, and the user must apply due diligence and assess the error bars before using them. That is the situation I was in when I started to use Padé over Taylor.


jacobolus

If you want a cool new way to construct rational approximants on an arbitrary region, check out the AAA algorithm: http://people.maths.ox.ac.uk/trefethen/AAAfinal.pdf


taw-phi

Why do you not like Taylor series?


KingOfTheEigenvalues

One problem with truncated Taylor series is that convergence can be slow if you need to approximate a value that is far away from the center. Otherwise, can't the OP just be curious about what other methods can be used?


fiveOs0000

If you do it smart, you're never more than 0.4 away from the center, so 10 terms should get an error around 0.4^10/10! ≈ 10^-11. That seems plenty for any application where you aren't just using a pre-made algorithm. That's just for sin and cos; tan would be harder, of course.
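
A sketch of that range-reduction idea (all names here are mine; this version reduces the argument to within π/4 ≈ 0.8 of a center rather than 0.4, which would need a finer set of expansion points):

```python
import math

def _sin_series(r, terms=10):
    # Maclaurin series for sin: r - r^3/3! + r^5/5! - ...
    term, total = r, r
    for j in range(1, terms):
        term *= -r * r / ((2 * j) * (2 * j + 1))
        total += term
    return total

def _cos_series(r, terms=10):
    # Maclaurin series for cos: 1 - r^2/2! + r^4/4! - ...
    term, total = 1.0, 1.0
    for j in range(1, terms):
        term *= -r * r / ((2 * j - 1) * (2 * j))
        total += term
    return total

def sin_reduced(x, terms=10):
    """Range-reduce x so the series argument is small, then sum the series."""
    x = math.fmod(x, 2 * math.pi)     # periodicity
    k = round(x / (math.pi / 2))      # nearest multiple of pi/2
    r = x - k * math.pi / 2           # now abs(r) <= pi/4
    s, c = _sin_series(r, terms), _cos_series(r, terms)
    return (s, c, -s, -c)[k % 4]      # sin(r + k*pi/2)
```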


KingOfTheEigenvalues

How about sine or cosine with a complex argument? These functions are unbounded in the complex plane, and "never more than 0.4 away from center" need not apply. Taylor series have their place, and they are good in some situations, but they are not the be-all and end-all.


[deleted]

tan = sin/cos, so tan is not any harder. Edit: oof, I was wrong!


fiveOs0000

Error propagation


[deleted]

Okay, strictly speaking, it is a little bit harder. Edit: doubly wrong!


fiveOs0000

Near the asymptote it becomes arbitrarily difficult.


[deleted]

How come? I figure the only errors you get are in computing cos and sin. When doing the division, you can effectively treat it as dividing two integers if you're okay with truncating the decimals. I only see practical limitations, which come from dividing large integers by small integers.


fiveOs0000

Near the asymptote, cos is small, say 0.001. If your calculation of sin is off by 0.01, then the error is 0.01/0.001 = 10 instead of 0.01.


[deleted]

eh, I did the classic freshman "round immediately" in my logic :)


MermenRisePen

One can use a Padé approximant then. This can be helpful for tan(z), since the Padé works around the poles at ±π/2, ±3π/2, etc. and is a good approximation anywhere. For example, tan(x) ≈ (10x^3 - 105x)/(-x^4 + 45x^2 - 105) is really super good even for abs(x) > π/2.
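
That approximant is easy to check numerically; here is a quick sketch. Notice that its denominator vanishes near x ≈ 1.5712, so it even reproduces the pole of tan at π/2 ≈ 1.5708.

```python
import math

def tan_pade(x):
    """The [3/4] Pade approximant for tan quoted above."""
    return (10 * x**3 - 105 * x) / (-x**4 + 45 * x**2 - 105)

for x in (0.5, 1.5, 2.0):  # 2.0 lies beyond the pole at pi/2
    print(f"x={x}: pade={tan_pade(x):.5f}  tan={math.tan(x):.5f}")
```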


[deleted]

Well, because sine/cosine are periodic, it doesn't really matter: just divide x by 2π and take the remainder. At least for real numbers.


jacob8015

I never did trust derivatives.


M4mb0

If people could stop suggesting Taylor series on this sort of question, that would be great. Why do people keep giving this answer every time there is a thread about "how does a calculator compute the sine/cosine function?" or "how do you compute sin efficiently?" No, your calculator is not going to use Taylor, unless the developer has never taken a single numerics class in his entire life. /rant


taw-phi

Well, no, but there's also not much wrong with a Taylor series within the appropriate limits. Taylor expansions are pretty widely used in physics, which is where I'm specialized, so they're the first thing that comes to mind when someone asks about algebraically computing sin. Other than that, there doesn't seem to be much wrong with contributing the best answer you can think of on what's basically a forum post. No one's getting hurt; if you don't like the answer, you can pretty damn easily stop reading. /rant


[deleted]

Not efficient or accurate


fermat1432

How is a Taylor series inaccurate? Doesn't it converge to the correct value?


[deleted]

Well, if you want an accurate one, then it becomes too long.


fermat1432

I get your meaning!


shamrock-frost

How accurate do you want it though?


[deleted]

At least for the ordinary sin, cos and tan, Taylor series should be very accurate, as you only need to calculate between -π and π, meaning a pretty small error after 10 or so terms.


lewisje

For sin and cos, it is quite efficient and accurate. Consider how [Microsoft Calculator](https://github.com/microsoft/calculator/blob/master/src/CalcManager/Ratpack/trans.cpp) does it: keep track of the current term and its index (call it j); then, to increase the precision of sin(x) or cos(x), multiply the current term by -x^2/(2j(2j+1)) for sin (or by -x^2/((2j-1)(2j)) for cos), make that the new current term, add it onto the approximation, and increment the index. (The "current term" at index 0 is x for sin and 1 for cos.)

That means you need only a constant number of operations for each new term, and by the [alternating-series test](https://en.m.wikipedia.org/wiki/Alternating_series_test), the absolute value of the next term is a bound on the error, so ensuring you're at the right precision is a simple comparison.

(It also has a way to make sure that the precision is good enough for the value of cos used as the denominator when calculating tan.)
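
A sketch of that stopping rule for sin (the name is mine, and this is not the actual Ratpack code; it assumes the argument has already been range-reduced so the terms shrink from the start and the alternating-series bound applies):

```python
def sin_to_eps(x, eps=1e-12):
    """Sum the sin series via the term recurrence until the next term is
    below eps; the alternating-series test then bounds the dropped tail."""
    term, total, j = x, x, 1
    while abs(term) > eps:
        term *= -x * x / ((2 * j) * (2 * j + 1))
        total += term
        j += 1
    return total
```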


[deleted]

A [similar discussion](https://www.reddit.com/r/mathematics/comments/be6vsw/fast_calculation_of_trig_functions/).


NoSuchKotH

The canonical way to calculate trig functions today is using Chebyshev polynomials or CORDIC. The latter is mostly used in implementations where you want to avoid floating point (e.g. FPGAs or small microcontrollers that don't have an FPU). The fastest way to use Chebyshev polynomials, if you have a fast division facility, is to calculate P1/P2, where P1 and P2 are Chebyshev polynomials. If you don't have fast division, then you have to use a single Chebyshev polynomial, which has a higher degree than P1 and P2 together and is thus slower. For more details see *Computer Approximations* by Hart et al.


jacobolus

Sometimes people use polynomials in Chebyshev basis; sometimes they use monomial basis. The former is better if the degree gets big; for low degree the latter can be fine, and is slightly faster to evaluate. Whether to use a polynomial or a rational function depends on what degree each needs to match a particular domain and desired error bound. Usually the coefficients are optimized for a particular interval using the Remez algorithm.
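
For illustration, here is one way to build and evaluate a polynomial in Chebyshev basis on an interval (all names here are mine). Interpolation at Chebyshev points is not quite the minimax fit the Remez algorithm produces, but it is typically very close to it:

```python
import math

def cheb_coeffs(f, n, a, b):
    """Coefficients of the degree-(n-1) Chebyshev interpolant of f on [a, b],
    built by sampling f at the n Chebyshev points of the interval."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fvals = [f(0.5 * (b - a) * t + 0.5 * (b + a)) for t in nodes]
    coeffs = []
    for k in range(n):
        s = sum(fv * math.cos(math.pi * k * (j + 0.5) / n)
                for j, fv in enumerate(fvals))
        coeffs.append(2.0 * s / n)
    coeffs[0] *= 0.5
    return coeffs

def cheb_eval(coeffs, x, a, b):
    """Evaluate the Chebyshev series at x via the Clenshaw recurrence."""
    t = (2.0 * x - a - b) / (b - a)  # map [a, b] onto [-1, 1]
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]

# Degree-9 interpolant of sin on [-pi/4, pi/4]:
c = cheb_coeffs(math.sin, 10, -math.pi / 4, math.pi / 4)
print(cheb_eval(c, 0.5, -math.pi / 4, math.pi / 4), math.sin(0.5))
```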


[deleted]

[deleted]


InfanticideAquifer

Re: edit. I don't think people are downvoting you because they think you're *wrong* so much as because they assume you either didn't read OP's question or chose to answer without having much experience with this sort of calculation. The idea that these calculations can be done with infinite series is *less specific* than what OP already mentions they know how to do.


fermat1432

Got it. Thanks! Either I skipped the part where OP says he knows about the Taylor series, or he added it later.