Parentheses? Do you mean at line 17 or line 28? Also, I’m still new to this curve-fitting thing, so I haven’t yet grasped what makes a good target value to plug into the target method. Is it any value where the normal sine function crosses the x-axis? Right now I’m not using the optimized version, as I’m still trying to wrap my head around the basics.

RMS is probably the method I’ll end up using to figure out the optimized constants, but I have no experience with GoalSeek or Solver; I will check those out.

As far as precision goes, my target from the beginning was a complete set of elementary trig functions that use Q16 numbers inside of int64_t for all inputs and outputs. 48 bits above the radix should be more than enough scratch space for the intermediary calcs, and I dislike factoring out the shifts for what should be a separate func/concern. Did you use Q24 for the extra precision on the bottom end? Additionally, both GCC and Clang should be able to do the final bit-shift optimization/algebraic reduction at compile time if I define MULT() and DIVD() so they can be inlined.

In fact, here is a paste of my MULT() and DIVD() funcs for Q16 numbers stored in 64-bit registers: https://pastebin.com/2mC2GxJx
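For reference, here is a minimal sketch of what such helpers can look like, modeled with Python ints standing in for int64_t. The names MULT/DIVD follow the comment above, but the bodies are my assumption, not the pastebin's contents:

```python
Q = 16
ONE = 1 << Q  # 1.0 in Q16

def MULT(a, b):
    # The raw product has 32 fractional bits; shift back down to Q16.
    return (a * b) >> Q

def DIVD(a, b):
    # Pre-shift the dividend so the quotient keeps its 16 fractional bits.
    return (a << Q) // b
```

With these inlinable, a compiler can indeed fold adjacent shifts in chained calls, which is the algebraic reduction mentioned above.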

I’ve done a quick test with my own implementation of sine5 (https://pastebin.com/0x70eeQa). Notice that I do most of the math in Q24, because why not.

At α = π/4 I get 0xB518. The true value should be 0xB504, which makes my S5 approximation off by roughly 0.05%. This is about the accuracy I expected from Fig 5.

For true Q16 precision, it’s necessary to do the intermediary calculations at higher precision. The range of calculations is around [-4.0, +4.0] I think, so you should have bits to spare.
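As a sketch of what "most of the math in Q24" can look like (my own construction, not the pastebin's contents), using the high-precision fifth-order coefficients quoted further down in the thread; it lands on the 0xB518 figure at α = π/4:

```python
import math

Q24 = 1 << 24

# High-precision fifth-order coefficients in Q24.
A = round((math.pi / 2) * Q24)          # a = pi/2
B = round((5 / 2 - math.pi) * Q24)      # b = 5/2 - pi
C = round((math.pi / 2 - 3 / 2) * Q24)  # c = pi/2 - 3/2

def sin5_q16(z_q16):
    # Fifth-order sine; z is Q16 with 1.0 meaning an angle of pi/2.
    z = z_q16 << 8                 # Q16 -> Q24
    z2 = (z * z) >> 24             # z^2 in Q24
    r = (C * z2) >> 24             # Horner: ((c*z^2 + b)*z^2 + a)*z
    r = ((r + B) * z2) >> 24
    r = ((r + A) * z) >> 24
    return r >> 8                  # Q24 -> Q16
```

Each step shifts back down to Q24 immediately, which wastes a little precision but keeps the bookkeeping sane.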

As for the constraints for D: they can be whatever you want :D No really, you can choose the constraints yourself. Suggestions are:

- sin(π/4) = 1/√2 (extra intermediary point)
- sin(π/6) = ½ (extra intermediary point)
- ∫₀¹ sin(πz/2) dz = 2/π (area under the normalized curve; minimizes average error)
- Minimize the root-mean square. For this you’ll need something like Excel’s GoalSeek or Solver.

I discuss some of these options in section 3.3. The point is that you get to decide on the characteristics of the curve. I kinda like an extra intermediate point or the average-error constraint, but for truly minimal error, RMS is probably the best option. However, you can’t solve for that analytically.
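Since the RMS case can't be solved analytically, a crude stand-in for GoalSeek is a plain grid scan (a sketch of mine; the grid, range, and sample count are arbitrary choices): keep p(1) = 1 and p'(1) = 0 as hard constraints and search over the remaining free slope a.

```python
import math

def rms(a, n=200):
    # With p(z) = a*z + b*z^3 + c*z^5, the constraints p(1) = 1 and
    # p'(1) = 0 pin down b and c once a is chosen.
    b = 5 / 2 - 2 * a
    c = a - 3 / 2
    err2 = sum((a * z + b * z**3 + c * z**5 - math.sin(math.pi * z / 2))**2
               for z in (i / n for i in range(n + 1)))
    return math.sqrt(err2 / (n + 1))

# Crude GoalSeek substitute: scan a grid of slopes around pi/2.
best_a = min((1.55 + 0.0001 * k for k in range(400)), key=rms)
```

The winner lands near, but not exactly at, π/2; that's the trade-off between exact behavior at 0 and minimal overall error.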

Here is a pastebin of the corrected code: https://pastebin.com/TXF9V2GK

I also have a func for doing square roots with fixed-point numbers that is fairly interesting. I would like to learn about using the sort of analytical graphs you have here to evaluate its output over a range; do you have a link to a good guide?
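One way to get that kind of range evaluation without any plotting machinery is a brute-force scan. A sketch of my own devising: `sqrt_q16` here is a stand-in built on Python's integer isqrt, not the pastebin's function.

```python
import math

def sqrt_q16(x_q16):
    # For a raw Q16 value v, isqrt(v << 16) == floor(sqrt(v / 2**16) * 2**16),
    # i.e. the pre-shift keeps the result in Q16.
    return math.isqrt(x_q16 << 16)

# Worst-case error in Q16 ULPs over (0, 4.0), versus the float result.
worst = max(abs(sqrt_q16(v) - round(math.sqrt(v / 65536) * 65536))
            for v in range(1, 4 << 16))
```

Since isqrt truncates, the worst case against a rounded reference is at most 1 ULP; plotting the per-input error over the scan range is essentially what the article's error figures show.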

I can also recommend this playlist: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

The stuff I’m talking about here is discussed in part 4, if I recall correctly.

Here you go: https://pastebin.com/WNb67ivf

One of the problems in your initial code was things like 2<<16 instead of 1<<16, which skews everything. Also, operator precedence.
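Both bugs are easy to demonstrate in miniature (Python shown here, but the precedence ordering is the same in C: + binds tighter than <<):

```python
ONE = 1 << 16  # Q16 "1.0"; 2 << 16 would be "2.0" and double everything

assert 2 << 16 == 2 * ONE        # the doubling bug
assert 1 << 16 + 1 == 1 << 17    # precedence: parsed as 1 << (16 + 1)
assert (1 << 16) + 1 == ONE + 1  # parentheses restore the intent
```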

This particular version uses Q24 for the intermediates to make things easier. You can gain higher precision by shifting things around, but ultimately, since the C4 approximation is only accurate to about 10 bits, that hardly matters. A few simple tests are included as well.

For stuff like this, it’s probably easier (well, relatively easier) to always shift the intermediary results down to a single, fixed fixed-point position. Only after that should you think about moving the fixed point around. This stuff is hard enough as it is, don’t you think?

make_poly(sin(pi*x/2), [1], [1,0], 'odd', 'debug')

Please check my script, which tries to generate polynomial approximations based on the method from this article. The script uses SymPy, so the resulting polynomial has infinite precision =)

Of course, adapting it to fixed point is another story…

Still, modern MCUs have floating-point hardware that can give fast, precise polynomial approximations using just a few multiply-accumulate instructions (1 float MAC = 1 cycle on a Cortex-M4) with the methods described here. I really want to benchmark these against the CMSIS trigonometric functions.

https://github.com/cherubrock/polyfit/blob/master/polyfit.py

Output of the script for the example from the article (2.1 High-precision, fifth order):

a + b + c - 1 = 0

a + 3*b + 5*c = 0

a - pi/2 = 0

x**5*(-3/2 + pi/2) + x**3*(-pi + 5/2) + pi*x/2

SNR: 71.0 [dB]

ENOB: 11.8
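For anyone wanting to reproduce figures like these: SNR here is signal power over error power in dB, and the ENOB line is consistent with ENOB = SNR/6.02 (my guess at the script's convention). A sketch that checks the printed polynomial against sin(πx/2) on a grid:

```python
import math

def snr_db(n=10000):
    # Sum signal power and error power of the polynomial over [0, 1].
    sig = err = 0.0
    for i in range(n + 1):
        x = i / n
        s = math.sin(math.pi * x / 2)
        p = (math.pi/2 - 3/2) * x**5 + (5/2 - math.pi) * x**3 + (math.pi/2) * x
        sig += s * s
        err += (p - s) ** 2
    return 10 * math.log10(sig / err)

snr = snr_db()
enob = snr / 6.02
```

Uniform sampling gives a number in the same ballpark as the script's 71.0 dB; the exact figure depends on how the integral is discretized.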