
First, there was a bug in the RMS calculation; the line should read

printf("%6d %f %2d %d\n ", sum, sqrt((1.0 * sumsq)/(count/step)), maxsum, oversum);

A subtle difference, but it forces the divide to be done in floating point; without that, the square root becomes granular and much less useful.

In thinking about the optimization, I realized that at 0 the approximation is exact (0) by the nature of the formula (true of almost all sine approximations). But no such inherent relationship exists at the opposite end (1). Indeed, the approximation is too large at that end, and constraining it to < 32768 to prevent overflow prevents full optimization. So I changed the reference calculation to multiply the reference sine wave by 32762 instead (the largest value that prevented overflow when optimized). This introduces a 0.018% gain error.

Almost all applications of sine and cosine are in signal processing, either for audio or for RF. Limiting the range to 32767 instead of 32768 already introduces a 0.003% gain error, and the 0.018% error is still negligible. In both audio and RF, unwanted frequency products are far more significant than a small gain error; in fact, if it matters, the gain error can easily be corrected in a subsequent filter or other stage. For both audio and RF, the most troublesome artifact is intermodulation distortion, because it produces products unrelated to the signal. At audio these can be heard by an observant listener down to at least -90 dB. A sine wave generator isn't generally capable of generating intermodulation products, but its harmonics can mix with other things to produce them. The second most troublesome artifact is harmonic distortion. For audio, odd harmonics are more objectionable because they introduce new tones that add harshness, while even harmonics are octaves that simply brighten the sound, and have to be fairly strong to even be noticed.

So back to the formula. By lowering the "expectation" of amplitude, I found I was able to reduce the RMS error to < 1.5 while still keeping the range under 32768. My goal was to lower the one artifact that was above -90 dBc in the spectrum. With the RMS calculation being done properly in floating point, I was able to minimize the error fairly quickly by adjusting the coefficients, resulting in the following, which I submit as the optimum possible:

#define ai 421468797

#define bi 2758881220

#define ci 75676

They were refined until no further improvement was possible. ai can vary +/- 1 without changing the outcome, and bi by +/- about 20. I was surprised that such small changes in these two were significant since after using them, the results are shifted right 16 places. This is undoubtedly related to the rounding factor built into them as discussed in my previous post, which is why I do the subtract before shifting.

Running this result through an FFT revealed that the worst artifact was still the fifth harmonic, but it was now -86.2 dBc, about a 4 dB improvement and closer to my goal of all artifacts below -90 dBc. The bottom line is that this approximation falls slightly short of delivering 16-bit performance, but is very close. The presence of the fifth harmonic suggests that the x^5 term isn't perfect, which fits with the fact that the output oscillates near 90 degrees rather than being fully monotonic. A 7th-order term would undoubtedly fix this, but at additional computational cost (3 more multiplies).

It should be noted that the FFT, since it is computing at the sample rate, cannot show artifacts related to the interaction of the sample rate and the sine wave. Typically the function would be called with successive phase increments to generate an output at a desired frequency, rather than a power of 2 fraction of the sample rate. This process can introduce intermodulation products, although they should generally be very low in amplitude and much higher in frequency, such that they can easily be filtered to an acceptable level. Unfortunately, the nature of FFTs is that they only work optimally for frequencies that have a power of 2 relationship to the sample rate, so even setting up a test at a different sampling rate introduces its own artifacts that limit the usefulness of the results.

So I set out to come up with a variation that accomplishes all of these. Also, taking a cue from the optimized coefficients, I decided to optimize them further experimentally rather than theoretically. I should add that rounding is a factor here as well. Every time a right shift occurs, one should add 1/2 of the resulting LSB before shifting, to round instead of truncate. The a and b coefficients have inner values conveniently subtracted from them, so they can be adjusted to include a rounding factor. Rather than doing so explicitly, I let the optimization factor that into the constants.

The code below contains several parts. First, I implemented the basic algorithm in its original form in floating point as a sanity check, using the optimized constants from the article; no additional fine tuning was attempted on it. It is called sin5(). The second part is isin(), which is where the optimizations were done. It is designed for a first-quadrant 14-bit argument and a 15-bit unsigned result. One difference is that I subtract from the constants before shifting in each case, which provides the 'built-in rounding' capability. The algorithm is otherwise similar to several above.

Finally, a main() program wraps this all together. It also includes a call to the standard C library sin() function, which serves as the reference. The result of that is multiplied by 32767 (not 32768, to avoid the overflow case) and rounded. An error term is created and used for 'binning' the resulting values, computing the mean, and computing the RMS error.

The ai, bi and ci constants were arrived at by minimizing the errors. It turns out that it is not possible to fully optimize the results and still not exceed 32767. The first concession made in that regard is that the mean value is negative. This does not impact the signal qualities. The second concession was that without worrying about range, it was possible to get the RMS error down to 1.414, but the best RMS error without exceeding 32767 was 1.732. All errors fell within the range of -4 to +4 with the vast majority in the range of -3 to +3. Minimizing this range was another optimization objective.

Because ai and bi are used before shifting, LSB changes in them are not significant, so they are zero-filled in decimal. 'Rounded' hex values would have been equally appropriate.

As a final check on the results, the data was imported into an analysis program and an FFT performed on it. The 3rd and 7th harmonics were at -99 dBc, and the 5th harmonic was at -82 dBc; all other responses were more than 100 dB below the fundamental. For 16-bit resolution, -96 dB (one LSB relative to full scale: 20*log10(1/65536) is about -96.3 dB) is the theoretically achievable result.

Here is the program used. It was written for a 32 bit processor, but could easily be adapted to other architectures, including FPGA/DSP.

// 16 bit integer sine function

#include <stdio.h>

#include <math.h>

#define pi 3.14159265359

#define a (4.0 * (3.0 / pi - 9.0/16.0))

#define b (2.0 * a - 5.0/2.0)

#define c (a - 3.0/2.0)

double sin5(double z) // computes sin 0.0 ~ 1.0 (first quadrant) for z over 0.0 ~ 1.0

{ // i.e. z = x / (pi / 2) where x is in radians

double sq = z * z;

double qd = sq * (b - sq * c);

return z * (a - qd);

}

#define ai 421495000

#define bi 2756132000

#define ci 74931

int isin(int x)

{

unsigned int sq = (x * x) >> 16;

unsigned int sq2 = (bi - (sq * ci)) >> 16;

unsigned int cu = (sq2 * x) >> 16;

unsigned int qu = (ai - (cu * x)) >> 11;

unsigned int s5 = (qu * x + 32768) >> 16;

return s5;

}

// at this point it works but runs high with errors of up to 15 or 20

// possibly due to rounding - apparently not

int main(void)

{

int i;

int sum = 0;

int sumsq = 0;

int bins[20];

int maxsum = 0;

int oversum = 0;

#define step 1

#define count 16384

for (i = 0; i < 20; ++i) {

bins[i] = 0;

}

for (i = 0; i < count; i += step) {

double z = (double)i / count; // phase over the first quadrant, 0.0 ~ 1.0

int r = (int)floor(32767.0 * sin(z * pi / 2.0) + 0.5); // library reference

int f = (int)floor(32767.0 * sin5(z) + 0.5); // floating point sanity check

int t = isin(i); // integer approximation under test

int e = t - r; // error term

sum += e;

sumsq += e * e;

if (e > maxsum) maxsum = e;

if (-e > maxsum) maxsum = -e;

if (t >= 32768) ++oversum;

++bins[e + 10];

printf("%5d, %5d, %5d, %5d\n", i, r, f, t);

}

printf("%6d %f %2d %d\n ", sum, sqrt(sumsq/(count/step)), maxsum, oversum);

for (i = 0; i < 20; ++i) {

printf ("%3d %4d\n", i - 10, bins[i]);

}

}


In the m7_ex folder, when I do: make

I get errors:

In file included from /home/name/gba/tonc/code/adv/m7_ex/source/m7_ex.c:16:

/home/name/gba/tonc/code/adv/m7_ex/build/all_gfx.h:1:1: error: expected identifier or ‘(‘ before ‘-‘ token

-e //

^

/home/name/gba/tonc/code/adv/m7_ex/build/all_gfx.h:7:4: error: stray ‘#’ in program

-e #ifdef __cplusplus

^

/home/name/gba/tonc/code/adv/m7_ex/build/all_gfx.h:9:2: error: #endif without #if

#endif

^~~~~

/home/name/gba/tonc/code/adv/m7_ex/../../tonc_rules:73: recipe for target ‘m7_ex.o’ failed

make[1]: *** [m7_ex.o] Error 1

Makefile:158: recipe for target ‘build’ failed

make: *** [build] Error 2

and if I delete the -e in the gfxmake file

## Merge all headers into a single large one for easier including.

define master-header # $1 : master path, $2 separate header paths

echo "//\n// $(notdir $(strip $1))\n//" > $1

echo "// One header to rule them and in the darkness bind them" >> $1

echo "// Date: $(shell date +'%F %X' )\n" >> $1

echo "#ifdef __cplusplus\nextern \"C\" {\n#endif" >> $1

cat $2 >> $1

echo "\n#ifdef __cplusplus\n};\n#endif\n" >> $1

endef

the program works, but the karts and the thwomp end up gray.
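For what it's worth, a stray -e like that usually means the recipe ran under a shell whose built-in echo doesn't understand the -e flag (e.g. dash as /bin/sh on Debian/Ubuntu), so the flag is written into the generated header verbatim. A sketch of the same define using printf, which handles backslash escapes portably (untested against the actual tonc build, so treat it as a starting point rather than a drop-in fix):

```make
## Merge all headers into a single large one for easier including.
define master-header # $1 : master path, $2 separate header paths
	printf '//\n// %s\n//\n' '$(notdir $(strip $1))' > $1
	printf '// One header to rule them and in the darkness bind them\n' >> $1
	printf '// Date: %s\n\n' "$(shell date +'%F %X')" >> $1
	printf '#ifdef __cplusplus\nextern "C" {\n#endif\n' >> $1
	cat $2 >> $1
	printf '\n#ifdef __cplusplus\n};\n#endif\n' >> $1
endef
```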


I am learning low-level programming and assembly language on an 8-bit microcontroller.

In the post, I'm OK with the third-order polynomial approximation of the sine and the change of variable.

But then in section 2 “Derivations and implementations”, subsection “Third-order implementation”, I don’t understand how the constants A, p and n are found.

I actually don’t know much concerning fixed-point calculation.

Can someone help me to understand how these constants are calculated or give me a link where I can find some material on this subject?

Another thing I don't understand is the exclusive-or trick: "if( (x^(x<<1)) < 0) ..." (below equation (13)).

Hope someone might help me out!