Another fast fixed-point sine approximation

Gaddammit!

 

So here I am, looking forward to a nice quiet weekend; hang back, watch some telly and maybe read a bit – but NNnnneeeEEEEEUUUuuuuuuuu!! Someone had to write an interesting article about sine approximation. With a challenge at the end. And using an inefficient kind of approximation. And so now, instead of just relaxing, I have to spend my entire weekend and most of the week figuring out a better way of doing it. I hate it when this happens >_<.

 

Okay, maybe not.

 

Sarcasm aside, it is an interesting read. While the standard way of calculating a sine – via a look-up table – works and works well, there's just something unsatisfying about it. The LUT-based approach is just … dull. Uninspired. Cowardly. Inelegant. In contrast, finding a suitable algorithm for it requires effort and a modicum of creativity, so something like that always piques my interest.

In this case it's sine approximation. I'd been wondering about that when I did my arctan article, but figured it would require too many terms to really be worth the effort. But looking at Mr Schraut's post (whose site you should be visiting from time to time too; there's good stuff there) it seems you can get a decent version quite rapidly. The article centers around the work found at devmaster thread 5784, which derived the following two equations:

(1) \begin{eqnarray} S_2(x) &=& \frac{4}{\pi} x - \frac{4}{\pi^2} x^2 \\ S_{4d}(x) &=& (1-P)\,S_2(x) + P\,S_2^2(x) \end{eqnarray}

These approximations work quite well, but I feel they actually use the wrong starting point. There are alternative approximations that give more accurate results at nearly no extra cost in complexity. In this post, I'll derive higher-order alternatives for both. In passing, I'll also talk about a few of the tools that can help analyse functions and, of course, provide some source code and do some comparisons.
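To make eqn (1) concrete before moving on, here's a minimal floating-point sketch of the pair (my own; the real fun is of course doing this in fixed point). S2 only covers x in [0, π]; P = 0.225 is the blend weight quoted in the devmaster thread.

    // Eqn (1) in plain floating point, for reference.
    // S2 is the bare parabola; S4d blends S2 and S2^2 for extra accuracy.
    // Valid for x in [0, pi]. P = 0.225 is the devmaster weight.
    #include <math.h>
    #include <stdio.h>

    double S2(double x)
    {
        return (4.0/M_PI)*x - (4.0/(M_PI*M_PI))*x*x;
    }

    double S4d(double x)
    {
        const double P = 0.225;
        double s = S2(x);
        return (1.0-P)*s + P*s*s;
    }

    int main()
    {
        // Compare against the real thing at a few points.
        for(int i = 0; i <= 8; i++)
        {
            double x = i*M_PI/8;
            printf("%5.3f : sin=% .5f  S2=% .5f  S4d=% .5f\n",
                x, sin(x), S2(x), S4d(x));
        }
        return 0;
    }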

Continue reading

DMA vs ARM9 - fight!

DMA, or Direct Memory Access, is a hardware method for transferring data. As it's hardware-driven, it's pretty damn fast(1). As such, it's pretty much the standard method for copying on the NDS. Unfortunately, as many people have noticed, it doesn't always work.

There are two principal reasons for this: cache and TCM. These are two memory regions of the ARM9 that DMA is unaware of, which can lead to incorrect transfers. In this post, I'll discuss the cache, TCM and their interactions (or lack thereof) with DMA.

The majority of the post is actually about cache. Cache basically determines the speed of your app, so it's worth looking into in more detail. Why it and DMA don't like each other much will become clear along the way. I'll also present a number of test cases that show the conflicting areas, and some functions to deal with these problems.
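To give a taste of where this is going: the standard remedy is to flush/invalidate the cache around the transfer. Something like the sketch below, using libnds' cache routines (illustrative only, not necessarily the exact functions from the full post).

    // Sketch: a cache-safe copy on the ARM9, using libnds routines.
    // Note: this does nothing for DTCM, which DMA simply cannot reach.
    #include <nds.h>

    void dmaCopySafe(const void *src, void *dst, u32 size)
    {
        DC_FlushRange(src, size);       // write cached source data to RAM
        dmaCopy(src, dst, size);        // the hardware transfer itself
        DC_InvalidateRange(dst, size);  // drop stale destination lines
    }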

Continue reading
Notes:
  1. Well, quite fast anyway. In some circumstances CPU-based transfers are faster, but that's a story for another day.

Some interesting numbers on NDS code size

Even though the total size of code is usually small compared to assets, there are still some worries about a number of systems. Most notable among these are stdio, iostream and several STL components like vectors and strings. I've seen people voice concerns about these items, but I don't think I've ever seen any measurements of them. So here are some.

Table 1 : Memory footprint of some C/C++ components in bytes. Items may not be strictly cumulative.
Barebones: just VBlank code       14516
base+printf                       71148
base+iprintf                      54992
base+iostream                    266120
base+fopen                        56160
base+fstream                     260288
base+<string>                     59384
base+<vector>                     59624
base+<string>+<vector>            59648

The sizes in Table 1 are for a bare source file with just the VBlank initialization and swiWaitForVBlank() plus whatever's necessary to use a particular component. For the IO parts this means a call to consoleDemoInit(); for vectors and strings, it means defining a variable.
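In sketch form, such a test file looks something like this (assuming the usual libnds setup of the era; swap printf() for iprintf() and so on per row).

    // The kind of test file behind Table 1: VBlank setup plus whatever
    // component is being measured.
    #include <nds.h>
    #include <stdio.h>

    int main()
    {
        irqInit();                  // interrupt dispatcher
        irqEnable(IRQ_VBLANK);      // needed for swiWaitForVBlank()

        consoleDemoInit();          // this alone accounts for the iprintf() row
        printf("hello\n");          // the component under measurement

        while(1)
            swiWaitForVBlank();
    }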

 

Even an empty project is already 15k in size. Almost all of this is FIFO code, which is required for the ARM9 and ARM7 to communicate. Adding consoleDemoInit() and a printf() call brings the total to roughly 71k. Printf has a lot of baggage: you have to have basic IO hooks, character type functions, allocations, decimal and floating point routines and more.

Because printf() drags in the floating point routines for float conversions, which are usually unnecessary, it is often suggested to use the integer-only variant iprintf(). In that case, it comes down to 55k. The difference is mostly due to two functions: _vfprintf_r() and _dtoa_r(), at 5.8k and 3.6k, respectively; the rest is made up of dozens of smaller functions. While the difference is relatively large, the extra 16k is probably not that big of a deal considering the footprint of the other components. For the record, there is no difference in speed between the two. Well, almost: if the format string doesn't contain formatting parts, printf() is actually considerably faster. Another point of note is that the 55k for iprintf() is already added just by using consoleDemoInit().

And now the big one. People have said that C++ iostream was heavy, and indeed it is. 266k! That's quite a lot, especially since the benefits of using iostream over stdio are rather slim, if not actually negative(1). Don't use iostream in NDS projects. Don't even #include <iostream>, as that seems enough to link the whole thing in.

Related to iostream is fstream. This also weighs in at about a quarter MB. I haven't checked too carefully, but I think the brunt of these parts is shared, so it won't combine to half a meg if you use both. Something similar is true for the stdio file routines.

Why are the C++ streams so large? Well, lots of reasons, apparently. One of them is actually its potential for extensibility. Because formatting isn't driven by format strings, unused conversions can't be left out the way iprintf() leaves out the float routines. Then there's exceptions, which add a good deal of code as well. There also seems to be tons of stuff for character traits, numerical traits, money traits (wtf?!?) and ios_base stuff. These items seem small, say 4 to 40 bytes, but when there are over a thousand of them it adds up. Then there's all the stuff from STL strings and allocators, type info, more exception stuff, error messages for the exceptions, date/time routines, locale settings and more. I tell you, looking at the mapfile for this is enough to give me a headache. And worst of all, you'll probably use next to none of it, but it's all linked in anyway.

Finally, some STL. This is also said to be somewhat big-boned, and indeed it isn't light. Doing anything non-trivial with either a vector or string seems to add about 60k. Fortunately, though, this is mostly the same 60k, so there are no ill effects from using both. Unfortunately, I can't really tell where it's all going. About 10k is spent on several d_*() routines like d_print(), which I assume is debug code. Another 10k goes to exceptions, type info and error messages, and about 10k more besides. But after that it gets a little murky. In any case, even though adding STL strings and vectors includes more than necessary, 60k is a fair price for what these components give you.

 

Conclusions

The smallest size for an NDS binary is about 14k. While printf() is larger than iprintf(), the difference is probably not enough to worry about, so just use printf() until it becomes a pressing matter (and even then you could probably shrink down another part more easily anyway). There is no speed difference between the two. The C++ iostream and fstream components are not worth it. Their added value over printf and the FILE routines is small when it comes to basic IO functionality. STL containers do cost a bit, but are probably worth the effort. If you need more than simple text handling or dynamic arrays and lists, I'd say go for it. But that's just my opinion.

Please note, the tests I did for this were fairly rough. Your mileage may vary.

 

Lastly. The nm tool (or arm-eabi-nm for DKA) is my new best friend for executable analysis. Unlike the linker's mapfile, nm can sort addresses and show symbol sizes, and doesn't include tons of crap used for glue.
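For instance, arm-eabi-nm --size-sort -S game.elf (standard binutils flags; the elf name is just an example) lists every symbol with its size, smallest first, which makes it easy to spot the big offenders.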


Notes:
  1. Yes, I said negative. Even though it has the benefit of being typesafe and extensible, the value of these advantages is somewhat exaggerated, especially since it has several drawbacks as well (see FQA:ch 15 for details). For one thing, try finding the iostream equivalent of "%08X", or formatted anything for that matter. For early grit code I was actually using iostream, until I got to writing the export code. I couldn't move back to stdio fast enough.

On arctangent.

The arctangent is one of the more interesting trigonometry functions – and by “interesting” I of course mean a bitch to get right. I've been meaning to write something about the various methods of calculating it for a while now and finally got round to it recently.

I, ah, uhm, may have gotten a little carried away with it though ... but that's okay! At least now I know which one to recommend.

To C or not to C

Tonclib is coded mostly in C. The reason for this was twofold. First, I still have it in my head that C is lower level than C++, and that the former would compile to faster code; and faster is good. Second, it's easier for C++ to call C than the other way around so, for maximum compatibility, it made sense to code it in C. But these arguments always felt a little weak and now that I'm trying to port tonclib's functions to the DS, the question pops up again.

 

On many occasions, I just hated not going for C++. Not so much for its higher-level functionality like classes, inheritance and other OOPy goodness (or badness, some might say), but more because I would really, really like to make use of things like function overloading, default parameters and perhaps templates too.

 

For example, say you have a blit routine. You can implement this in multiple ways: with full parameters (srcX/Y, dstX/Y, width/height), using Point and Rect structs (srcRect, dstPoint) or perhaps just a destination point, using the full source-bitmap. In other words:

void blit(Surface *dst, int dstX, int dstY, int srcW, int srcH, Surface *src, int srcX, int srcY);
void blit(Surface *dst, Point *dstPoint, Surface *src, Rect *srcRect);
void blit(Surface *dst, Point *dstPoint, Surface *src);

In C++, this would be no problem. You just declare and define the functions and the compiler mangles the names internally to avoid naming conflicts. You can even make some of the functions inline facades that morph the arguments for the One True Implementation. In C, however, this won't work. You have to do the name mangling yourself, like blit, blit2, blit3, or blitEx or blitRect, and so on and so forth. Eeghh, that is just ugly.
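For instance, the Point/Rect overload could simply forward to the full version, building on the declarations above (assuming Point has x,y and Rect has x,y,w,h members):

    // Sketch: the second overload as an inline facade for the first.
    inline void blit(Surface *dst, Point *dstPoint, Surface *src, Rect *srcRect)
    {
        blit(dst, dstPoint->x, dstPoint->y, srcRect->w, srcRect->h,
            src, srcRect->x, srcRect->y);
    }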

 

Speaking of points and rectangles, that's another thing. Structs for points and rects are quite useful, so you make one using int members (you should always start with ints). But sometimes it's better to have smaller versions, like shorts. Or maybe unsigned variations. And so you end up with:

struct point8_t   { s8  x, y; };   // Point as signed char
struct point16_t  { s16 x, y; };   // Point as signed short
struct point32_t  { s32 x, y; };   // Point as signed int

struct upoint8_t  { u8  x, y; };   // Point as unsigned char
struct upoint16_t { u16 x, y; };   // Point as unsigned short
struct upoint32_t { u32 x, y; };   // Point as unsigned int

And then that for rects too. And perhaps 3D vectors. And maybe add floats to the mix as well. This all requires that you make structs which are identical except for the primary datatype. That just sounds kinda dumb to me.

But wait, it gets even better! You might like to have some functions to go with these structs, so now you have to create different sets (yes, sets) of functions that differ only by their parameter types too! AAAARGGGGHHHHH, for the love of IPU, NOOOOOOOOOOOOOO!!! Neen, neen; driewerf neen! ("No, no; thrice no!") >_<

That's how it would be in C. In C++, you can just use a template like so:

template<class T>
struct point_t  { T x, y; };    // Point via templates

typedef point_t<u8> point8_t;   // And if you really want, you can
                                // typedef for specific types.

and be done with it. And then you can make a single template function (or was it function template, I always forget) that works for all the datatypes and let the compiler work it out. Letting the computer do the work for you, hah! What will they think of next.
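A made-up example of such a function template, working for every point flavour at once:

    // Sketch: one addition routine covering all point_t variants.
    template<class T>
    point_t<T> point_add(const point_t<T> &a, const point_t<T> &b)
    {
        point_t<T> c = { T(a.x+b.x), T(a.y+b.y) };  // casts guard against
        return c;                                   // integer promotion
    }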

 

Oh, and there's namespaces too! Yes! In C, you always have to worry about whether some other library has something with the same name as you're thinking of using. This is where all those silly prefixes come from (oh hai, FreeImage!). With C++, there's a clean way out of that: you can encapsulate the names in a namespace, and when a conflict arises you can use mynamespace::foo to get out of it. And if there are no conflicts, use using namespace mynamespace; and type just plain foo. None of that FreeImage_foo causing you to have more prefix than genuine function identifier.
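In sketch form (all names hypothetical):

    // Sketch: a namespace instead of a prefix.
    namespace tonc
    {
        void init() {}      // no 'tonc_' welded onto the name itself
    }

    void demo()
    {
        tonc::init();           // fully qualified: immune to clashes
        using namespace tonc;
        init();                 // or plain, once the namespace is opened
    }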

 

And *then* there's C++ benefits like classes and everything that goes with it. Yes, classes can become fiendishly difficult if pushed too far(1), but inheritance and polymorphism are nice when you have any kind of hierarchy in your program. All Actors have positions, velocities and states. But a PlayerActor also needs input; and an NpcActor has AI. And each kind of NPC has different methods for behaviour and capabilities, and different Items have different effects and so on. It's possible to do this in just C (hint: unioned-structs and function-tables and of course state engines), but whether you'd want to is another matter. And there's constructors for easier memory management, STL and references. And, yes, streams, exceptions and RTTI too if you want to kill your poor CPU (regarding GBA/DS I mean), but nobody's forcing you to use those.
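For contrast, the C route hinted at above boils down to something like this (a sketch; all names mine):

    // Sketch: C-style polymorphism via a function table.
    typedef struct Actor Actor;

    struct Actor
    {
        int x, y, vx, vy, state;
        void (*update)(Actor *self);    // per-type behaviour hook
    };

    void player_update(Actor *self) { /* read input, move */ }
    void npc_update(Actor *self)    { /* run AI, move */ }

    void actor_run(Actor *a)
    {
        a->update(a);                   // 'virtual' call, C style
    }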


So why the hell am I staying with C again? Oh right, performance!

Performance, really? I think I heard this was a valid point a long time ago, but is it still true now? To test this, I turned all tonclib's C files into C++ files, compiled again and compared the two. This is the result:


Difference in function size between C++ and C in bytes.

That graph shows the difference in the compiled function size. Positive means C++ uses more instructions. In nearly 300 functions, the only differences are minor variations in irq_set(), some of the rendering routines and TTE parsers, and neither language is the clear winner. Overall, C++ seems to do a little bit better, but the difference is marginal.

I've also run a diff between the generated assembly. There are a handful of functions where the order of instructions is different, or different registers are used, or a value is placed in a register instead of on the stack. That's about it. In other words, there is no significant difference between pure C code and its C++ equivalent. Things will probably be a little different when OOP features and exceptions enter the fray, but that's to be expected. But if you stay close to C-like C++, the only place you'll notice anything is in the name-mangling. Which you as a programmer won't notice anyway, because it all happens behind the scenes.

 

So that strikes performance off my list, leaving only wider compatibility. I suppose that still has some merit, but considering you can turn C code into valid C++ by changing the extension(2), this is sounding more and more like an excuse instead of a reason.


Notes:
  1. As the saying goes: C++ makes it harder to shoot yourself in the foot, but when you do, you blow off your whole leg.
  2. and clean up the type issues that C allows but C++ doesn't, like void* arithmetic and implicit pointer casts from void*.