
nRF51822 with GCC - Stacksize and Heapsize

Using:

  • nRF51822
  • GCC 4.8.4
  • Nordic SDK 5.2.0
  • and even Ole Morten's nice pure-gcc setup
  • newlib-nano and -flto

Within startup_nrf51.s I found something like this.

SDK:

#ifdef __STACK_SIZE
    .equ    Stack_Size, __STACK_SIZE
#else
    .equ    Stack_Size, 2048
#endif
...
#ifdef __HEAP_SIZE
    .equ    Heap_Size, __HEAP_SIZE
#else
    .equ    Heap_Size, 2048
#endif

Pure GCC:

#ifdef __STACK_SIZE
    .equ    Stack_Size, __STACK_SIZE
#else
    .equ    Stack_Size, 0xc00
#endif
...
#ifdef __HEAP_SIZE
    .equ    Heap_Size, __HEAP_SIZE
#else
    .equ    Heap_Size, 0x100
#endif

This looks as if there are hard-coded default values for stack and heap size inside the startup assembler sources. Now I wonder: what is the way to override these defaults (without modifying the startup assembler sources)?

Well, CFLAGS+=-D__STACK_SIZE=1024 does not work, because CFLAGS is not applied to the assembler call. Extending the assembler command line to something like this:

arm-none-eabi-as --defsym __STACK_SIZE=0x0800 ../../../../HaalandSetup/template/startup_nrf51.s -o _build/startup_nrf51.os

does not help either. The reason seems to be that arm-none-eabi-as completely ignores the C-style preprocessor statements (#ifdef ...). It sees both .equ Stack_Size, __STACK_SIZE and .equ Stack_Size, 0xc00 and simply uses the last assignment it has seen.

Then I tried to invoke the gcc preprocessor like this:

arm-none-eabi-gcc -x assembler-with-cpp  ../../../../HaalandSetup/template/startup_nrf51.s -o _build_SensorDevboard/startup_nrf51.os

but I failed with some curious errors ("undefined reference to _exit" ...).
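
In hindsight, those errors presumably came from gcc trying to link the result instead of just assembling it. Adding -c (compile only) and passing the size via -D should make the preprocessor route work without touching the startup file, roughly like this (an untested sketch):

arm-none-eabi-gcc -x assembler-with-cpp -c -D__STACK_SIZE=0x0800 ../../../../HaalandSetup/template/startup_nrf51.s -o _build_SensorDevboard/startup_nrf51.os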

Finally, I made it work with the following modifications:

  • In startup_nrf51.s, changing all #ifdef et cetera to .ifdef (see the sketch after this list)
  • In pure-gcc Makefile, changing $(AS) $< -o $@ to $(AS) $(AFLAGS) $< -o $@
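
For reference, a minimal sketch of what the converted stack block in startup_nrf51.s then looks like (the heap block is changed the same way):

.ifdef __STACK_SIZE
    .equ    Stack_Size, __STACK_SIZE
.else
    .equ    Stack_Size, 0xc00
.endif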

And now I'm able to add something like this to my project-specific Makefile:

AFLAGS += --defsym __STACK_SIZE=0x0800

to get project-specific control over these values.

Now the question: Is this the right way to get project-specific control over stack and heap size, or did I do something wrong and fail to see the preferred way?

I've just been playing around with some different values for __HEAP_SIZE and __STACK_SIZE.

  • If I set them too big (e.g. both to 0x5000), I get what I expect: ld: region RAM overflowed with stack.
  • If I set __HEAP_SIZE to e.g. 0x0100 and call malloc(4000) in my program, I expect malloc() to fail, but it succeeds.
  • If I set __STACK_SIZE to e.g. 0x0010, I expect my program to crash because it runs out of stack, but it works regardless of which value I assign to __STACK_SIZE.

So to me it looks as if assigning __STACK_SIZE and __HEAP_SIZE is not the right way to control stack and heap size, is it?

Edit 2014-08-27: After some further investigation I found the following curious facts:

  • __HEAP_SIZE and __STACK_SIZE generate symbols like __StackLimit and __HeapLimit
  • __StackLimit and __HeapLimit are only checked at link time (they may trigger the linker error "region RAM overflowed with stack")
  • at runtime, neither __StackLimit nor __HeapLimit is recognized
  • __StackLimit does not raise anything like a "stack overflow exception" when the stack pointer passes it
  • malloc() does not care about __HeapLimit (and not about __StackLimit either!)
  • Highly dangerous: malloc() grabs all memory up to the current position of the stack pointer (yes, the stack pointer position at the time of the malloc() call). This means that malloc() takes memory that I had tried to reserve for the stack! → Strange behaviour …
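
The last point presumably comes from the stock _sbrk() that such toolchain setups typically provide (newlib itself only calls it): it checks the new heap end only against the current stack pointer and never looks at __HeapLimit — roughly like this (a reconstruction for illustration, not the actual source used here):

/* Sketch of a typical default _sbrk() — note the check against the live stack pointer */
register char* stack_ptr asm("sp");

char* _sbrk(int incr) {
    extern char end;                   /* first address after .bss, provided by the linker */
    static char* heap_end = 0;
    char* prev_heap_end;

    if (heap_end == 0)
        heap_end = &end;
    prev_heap_end = heap_end;
    if (heap_end + incr > stack_ptr)   /* only limit: the *current* stack pointer */
        return (char*)-1;
    heap_end += incr;                  /* __HeapLimit is never consulted */
    return prev_heap_end;
}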

Edit 2014-08-30: Solved the »malloc() grabs my stack« issue

With this tiny custom _sbrk() implementation, I teach malloc() to respect __HeapLimit:

#include <stddef.h>   /* ptrdiff_t */
#include <stdint.h>   /* uint32_t */

void* _sbrk(ptrdiff_t incr) {
    /* __HeapBase and __HeapLimit are symbols provided by the startup code / linker script */
    extern uint32_t __HeapBase;
    extern uint32_t __HeapLimit;
    static char* heap = 0;

    if (heap == 0) heap = (char*)&__HeapBase;
    void* ret = heap;
    if (heap + incr >= (char*)&__HeapLimit)
        ret = (void*)-1;   /* out of memory: makes malloc() return NULL */
    else
        heap += incr;
    return ret;
}

Note that _sbrk() is required to return (void*)-1 to indicate that we are out of memory (see here and here). When returning (void*)0 instead, it ends up in a hard fault.
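
With this in place, the earlier malloc(4000) experiment behaves as expected. A minimal check (heap_test is just a hypothetical name) could look like this:

#include <stdlib.h>

void heap_test(void) {
    /* with a small __HEAP_SIZE, an oversized request now fails cleanly
       instead of growing into the memory reserved for the stack */
    void* p = malloc(4000);
    if (p == NULL) {
        /* out of heap: _sbrk() returned (void*)-1, malloc() turned that into NULL */
    } else {
        free(p);   /* enough heap was actually available */
    }
}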

In my setup, I need to provide at least 1468 bytes of heap space (within startup_nrf51.s) to make printf & co. work.

Giving just 436 bytes also works, but slows down the output performance, because stdio then works unbuffered: write() is called for each single character to be output. When writing to the UART at 1 MBaud, this is about a factor of 2 slower!

Giving less than 436 bytes of heap space results in a hard fault the first time printf / puts / putchar is called.

When both RAM size and speed matter, we can adjust the stdout buffer size by calling setvbuf() (see here) before the first printf / puts / putchar call, like this:

{
    static char stdoutBuffer[40];   /* must be static: stdio keeps using it after this block */
    setvbuf(stdout, stdoutBuffer, _IOLBF, sizeof(stdoutBuffer));   /* line-buffered output */
    printf("Hello …");
}

In this way, we prevent stdio from allocating its 1 KiB buffer; it uses the provided 40-byte buffer instead.

Note that the given values may depend on the toolchain version used.
