Caches and Scratchpad Memory in RTOS

Cache Replacement Policies

Predictability calculated by the RELACS tool:

Policy  | Assoc. 2 | 3   | 4   | 5   | 6   | 7   | 8
--------|----------|-----|-----|-----|-----|-----|-----
LRU     | 1        | 1   | 1   | 1   | 1   | 1   | 1
FIFO    | 1/2      | 1/3 | 1/4 | 1/5 | 1/6 | 1/7 | 1/8
PLRU    | 1        | –   | 0   | –   | –   | –   | 0
RANDOM  | 0        | 0   | 0   | 0   | 0   | 0   | 0

(PLRU is defined only for power-of-two associativities, hence the empty entries.)

LRU (Least Recently Used)

Minimal life span (mls) – the minimal number of pairwise different memory blocks needed to evict a just-visited memory block from the cache. A replacement policy with a larger mls is preferred.

The upper bound of mls is L, the number of cache lines; LRU attains this bound, i.e. mls(LRU) = L.

Non-LRU policies are considered to be much less predictable than LRU, and it would be very difficult to develop precise and efficient analyses for them. It is recommended to only use LRU caches when timing predictability is a major concern in the system design.
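
The difference in mls can be seen in a small C simulation. This is a minimal sketch under simplifying assumptions: a fully associative cache of L = 4 lines, an initial state in which the just-visited block is also the oldest FIFO entry, and illustrative block numbering and function names.

```c
#include <stdio.h>

#define L 4                          /* number of cache lines (fully associative) */

/* Count how many pairwise different new blocks must be accessed before a
 * just-visited block is evicted, under LRU (use_lru = 1) or FIFO (use_lru = 0). */
static int blocks_to_evict(int use_lru) {
    int cache[L];
    int i, n;

    /* Warm the cache with blocks 0..L-1; index 0 is the oldest entry. */
    for (i = 0; i < L; i++)
        cache[i] = i;

    int victim = 0;                  /* now visit block 0 again */
    if (use_lru) {
        /* LRU moves the visited block to the most-recently-used position. */
        for (i = 0; i < L - 1; i++)
            cache[i] = cache[i + 1];
        cache[L - 1] = victim;
    }
    /* FIFO leaves the order untouched: block 0 is still the first to go. */

    /* Access fresh, pairwise different blocks until the victim is evicted. */
    for (n = 1; ; n++) {
        int newcomer = L + n;        /* guaranteed not yet in the cache */
        int evicted  = cache[0];     /* both policies evict the head here */
        for (i = 0; i < L - 1; i++)
            cache[i] = cache[i + 1];
        cache[L - 1] = newcomer;
        if (evicted == victim)
            return n;
    }
}

int main(void) {
    printf("LRU:  %d distinct blocks needed to evict a just-visited block\n",
           blocks_to_evict(1));      /* prints L = 4 */
    printf("FIFO: %d distinct block(s) needed in the worst case\n",
           blocks_to_evict(0));      /* prints 1 */
    return 0;
}
```

Under LRU the just-visited block survives L further distinct accesses; under FIFO a single new block can evict it in the worst case, which is consistent with the 1/L entries for FIFO in the table above.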


Garbage Collection in RTOSes

The requirements for a garbage collector to be real-time are-

  • Ensure that programs with bounded allocation rates do not run out of memory

  • Provide short blocking times – specify and maintain a maximum pause time that is short and predictable, e.g. a 50 ms threshold. There can be many short, frequent GC pauses, but the duration of each is kept to a minimum, improving performance.

Issues with incremental GC: The two sources of blocking are root scanning and heap compaction.

The Metronome Collector

The Metronome garbage collector, which is a commonly used RT-GC, limits interruptions to one millisecond and spaces them evenly throughout the application’s execution.


Stack Management in RTOSes

Proper size of the stack is important. It must be small enough to prevent memory wastage and large enough to accommodate real-time application requirements.

The ways to determine the size to allocate are-

  • Test the system under conditions that produce worst-case stack behavior – hard to predict and provoke (a stack high-water-mark sketch follows this list).

  • Calculate the theoretical maximum stack requirement (using software tools).
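
The empirical approach is usually implemented as a stack "high water mark": the stack area is pre-filled with a known pattern before the task runs, and the untouched portion afterwards shows how much was never needed. A minimal C sketch, where the pattern value, stack size, and function names are illustrative assumptions (RTOSes such as FreeRTOS provide a similar facility built in):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define STACK_WORDS  1024            /* illustrative task stack size */
#define FILL_PATTERN 0xA5A5A5A5u     /* illustrative fill pattern */

/* A statically allocated region treated as a task stack for illustration. */
static uint32_t task_stack[STACK_WORDS];

/* Fill the whole stack area with a known pattern before the task starts. */
void stack_prefill(void) {
    for (size_t i = 0; i < STACK_WORDS; i++)
        task_stack[i] = FILL_PATTERN;
}

/* Assuming a descending stack (grows from high addresses toward low ones),
 * count the untouched words at the low end; the rest is the high-water mark. */
size_t stack_high_water_mark(void) {
    size_t untouched = 0;
    while (untouched < STACK_WORDS && task_stack[untouched] == FILL_PATTERN)
        untouched++;
    return STACK_WORDS - untouched;  /* words actually used so far */
}

int main(void) {
    stack_prefill();
    /* ... run the task under worst-case load here ... */
    printf("peak stack usage: %zu words\n", stack_high_water_mark());
    return 0;
}
```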

Browsing for stack sizes through the Dalvik source history:

API 3 (Android 1.5) = 8KB

API 4-10 (Android 1.6 – Android 2.3.7) = 12KB

API 14-17 (Android 4.0 – Android 4.2.2) = 16KB

Garbage collection on the stack – The life span of variables on the stack is limited to the duration of the function. As soon as the function returns, the used stack memory will be freed.

Multi-Task Stack Sharing (MTSS) scheme

MTSS is a multi-task stack sharing technique that grows the stack of a particular task into the stacks of other tasks in the system after it has overflowed its bounds. A possible configuration for using MTSS effectively is to deliberately reduce the memory provided to each task to below what it needs; the deficit is recovered from the stacks of other tasks. This helps reduce the physical memory needed for an embedded system without reducing its reliability.


Heap Management in RTOSes

Real-time performance of the heap is not deterministic. Memory allocation time depends on such factors as previous use and the requested data space size. So, minimizing heap usage is a must in small embedded systems.

Private-Shared-Anonymous Paging (PSAP) Algorithm

The PSAP algorithm separates the physical memory space into several blocks. It is used for allocating memory pages in interactive real-time systems.

Each interactive real-time task can create a private-reserve block where pages are never stolen by other tasks. There is also a shared-reserve block where pages are shared by all interactive real-time tasks but are never stolen by best-effort tasks.

The rest of the memory space, which falls into neither a private- nor a shared-reserve block, is called the anonymous memory block, where any task is allowed to get pages.
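
The allocation order these paragraphs describe can be sketched roughly as follows. This is only an illustration of the policy, not the published PSAP implementation: the struct layout, pool model, and function names are all assumptions.

```c
#include <stdio.h>
#include <stdbool.h>

/* Minimal page-pool model: each block is just a counter of free pages. */
typedef struct { int free_pages; } page_block_t;

typedef struct {
    bool          is_interactive_rt;   /* interactive real-time task? */
    page_block_t *private_reserve;     /* pages never stolen by other tasks */
} task_t;

static page_block_t shared_reserve  = { .free_pages = 8 };   /* RT tasks only */
static page_block_t anonymous_block = { .free_pages = 32 };  /* open to all   */

static bool take_page(page_block_t *b) {
    if (b && b->free_pages > 0) { b->free_pages--; return true; }
    return false;
}

/* Rough PSAP-style order for one page request:
 * private reserve -> shared reserve -> anonymous block. */
static bool psap_alloc_page(task_t *t) {
    if (t->is_interactive_rt && take_page(t->private_reserve))
        return true;
    if (t->is_interactive_rt && take_page(&shared_reserve))
        return true;
    return take_page(&anonymous_block);  /* best-effort tasks start here */
}

int main(void) {
    page_block_t my_reserve = { .free_pages = 4 };
    task_t rt_task = { .is_interactive_rt = true,  .private_reserve = &my_reserve };
    task_t be_task = { .is_interactive_rt = false, .private_reserve = NULL };

    printf("RT task got a page: %d\n", psap_alloc_page(&rt_task));
    printf("BE task got a page: %d\n", psap_alloc_page(&be_task));
    return 0;
}
```

A real implementation would track actual page frames and handle reclaiming between blocks; the point here is only the private, then shared, then anonymous fallback order.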

Fixed-Size Blocks Allocation/Memory Pool Allocation

Uses a free list of fixed-size blocks of memory, usually all of the same size. It works well for simple embedded systems where no large objects need to be allocated. It has very low overhead, which improves performance for frequent allocation and deallocation. However, it can suffer from internal fragmentation, since requests smaller than the block size still consume a whole block.
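
A minimal free-list pool allocator in C is sketched below. The block size, block count, and function names are illustrative choices; a production RTOS pool would also add locking and alignment guarantees.

```c
#include <stdio.h>
#include <stddef.h>

#define BLOCK_SIZE  32      /* bytes per block (illustrative) */
#define BLOCK_COUNT 64      /* number of blocks in the pool   */

/* Each free block stores a pointer to the next free block in its first bytes. */
typedef union block {
    union block  *next;
    unsigned char payload[BLOCK_SIZE];
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

/* Thread all blocks onto the free list; done once at startup. */
void pool_init(void) {
    for (int i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

/* Allocation is O(1): pop the head of the free list. */
void *pool_alloc(void) {
    if (!free_list)
        return NULL;        /* pool exhausted */
    block_t *b = free_list;
    free_list = b->next;
    return b;
}

/* Deallocation is O(1): push the block back onto the free list. */
void pool_free(void *p) {
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}

int main(void) {
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    printf("allocated %p and %p\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}
```

Both pool_alloc() and pool_free() are constant-time list operations, which is what makes the scheme attractive for deterministic real-time allocation.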


Memory Allocation in Real-Time Systems

Android does not use swap-backed virtual memory, and many small real-time embedded systems avoid virtual memory altogether. Virtual memory hardware leads to increased energy usage, area cost, and design complexity. Also, the possibility of Translation Lookaside Buffer (TLB) misses is detrimental to real-time guarantees.

Lacking virtual memory, such a system will encounter a fatal error if its memory footprint exceeds the physical memory by even one byte.

Memory Requirement

Global – Size of the global segment is fixed at compile time.

Stack – Grows at run time. The compiler estimates the maximum size by measuring the stack usage of the longest path in the call graph from main() to any leaf procedure.

Stack size estimation from the call graph at compile time fails in case of –

  • Recursive functions, which create cycles in the call graph, so the maximum depth (and hence the stack usage of the longest path) is unbounded at compile time (see the sketch below).

  • Indirect calls through function pointers, where the set of possible callees is not known statically.
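
For example, a recursive function whose depth depends on a run-time input defeats the longest-path estimate; the function below is purely illustrative:

```c
#include <stdio.h>
#include <stdlib.h>

/* The compiler cannot bound this function's stack usage statically: the
 * recursion depth, and hence the number of stack frames, depends on a value
 * that is only known at run time. */
static unsigned long factorial(unsigned int n) {
    if (n <= 1)
        return 1;
    return n * factorial(n - 1);   /* cycle in the call graph */
}

int main(int argc, char **argv) {
    unsigned int n = (argc > 1) ? (unsigned int)atoi(argv[1]) : 10;
    printf("%lu\n", factorial(n));
    return 0;
}
```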


Shared Memory Management in Android

Shared memory

Each app process is forked from an existing process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources (such as activity themes).

To start a new app process, the system forks the Zygote process then loads and runs the app’s code in the new process. This allows most of the RAM pages allocated for framework code and resources to be shared across all app processes.

Most static data is mmapped into a process. This not only allows that same data to be shared between processes but also allows it to be paged out when needed.
e.g. Dalvik code (a pre-linked .odex file), app resources, and traditional project elements such as native code in .so files.
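
The mechanism underneath is the ordinary mmap system call. A minimal C sketch of mapping a file read-only so that its pages remain shareable between processes and evictable under memory pressure (the file path and error handling are illustrative):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Illustrative path; Android maps .odex and .so files in a similar way. */
    const char *path = "/system/lib/libexample.so";

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* PROT_READ + MAP_PRIVATE with no writes: the pages stay backed by the
     * file, so the kernel can share them between processes and drop them
     * under memory pressure instead of writing them to swap. */
    void *addr = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... read the mapped data ... */

    munmap(addr, (size_t)st.st_size);
    close(fd);
    return 0;
}
```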

In many places, Android shares the same dynamic RAM across processes using explicitly allocated shared memory regions (either with ashmem or gralloc).
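
On recent Android versions the NDK exposes such regions through the ASharedMemory API (available from API level 26). A minimal sketch; the region name, size, and the surrounding function are illustrative assumptions:

```c
#include <android/sharedmem.h>   /* NDK shared memory API, API level 26+ */
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int create_shared_region(void) {
    /* Create an anonymous shared memory region; the returned fd can be passed
     * to another process (e.g. over Binder) so both sides can map it. */
    int fd = ASharedMemory_create("example_region", 4096);
    if (fd < 0)
        return -1;

    void *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        close(fd);
        return -1;
    }

    memcpy(addr, "hello", 6);    /* data visible to every process mapping fd */
    munmap(addr, 4096);
    return fd;                   /* caller keeps or shares the fd */
}
```

The file descriptor is what actually gets shared: it is typically sent to another process over Binder, which then maps the same region.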


Stack Memory Management in Android

Stack management

Stack architecture for threads

Dalvik had separate stacks for native and Java code, with a default Java stack size of 32KB and a default native stack size of 1MB. ART has a unified stack for better locality. Ordinarily, the ART Thread stack size should be approximately the same as for Dalvik.

A Thread is a concurrent unit of execution. It has its own call stack for methods being invoked, their arguments and local variables. Each application has at least one thread running when it is started, the main thread, in the main ThreadGroup. The runtime keeps its own threads in the system thread group.

Dalvik uses registers, rather than a stack, as its primary units of data storage. Google hoped to achieve roughly 30 percent fewer instructions as a result.

Stack size for threads

The OS allocates the stack for each system-level thread when the thread is created. When the thread exits the stack is reclaimed. The size of the stack is set when a thread is created.

Android’s default stack size is 8KB. This gets you 60-100 stack frames, depending on how complex your methods are.
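
At the system level, the stack size is fixed when the thread is created, e.g. via the pthread stack-size attribute. A minimal C sketch; the 64 KB figure and the worker function are illustrative choices, not Android defaults:

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    puts("running with a custom stack size");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Request a 64 KB stack for this thread; the size is fixed at creation
     * and cannot be grown afterwards. PTHREAD_STACK_MIN may force a larger
     * value on some platforms. */
    pthread_attr_setstacksize(&attr, 64 * 1024);

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```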


More on Garbage Collection in Android

Garbage Collection

There appears to be a separate thread (called the HeapWorker) in each VM process that performs the garbage collection actions.

GC triggers:

  • When the Dalvik heap hits a soft occupancy limit. VM Parameter – InitiatingHeapOccupancyPercent. Default value is 45%.
  • When memory allocation for an object fails.
  • Just before triggering OutOfMemoryError – full, synchronous garbage collection is done.
  • Explicit triggering by calling System.gc()

Heap fragmentation

Android does not defragment the heap to close up space. Android can only shrink the logical heap size when there is unused space at the end of the heap. But this doesn’t mean the physical memory used by the heap can’t shrink. After garbage collection, Dalvik walks the heap and finds unused pages, then returns those pages to the kernel using madvise. (Reclaiming memory from small allocations can be much less efficient because the page used for a small allocation may still be shared with something else that has not yet been freed).
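
The madvise call mentioned above is a standard Linux interface. A minimal C sketch of handing a page-aligned, no-longer-needed range back to the kernel (the region and its size are illustrative):

```c
#include <stdio.h>
#include <sys/mman.h>

#define REGION_SIZE (16 * 4096)   /* 16 pages, illustrative */

int main(void) {
    /* Grab an anonymous, page-aligned region (a stand-in for a heap segment). */
    void *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... the allocator uses the region, then frees everything in it ... */

    /* Tell the kernel the pages' contents are no longer needed; the physical
     * pages can be reclaimed while the address range stays mapped. This works
     * only on whole pages, which is why reclaiming memory from small
     * allocations that share a page with live data is less effective. */
    if (madvise(region, REGION_SIZE, MADV_DONTNEED) != 0)
        perror("madvise");

    munmap(region, REGION_SIZE);
    return 0;
}
```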


Heap Memory Management in Android

Android Memory Management

Android does not offer swap space for memory, but it does use paging and memory-mapping (mmapping) to manage memory.

This means that any memory you modify—whether by allocating new objects or touching mmapped pages—remains resident in RAM and cannot be paged out. So the only way to completely release memory from your app is to release object references you may be holding, making the memory available to the garbage collector.

That is with one exception: any files mmapped in without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.

Heap Memory Management

Each Android application runs in a separate process within its own Dalvik instance, relinquishing all responsibility for memory and process management to the Android run time, which stops and kills processes as necessary to manage resources.

Why we have a heap: While allocation may be slower than a stack (O(log n) vs O(1)), heaps allow freeing memory at an arbitrary location to be fast – O(log n), compared to a stack’s O(n).


Android Binder Functionalities

What is Binder?

Binder is an Android-specific mechanism for achieving IPC (inter-process communication). It is a high-level implementation spanning several layers of the Android software stack. The main implementation can be found in binder.c in the kernel source code.

Binder Functionalities

One Android process can call a routine in another Android process, using binder to identify the method to invoke and pass the arguments between processes.

Some functionalities provided by the Binder IPC are:

  • Link to death
  • Notification
  • Transaction

Binder Class

Java interface to Binder. Inherited constants – From interface android.os.IBinder.

Public constructor – Binder().
