This post is about a technique for creating a trait-like system within C. I wrote the code after watching a talk about runtime polymorphism in Zig.

The talk mentions how this kind of thing can be done in C, and I am aware of a technique used in the Linux kernel where the container_of macro allows intrusive data structures to be used extensively, so I wanted to try this out myself. The idea was to implement a few concepts usually not found in the C language- iterators, scans (Haskell style), and allocators.

The implementation is not necessarily easy to create, nor to use, nor does it have the nice type-level properties or generated code that other languages might get (I'm looking at you, Rust and Haskell). However, this is C and we have limited means of abstraction, so I think it is at least interesting to look into how we might create these systems anyway.

The code is on github, and has comments with extra details that I did not replicate in this post. Please read the code for a deeper understanding of the implementation.

Overview

The concept here is to combine the container_of macro with intrusive data structures, which contain pointers to implementation functions, to create a trait-like system where a type carries the implementation of a standardized interface. Note that this is resolved at runtime- it depends on which functions you provide- unlike Haskell or Rust, where it is determined by the type at compile time. This gives us some flexibility, but it can be harder to understand (there is no unique implementation for a particular type), and it comes at a performance cost (Rust can compile away traits and inline their implementations, which is why its iterators can be so fast).

An intrusive data structure is simply a structure that is contained within another structure- for example, a linked list where the structures that make up the nodes contain the link to the next element internally, alongside the node data, instead of being pointed to from a separate Node type. This is intrusive in that it requires changing your data to be aware of the list it participates in, and it requires a new field for each data structure the type will be used with.
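
As a rough sketch (the names here are mine for illustration, not taken from the repository), an intrusive singly linked list might look like this:

/* The intrusive link contains only the pointer to the next link. */
struct Link {
    struct Link *next;
};

/* The containing structure embeds the link alongside its own data. */
struct Employee {
    const char *name;
    int id;
    struct Link link; /* lets an Employee participate in a linked list */
};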

This can be labor intensive, but it is also fast and simple in many cases, and it is not an uncommon approach in C.

One problem that arises is that functions whose signatures take a pointer to the intrusive structure, say a Node or List type, do not know about the containing structure. We might try to make sure that the List is the first field of our structures, so we could cast a pointer to the list into a pointer to the containing structure, but this limits each structure to a single intrusive field placed at the start.

Instead we have the container_of macro, which takes a pointer, the field within a structure that the pointer points to (the intrusive structure), and the type of the containing structure, and produces a pointer to the containing structure. This is just pointer arithmetic, and it's up to the user to make sure it works out correctly.

The container_of macro is similar to the offsetof macro in that it provides a huge amount of leverage for a tiny implementation. In the case of container_of, my implementation was:

#define container_of(ptr, type, member) ((char*)ptr - offsetof(type, member))

while a better, more type-safe implementation would be:

#define container_of(ptr, type, member) ({              \
  const typeof( ((type *)0)->member ) * __mptr = (ptr); \
  (type *)( (char*)__mptr - offsetof(type, member) ); })

This seems complex, but note that it casts a null pointer (0) to the containing type and then uses typeof on the member field to recover the member's type (typeof is something I rarely see in C, but it is also used for C's limited generic functions).

The pointer is then moved back by the position of the field: the offset of the member within the containing type gives the distance from the member pointer back to the start of the containing structure.
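
As a small usage sketch (continuing the hypothetical Employee example from above, inside some function), a pointer to the embedded link can be turned back into a pointer to its container:

struct Employee employee = { .name = "Ada", .id = 1 };
struct Link *lnk = &employee.link;

/* The cast is needed with the simple macro, which yields a char*;
   the typeof version already produces a struct Employee*. */
struct Employee *emp = (struct Employee*)container_of(lnk, struct Employee, link);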

See this article for more details.

My implementation is just the thing that occurred to me when writing it out- I am interested in simplicity for my toy example, not completeness.

Iterators

The iterator concept that I implemented contains only a single function called 'next'.

This function takes a pointer to a structure and returns a boolean indicating whether a new element was provided or not. If 'false' (from stdbool.h) is returned, then no new elements are possible. Other concepts of iterators are possible- look at Rust for a more fine-grained approach with more varieties of usage.
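
A minimal sketch of what the interface might look like (again, the names are illustrative rather than the exact ones used in the repository):

#include <stdbool.h>

/* The intrusive 'trait' structure: a single function pointer.
   'next' writes the next element through 'out' and reports whether
   an element was actually produced. */
struct Iter {
    bool (*next)(struct Iter *iter, void *out);
};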

As an example, I implemented a Range type which iterates over a range of numbers such as 1 through 100. Its 'next' function takes a pointer to a uint32_t and places the next number in the sequence there, returning whether there is a new number to provide.
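
A sketch of how a Range might implement the Iter interface above (hypothetical names, using the cast required by my simple container_of):

#include <stdint.h>

struct Range {
    uint32_t current;
    uint32_t end;      /* exclusive upper bound in this sketch */
    struct Iter iter;  /* embedded trait implementation */
};

static bool range_next(struct Iter *iter, void *out) {
    struct Range *range =
        (struct Range*)container_of(iter, struct Range, iter);
    if (range->current >= range->end)
        return false;
    *(uint32_t*)out = range->current++;
    return true;
}

struct Range range_new(uint32_t start, uint32_t end) {
    return (struct Range){ .current = start, .end = end,
                           .iter = { .next = range_next } };
}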

The fun thing here was to implement different loop styles using this iterator to see how it compares to normal C loops. I think it is more complex and likely a bit slower than a normal loop, but it is at least interesting, and it might be useful in more complex cases.

Unfortunately, none of the loops are really scoped well. They all require a declaration outside of the loop. I couldn't think of a way to keep the declaration inside the loop- you can declare multiple variables in a for loop's initializer, but they have to be of the same type as far as I know.
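
For example, a loop over the hypothetical Range above ends up looking roughly like this, with the output variable declared outside the loop:

/* assumes the Range/Iter sketches above, plus stdio.h */
struct Range range = range_new(1, 101);
uint32_t n;

while (range.iter.next(&range.iter, &n)) {
    printf("%u\n", (unsigned)n);
}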

The next example is a list iterator. This is interesting in that it iterates over a different type than the iterator itself- there is a ListIter type for the iterator, and a List type for the linked list itself. The iterator simply provides a pointer to each node in turn, and returns false once it runs past the last node (whose next pointer is NULL).
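
A sketch of the idea, with made-up names and a minimal List definition:

#include <stddef.h>  /* NULL */

struct Node {
    struct Node *next;
    int value;
};

struct List {
    struct Node *head;
};

struct ListIter {
    struct Node *cursor;  /* next node to hand out */
    struct Iter iter;
};

static bool list_iter_next(struct Iter *iter, void *out) {
    struct ListIter *it =
        (struct ListIter*)container_of(iter, struct ListIter, iter);
    if (it->cursor == NULL)
        return false;
    *(struct Node**)out = it->cursor;
    it->cursor = it->cursor->next;
    return true;
}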

Scans

The next concept is something I learned in Haskell land. A scan is basically a value that you want to combine with other values of the same type (usually a monoid), and then extract into a possibly different type. The functions to provide here are 'return' (extract a value), and 'append' which takes a new value and combines it with the current value.
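
A minimal sketch of the interface, with illustrative names ('return' is a keyword in C, so I call the extraction function 'extract' here; the repository may name things differently):

/* Hypothetical scan interface. */
struct Scan {
    void (*append)(struct Scan *scan, void *value);  /* combine a new value in   */
    void (*extract)(struct Scan *scan, void *out);   /* pull the final result out */
};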

The examples in the code are a sum, which adds numbers together and returns the result, and a string builder.

The string builder is somewhat interesting- it keeps track of strings given to it (char*), but does not concatenate them when they are provided. Instead, when you ask to retrieve the data, it builds up the final string all at once. This is a common pattern that can turn a naive O(n^2) algorithm into an O(n) algorithm. The concept is related to continuation passing style (CPS) if you are interested, and there is some interesting material in the Haskell community about these concepts.
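
The sum scan is the simpler of the two; here is a sketch of how it might implement the Scan interface above (again, hypothetical names):

#include <stdint.h>

struct Sum {
    uint32_t total;
    struct Scan scan;
};

static void sum_append(struct Scan *scan, void *value) {
    struct Sum *sum = (struct Sum*)container_of(scan, struct Sum, scan);
    sum->total += *(uint32_t*)value;
}

static void sum_extract(struct Scan *scan, void *out) {
    struct Sum *sum = (struct Sum*)container_of(scan, struct Sum, scan);
    *(uint32_t*)out = sum->total;
}

struct Sum sum_new(void) {
    return (struct Sum){ .total = 0,
                         .scan = { .append = sum_append,
                                   .extract = sum_extract } };
}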

Allocators

The memory allocator example is the most advanced, but also the most interesting. This example shows how to create a simple allocator interface where you can create custom allocators, and even wrap allocators in other allocators. This final point is interesting- you could in principle add features to allocators, or compose allocators or other traits, to create whole stacks of traits that log, monitor, check, and so on over other trait implementations.

The allocators require an 'alloc' function to allocate memory, a 'realloc' to reallocate an existing allocation into a larger or smaller one, and 'free' to free a previously allocated block of memory.
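
A sketch of what such an interface could look like (illustrative names and signatures, not necessarily those in the repository):

#include <stddef.h>

struct Allocator {
    void *(*alloc)(struct Allocator *a, size_t size);
    void *(*realloc)(struct Allocator *a, void *ptr, size_t new_size);
    void  (*free)(struct Allocator *a, void *ptr);
};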

I implemented a simple heap allocator, which just wraps the normal malloc/realloc/free calls in this allocator trait.
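
A sketch of that wrapper, using the hypothetical Allocator interface above:

#include <stdlib.h>

struct HeapAllocator {
    struct Allocator allocator;
};

static void *heap_alloc(struct Allocator *a, size_t size) {
    (void)a;
    return malloc(size);
}

static void *heap_realloc(struct Allocator *a, void *ptr, size_t new_size) {
    (void)a;
    return realloc(ptr, new_size);
}

static void heap_free(struct Allocator *a, void *ptr) {
    (void)a;
    free(ptr);
}

struct HeapAllocator heap_allocator_new(void) {
    return (struct HeapAllocator){
        .allocator = { .alloc   = heap_alloc,
                       .realloc = heap_realloc,
                       .free    = heap_free } };
}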

I also wrote an arena allocator which wraps another allocator and re-uses its memory, potentially providing faster allocation than always requesting from the underlying allocator. It can allocate a large block and then use a simple stack allocator on the free memory within that block. There are much better and more interesting arena allocator concepts out there- this is just for fun.

The last allocator is a bump allocator which requires the user to provide a block of memory, and hands out blocks sequentially within it. It is trivial- it does no freeing (the free function is a no-op), and doesn't even try to free in a stack-like fashion. It could be used, for example, to allocate per-frame memory in a game, which can then be freed by simply pointing the allocator back to the start of its memory (much cheaper than calling free for each allocation).
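
A sketch of the idea (hypothetical names; realloc and alignment handling are omitted here for brevity):

#include <stddef.h>

struct BumpAllocator {
    char *start;
    size_t used;
    size_t capacity;
    struct Allocator allocator;
};

static void *bump_alloc(struct Allocator *a, size_t size) {
    struct BumpAllocator *bump =
        (struct BumpAllocator*)container_of(a, struct BumpAllocator, allocator);
    if (bump->used + size > bump->capacity)
        return NULL;  /* out of space in the user-provided block */
    void *ptr = bump->start + bump->used;
    bump->used += size;
    return ptr;
}

static void bump_free(struct Allocator *a, void *ptr) {
    (void)a; (void)ptr;  /* freeing individual blocks is a no-op */
}

/* 'Free' everything at once by rewinding to the start of the block. */
void bump_reset(struct BumpAllocator *bump) {
    bump->used = 0;
}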

The bump allocator could also be used to provide a kind of allocation in situations where dynamic allocation is not allowed (such as embedded systems). If you write code that expects to be handed an allocator, and then the user chooses to give you a bump allocator, you can pretend to dynamically allocate (potentially simplifying your code), while still allowing the user to restrict what allocations actually occur. A similar, but better, strategy is carried out in Zig, and is one of the really nice features of that language.

Conclusion

Hopefully this writeup, along with the toy examples of increasing complexity, is interesting and provides some understanding of these concepts. I had fun trying out a technique I had never tried before, and I'm interested in seeing what this would feel like in Zig.