NativeOS @ 2021
A summary about the advancements of the project in 2021
2021 has been the year with the highest number of Git commits so far, 43 in total. Which is funny, because most of the work comes from the second half of the year.
Since 2021-07 I've focused on fixing the build tools and the development pipeline. Working on this project used to suck because the toolchain was far from ideal for a number of reasons, so most of this year's commits and advancements went into fixing that, to make development easier. It paid off: as of 2022-01-01, if you compile the latest version of the code, you actually get something when you run it.
The advancements began with the introduction of kcons in commit 967f92e3.
kcons is a script to generate Makefiles for building the kernel images.
It is inspired by config(1), config(8) and other similar scripts present
on many UNIX and UNIX-like operating systems.
So essentially, kcons is a script that writes Makefiles. I could have used CMake, Automake or something else, but I figured that instead of wasting my time learning how to configure custom compilers (remember the days of i386-elf-gcc?) or how to completely override CFLAGS and LDFLAGS (this is a freestanding executable, keep your ld.so off my lawn), I could simply write a script that takes a Makefile template and a list of files, and performs some expansions to derive a flat Makefile that can compile every C file and finally link the entire kernel/loader executable.
What were the problems with the old set of Makefiles?
- NativeOS used to use multiple Makefiles. Running `make` in the toplevel Makefile would recursively call `make` on the others: first for the libc, then for the other libraries required by the kernel/loader, and finally in the kernel subproject itself. This was error-prone. Sometimes a recursive make call did not recompile modified files, sometimes it unnecessarily did, and sometimes dependencies were not properly updated when recompiling another subproject. With kcons I use a single flat Makefile that takes all the kernel libraries and the kernel code as input, and it just works.
- Dynamically configuring Makefiles is difficult because you end up with a lot of variables, such as ARCH, DEBUG, CC... This is clunky: it is clunky to test a Make variable, clunky to override one, and clunky to pass these variables recursively to other Makefiles. kcons uses a small config file with instructions to set optional flags or to modify the Makefile parameters. These optional flags can even be used to conditionally include some files in a compilation only when a specific feature is requested.
- On top of that, I wanted to stop depending on gmake (the GNU Make implementation). The old Makefiles used to fail on other Make implementations such as bmake. This is not an issue on GNU/Linux, where `make` is obviously GNU Make, nor on macOS, which ships with GNU Make. It is an issue on BSD systems such as FreeBSD, where `make` points to BSD Make, so you have to explicitly install gmake. The Makefile generated by kcons is polyglot and can be used with both gmake and bmake.
kcons is written in Python 3 and lives in the /tools directory of the repository. It is not properly documented because it is not finished; I'm still adding and modifying features as I need them, so it is not yet stable for general use.
Another important refactor made to the build tools this year was switching from GCC to LLVM. GCC is a great compiler and it deserves a lot of credit, but my overall feeling is that they have been sleeping on their success. LLVM keeps working on new and innovative features, and switching to Clang for building the project will allow me to use them.
For instance, a GCC build is only able to emit code for one specific platform and CPU architecture. You usually don't notice this with the standard GCC that comes from your package manager, because it is already set up to build files for your platform (Linux, macOS, MSYS, whatever you're using) and your CPU architecture (x86_64, arm64). But if you want to build executables for a different platform and/or CPU architecture (say you want to build a Windows version of your free software while using Debian, without installing Windows; or you want to compile a big package for your Raspberry Pi using your fully featured x86_64 desktop), you need to build a cross-compiler. That is, recompiling GCC with a different set of flags specifying that you actually want GCC to build files for aarch64-elf, not for x86_64-linux-none.
Unlike GCC, LLVM uses a modular design composed of frontends, backends and the IR (intermediate representation), so the same clang you install from your OS package manager (or even the Windows installer downloaded from llvm.org) can generate code for any supported architecture and platform. Even the clang provided by Apple when you install the Developer Tools on a Mac can build executables for some other system architectures.
Replacing GCC with LLVM makes it easier to bootstrap the source code and start writing, because it removes a slow and clunky step: building a cross-compiler, which takes time and disk space, and requires tweaking the terminal's PATH to avoid picking up the system GCC.
Some other benefits after switching to LLVM:
- clangd, the language server. On text editors such as Vim or VSCode that support the language server protocol, it is now possible to get real-time help from clangd while writing code: code completion and function documentation. Using Bear, it is possible to generate a compilation database file that clangd picks up in order to autocomplete.
- clang-format, the code formatter. Code formatters for C have existed for decades: the first version of indent was written in 1976. That is a long time ago. However, indent (and GNU indent, its descendant) is difficult to configure. clang-format uses a YAML file with proper key names and values; there is a spec that explains every possible option, and there are even sensible defaults. You configure clang-format once, and as long as you run the formatter before committing code to the project, you can forget about it. I use vim-ale, so every time I save my code, it gets formatted automatically.
stdkern is the new name for libc. Despite its name, libc was only ever used by the kernel, so it makes sense to call it stdkern instead.
stdkern is the kernel standard library. Because kernels and loaders are freestanding applications (they don't run on top of an OS because they are the OS), there is no hosted standard library you can use. Some people writing homebrew operating systems manage to link their kernel against musl, newlib and other "micro" standard libraries, but I did not want to go this way because I don't want to keep adding fragile dependencies. So instead, whenever the kernel needs a C standard library function such as strcpy or memset, I'll just drop a simple implementation into /kernel/stdkern and call it a day.
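To illustrate the spirit of this, here is a hedged sketch of two such freestanding implementations. The names are prefixed with k only to keep the sketch self-contained; the actual functions in /kernel/stdkern use the standard names and may be written differently.

```c
#include <stddef.h>

/* Minimal freestanding implementations in the spirit of stdkern:
 * no libc dependency, just enough for the kernel's own use. */

void *kmemset(void *dst, int value, size_t count)
{
    unsigned char *p = dst;
    while (count--)
        *p++ = (unsigned char) value;
    return dst;
}

char *kstrcpy(char *dst, const char *src)
{
    char *ret = dst;
    while ((*dst++ = *src++) != '\0')
        ;
    return ret;
}
```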
VFS: here comes the important stuff
The refactors are great, but what about real features? After many years, I finally finished the first iteration of the VFS: source:kernel/sys/vfs.h.
What is a VFS? It is a virtual file system. If you know object-oriented programming, you know about interfaces: as long as your code implements an interface, callers can use it regardless of the actual implementation. The VFS is similar. The file vfs.h defines some data structures and some functions. As long as the real file system driver fills in these data structures and its functions follow the ABI properly, the kernel will not care about the internals of the file system.
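The function-pointer-table idea can be sketched like this. Everything below is invented for illustration; the real structures in kernel/sys/vfs.h use different names and fields.

```c
#include <stddef.h>

/* Hedged sketch of the interface idea behind a VFS: the kernel talks
 * to a table of function pointers, and each file system driver fills
 * in its own table. */
struct vfs_node;

struct vfs_ops {
    size_t (*read)(struct vfs_node *node, void *buf, size_t len);
    size_t (*write)(struct vfs_node *node, const void *buf, size_t len);
};

struct vfs_node {
    const char *name;
    const struct vfs_ops *ops; /* the "interface" this node implements */
    void *driver_data;         /* private state of the driver */
};

/* The kernel-facing call dispatches through the table without knowing
 * which file system is behind the node. */
size_t fs_read(struct vfs_node *node, void *buf, size_t len)
{
    if (node == NULL || node->ops == NULL || node->ops->read == NULL)
        return 0;
    return node->ops->read(node, buf, len);
}

/* A trivial example driver: a "zero device" whose read fills the
 * buffer with zero bytes. */
static size_t zero_read(struct vfs_node *node, void *buf, size_t len)
{
    unsigned char *p = buf;
    (void) node;
    for (size_t i = 0; i < len; i++)
        p[i] = 0;
    return len;
}

const struct vfs_ops zero_ops = { zero_read, NULL };
```

The kernel only ever calls fs_read; swapping the ops table swaps the file system behind it.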
So, for instance, fs_devfs.c (I am linking to a specific blob here because this file might disappear in 2022) implements the device file system. It uses the VFS ABI to translate file system calls such as fs_read or fs_write into code that interacts with the Device API. For instance, calling fs_read on the "keyboard file" will actually read keys, and calling fs_write on the "serial port file" will actually transmit bytes through the COM port.
The second iteration of the VFS is already in progress, along with a file system driver for the initrd. I need to allow drivers to accept parameters when a file system is being initialised. For instance, the TAR file system driver used by the initrd might receive the memory address of the initrd file, so that multiple initrd files can be mounted if the TAR mount() function is called multiple times with different memory addresses.
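A hedged sketch of that mount-time parameter idea follows. All names here are invented; as the post says, the real driver interface is still in flux.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of mount-time parameters: each driver receives an opaque
 * pointer at mount time and interprets it however it needs. */
struct tar_params {
    uintptr_t base; /* memory address of the initrd TAR image */
};

struct mount {
    const void *fs_state; /* driver-private state for this mount */
};

#define TAR_MAX_MOUNTS 4
static struct tar_params tar_state[TAR_MAX_MOUNTS];
static int tar_mounts = 0;

/* Each call with a different address yields an independent mount,
 * so several initrd files can coexist. */
int tar_mount(struct mount *mnt, const void *params)
{
    const struct tar_params *p = params;
    if (p == NULL || tar_mounts >= TAR_MAX_MOUNTS)
        return -1;
    tar_state[tar_mounts] = *p;
    mnt->fs_state = &tar_state[tar_mounts];
    tar_mounts++;
    return 0;
}
```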
That is why I don't want you to look at this code too closely yet, and why I haven't documented this part of the code: some data structures will be added, some will change, and some will be deleted.
Many people following OS development tutorials create a main function that usually looks like this:
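That is, a hardcoded sequence of init_ calls, one per device. The function names below are invented for illustration, and the bodies are stubs; in a real kernel each one would program the corresponding piece of hardware:

```c
/* Tutorial-style kernel entry point: every device is initialised
 * unconditionally, in a fixed order. */
static int devices_up = 0; /* stand-in for real hardware setup */

static void init_gdt(void)      { devices_up++; }
static void init_idt(void)      { devices_up++; }
static void init_keyboard(void) { devices_up++; }
static void init_serial(void)   { devices_up++; }

void kmain(void)
{
    init_gdt();
    init_idt();
    init_keyboard();
    init_serial();
}
```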
This is fine. In fact, I started writing code like this at the beginning of the project. But it makes it difficult to conditionally add some devices. A Device API was added this year: source:kernel/sys/device.h. It works similarly to the VFS: as long as you implement the data structures properly and your functions follow the ABI, the OS will accept the driver.
Many "drivers" that used to exist as init_ functions in older revisions of the code have been rewritten to use this new ABI and are being added to the /kernel/device directory. (If you want to look into these files, I suggest starting with the null device (null.c), my implementation of a /dev/zero and a /dev/null device, because it is small and makes the data structures easy to follow.)
The DEVICE_DESCRIPTOR macro has to be invoked in the driver source code. It groups pointers to the driver's device data structures into a dedicated section of the ELF file. The device_init() function then iterates over these pointers and calls the init() function of each driver.
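The general ELF-section technique can be sketched like this. The macro and struct names are simplified guesses, not the repository's actual definitions; the sketch relies on GCC/clang emitting `__start_<section>`/`__stop_<section>` symbols for any ELF section whose name is a valid C identifier.

```c
#include <stddef.h>

/* Simplified sketch of the section trick: each driver drops a pointer
 * to its descriptor into a dedicated ELF section, and device_init()
 * walks that section. Names are illustrative, not the real ones. */
struct device_descriptor {
    const char *name;
    int (*init)(void);
};

/* Place a pointer to the descriptor into the "devices" section. */
#define DEVICE_DESCRIPTOR(desc) \
    static const struct device_descriptor *desc##_entry \
        __attribute__((used, section("devices"))) = &(desc)

/* Symbols synthesised by the linker for the section boundaries. */
extern const struct device_descriptor *__start_devices[];
extern const struct device_descriptor *__stop_devices[];

/* Example driver registering itself. */
static int devices_ready = 0;
static int null_init(void) { devices_ready++; return 0; }
static const struct device_descriptor null_dev = { "null", null_init };
DEVICE_DESCRIPTOR(null_dev);

/* Walk every registered descriptor and run its init hook. */
void device_init(void)
{
    for (const struct device_descriptor **d = __start_devices;
         d < __stop_devices; d++)
        (*d)->init();
}
```

The nice property is that drivers register themselves just by being linked in; device_init() never needs an explicit list.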
Initial work for this was added in 2111c454. It is still a WIP because some rough corners remain, such as driver dependencies, to make the kernel init some drivers before others, but at least it works at the moment.
What comes next
These are some of the things on my todo list that I'd like to work on next. Not everything is going to happen in 2022, because I only work on this project during some specific weeks of the year:
- Finishing the virtual memory manager and the scheduler. At this point I have learned about most of the i386 data structures involved, so I just have to write the code.
- Finishing the initrd and the TAR file system.
- More devices. Some early support for PCI, to enable things like reading from hard drives.
- The ELF loader. I worked on this a few years ago; I just have to finish it.
- System calls.
- Dropping from kernel land to user land for the first time.
Lower priorities on the todo list; these will eventually be done, but they are not important right now:
- User interface.
- Text shell.