So the most popular operating systems are written in a mix of assembly, C, and (to a lesser extent) C++.
But I'm not sure how this is done. C and C++ need compilers and linkers, so do you need to write your own compiler and linker from scratch using assembly? Because how does the OS know what C or C++ even is?
Compilers operate in user space rather than kernel space, so how can an OS be written in C or C++ if the compiler itself is limited to user space?
Hope that makes sense,
Thanks
* On a side note: assembly itself needs an assembler, so how does the OS know about assembly?
do you need to write your own compiler and linker from scratch using assembly?
All you need is for your OS to be able to understand the binary format that the linker outputs. The binary will usually contain things such as where to load each part, which parts to modify if the binary needs to be relocated, the actual binary content, etc.
Compilers operate in user space rather than kernel space, so how can an OS be written in C or C++ if the compiler itself is limited to user space?
You build on one computer that already has a userland and you execute on a different computer, which doesn't. Nowadays, usually the last step of your toolchain grabs all the binaries and builds an image that can be read and booted by a virtual machine.
because how does the OS know what C or C++ even is?
how does the OS know about assembly
You're writing the OS itself in assembly and C/C++. Asking this is like asking "how does my console program know about C++?"
The answer, obviously, is that it doesn't. You compile C++ and assembly into machine code and you put that binary code onto a storage device, which you then plug or insert into a computer. Then when you turn the computer on, an automatic process that's built into the hardware copies your code into some predefined region of memory and tells the CPU to start executing there.
The computer obviously doesn't know anything about C++ or Assembly. All it understands is its own machine code.
You build on one computer that already has a userland and you execute on a different computer, which doesn't. Nowadays, usually the last step of your toolchain grabs all the binaries and builds an image that can be read and booted by a virtual machine.
That would make sense.
All you need is for your OS to be able to understand the binary format that the linker outputs. The binary will usually contain things such as where to load each part, which parts to modify if the binary needs to be relocated, the actual binary content, etc.
So the compiler and linker would create an executable, right? The only other thing I could think of them creating is a static or dynamic library.
So in theory, if we were to create a very (very) basic operating system, what would the file extension be? Would the OS just be one large .exe file? I would have thought that operating systems would be in the form of an image or ISO file, but a compiler and linker don't output ISO files, right?
So the compiler and linker would create an executable, right? The only other thing I could think of them creating is a static or dynamic library.
It would be a file with an entry point that the bootloader can load and start running without too much of a fuss.
what would the file extension be?
The file extension is just part of the name. Do you mean the file format? Usually hobbyists use ELF nowadays.
I would have thought that operating systems would be in the form of an image or ISO file, but a compiler and linker don't output ISO files, right?
Building the bootable media is a step after compilation. VM image formats are usually simple enough, so you just take your bootloader and kernel and put them where the computer can find them.
Obviously for a hobbyist project you don't output an ISO. You're not going to be installing your OS on any real computers. Even if you were, you'd just write your data directly to the sectors of a USB drive. You don't need or want anything too sophisticated for a simplistic kernel.