I'm not sure why, but whenever I see a link pointing to SourceForge my instant gut reaction is that it'll be decade-old code that no longer works properly. I always avoid SourceForge when searching for solutions. I don't know why I have this bias.
I have the exact opposite reaction: "This might be something that's been around for a long time, so it's probably more interesting than most of the vapid, insipid, overadvertised crap that passes for knowledge today."
...and indeed, this page has no ads and is in a very readable style, although it's actually quite new.
It's not necessarily centered around GitHub (although many open source projects are hosted there) -- there are also GitLab and Bitbucket. The bigger reason there's a reaction against SourceForge is that it has a long history of deceptive packaging practices, shady ads, and injecting malware into the packages it was distributing. SourceForge is now under new, better management, but it's hard to kick that reputation.
I like Bitbucket. For Git hosting, it's faster than Github (maybe their servers are closer to me?). I also find my way around Bitbucket's UI more easily.
I hear you. It pains me that it won out instead of the i960 with an emulation mode for backward compatibility. As far as learning on x86 goes, two counterpoints come to mind:
1. Most sample code out there for "real" OS's is x86. Most projects that aren't embedded will target it. So you might bite the bullet for that reason.
2. The other reason is if the end goal is to improve software on x86 hardware, i.e. the learner explicitly wants to take advantage of high-performance x86 chips or code.
A shortcut might be porting more sample OS's to non-x86 CPU's. Learners pick up each concept on a simple architecture first, then tackle the ugly one later. Alternatively, abstract away what one can, like the Flux OSKit project did. That was how a number of OS's were built in high-level languages without having to redo all the low-level stuff.
There were several versions of the Oberon OS as they explored new languages and hardware. The first relevant here is Native Oberon, using the original Oberon language:
They also did one, Active Oberon System, based on a multi-threaded language called Active Oberon. The latest version of that was A2 Bluebottle. I ran its ISO in VirtualBox a year or two ago to get a bare-bones, but fast, experience.
You might also love the alternate history we missed in the Juice project to replace Java applets (or today's JS apps) with Oberon apps sent as compressed abstract-syntax trees. If you know how fast Go is, then you'll know what we lost when it got ignored.
Oh these are great, thanks so much for the links and detailed response. I can't wait to play with these. I had one question about this comment in regard to compressed ASTs:
>" If you know how fast Go is, then you'll know what we lost when it got ignored"
I am intrigued by that statement but I don't understand it. Can you elaborate? Cheers.
Go is a Wirth-like language. Wirth's measure for the complexity of a language was how long it took to compile. He'd take features out if compiles got too slow. So the Wirth compilers were fast, on the order of tens to hundreds of thousands of lines of code compiled per second. They didn't optimize a ton either, but the code was fast enough.
So, with Juice, you're looking at getting an Oberon program or subset of Go delivered to your computer as a compressed syntax tree, type-checked + compiled in a second, and then running safely at native performance. Much better than JavaScript or Java applets. It would've had way fewer vulnerabilities, too, since Oberon effectively has no runtime except the GC. It's also memory-safe.
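To make the compressed-tree idea concrete, here's a sketch in Python (using the stdlib ast and zlib modules purely for illustration -- Juice used its own compact binary encoding of Oberon ASTs, nothing like this):

```python
import ast
import zlib

# The "server" side: parse once, ship the tree rather than the text.
source = """
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
"""
tree = ast.parse(source)
payload = zlib.compress(ast.dump(tree).encode())

# The "client" side: expand the payload and hand it straight to a fast
# back end -- no tokenizing or parsing needed on arrival.
expanded = zlib.decompress(payload).decode()
assert expanded == ast.dump(ast.parse(source))
```

The point isn't the compression ratio on a toy function; it's that the receiver skips the compiler front end entirely and can go straight to type-checking and native code generation.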
EDIT: Looking at Pascal stuff, I found an example of Wirth-style compile speed in a Delphi discussion. Delphi was the VC++ alternative from the Pascal family. One commenter on HN talking about a legacy codebase praised the compile speed: "a full build of 2 million lines takes less than a minute on a single core." That's for an exe with no VM's, dependencies, etc., that runs on vanilla x86 & Win32. Or alternative platforms with the FOSS Lazarus.
I see. I originally thought you were suggesting that Go compilation is slow, but I understand now: with Go we got fast compile times, but a lot of the other benefits of Oberon/Wirth were left on the table.
I remember reading that Go took some inspiration from Oberon.
This is a good info-graphic on where Oberon and some of the Wirth languages fit in the Algol tree if anyone else is interested:
Yeah, vs Go, the Oberons could be used for OS's, were modified for the web in Juice, were simpler, and so on. I like the first chart in the Quora article as it shows the modern, imperative programmer that things were more complex than just ASM, the C language, VB6, "application" languages, and "scripting" languages. ;) The second is somewhat irksome because it doesn't have Ada, although I might have overlooked it given all the text; it's easier to read than the first. Maybe just add Ada in there. While we're at it, I think these should maybe also include SPARK as its own entry, given it was the first language that let you automatically prove the absence of errors without manual formal verification. Used to great effect in IRONSIDES DNS, SPARKSkein, and Muen.
Honestly, the other platforms tend to be at least equally weird. Particularly given how there tends to be no way for software to interrogate the hardware on most non-x86 platforms, so you have to just sort of hardcode the topology (either directly in the kernel or in a device tree type file that you generate).
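For anyone who hasn't seen one, a device tree source file spells out exactly the topology the hardware can't report. This is a hypothetical minimal fragment in the style of a Linux .dts file -- the board name, addresses, and sizes are invented for illustration:

```dts
/dts-v1/;
/ {
    compatible = "example,devboard";   /* hypothetical board name */
    #address-cells = <1>;
    #size-cells = <1>;

    memory@80000000 {
        device_type = "memory";
        reg = <0x80000000 0x10000000>; /* 256 MiB at 0x80000000, hardcoded */
    };

    uart0: serial@10000000 {
        compatible = "ns16550a";
        reg = <0x10000000 0x100>;      /* MMIO base and size, hardcoded */
    };
};
```

Nothing here is discoverable at runtime on such a board; if the file disagrees with the silicon, the kernel simply pokes the wrong addresses.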
Well-written primer; it's so cool to see something complex reduced to its simplest form while still capturing the essence.
Funny to read how PC memory can be "millions of bytes on modern machines". To be fair, I was using a computer with only 128MB as recently as 7 years ago :-)
Awesome. We need resources like this to get people into low level programming.
The best class I ever took as an undergrad was one where we built our own OS for a Data General Aviion workstation. Our systems staff was already under contract to help port 4.3BSD to the Aviion, and they had tons of test hardware and docs. So one of the group taught an OS class where they basically turned us loose with a cross-toolchain and a manual and let us hack all semester. I think by the time we were done, we had something that booted and interacted with a very basic shell (but could not fork, no multi-tasking, no mem prot, etc).
If it were not for this class, I think the more theoretical OS class that I took later in grad school (all dreadfully boring queueing theory style stuff) would have turned me off to doing OS work. Instead, I've had a career doing lots of low-level stuff (drivers, OS ports to new CPUs, network stack improvements, etc).
Writing an OS for the Raspberry Pi is harder work than it should be because of the Broadcom SoC. It's a closed, proprietary chip that does a lot of the heavy lifting (including the initial boot mechanism). My advice, if you want to write an OS targeting a developer board, is to use one of the many open hardware platforms rather than the Raspberry Pi specifically. Raspberry Pis are better for tinkering with stuff that runs atop the OS rather than designing kernels.
Which platforms (hardware) are good for this? I mostly want to mess with Linux on the Pi, but wouldn't mind trying some assembly on a cheaper board to learn more about bare metal programming.
Readers understand this isn't close to being an operating system, right? The BIOS loads the first sector of a floppy for you for free. This isn't even as complex as the simplest boot loader: it doesn't handle loading more sectors from the floppy, let alone anything OS-like.
It's hello world in assembly, loaded onto the first sector of an emulated floppy. That's neat, but it has nothing to do with operating systems, other than that you've now learned the very first stage of bootstrapping from a floppy disk back in the 90s. Which is cool, and good info to know and start out with.
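For context on what that first sector actually is: the BIOS treats a floppy's sector 0 as bootable if it's 512 bytes ending in the signature 0x55 0xAA, copies it to address 0x7C00, and jumps to it. Here's a minimal sketch in Python that builds such an image; the two code bytes are just an x86 "jmp $" spin loop standing in for a real hello-world stub, and "boot.img" is a made-up filename:

```python
# Two bytes of x86 machine code: EB FE is "jmp $", i.e. spin in place.
code = bytes([0xEB, 0xFE])

# Pad the code out to 510 bytes, then append the mandatory boot
# signature at offsets 510-511. Total: exactly one 512-byte sector.
sector = code.ljust(510, b"\x00") + b"\x55\xAA"
assert len(sector) == 512

with open("boot.img", "wb") as f:
    f.write(sector)
```

Something like `qemu-system-i386 -fda boot.img` would then boot it, though it will just hang by design.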
I get that the vast majority of people seeing this post have probably only ever experienced Node.js, or Ruby on Rails, or something equally high level. But this is trivial with respect to an operating system, or even a boot loader.
No, we don't have 'our biases' you're just plain wrong.
An operating system is everything needed to run a machine, a kernel is the foundation on top of which all that user space code runs.
So from a Unix perspective, everything in /usr/bin, /bin, /usr/local, /etc, /boot and whatever other directories are there after you install makes up the operating system. Whatever you install after you get the base system up and running are applications, the programs we run on top of the operating system.
Now, the line can get a little blurry: is 'X' part of the operating system or not? Is your window manager? Are the various built-ins? Probably yes, but not always.
Is a text editor part of your OS? Probably not, but there is a good case to be made that without a text editor of any kind an operating system is fairly useless. That doesn't make 'libreoffice' or 'sublime' part of your OS, though; those are applications.
For myself, I draw the line where a minimum base install stops, so everything up to and including window manager for a desktop machine (for a headless machine or server much less than that).
From a very technical perspective you could have an operating system that consists of just a kernel and one user space program, in that case the kernel really would be the entirety of the operating system. But that's a pretty rare case (though it can be done).
This is all written from a UNIX/Linux perspective; for OS X you might draw the lines a little differently, and for MS/Windows differently still. But those basic principles apply.
What you are trying to get at is that everything in userspace is not part of the kernel and that is correct. But the kernel does not make a complete operating system.
This is also the reason why the whole GNU/Linux thing existed, without the userland that GNU provided the Linux kernel would have been pretty useless.
I think it's worth adding, though, that articles like these aren't kernels; they're just programs running on the bare metal. This stuff (booting the computer and getting to a sane environment) is really the easy part. Designing the kernel and creating the facilities it provides to programs is the bulk of the work when writing a kernel/OS.
That's not to say these articles are bad, they're still fun, but it's like a "simple emulator in 24 hours" article that just sets up a simple GUI and never actually starts designing the emulator. It's technically the start of an emulator, and that part might even be complicated, but it's still not really relevant to designing an emulator.