This is a transcript of a video by moody: https://www.youtube.com/watch?v=1s4Jhuoq67I Hello, this is going to be a video series about the Plan 9 operating system. This first instalment is a short, brief overview of the entire system: some of its history, why I think it's useful to learn the system, and why I think you might find it useful to learn the system. First though, a little bit about myself: my name is moody, I work on 9front (that's a fork of Plan 9, we'll get into that a little bit later), and I have a list of some things that I've worked on for the system — not comprehensive, but enough to give an idea of where my expertise lies and what I'm familiar with. So what is Plan 9? Well, Plan 9 is a full operating system. That means it has a kernel, it has a user space, and everything is all its own — we don't use GNU, we don't use musl, we don't use BSD. It's all original to Plan 9. We have our own compilers, our own kernel, our own libc, which is quite unique; there are really not a lot of systems that do that. I mean, even macOS now ships something like clang for its compilers for the most part. But I digress, it's not super important. A lot of the design is in the same spirit as early Unix. Not necessarily Unix-like, I would say, with how Unix is interpreted these days, but a lot of the design and motivation was a descendant of the original Unix philosophy of programs communicating and composable pieces. Plan 9 itself was written by Rob Pike, Ken Thompson, and other members of the Unix room (Phil Winterbottom, Tom Duff, and so on). What's interesting about Plan 9 is that it has this understanding that the world is now a networked collection of machines, and any given person may have a number of machines they want to access.
The composition of resources from these machines is something that was very much at the forefront of the design of the system, which is really interesting. We'll get more into the implications of that and how it works later, but just know for now that things are designed from the get-go to be network-transparent, to be understandable and composable from a multi-machine layer of abstraction. In order to really talk about Plan 9 and the reason it is the way it is, you have to understand some of its history. I would say that Plan 9 was written with Unix in hindsight — there were complaints that had built up about the system, so after working on Unix and using it for quite a while, Ken Thompson and Rob Pike wanted to use Plan 9 as a clean slate: let's walk it back, we don't have to make any promises of compatibility, we don't have to do any of this extra work, let's just try to get this right again. The reason this was important, and why they couldn't do it in place with Unix, has to do with how the community at large around Unix had been developing at the time. While Unix was being used within Bell Labs, it had also gotten out into the world. It was being used at places like Berkeley, the University of Toronto, and many, many other places; if you were a university with a CompSci program, you probably had a Unix license. And the thing is, when students and professors get their hands on this type of software, they want to write code for it, and sometimes that code ends up proliferating a lot. For other interesting historical reasons that are outside the scope of this video, Bell Labs was not exactly interested in TCP/IP, so that was all done by Bill Joy over at Berkeley. Things like the vi editor: Bill Joy over at Berkeley. csh also started over at Berkeley.
So a lot of what moved the Unix world, and now the Linux world, forward to some degree were innovations and designs driven by people not at Bell Labs. This is in stark contrast to how things work with Linux today — if Linus Torvalds wants something to change, he can put his foot down and say "this is how these things should happen", "I think this is the way forward", and a lot of people will listen and make that so. In the old days there was no internet. Ken Thompson was not available to the world, and his ideas about how these things should work weren't widely known, so people were more free to just take their code and run with it, just do things. Fly-by-night: put code up, it gets adopted, and this is how we end up with things that are, to be a little charitable, not exactly designed how I think Ken Thompson and Rob Pike would have designed them — things like BSD sockets, ioctl, and things of that nature are a little un-Unix in my opinion, but we're getting a little sidetracked here. In a lot of ways, I want to point this out, I think Bill Joy and Rob Pike have a very interesting rivalry in terms of the things they did. They both wrote editors, they both wrote graphical editors, they were each the chosen son of Unix in terms of innovating on it, and they have each taken the world in different directions. It's very interesting to compare and contrast them, but again, I don't want to get too much into it, and I'll say this is mostly my own interpretation. There's no hard evidence that this is exactly how things went; this is just how I've interpreted the narrative. So, to get back on track, what makes Plan 9 interesting? Well, I think the number one reason is that the code is what I call "human scale". One single person can understand all of the code in Plan 9, and that's crazy to think about.
What goes on in a modern computer — the Linux kernel, GCC, the tools that we use every day — is reaching a point where it is impossible for one person to comprehend everything. I'm sitting here recording this on a Linux machine, and I have no hope of understanding every piece of code that is required for my machine to boot; it's just impossible. The code, the line count, the complexity are all astronomically higher than what is ideal. I'm not going to comment on whether or not that's mandatory, but it's certainly not ideal. With Plan 9 there is a lot less code — we try to be really conservative with where we invest code, where we invest time, and how things move forward. The kernel is much, much smaller, the compilers are much, much smaller, a lot of the utilities are much, much smaller, and this allows one person to keep a lot of the system in their head, which I think really impacts how you're able to program the system and how you're able to understand and utilize it, because you don't have blind spots when you're programming, reading, or trying to configure it. You can keep it right up in your head. Like I mentioned earlier, a lot of the ideas for how these things are constructed are based on Unix. File system interfaces — files — are inherently network-transparent due to 9P, and we'll get more into that in a little bit. We have first-class namespaces, which allow us to do a lot of stuff in both the security domain and the organizational domain. The namespaces are very powerful. In fact, they're so powerful that they have become a mainstay over in the Linux world, and of course those namespaces have been used to implement things like Docker — there's a direct lineage between Plan 9's implementation of namespaces and how people use Docker these days (an interesting historical tidbit). All of this is done with a very limited system call interface.
The system creates this interface for "here's how you interact with files", and we try to put as much on top of that file interface as we can get away with, in order to keep that surface area between user space and the kernel a very limited, very well-understood portion. So how does Plan 9 accomplish a lot of what it does? I mentioned the 9P protocol. That is essentially exposing a file system over a network. You have some file system tree, like your home directory, and you'd like to make it available to someone else on the network: you use 9P for that. It's similar to NFS or SMB, but a lot simpler in a lot of regards. It's pretty easy to implement a 9P server; it's a lot more complicated to implement something like an SMB or NFS server — in fact, I've implemented a couple of 9P servers, but anyway, that's beside the point. Any program in user space can mount a tree over 9P, so bringing in a 9P directory or tree from another system is a namespace operation. Any process can mount something, which is a huge change from how things work over in the Unix world, and we can do this because we don't have setuid binaries — we don't have them, we don't want them. A lot of the capabilities of the kernel — networking interfaces, pipes, processes — are exposed through files as well. When you combine this together, with so much done through files, some of it by user space and some of it by the kernel, you end up with an environment which is not necessarily a monolithic kernel and not necessarily a microkernel. People call it a hybrid approach, and I think that's pretty accurate: the environment a program runs under is based on utilities from both user space and kernel space, but the key there is that the program doesn't care which side an interface is coming from. If you have a pipe device, it doesn't matter if it came from user space or from the kernel.
As long as they expose the same semantics, the program doesn't care. That's really useful, and we'll talk a little more about the implications of a design like this. For example, opening a TCP connection is done through /net/tcp. Everything is done with open(), read(), write(), and close(), which means that any program, in any programming language on the system, that can do these simple things can create TCP connections — and that includes the shell. You can do all of this with our shell, rc, and create TCP connections without much work at all. Proxies can be achieved by having access to a remote /net/tcp. Bear with me here, I don't exactly have an illustration for this, but say you have two machines: machine A connects to machine B and has machine B export its /net over 9P. That means programs on machine A can transparently make use of machine B's network stack, which is really quite powerful. You essentially get a proxy: if I would like all my traffic to go through a remote machine, I can just mount that machine's /net, and everything I do is proxied through it now. Easy — it just falls out of the design, as we like to say. SSH tunneling is a really good example of the whole "is it implemented by the kernel or by user space, and that not mattering much". SSH, if you haven't used it before, has the capability of proxying network traffic: you can do what's called SSH forwarding and reverse forwarding (sometimes called port forwarding and reverse port forwarding), where you connect to a remote machine and say "okay, remote machine, make this connection for me", or "remote machine, listen on this port and forward connections over to me".
Well, in Plan 9 we have this implemented as a program that connects to a machine through SSH and then exposes a /net from user space, so you can import that /net into your namespace and make connections that go through the SSH /net, which then go through the real /net and show up on the remote machine. I can just start a web browser, or whatever, under this /net/tcp, and it will transparently use the remote system's network without any knowledge whatsoever of what's happening. When you have these building blocks, you get a lot of really cool features basically for free, just from how these things are organized. This is what I think is really powerful about the design of Plan 9 — you get a lot for free just by really paying attention to how you implement features. Now that I've explained why I think Plan 9 is great, we should talk about who Plan 9 is for — who would make good use of the Plan 9 operating system. I use it because I really enjoy programming in my own time. I like working on clean systems. Writing code for Plan 9 is a very enjoyable process. If you like programming in your free time and the notion of working on an operating system sounds interesting to you, I totally suggest checking Plan 9 out. Of course, it's not all programmers — we do have users that are less technical and more user-focused. They don't necessarily write code for the system. That's not me, so I can't really speak to that experience, but there seems to be a desire for clean and tidy interfaces, and the way Plan 9 works seems to click well with the people using the system. I've often referred to Plan 9 as a sort of zen garden — I spend all day working on complicated systems at my day job, and I come home and get to deal with this more idealized environment that is Plan 9. Of course, there are also times when you shouldn't use Plan 9.
If you're expecting a modern web browser, don't use Plan 9 — we don't have Chrome, we don't have Firefox, and you're not going to be watching TikTok videos on Plan 9. A lot of the benefits of Plan 9 are there because there was careful consideration for every new feature that was added. Adding complexity and features in order to reach parity with something like a web browser in the current age would essentially require you to reimplement something like Linux or Unix on top of Plan 9, with all the features that would be required. That's not exactly something we're interested in, nor is it something we think other people should be super interested in. There are some exceptions: we have NetSurf — it doesn't really count, but it's there. If you're expecting some particular Linux or Unix program, if you can't live without Emacs and can't fathom using other programs, then Plan 9 might not be for you. Learning the system and utilizing it really makes sense when you walk in and leave your assumptions at the door, and open your mind to experiencing the way Plan 9 accomplishes the tasks it wants to accomplish. Also, everything is written in C. If you hate C, you probably won't like programming on Plan 9 — just a heads-up about that. I think it's important to say that most people who use Plan 9, in my experience, don't use only Plan 9. Right now, I'm recording this from a Linux machine that's connected to a Plan 9 machine. That's how I do most of my Plan 9 development. I know a lot of people that do it the other way, where they have a Plan 9 machine but will connect to a remote Linux machine and do things there when they need to, or use VNC.
If you have just one laptop — if you only have one machine and you have to pick Plan 9 or Linux or Windows or whatever — then just pick something that's going to help you get your job done. But if you have a spare machine and you want to learn and immerse yourself in a different, sort of alien world of possibilities, then give it a go, give it a try. This is all just to say: walk in with a clean mind, walk in with the expectation that you're going to learn something you perhaps weren't expecting to learn, or use a program you weren't expecting to use. The last thing I want to talk about here is 9front, and this will segue into the next part of this series, where we really dig into how to set this up and how to use it, but for now let's just talk about this. 9front is a fork of Plan 9 that has continued development. Plan 9 is no longer being developed upstream — the people that worked on it are no longer at Bell Labs. 9front was basically people from the community saying "we'd like to have a repository where bugs can be fixed and new features can be shared", and they built their own community around it, because there was a lack of community around the original releases. A lot of work has been put into 9front to make Plan 9 work on modern hardware: UEFI, modern ACPI support, NVMe support. A lot of what people take for granted that an operating system does for them was not exactly true of Plan 9, but 9front made it so. Lots and lots of bug fixes, small enhancements all over the place, and new programs that I think are quite useful. The way I'd currently suggest using Plan 9 is to use 9front. It runs great in a virtual machine — QEMU, Hyper-V, VMware (not VirtualBox; don't expect VirtualBox to work great, because VirtualBox has some issues, but I digress). The use of drawterm really makes it quite easy to use a 9front machine from a different operating system. What I'm presenting this in is a program called Spit that was written by a member of the community.
I can un-fullscreen that here — that's my little editor, this is what I was writing the presentation in — and I'm recording this from a Linux machine, and what I'm using here is drawterm. You essentially have a Plan-9-in-a-box type deal, where everything runs as you would expect and you're actually remoted into the system. Its closest cousin, I would say, would be something like Windows RDP, where you connect and get a desktop on a remote system. I use drawterm a lot, I've worked on drawterm quite a bit, and I think it's a great way of experiencing the system with a low barrier to entry. Like I said, we'll get more into it next time, but I really think a virtual machine with drawterm on top of it is how I suggest people dip their toes into the system. That's pretty much all we have for today. I hope this provided a good overview of the system. If you hadn't heard of this system before, I hope this helped you understand what it is and why people might be interested in it, and if this is the first time you've learned about it, I hope you feel interested enough to check it out and see if it's something that interests you. Let me know in the comments if something was unclear, if you have any comments about how this was presented, any suggestions, or things you'd like me to cover in the future for other topics on Plan 9 or 9front. Thanks, I will see you guys next time. See ya.