weinzierl 2 days ago

This is an interesting approach, and I hope it succeeds.

I am still skeptical. In the late 90s or early 2000s, Linus was interviewed on TV, and what he said stuck with me to this day. When asked about competitors, he said roughly:

No one likes writing device drivers, and as long as no one young and hungry comes along who is good at writing device drivers, I am safe.

I think he was already well aware at that time that keeping the driver interface unstable is his moat. A quarter of a century later, kernels that run on virtualized hardware are a dime a dozen, but practically usable operating systems in the traditional sense of abstracting away real hardware can still be counted on one hand.

  • tatetian16 2 days ago

    > keeping the driver interface unstable is his moat

    Maybe we will have young and hungry AI-for-systems researchers who would like to take on the job of developing AI agents that translate Linux drivers in C into Asterinas ones in (safe) Rust.

    Another feasible approach is to reuse Linux drivers by running a Linux kernel inside some kind of isolated environment. For example, the HongMeng kernel leverages User-Mode Linux to reuse Linux drivers on HongMeng [1]. Asterinas could take a similar approach.

    [1] https://www.usenix.org/conference/osdi24/presentation/chen-h...

    • rcxdude 2 days ago

      The bottleneck, for the most part, is actually being able to test them. Even a translation by a skilled engineer is liable to have issues if they don't actually have the hardware to test things out. Linux's driver support is built out mainly by people doing that, either hobbyists scratching their own itch of hardware they own or manufacturers contributing drivers for their own hardware.

      (It's also why regressions are pretty common: it's completely infeasible to test all of Linux on each release. Some people test some parts of it, but it's all very ad hoc, very little is automated, and it's not at all unified.)

    • nickpsecurity 2 days ago

      OKL4, deployed on tons of phones, had the ability to run drivers standalone or in driver VMs that wrapped them. Other guests could call either. I think Genode uses a similar, L4-based component.

      There was also an academic project that combined virtualization with Windows drivers.

    • tuna74 2 days ago

      If you port drivers from Linux, those drivers would have to be GPLv2-licensed.

      • kvdveer a day ago

        That needn't be a problem, assuming the linking clause of the GPLv2 doesn't extend to device drivers. GPLv2 doesn't extend to userspace processes linking into the kernel, so maybe?

    • reactordev 2 days ago

      This is the future. Hardware has standardized more towards USB HID than in previous decades, including when that Linus interview took place. When AI can develop these device drivers based on just probing the HID info, we'll be on cloud nine. Because maybe then we'll get the year of the Linux desktop.

      • rcxdude 2 days ago

        Such standard interfaces are rarely the problem, though there is often the headache of dealing with the pile of 'quirky' hardware that just so happens to work well enough with exactly what Windows happens to do. The pain point is all the things that aren't that: nonstandard, niche hardware which maybe has a few thousand users, or big and complex interfaces like graphics cards, which are basically whole OSs on their own.

      • BenjiWiebe 2 days ago

        Pretty sure that if your device actually just uses USB HID, it already works on Linux without a custom driver.

        What requires a custom driver is when your device adds its own non-standard features.

      • anthk 2 days ago

        USB is a nightmare.

  • pxc 2 days ago

    > I think he was already well aware at that time that keeping the driver interface unstable is his moat.

    Does Linus have/want a moat? He's not a tech startup founder. He's a kernel hacker who has had success beyond his wildest dreams, and whose needs will be met for the rest of his working life no matter what happens.

    It seems like projection to talk about this aspect of the kernel as if it's some intentional strategy for preventing certain kinds of competition.

    • eikenberry 2 days ago

      I don't believe he wanted or intended a moat. The drivers need to be in the kernel to have a working kernel, that is, a kernel that actually runs on the hardware. Move the drivers out of the kernel and Linux would have died long ago, as there would have been a proliferation of proprietary drivers that stopped being maintained once the hardware was no longer on sale. And poor driver support is why no other kernel has taken root.

      • pxc 2 days ago

        > Move the drivers out of the kernel and Linux would have died long ago, as there would have been a proliferation of proprietary drivers that stopped being maintained once the hardware was no longer on sale.

        Isn't this basically the situation on Android?

        • SR2Z a day ago

          They're still in the kernel AFAIK, but you need to compile the kernel specifically with a bunch of source-restricted or binary blobs to get things to work.

          They're not open and maintained as part of the broader kernel release, which is why the driver situation degrades so quickly once the hardware is no longer actively supported.

  • nickpsecurity 2 days ago

    Prior art includes SPIN OS (Modula-3), JX OS (Java), House OS's H-Layer (Haskell), and Verve. Each one had a type-safe, memory-safe language for implementing the features. They usually wall off the unsafe stuff behind checked function calls. Some use VMs, too.

    Ignoring performance or adoption, the main weaknesses are: abstraction-gap attacks; unsafe code bypassing the whole thing; compiler- or JIT-induced breakage of the safety model; and common hardware failures like cosmic rays. This is still far safer than kernels and user apps in unsafe languages.

    One can further improve on it by using static analysis of unsafe code, ensuring all unsafe functions respect type-safe/memory-safe interfaces, compilers that preserve abstraction safety during integration, and certified compilers for individual components. We have production tools for all except secure abstract compilation, which (a) is being researched and (b) can be checked manually for now.

  • leeter 2 days ago

    > but practically usable operating systems in the traditional sense of abstracting away real hardware can still be counted on one hand.

    I think this is telling. There are plenty of 'standards' for interfaces in the hardware world. Some (mostly USB) are even 'followed', but the reality of hardware is that it never behaves nominally. So without someone willing to spend the time writing code to handle the errata and the quirks that can't be patched out, it's very hard to run on physical hardware with any performance or support.

  • jitl 2 days ago

    On the other hand, running on real hardware is less important if none of your hardware is real!

    98% of the Linux I interact with is running virtualized: on my desktop/laptop systems it's either VirtualBox full-screened so I can use Windows for drivers, or a headless VM managed by Docker.app on my Mac. All my employer's production workloads are AWS virtual machines.

    My only bare-metal Linux hardware is a home server, which I'm planning to retire soon-ish, replaced by a VM on an eBay Mac mini to reduce the power bill and fan noise.

    If someone can make a Linux compatible kernel that’s more secure and just as performant, it’s much easier these days to imagine a large new user base adopting it despite a dearth of drivers.

    • weinzierl 2 days ago

      In computer science we are taught that it's turtles all the way down, but in the real world you learn that you hit the world of bits and bytes really fast.[1]

      My point is that every virtualized environment needs a layer that talks to real hardware down below. We have enough diversity in the upper layers but not enough in the lowest layer.

      [1] I heard it expressed like this from an Azul Systems employee first, but unfortunately don't remember who it was.

    • jpc0 2 days ago

      Your OS might be virtualised, but very often the actual hardware leaks through that virtualisation, often intentionally.

      I don't see any future for an OS that doesn't have good driver support for accelerators, whether GPU/TPU or otherwise. And if you look into some of the accelerators built into modern AMD and Intel chips, even supporting the CPU becomes a nightmare, never mind USB host controllers, network interfaces, etc.

  • exe34 2 days ago

    > keeping the driver interface unstable is his moat

    It's basically like npm update, at the kernel level.

  • digdugdirk 2 days ago

    Hmmm... Would containers + AI enable a scattershot "just let the LLM keep trying stuff until it works" approach to driver development?

    • daeken 2 days ago

      Given the number of times I've bricked hardware during reverse engineering and driver development, I don't find it super likely, tbh. I'm by no means an expert here, but it's one of those things where, if you already have good enough documentation (which in this case could be a known-good implementation), then it's more of a translation task, and LLMs could absolutely be helpful there; but the edge cases are sharp and frequent.

      • ninkendo 2 days ago

        It's interesting though, because you don't really need to reverse engineer anything if the device has an in-tree Linux driver. You "just" need to port the Linux driver to your OS. This is certainly something an LLM can help with, although the usual skepticism applies (it works until it doesn't, etc.)

        In fact, I sometimes wonder whether it's feasible to write a new kernel while somehow shimming into Linux's driver model, while still keeping your own kernel unique (i.e. not just a straight clone of Linux itself): some way of "virtualizing" the driver layer so that a driver can "think" it's in a Linux kernel, but with a layer of indirection to everything.
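
        A toy sketch in Rust of what such a shim could look like (every name here is hypothetical; this isn't Linux's actual driver model or any real kernel's API, just the shape of the indirection):

          pub struct DeviceId(pub u32);
          pub struct IoError;

          // A tiny, hypothetical slice of "the Linux driver model" as a
          // trait: a ported driver implements this, thinking it talks to
          // Linux.
          pub trait ShimmedBlockDriver {
              fn probe(&mut self, dev: DeviceId) -> Result<(), IoError>;
              fn read_block(&mut self, lba: u64, buf: &mut [u8]) -> Result<(), IoError>;
          }

          // The indirection layer: the host kernel routes its own I/O
          // requests through whatever driver was registered via the shim.
          pub struct DriverShim {
              driver: Box<dyn ShimmedBlockDriver>,
          }

          impl DriverShim {
              pub fn register(driver: Box<dyn ShimmedBlockDriver>) -> Self {
                  DriverShim { driver }
              }

              pub fn handle_read(&mut self, lba: u64, buf: &mut [u8]) -> Result<(), IoError> {
                  self.driver.read_block(lba, buf)
              }
          }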

    • jitl 2 days ago

      Maybe in a few years. I find AI most successful when you can provide a very clear spec and a solid test suite; when I don't have that, it makes a lot of mistakes without handholding.

yjftsjthsd-h 2 days ago

> This IPC often has a performance impact, which is a big part of why microkernels have remained relatively unpopular.

I thought newer microkernels... Reduced that? Fixed it? I forget, I just had the impression it wasn't actually that bad except that the industry is still traumatized by Mach.

From the project website:

> Only the privileged Framework is allowed to use unsafe features of Rust, while the unprivileged Services must be written exclusively in safe Rust.

That feels backwards to me. If an unprivileged task is unsafe, it's still unprivileged. Meanwhile the unsafe code that requires extra verification... Is only allowed in the part where nothing can safeguard it?

And from https://asterinas.github.io/book/index.html (because it was one of my first questions on seeing 'Linux replacement in rust'):

> Licensing

> Asterinas's source code and documentation primarily use the Mozilla Public License (MPL), Version 2.0. Select components are under more permissive licenses, detailed here.

Not GPL, but not BSD either.

  • tatetian16 2 days ago

    > if an unprivileged task is unsafe, it's still unprivileged. Meanwhile the unsafe code that requires extra verification...

    Sorry, the doc is a bit misleading there; I wrote it. The statement needs to be interpreted in the context of a framekernel. An entire Rust-based framekernel runs in kernel space but is logically partitioned into two halves: the privileged OS framework and the de-privileged OS services. Here, "privileged" means safe + unsafe Rust kernel code, whereas "de-privileged" means all-safe Rust kernel code. And this is all about the kernel code; framekernels do not put restrictions on the languages of user-space programs.

    • magicalhippo 2 days ago

      I had the same reaction that this sounded all very backwards, but reading the introduction in the paper[1] made it more clear:

      The kernel is logically divided into two parts: the privileged OS framework (akin to a microkernel) and the de-privileged OS services. Only the privileged framework is allowed to use unsafe, while the de-privileged services must be written in safe Rust completely. As the TCB, the privileged framework encapsulates all low-level, hardware-oriented unsafe operations behind safe APIs. Using these safe APIs, the de-privileged OS services can implement all kinds of OS functionalities, including device drivers.

      Ostd provides a small yet expressive set of safe OS development abstractions, covering safe user-kernel interactions, safe kernel logic, and safe kernel-peripheral interactions. Of particular note is the untyped memory abstraction, which addresses the challenge of safely handling externally-modifiable memory (e.g., MMIO or DMA-capable memory) – a longstanding obstacle in safe driver development.

      So the privileged part is privileged because it does unsafe stuff. It's also quite minimal, so that the "business logic" of a driver or similar can be implemented as safe code in the de-privileged part, which is de-privileged because it doesn't need privileged unsafe access. At least that's my understanding.

      [1]: https://arxiv.org/abs/2506.03876
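
      A toy illustration of that split (made-up types, not the actual ostd API): the framework establishes the safety invariant once, inside `unsafe`, and the driver logic above it never needs `unsafe` at all.

        // Framework side (privileged): the only place `unsafe` is permitted.
        pub struct MmioRegister {
            addr: *mut u32,
        }

        impl MmioRegister {
            /// Safety: the framework must ensure `addr` maps a valid,
            /// exclusively owned device register. Users of the returned
            /// value inherit no such obligation.
            pub unsafe fn new(addr: *mut u32) -> Self {
                MmioRegister { addr }
            }

            pub fn write(&self, value: u32) {
                // Volatile, so the compiler can't optimize away device I/O.
                unsafe { self.addr.write_volatile(value) }
            }
        }

        // Service side (de-privileged): driver logic in 100% safe Rust.
        pub fn reset_device(reg: &MmioRegister) {
            reg.write(0x1) // no `unsafe` needed here
        }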

  • josephg 2 days ago

    > I thought newer microkernels... Reduced that? Fixed it? I forget, I just had the impression it wasn't actually that bad

    seL4 is a microkernel like this. They've apparently aggressively optimized IPC far more than Linux ever has: sending a message via seL4 IPC is apparently an order of magnitude or two faster than a syscall under Linux. I wouldn't be surprised if most programs performed better under seL4 than they do under Linux, but I'd love to know for real.

    • winternewt 2 days ago

      The trick with L4 is that it treats IPC basically like syscalls. Arguments are left in CPU registers instead of being serialized to a message buffer. The only significant work performed is a change of virtual memory map. The called process continues execution within the time slice of the caller, instead of waiting for the thread scheduler or using synchronization primitives. While some of this could possibly be achieved with Linux, a lot of the optimization is ingrained into the calling convention and so would require changes to user-mode source code.
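
      A stubbed sketch to make that concrete (the names are hypothetical, loosely mirroring the shape of seL4's seL4_SetMR/seL4_Call; not real bindings):

        type Cap = usize; // capability naming an IPC endpoint

        struct MsgRegs([u64; 4]); // stand-in for the message registers

        // In a real L4 kernel this is the IPC fastpath: swap the address
        // space and run the server on the caller's remaining time slice.
        // No message buffer, no scheduler pass. Stubbed as identity here.
        fn ipc_call(_endpoint: Cap, msg: MsgRegs) -> MsgRegs {
            msg
        }

        fn add_via_ipc(server: Cap, a: u64, b: u64) -> u64 {
            // "Serialization" is just loading a few registers...
            let reply = ipc_call(server, MsgRegs([a, b, 0, 0]));
            // ...and the reply comes back the same way.
            reply.0[0]
        }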

  • rwmj 2 days ago

    > I thought newer microkernels... Reduced that? Fixed it?

    They have.

    Actually, the elephant in the room is modern hardware, which makes even syscalls into monolithic kernels expensive. That's why io_uring and virtio perform well: they queue requests and replies between the OS and applications (or the hypervisor and the guest, for virtio), avoiding transitions between address spaces. Any operating system in the future is going to need some kind of queuing syscall mechanism to perform well, and once you've got it, it doesn't much matter whether you structure the components of your OS as a monolith, a microkernel, or something else.
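
    For anyone who hasn't seen the queued style: here's roughly what a single read looks like via the io-uring crate (the Rust bindings to the Linux interface; a sketch from memory, so check the crate docs). The point is that entries are queued and a whole batch costs one transition into the kernel.

      use io_uring::{opcode, types, IoUring};
      use std::os::unix::io::AsRawFd;

      fn main() -> std::io::Result<()> {
          let mut ring = IoUring::new(8)?; // 8-entry submission/completion rings
          let file = std::fs::File::open("/etc/hostname")?;
          let mut buf = vec![0u8; 1024];

          // Describe the read; nothing is executed yet.
          let read_e = opcode::Read::new(types::Fd(file.as_raw_fd()),
                                         buf.as_mut_ptr(), buf.len() as _)
              .build()
              .user_data(0x42);

          // Unsafe because the kernel will write into `buf`, which must
          // stay alive until the completion arrives.
          unsafe { ring.submission().push(&read_e).expect("queue full") };

          // One kernel transition for however many entries are queued.
          ring.submit_and_wait(1)?;

          let cqe = ring.completion().next().expect("no completion");
          println!("read {} bytes (user_data {})", cqe.result(), cqe.user_data());
          Ok(())
      }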

  • kelnos 2 days ago

    My understanding is they don't mean privileged/unprivileged in the kernel-space/user-space sense. All of it runs at the kernel's privilege level. They've just logically defined a (smaller) set of core library-like code that is allowed to use Rust's unsafe ("privileged"), and then all the code that implements the rest of the kernel (including drivers?) uses that library and is disallowed (by linter rules, I assume) from directly using Rust's unsafe ("unprivileged").

    It's an unfortunate overloading of terminology that you entirely reasonably interpreted according to the more common usage.
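
    For what it's worth, Rust has a first-class mechanism for exactly that restriction (whether Asterinas relies on this attribute alone or on extra tooling, I don't know):

      // Crate-level attribute in a "de-privileged" service crate: the
      // compiler rejects any `unsafe` block or `unsafe fn` here, and
      // `forbid` (unlike `deny`) can't be overridden by inner attributes.
      #![forbid(unsafe_code)]

      fn main() {
          // unsafe { core::ptr::read(core::ptr::null::<u8>()) }; // compile error
      }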

    • yjftsjthsd-h 2 days ago

      Oh, okay, so it's "privileged" in that it has the privilege of using unsafe. I got that it was all kernel mode, but assumed they were doing something fancy to nonetheless restrict the unprivileged parts (though since they say it's all one memory space, I wasn't sure what).

    • ronjakoi 2 days ago

      Perhaps I, as the author of this article, could have also been more careful with the terminology.

  • ulrikrasmussen 2 days ago

    > That feels backwards to me. If an unprivileged task is unsafe, it's still unprivileged. Meanwhile the unsafe code that requires extra verification... Is only allowed in the part where nothing can safeguard it?

    The unprivileged task runs in the same memory space as the core kernel, and thus there are no runtime checks to ensure that it doesn't do something it is not allowed to do. The only way to enforce that at runtime would be to adopt a microkernel architecture. The alternative architecture proposed here is to enforce the privileges statically, by requiring that the code not use unsafe features.

  • bandrami 2 days ago

    The parts written in unsafe rust implement the memory and access management that make it possible for the other parts to use safe rust.

  • pjmlp 2 days ago

    The problem is that many microkernel haters keep repeating what was true maybe 30 years ago, while running tons of containers for basic tasks.

    • uncircle 2 days ago

      There are hordes of developers completely dismissing the idea of microkernels with no serious argument other than "lmao didn't Linus destroy Tanenbaum that one time?"

      Designing a modern and secure kernel in 2025 as a monolith is a laughable proposition. Microkernels are the way to go.

      • dale_glass 2 days ago

        Well, here's some for you:

        * In modern times, the practical benefit from a microkernel is minimal. Hardware is cheap and disposable, and virtual machines exist. The use cases for "tolerate a chunk of the kernel misbehaving" are minimal.

        * To properly tolerate a partial crash takes a lot of work. If your desktop crashes, you might as well reboot.

        * Modern hardware is complex and there's no guarantee that rebooting a driver will be successful.

        * A monolithic kernel can always clone microkernel functionality wherever it wants, without paying the price elsewhere.

        * Processes can't trust each other.

        The last one is a point I hadn't realized was an issue for a while, but it seems a tricky one. In a monolithic kernel, you can have implicit trust that things will happen: if part A tells part B "drop your caches, I need more memory", it can expect that to actually happen.

        In a microkernel, there can't be such trust. A different process can just ignore your message, or arbitrarily get stuck on something and not act in time. You have less ability to make a coherent whole because there's no coherent whole.

        • uncircle 2 days ago

          You describe microkernels as if there is only one way to implement them.

          > A different process can just ignore your message

          > arbitrarily get stuck on something and not act in time

          This doesn't make sense. An implementation of a microkernel might suffer from these issues, but that's not a problem of the design itself. There are many ways of designing message queues.

          Also:

          > In a microkernel, there can't be such trust [between processes]

          Capabilities have solved this problem in a much better and more scalable way than the implicit trust model you have in a monolithic kernel. Using Linux as an example of a monolith is wrong, as it incorporates many ideas (and shortcomings) of a microkernel. For example: how do you deal with implicit trust when you can load third-party modules at run-time? Capabilities offer much greater security guarantees than "oops, now some third-party code is running in kernel mode and can do anything it wants with kernel data". Stuff like the eBPF sandbox is a poor man's alternative to the security guarantees of microkernels.

          Also, good luck making sure the implicitly trusted perimeter is secure in the first place when the surface area of the kernel is so wide it's practically impossible to verify.

          If you allow me an argument from authority, it is no surprise Google's Fuchsia went for a capability-based microkernel design.

          • lkjdsklf 2 days ago

            I'm not sure I would consider Fuchsia an example that supports your point.

            Its design largely failed at being a modern generic operating system, and it's become primarily an OS used for embedded devices, which is an entirely different set of requirements.

            It's also not that widely used. There's only a handful of devices that ship Fuchsia today. There's a reason for that.

            • pjmlp 2 days ago

              Don't mistake Google politics with technical achievements.

            • uncircle 2 days ago

              Did it fail because of its microkernel design?

              It's quite disingenuous to use "success" as a metric when discussing the advantages of microkernels vs. monoliths, as the only kernels you can safely say have succeeded in the past 30+ years are three: Linux, NT, and Mach; one of those (Mach) is a microkernel of arguably dated design, and another (NT) is considered a "hybrid microkernel."

              Did L4 fail? What about QNX?

              This topic was considered a flame war in the 90s and I guess it still isn’t possible to have a level-headed argument over the pros and cons of each design to this day.

              • jitl 2 days ago

                When I read this thread, I think it's pretty level-headed except your last reply, lol.

      • Joker_vD 2 days ago

        > Designing a modern and secure kernel in 2025 as a monolith is a laughable proposition.

        I've seen this exact opinion before, only the year in it was "1992". And yet Linux was still made and written regardless of it.

        • herewulf a day ago

          Point taken, but at that time there was no other free (as in beer and freedom) "UNIX" kernel?

          Someone may come along and correct me about BSD. Apologies, I'm not super familiar with its history.

    • yjftsjthsd-h 2 days ago

      > while running tons of containers for basic tasks.

      Those containers run on a monolithic kernel; what's your point?

      • pjmlp 2 days ago

        The supposed performance gains from a monolithic kernel are being wasted on features that mimic microkernel features.

        • yjftsjthsd-h 2 days ago

          > The supposed performance gains from a monolithic kernel are being wasted on features that mimic microkernel features.

          So two things:

          1. Containers don't have a meaningful performance hit. (They are semi-frequently used with things that can have a perf hit, like overlay filesystems, but this is generally easy to skip when it matters.)

          2. I don't think containers meaningfully mimic microkernel features. If I run everything on my laptop in a container, and a device driver crashes, then the machine is still hosed.

          • pjmlp 2 days ago

            1. The amount of memory consumption I see, versus traditional processes, must be a mirage.

            2. It depends on what the containers are being used for. Microkernels aren't only about using drivers in userspace.

        • Vilian 2 days ago

          And they still manage to run better than a complete microkernel.

          • pjmlp 2 days ago

            So go the mythological tales from ancient times.

            That is what happens when people don't update themselves.

vbezhenar 2 days ago

Is this a novel development, splitting the kernel into a small unsafe core and large safe modules? It sounds very interesting and promising: none of the hardware overhead of a microkernel and none of the safety issues of a monolith. Such a project obviously depends on a systems language with an explicit unsafe/safe separation.

lifty 2 days ago

This is an awesome effort; thank you, knowing that one of the authors is in the thread. How far is this from usability, at least in some reduced context? I would love to be able to build server images based on this kernel and play around with it.

  • tatetian16 2 days ago

    As a relatively new kernel, Asterinas still has a lot of rough edges for general-purpose use. That said, if the goal is to run targeted, real-world services efficiently and reliably, the gap is not that large—I believe we can reach that milestone within a year.

    We're actively implementing key features like Linux namespaces and cgroups, and we're also working on the first Asterinas-based distribution. Our initial focus is to use Asterinas as the guest OS inside Confidential VMs. This use case prioritizes security, where Asterinas has clear advantages over Linux thanks to its memory safety guarantee and small TCB.

    • GardenLetter27 2 days ago

      What about DRM, ALSA, etc.? I think these are the main blockers for people testing it out on home machines.

hardwaresofton 2 days ago

> This IPC often has a performance impact, which is a big part of why microkernels have remained relatively unpopular.

Somewhat comforting to see deeply technical people still misconstruing why approaches/projects don't get adopted.

  • mariusor 2 days ago

    It would help everyone if you'd actually tell us in which way they're doing that.

    • immibis 2 days ago

      I think they're referring to the fact that projects mostly get adopted or not based on socio-political processes such as marketing, and only rarely on the actual merits of the project such as performance.

mdtrooper 2 days ago

It is licensed under the MPL. Well, there are better licenses, such as GPLv3.

  • miniBill 2 days ago

    They have the rationale for why they picked the MPL in their documentation. I don't love the choice, but I can see the reasons behind it.

karmakaze 2 days ago

Seems like a great idea. We have so much software invested that alternative substrates could yield great benefits, or at least alternatives when needed for less technical reasons. Kinda reminds me of kFreeBSD and, of course, GNU/Hurd.

Toritori12 2 days ago

What should be the name for these kinds of things? *nux?

DinoNuggies45 2 days ago

[flagged]

  • defrost 2 days ago

    A page of written text, you mean?

    Likely a text editor or some web publishing tool.

    All of which raises a real question about your question and all your other recent "questions" - https://news.ycombinator.com/threads?id=DinoNuggies45

    is this just AI output generated to appear engaged, while launching yet another beachhead account to gang-upvote submissions, influence voting, etc.?

alt187 2 days ago

I hope this project succeeds. Rust kernel-level development deserves far better than the current state of affairs.

  • guilhas 2 days ago

    Maybe C/C++ development deserves a better replacement than Rust.