icandoit 6 years ago

I really enjoy the craftsmanship in projects like the SerenityOS operating system, the Zig programming language, and similar projects in the Handmade Network (games and debuggers).

I hope this is only the beginning of a renaissance in quality independent software.

throwaway8879 6 years ago

The SerenityOS author makes fairly regular videos related to the OS, bug fixes, and other dev and life things. It's cool and inspiring to see a highly productive hacker do their thing on stream, kind of like watching Steve Gadd do a drum solo.

  • akling 6 years ago

    spins drumstick

    Indeed, if someone wants to check it out, I do have a YouTube channel at https://youtube.com/c/AndreasKling

    I’m glad you like the content, throwaway8879. Thanks for giving me an opportunity to link it :)

    • codetrotter 6 years ago

      Seconding the recommendation for this channel. Been following it for a few months now and enjoy each and every video a lot. To anyone who hasn't seen his videos: get over there and watch one of them. Those videos are the real deal ^^

jabedude 6 years ago

Andreas, I find your improvements to SerenityOS's security posture (including pledge()/unveil()) really interesting. Could you share your thoughts on choosing the BSD/pledge API instead of the Linux/seccomp approach?

  • akling 6 years ago

    Hi jabedude! I looked at a couple of different approaches to this, and I just fell in love with the simplicity of pledge()/unveil().

    The main things I like about it are:

    1. The promises are baked into the programs themselves. No need to keep an outside profile in sync whenever something changes.

    2. The pledge() and unveil() APIs are so simple that any program can start using them immediately.

    Enabling these in userspace programs has been both fun and really interesting so far.

    Often you’ll need to start with a wide range of promises, but you can shrink it down by reorganizing code to do more initialization up front, allowing you to drop pledges incrementally.

    This actually gives me the same fuzzy feeling as performance work, except instead of trying to make something take less time/space, you’re making it relinquish more capabilities. :)

    • smhenderson 6 years ago

      Hi Andreas, thanks for your work on this. I'd not heard of your project before today but I really enjoy using OpenBSD so the headline caught my eye.

      I'll be sure to give Serenity a try ASAP.

      Based on your experience so far updating user programs to use pledge and unveil, do you have any suggestions for how to go about analyzing a program up front and figuring out where to spend the effort retrofitting these into legacy code?

      I haven't had time to play with this on OpenBSD yet, but I've been meaning to, so any insight you can provide is most appreciated.

      And BTW, the screenshots of Serenity are great, really take me back to "the good ol' days"!

      • akling 6 years ago

        Hi smhenderson!

        The process so far has basically been some variant of this:

        1. Describe the program to myself, and write out the promises/paths I think it will need.

        2. Try it out and hit a failure 90% of the time.

        3. Add the promises/paths I didn't think of.

        4. Stare at it a bit.

        5. Reorganize the code so I can drop more promises/paths and end up with a smaller final set.

        It's obviously a lot easier if you're intimately familiar with the programs you're pledging.

        pledge() needs are a bit easier to discover, since you can tell immediately when one is missing.

        unveil() needs can be trickier, since many programs handle failure to open a file and try to carry on anyway. So it takes more effort and attention to understand a program's file system needs.

        I'm still figuring this out as I go obviously. I've also gotten some good tips from brynet@ and jcs@ of OpenBSD along the way :)

        • smhenderson 6 years ago

          Hi Andreas, thanks! Sounds both fun and infuriating at the same time, like a lot of low level programming I guess! :-)

          Thanks for the feedback, I am now looking forward to my first arrival at step 4 which is probably where I'll end up spending the bulk of my time.

          I appreciate the work you're doing, looks very intense and rewarding!

        • yjftsjthsd-h 6 years ago

          > unveil() needs can be trickier, since many programs handle failure to open a file and try to carry on anyway.

          Is it possible to run strace and note failed file opens? There'll be some noise, but you could pretty easily compare against files that do actually exist and use that to generate rules almost automatically, I would think?

Accujack 6 years ago

Nifty concepts. Having this sort of system inherent in next-generation OS design will be a big factor in limiting or eliminating malware and security issues as they presently exist.

  • mmis1000 6 years ago

    Well… Linux already has similar abilities: AppArmor and seccomp.

    AppArmor can limit file system access, and seccomp limits system calls.

    But AppArmor requires root and can't be triggered from the process itself, and seccomp requires you to hand-write a filter chain, which I believe most people can't do.

    • Accujack 6 years ago

      Right, that's what I mean by "inherent". When all OSs include features like this and all applications know about them, security and malware immunity will be much improved.

      Linux is usually out front in getting OS features in place, but that's just the OS part, not the application software. Windows and the Android version of the Linux kernel (and all android apps) really need this feature too.

tasty_freeze 6 years ago

I don't understand how unveil works for applications which ask the user to supply a file to work on, as the app can't pre-declare where in the file system the user might want to go to. For example, how would a word processor allow a user to select an arbitrarily located document to edit?

Or is there some exception mechanism which allows any directory path that the user selected manually?

  • q3k 6 years ago

    IIUC, you would usually have a separate subprocess responsible for file access only, communicating over IPC with a less trusted and more complex process (eg. one that parses documents and renders fonts).

    See: architecture of acme-client(1): https://kristaps.bsd.lv/acme-client/

  • the8472 6 years ago

    Pledge and unveil aren't inherited. So you can use a helper process for opening files.

  • notaplumber 6 years ago

    If a program needs arbitrary file access late in its lifetime, or "forever", then unveil(2) at least won't work, because that process genuinely needs arbitrary access. Sometimes a user policy can be enforced, e.g. documents must be in $HOME/Documents. But if not, that process can still pledge rpath or wpath. It's broader, but may still limit creating/removing files, reading but not writing, etc.

    unveil(2) requires upfront knowledge, hard-coded or via configuration file, or computed at initialization time before the final locking unveil call.

    One model is privilege separation, and there are various ways to do it. Browsers might allow the main browser process access to the filesystem, defining an IPC mechanism (passing file descriptors) for restricted processes like the renderer or content processes, which are "sandboxed". Those could use unveil(2) to lock down direct access to the filesystem, except perhaps read access to browser config and temporary directories.

    Another is having an out-of-process "filepicker" UI.

bleair 6 years ago

Neat, though I wish pledge and unveil included a string parameter to indicate why the process needs the requested resource(s). That way, as the user of an application, I'd have a hint about why a process is trying to access some resource. The code making the call could try to lie, but at least I'd have a hint about the process's claimed intent vs. what it actually does.

greatjack613 6 years ago

Can someone explain what the advantages of such a system are? I mean, if a program can say what it's doing, then a hacked program will also declare what it's doing beforehand, so what security benefit does this provide?

  • iSnow 6 years ago

    The program pledges at startup, if it gets corrupted at runtime and the bad code tries to call something beyond what's pledged, it's game over. Hacked code from disk is still dangerous, but runtime-modifications would be caught.

  • akling 6 years ago

    These mechanisms allow a program to drastically limit the amount of damage it can do if it's subverted by bad/malicious inputs (or if something goes wrong because the program itself has bugs...)

    Imagine a program that connects to a host over the Internet. And someone malicious is in control of that host. If that malicious host manages to take control of the connecting program by exploiting a vulnerability in it, it still doesn't gain full access to your local machine, only to the limited subset of functionality that the connecting program has pledged. :)

woodrowbarlow 6 years ago

so this is essentially a syscall permissions system, right? i don't understand the point of having the application itself define its own permissions, as opposed to the user imposing permission restrictions upon an application.

could somebody enlighten me?

  • clarry 6 years ago

    The application knows what permissions it needs, and it can drop permissions once it has done early initialization that requires more permissions than normal operation. You, as a user, have a much harder time knowing.

    The point of this mechanism is to ensure that the application does only what the application's author intended it to do (so exploiting a bug does not lead to it doing things it was never intended to do). It's not a mechanism for users to sandbox malicious applications.

    • CobrastanJorji 6 years ago

      Exactly. It's the same reason using Linux as the root user is a bad idea. Yes, you have full permission to do whatever you want to your own system, but you want to be very explicit when you do so to avoid doing really bad stuff by mistake or trickery.

    • jerf 6 years ago

      In particular, this stems from the observation that many, many programs have an initialization phase where they may read config, open logs, hit the network for config and peers, etc., and do all sorts of things that they do not do during their steady state operation. However, when programs are "hacked", particularly remotely over the network, they are in their steady-state phase. So while there is in some sense not much theoretical gain to pledge(), in practice it is applicable to a wide variety of software where the program itself can say "Alright, I'm out of my init phase, now crank the screws down on me and make sure I don't do these other things because it's guaranteed they're bad."

  • rst 6 years ago

    The most obvious rationale for this sort of thing is limiting what an attacker can do after injecting and running their own code in the process. (Straight Morris-worm style buffer overflow attacks are hard to execute these days, but there are more sophisticated variations on the theme which still often work.)

    There are other attacks which can also be mitigated by this sort of thing. A common attacker trick is supplying paths with a lot of '../../../..' embedded, to trick the program into accessing (and potentially leaking the contents of) parts of the filesystem that a remote attacker isn't supposed to have access to. (Citrix web appliances were recently subject to a particularly nasty version of this attack, cataloged as CVE-2019-19781, leading to full compromise.) Using 'unveil()' to limit the scope of filesystem access is a viable mitigation strategy...