It drives me nuts that sandbox-exec has "sandbox" in the name, since it's nothing like a real sandbox, is much closer to something like a high-level seccomp, and has not much to do with "App Sandboxes" which is a distinct macOS feature.
IMO a real sandbox lets a program act how it wishes without impacting anything outside the sandbox. In reality, many of these tools just cause hard failures when a program attempts to cross the defined boundaries.
It's also poorly documented and IIRC deprecated. I don't know what is supposed to replace it.
If macOS simply had overlay mounts in a sandbox then it would unlock so much. Compared to Linux containers (docker, systemd, bubblewrap, even unshare) macOS is a joke.
> not much to do with "App Sandboxes" which is a distinct macOS feature
The App Sandbox is literally Seatbelt + Cocoa "containers". secinitd translates App Sandbox entitlements into a Seatbelt profile, which is then transferred back to your process via XPC and applied by a libsystem_secinit initializer early in process startup, shortly before main(). This is why App Sandboxed programs will crash with `forbidden-sandbox-reinit` in libsystem_secinit if you run them under sandbox-exec. macOS does no OS-level virtualization.
It is a little more direct than that even. The application's entitlements are passed into the interpretation of the sandbox profile. It is the sandbox profile itself that determines which policies should be applied in the resulting compiled sandbox policy based on entitlements and other factors.
An example from /System/Library/Sandbox/Profiles/application.sb, the profile that is used for App Sandboxed applications, on my system:
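The excerpt itself didn't survive in this thread. As an illustrative sketch only (not a verbatim quote from application.sb, whose exact helper syntax differs), profiles in Apple's Scheme-like sandbox profile language condition rules on entitlements roughly like this:

```scheme
;; Illustrative sketch -- not a verbatim excerpt of application.sb.
;; The compiled policy allows outbound networking only when the app
;; was signed with the corresponding App Sandbox entitlement.
(if (entitlement "com.apple.security.network.client")
    (allow network-outbound))
```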
What you're describing is resource virtualization with transactional reconciliation, rather than program isolation in the mediation sense (MAC/seccomp-style denial).
To let a program act as it wishes, ideally every security-relevant mutable resource must be virtualized rather than filtered. And the filesystem is only one of the things that should be sandboxed: you should virtualize network state at a minimum, and ideally also process/IPC namespaces and similar systems, to prevent leaks.
You need to offer a promotion step after the sandbox is done (or even while it's running, for a long-lived program) that exposes the sandbox's entire state delta so you can decide on selective reconciliation with the host. You must also account for host-side drift and TOCTOU hazards during validation and application.
I'm experimenting with implementing such a sandbox that works cross-system (so no kernel-level namespace primitives), and the machinery required for late-bound policy injection, if you want user comfort, on top of the policy design and the synthetic environment presented to the program, is hair-pulling.
> I'm experimenting with implementing such a sandbox that works cross-system (so no kernel-level namespace primitives), and the machinery required for late-bound policy injection, if you want user comfort, on top of the policy design and the synthetic environment presented to the program, is hair-pulling.
Curious, if this is cross-platform, is your design based on overriding the libc procedures, or otherwise injecting libraries into the process?
I'm not interposing libc or injecting libraries. Guests run as WASM modules, so the execution substrate is constrained. The host mediates and logs effects. Changes only propagate via an explicit, policy-validated promotion step.
> If macOS simply had overlay mounts in a sandbox then it would unlock so much. Compared to Linux containers (docker, systemd, bubblewrap, even unshare) macOS is a joke.
You'll want to look into Homebrew (or MacPorts) for access to the larger world.
Truly, what purpose does this serve? Defining a hierarchy without using it is injecting immediate debt. Just introduce it when stuff goes there! If you really insist, then at least put something in the folder. It doesn't take much effort to make the change at least a tiny bit meaningful.
Better yet, just do the work. If you want, make a commit in a branch that's destined to be squashed or something, sure, but keep it away from the shared history, and certainly remove it when it's no longer needed.
I play around with ComfyUI on my computer to make silly images.
To manually install it, you must clone the repo. Then you have to download models into the right place. Where's the right place? Well, there's an empty directory called models. They go in there.
The simplest answer is that sometimes other existing software that I need to use treats an empty directory (or, hopefully, a directory containing just an irrelevant file like .gitkeep) differently from an absent directory, and I want that software to behave in the first way instead of the second.
A more thorough answer would be: Filesystems can represent empty directories, so a technology that supports versioned filesystems should be able to as well. And if that technology can't quite support fully versioned filesystems -- perhaps because it was never designed with that goal in mind -- but can nevertheless support them well enough to cover a huge number of use cases that people actually have, then massaging it a bit to handle those rough edges still makes sense.
Legitimately asking, please share the name of software that expects/requires an empty directory and interprets .gitkeep in this way, but chokes on a README file.
Many filesystems cannot represent empty directories. Many archive formats don't either. I don't think this is a problem in practice; I find this argument extremely weak.
> please share the name of software that expects/requires an empty directory and interprets .gitkeep in this way, but chokes on a README file.
Every mail server since postfix supports Maildir, in which every email is a file in one of three subdirectories (tmp, new, or cur) of a user directory. If there's no jbloggs/tmp dir, postfix will think user jbloggs does not exist, so if you want to take a restorable snapshot of this with git, there needs to be a file there. I don't know whether jbloggs/tmp/README would cause a problem, because I don't know how postfix treats a file whose name violates its expected syntax (which includes a timestamp and other metadata). What I do know is that after running `git clone any-repo`, I can safely delete every .gitkeep file to restore the system to its original state without having to understand that repo in detail, while I cannot safely delete every README file. That's because the commonly understood semantics of .gitkeep is "I exist solely to shoehorn this empty dir into git", which does not apply to other files.
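The Maildir/.gitkeep dance described above can be sketched with just the stdlib (the jbloggs layout is the comment's example; the function names here are mine):

```python
import mailbox
from pathlib import Path

def git_trackable_maildir(root: str) -> None:
    """Create a Maildir (tmp/, new/, cur/) and drop a .gitkeep in each
    subdirectory, so git -- which cannot record empty directories --
    will preserve the layout the mail server requires."""
    mailbox.Maildir(root, create=True)  # stdlib: creates tmp/new/cur
    for sub in ("tmp", "new", "cur"):
        (Path(root) / sub / ".gitkeep").touch()

def restore_pristine(repo_root: str) -> None:
    """After `git clone`, deleting every .gitkeep restores the original
    state -- safe precisely because .gitkeep's only job is to shoehorn
    the empty directory into git."""
    for keep in Path(repo_root).rglob(".gitkeep"):
        keep.unlink()
```

Running `restore_pristine` on a clone leaves the empty tmp/new/cur directories in place, which is exactly what postfix's Maildir check needs.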
> Many filesystems cannot represent empty directories
Anecdotally, I've actually had pretty good interactions with GCP, including fast turnarounds on bugs that couldn't possibly affect many other customers.
While they do not have direct SLAs, they still have to comply with rules enforced by browser vendors; logs that don't will be removed from CT checks and marked retired/untrusted (you can find some in the above list).
This means 99% uptime on a 90-day rolling average, a 1-minute update frequency for new entries (24 hours under an older RFC), no split views, strict append-only operation, sharding by year, etc.
X.509 certificates published in CT logs are "pre-certificates". They contain a poison extension so they cannot be used as real certificates with your private key.
The final certificate (without the poison extension, and with the SCT proof) is usually not published in any CT log, but you can submit it yourself if you wish.
The OP's idea won't work unless they submit the final certificate to CT logs themselves.
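To illustrate the poison extension: it is identified by OID 1.3.6.1.4.1.11129.2.4.3 (RFC 6962). Below is a crude, stdlib-only sketch that scans a certificate's DER bytes for that OID's encoding; a real tool should use a proper X.509 parser, since a raw byte scan cannot rule out chance collisions:

```python
# DER encoding of OID 1.3.6.1.4.1.11129.2.4.3, the RFC 6962
# precertificate poison extension.
PRECERT_POISON_OID_DER = bytes(
    [0x06, 0x0A,                     # OBJECT IDENTIFIER, length 10
     0x2B, 0x06, 0x01, 0x04, 0x01,   # 1.3.6.1.4.1
     0xD6, 0x79,                     # 11129 (base-128 encoded)
     0x02, 0x04, 0x03]               # .2.4.3
)

def looks_like_precert(der: bytes) -> bool:
    """Heuristic: True if the DER bytes contain the poison OID."""
    return PRECERT_POISON_OID_DER in der
```

A certificate that passes this check is (almost certainly) a pre-certificate fetched from a log, not the final certificate browsers would accept.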
I recognize H. A. Rey only as the author/illustrator of Curious George; I had no idea he published anything else of note. Looks like my library has a copy. Thanks for sharing!
Conversely I feel like this is talked about a lot. I think this is a sort of essential cognitive dissonance that is present in many scenarios we're already beyond comfortable with, such as hiring consultants or off-shoring or adopting the latest hot framework. We are a species that likes things that feel good even if they're bad for us.
I'm not sure I'll ever understand why they replaced their working ~50 line shell script with a Rust program that just shells out to the same nix-* commands. I appreciate that there are some safety benefits, but that program is just not complex enough to benefit.
It took me a long time to realize this, but yes: asking people to just open and write to files (or S3) is in fact asking a lot.
What you describe makes sense, of course, but few can build it without it being drastically worse than abusing a database like postgres. It's a sad state of affairs.