The OCI container is the binary. Docker is a tool to take any existing runtime and turn it into a static binary you can feed to any future linux(ish) kernel.
I, on the other hand, have had plenty of situations where some system somewhere is managed with a config management tool (such as Puppet) that is told "install this application with these dependencies," and the application (say, one that manages Elasticsearch index rotation) uses the system python and runs pip install to pull in everything it needs. This idiom seems fine, but it ultimately ends up a hideous mess: it worked one day and didn't work a year later, because the system python changed, or the dependencies got tangled, or the pip shipped with the system python is no longer compatible with modern package distribution.
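As a rough sketch of the more durable alternative to installing into the system python, you can pin the tool into its own venv so the host interpreter can drift without breaking it. The paths here are made up for illustration:

```shell
#!/bin/sh
set -eu
# Fragile idiom described above (resolves against whatever python/pip
# the host happens to have today):
#   python3 -m pip install some-index-rotation-tool
#
# Sturdier variant: give the tool its own venv (hypothetical path).
python3 -m venv /tmp/rotator-venv
# In a real deployment you would install *pinned* versions here, e.g.:
#   /tmp/rotator-venv/bin/pip install -r requirements.txt
# where requirements.txt lists exact versions.
#
# The venv's python is isolated from the system one:
/tmp/rotator-venv/bin/python -c 'import sys; print(sys.prefix)'
```

This doesn't make the problem go away (the venv still depends on the python that created it), but it at least decouples the tool's dependency set from whatever the system package manager does next.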
You can (correctly) say "don't do that," but the same idiom in a shell script will almost certainly keep working across decades, regardless of whether "sh" is bash or ash or ksh, to some degree of "works" (again, you can _correctly_ say that no shell script ever works correctly).