Whenever I publish something about my Python Docker workflows, I invariably get challenged about whether it makes sense to use virtual environments in Docker containers. As always, it’s a trade-off, and I err on the side of standards and predictability.
You can use them, as long as you make sure they are always pristine, cleaned after use, and have no network connectivity or anything else that could affect the build.
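Roughly like this, with Docker itself (the image name is mine; both flags are real):

```sh
# No network for RUN steps, no reused layers; the base image must
# already be present locally for the build to stay fully offline.
docker build --network=none --no-cache -t myapp:hermetic .
```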
Or you could use Nix, which builds everything this way.
Notice that you had to mention additional systems to achieve that; you wouldn't need them if Docker were truly providing it.
But that's the whole point. A developer wants a spec file to ALWAYS generate the same artifact. Most devs even believe it does, and they get frustrated when it doesn't (like in your example).
Nix basically solves that. It even removes the need for tools like Artifactory, because the code fully defines the final binary. Of course you don't want to rebuild everything every time, so a cache is introduced.
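A rough sketch of what that means in practice (release.nix, myapp, and the hash are made up for illustration):

```sh
# With inputs pinned, the output path is a hash of the entire build
# recipe, so the same expression always names the same artifact.
nix-build release.nix -A myapp
# -> /nix/store/<hash>-myapp-1.0

# A binary cache is just a lookup keyed on that path: if it was built
# elsewhere, Nix substitutes the result instead of rebuilding.
nix-build release.nix -A myapp --option substituters https://cache.nixos.org
```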
Before you say that this is just Artifactory under a different name: it really isn't. It actually works like a cache. I can remove any piece of it, and the missing pieces will be rebuilt if they are needed. It is also used by the builder itself, so it never repeats work. I especially like this when working on a feature branch: the branch gets built, I eventually merge it, and if the merge did not modify the code, no time is wasted rebuilding the same thing.
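To make the cache behaviour concrete (the store path is made up; same hypothetical release.nix as above):

```sh
# Delete one cached piece from the local store
# (only possible when nothing still references it)...
nix-store --delete /nix/store/<hash>-mylib-1.0

# ...and the next build recreates just that piece, reusing the rest.
nix-build release.nix -A myapp
```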