I'm genuinely interested in what the infrastructure was that couldn't support orchestration of Puppet clients. I hear this from people sometimes and it usually ends up being related to poorly architected Puppet infrastructure for their environment.
A properly architected Puppet environment should have no problems dealing with thousands of clients.
Depends what you want to do. If you're happy for changes to dribble in over time, then size your puppetmaster pool for the number of hosts you want running Puppet simultaneously, and stagger client execution to avoid a thundering herd. Accept that there will sometimes be individual failures due to load, and those clients will just have to wait until the next run.
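Staggering like that can be done agent-side. A minimal sketch of what that might look like in puppet.conf, assuming roughly the default 30-minute run interval (the splaylimit value here is just an example):

```ini
# /etc/puppetlabs/puppet/puppet.conf (agent side)
[agent]
runinterval = 30m   # how often the agent checks in
splay = true        # add a random delay before each scheduled run
splaylimit = 30m    # upper bound on that random delay (example value)
```

With splay enabled, each agent picks a random offset up to splaylimit, so thousands of clients don't all hit the masters at the same instant.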
Alternatively, in real life, many teams have change management and maintenance windows to consider. If there's a need to update thousands of systems on a Saturday morning, expect teams to start Puppet runs manually. You'd better have a seriously big pool of puppetmasters ready and waiting to handle the load, and don't forget PuppetDB; that has to be scaled up too to avoid lock-ups. Even then, if teams start too many Puppet runs at once, you'll get flattened.
We ended up scrapping all the puppetmasters in individual DCs and consolidating them into an AWS EC2 Auto Scaling group. The number of puppetmasters started at 70 and only went up. That came with problems of its own: ensuring that all puppetmasters serve the same versions of each role at the same time, spinning up new puppetmasters fast enough to meet spikes in demand, and various other corner-case tuning issues.
It's taken a dedicated team years to get to grips with puppet, tame it and master it. Very glad I'm not involved in that any more.
As someone who has built and maintained several Puppet-based infrastructure environments, I get the frustration with Puppet when it is not configured correctly or is left to its own devices. You absolutely need to keep Puppet up to date in your environment, or the accumulated upgrade pain will steamroll you over time.
However, I do need to correct you: Puppet has supported loops and various other ways to munge data for quite a few years/versions now. In addition, _Hiera_ is not that bad once you understand the overall hierarchy of your infrastructure, and it gets easier if you move secret data out of the default data store and into something like HashiCorp Vault.
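For example, modern Puppet (4.x and later) supports iteration directly in the DSL. A small sketch, with the package names purely illustrative:

```puppet
# Iterate over an array and declare one package resource per entry
['httpd', 'jq', 'tmux'].each |String $pkg| {
  package { $pkg:
    ensure => installed,
  }
}
```

The same `each` pattern works over hashes with a two-parameter lambda (`|$key, $value|`), which covers most of the data-munging cases people used to fake with `create_resources`.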
We currently get away with not writing tests in our Puppet infrastructure (we do heavy linting/checks though) because best practices, trainings, and expectations have all been documented and communicated well to our development teams. Sure, some things can sneak through, but overall we've had #greatsuccess just by being open about what we expect in our repos and being approachable to new committers for onboarding reasons.
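The linting/checks side of that can live in CI. A hypothetical pipeline fragment (job name and paths are examples, not from the original; GitLab-CI-style syntax assumed) running the standard validation tools:

```yaml
# Hypothetical CI job: validate and lint Puppet code on every push
lint:
  script:
    - puppet parser validate manifests/        # catches syntax errors
    - puppet-lint --fail-on-warnings manifests/ # enforces style guide
```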
Puppet can be a beast to keep up to date I will say, but if you have a good plan in place it's really a wonderful tool for everyone involved.
I wish there were more background on what git.io was and why it existed, as well as why it's now being discontinued in this announcement. This is the first time I'm hearing of this service.
The website, I feel, needs a bit of polish. All this work for what sounds like a pretty cool game, yet download information is nowhere near front and center. It's buried either under a menu or down at the bottom of the page, and even then it is split between "downloads" and "demo 401".
I really don't want to download that huge demo to figure out my system doesn't even support running it for example.
There appears to be a bug where you can get caught in a "death loop" on game start. Click to start the game, but let the bird fall and die immediately. Every subsequent restart then dies instantly the moment you click to play as the intro rises, looping you straight back to the start.