Co-location has never scaled down to 1U very well, because of the overheads: you can't just trust 40+ customers to slide their 1U servers into the same rack without creating security concerns and/or wasting a lot of space, and the provider takes on extra work with power distribution and networking that customers handle themselves with a full rack.
Just arranging access for 40x more customers adds up in admin overhead, and colocation isn't really a tech play as much as a real-estate play, similar to parking: the goal is minimum-effort rent extraction with as little staff as possible...
Regarding the security aspect and what the customer can do: you can bring your device and put it in a shared rack while support personnel accompany you. Power, Ethernet, and keyboard/screen (if needed) are connected and wired only by support staff.
You can rent a whole rack if you want dedicated access, and 1/2 or 1/4 racks are available if a full-size one is not needed.
The point was that the price for 1U tends not to be very competitive with renting dedicated servers (I can rent a server, hosting included, for that price), because the provider's overheads from subdividing that 42U rack add up. It wasn't that the security can't be dealt with, but that dealing with it is one more thing that pushes the cost per U higher if you rent 1U than if you rent a quarter rack or more.
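A rough back-of-envelope sketch of that amortization argument, with every dollar figure invented purely for illustration: any fixed per-customer overhead (contracts, supervised access, cabling, billing) gets spread over however many U the customer rents, so the effective cost per U falls as the allocation grows.

```python
# Hypothetical back-of-envelope: all dollar figures below are made up.
RACK_U = 42
FIXED_PER_CUSTOMER = 50.0   # assumed $/month fixed admin overhead per customer
BASE_COST_PER_U = 20.0      # assumed $/month underlying cost per U

for units in (1, 10, 21, RACK_U):   # 1U, ~quarter, half, full rack
    # Fixed overhead amortizes across the rented units.
    per_u = BASE_COST_PER_U + FIXED_PER_CUSTOMER / units
    print(f"{units:>2}U rental -> effective cost ${per_u:.2f} per U per month")
```

With these made-up numbers the 1U customer costs the provider $70/U/month against $21/U/month for a full rack; the exact figures don't matter, only that the per-U gap comes straight from the fixed per-customer term.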
A 1U can very easily contain 64+ physical cores, many TB of storage, and a few hundred GB of RAM. A 1U colo can be a great deal if you’re looking to use that much compute/storage.
The admin for those arrangements is pretty simple really. Even if you’re providing supervised access, it’s not going to be much work. I run several small colo deployments like this, and I probably only visit the sites every couple of years.
If you only need one VPS, then you potentially only need a tiny fraction of 1U worth of compute/storage. That’s not a sensible colo use case.
From the DC perspective, the biggest costs for providing colo are power, AC (which is mostly power), network and real estate. Supervising rack access is a very small line item in their accounts.
Supervising rack access and/or installing physical barriers was one of several reasons why the cost per U differs so much between buying 1U and buying a full rack. It may not be the most significant one, but it is there.
As for power and network, they're often charged separately, and you'll still find the 1U vs. full-rack difference there. Sure, you can perhaps assume a somewhat lower load factor for customers who rent a full rack, and that may contribute too.
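To make the load-factor point concrete, here is a sketch under assumed numbers (the wattages and headroom factor are invented): a provider selling 40 independent 1U slots has to provision power closer to each slot's worst case, while a single full-rack tenant can commit to an aggregate draw near typical load.

```python
# Hypothetical illustration of the load-factor difference; all figures invented.
SLOTS = 40
MAX_DRAW_PER_1U = 300.0      # watts a 1U server *could* pull (assumed)
TYPICAL_DRAW_PER_1U = 150.0  # watts a 1U server typically pulls (assumed)

# 40 independent customers: the provider hedges toward per-slot worst case.
provisioned_1u = SLOTS * MAX_DRAW_PER_1U
# One full-rack tenant: billed on a committed aggregate draw plus headroom.
committed_full_rack = SLOTS * TYPICAL_DRAW_PER_1U * 1.2  # 20% headroom (assumed)

print(f"40x 1U provisioning: {provisioned_1u / 1000:.1f} kW")
print(f"full-rack commit:    {committed_full_rack / 1000:.1f} kW")
```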
But the point remains: The person above me should not be surprised that renting space by the 1U slot is expensive.