SQL databases usually come with their own date types that are implemented as integers behind the scenes. They take up less space and are easier to sort than text fields.
EDIT: Also, I wouldn't consider this egregious. If the Senior explains and the person is happy to learn something, then that's a good outcome. If they are stubborn about it, then that wouldn't be great.
Though those types very likely store less information than an ISO timestamp does. (The integers stored are usually POSIX timestamps.) So you might have to store, e.g., timezone information elsewhere. (And that might also be a good thing. It might not. It depends. Without knowing more, the DB's type is probably the right choice.)
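For illustration only (a sketch with made-up field names, not any particular database's on-disk format): a POSIX timestamp is just a count of seconds since the epoch, so any zone you care about has to be carried separately.

    #include <time.h>

    /* Hypothetical record: the integer timestamp itself carries no zone,
       so the original political time zone has to be kept as a separate
       field if the application needs it. */
    struct stored_instant {
        time_t when;      /* seconds since 1970-01-01T00:00:00Z */
        char   zone[64];  /* e.g. "Europe/Berlin", kept as its own column */
    };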
I've seen far worse things than this done to a SQL database though…
Except that ISO-8601 doesn't include anything as useful as a political time zone, just an offset from UTC. That'd be good if a UTC offset were useful for anything, but instead it just adds to the confusion.
At least in PostgreSQL, it is standard practice to use the timestamptz type, which stores a timestamp with time zone awareness. Not only does it use less space on disk, but you can also do a lot of operations with it natively in the database itself: sort in various ways, compare to NOW(), construct time intervals, and even ensure data validity with checks.
In the spirit of nitpicking, class A, B, and C referred to specific address blocks, not network sizes. A /24 in the class A range was still class A rather than class C.
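To make that concrete, here's a rough sketch (plain C, hypothetical helper) of how the class is determined by the leading bits of the address itself, independent of prefix length:

    #include <stdint.h>

    /* Classful class is decided by the high-order bits of the address,
       not by how long a prefix you carve out of it. */
    static char ip_class(uint32_t addr)          /* address in host byte order */
    {
        if ((addr & 0x80000000u) == 0x00000000u) return 'A'; /* 0xxx: 0.0.0.0   - 127.255.255.255 */
        if ((addr & 0xC0000000u) == 0x80000000u) return 'B'; /* 10xx: 128.0.0.0 - 191.255.255.255 */
        if ((addr & 0xE0000000u) == 0xC0000000u) return 'C'; /* 110x: 192.0.0.0 - 223.255.255.255 */
        if ((addr & 0xF0000000u) == 0xE0000000u) return 'D'; /* 1110: multicast */
        return 'E';                                          /* 1111: reserved  */
    }

    /* 10.5.6.0/24 is 0x0A050600: ip_class() returns 'A', because the /24
       prefix length doesn't change which class block the address sits in. */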
I worked on network/firewall infrastructure at a rather large bank from 2009 to 2019, and we used the class labels and "slash 24s" etc. interchangeably when talking. Not that you're incorrect, just that the slang was used and everyone knew what people meant by it.
I might be biased because I'm currently working on this same optimization, but for GCC. I think it is more complicated than it seems. Defining the layout of a structure is normally done at parse time (at least in GCC), and one cannot know whether it is safe to reorder fields until all the way at link time.
This means that a lot of optimizations and assumptions that were made might no longer be valid. For example, in GCC the results of the sizeof operator are evaluated at parse time. One might need to change these values at link time. Otherwise, statements like:
memset(&astruct, 0, sizeof(astruct));
will overwrite memory that is no longer part of the struct.
Furthermore, constant propagation might have propagated the value of a sizeof operator, and it might have been folded into other arithmetic operations. This can be solved in different ways, but if the compiler was not built with this in mind, it might take some time to implement. Also, all pointer arithmetic that depends on field offsets will need to be changed...
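As a minimal sketch of why (hypothetical struct and function, not GCC internals), the old layout ends up baked into constants the front end already produced:

    #include <stddef.h>
    #include <string.h>

    /* On a typical LP64 target this struct is 24 bytes; reordering it to
       {double b; int a; int c;} would shrink it to 16, but every constant
       derived from the old layout below would then be stale. */
    struct astruct {
        int    a;   /* 4 bytes + 4 bytes of padding before b */
        double b;   /* 8 bytes */
        int    c;   /* 4 bytes + 4 bytes of tail padding */
    };

    void reset(struct astruct *p)
    {
        /* sizeof is folded to the constant 24 at parse time; with the
           smaller reordered layout this would write past the struct. */
        memset(p, 0, sizeof(struct astruct));

        /* offsetof is folded the same way (8 here), so pointer arithmetic
           built on it would also need to be rewritten at link time. */
        char *field_b = (char *)p + offsetof(struct astruct, b);
        (void)field_b;
    }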
I agree 100% that a developer isn’t required to implement any feature. One thing to think about is that it’s not necessarily a single user or a single team weighing costs and benefits. In an organization with separate dev and ops teams, a dev team may pick a solution for the benefits but the ops team bears the costs of operations and security.
You can also use the same basic CI/CD pipeline to manage infrastructure on different providers, instead of a custom pipeline for each provider tailored to its tooling.