So, you have a duty of care to make a safe workplace, at least in most countries.
Consider what a job with no joy means for the ongoing mental health of your staff, when the main interaction they have all day is bossing around an AI model, with little training on norms.
Depression, frustration, nonchalance, isolation, and corner cutting are going to be the likely responses.
So at the same time as you introduce new tooling, introduce the quality controls you would expect for someone utterly checked out of the process, and the human-resources policies or prevention measures to avoid your team speed-running Godwin's law because they don't deal with people enough to remember social niceties are important.
Examples, off the top of my head, of ways to do this:
- Increased socialisation in the design processes. Mandatory fun sucks, but a whiteboard session and real collaboration will bring some creativity and shared ownership.
- Budget for AI-minimal or AI-free periods, where the intent is to do a chunk of work "the hard way", and have people share what they experienced or learnt.
- Make people test each other's work (manual testing) or collaborate; otherwise you will end up with a dysfunctional team that reaches for "yell in all caps to make sure the prompt sticks" as its default way of talking to each other and dealing with conflict.
The way to justify this to the management above you is the cost of staff retention: advertise, interview, hire, pay market rates, equip, train, then six months later securely off-board, collect the hardware, and run the exit interview. That means you get maybe four months of productivity out of each person, and pay two months of salary in early-job mistakes, late-job not caring, or HR debacles.
Do you or your next level up want to spend 30% more time doing this process? Or would you rather focus on generating revenue with a team that works well together and is on board for the long term?
The answer most of the time is "we want to make money, not spend it". So do the math on what staff replacement costs are and then argue for building in enough slack to the process that it costs about half of that to maintain it/train the staff/etc.
Your company is now making a "50% efficiency gain" in the HR funnel, year over year, all by simply... not turning the dial up to 10 on forced AI usage.
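The back-of-the-envelope math above can be sketched like this (all figures are illustrative assumptions, not real data, and the cost buckets are the ones named in the argument: hiring overhead, wasted salary, and lost productivity):

```python
# Back-of-the-envelope staff-replacement cost model.
# All figures are illustrative assumptions, not real data.

def replacement_cost(monthly_salary, hiring_overhead_months=2.0,
                     wasted_salary_months=2.0, lost_productivity_months=2.0):
    """Rough cost of churning one employee, in currency units."""
    return monthly_salary * (hiring_overhead_months
                             + wasted_salary_months
                             + lost_productivity_months)

def retention_budget(monthly_salary):
    """The argument: build in slack worth about half the replacement cost."""
    return replacement_cost(monthly_salary) / 2

salary = 8_000  # hypothetical monthly salary
print(replacement_cost(salary))   # cost of losing and replacing one person
print(retention_budget(salary))   # budget you can justify spending to keep them
```

With these made-up numbers, churning one person costs six months of salary, so spending up to three months of salary per head on slack, training, and sane working conditions still comes out ahead.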
I'm applying gentle pressure, not forcing everyone to use it. If necessary, I will fight for my team as much as I can, but that's not where we're headed and I would think about switching jobs if it ever is.
Having said that: The dichotomy expressed in the threads here is a bit too extreme for my taste. It's not like working with AI is pure Yes-clicking review dread; there is joy to be found in materialising your ideas out of thin air, instead of the Lego-like puzzle solving experience many developers are used to.
And as mentioned in TFA, there's risk in both using it too little and too much. This also applies to employees, of course: if I shielded junior developers from AI tools, they'd end up in their next job utterly unprepared for what may be required from them as the world keeps spinning.
> Framed like that, sounds a lot better doesn't it?
Sure does, but that's not the situation I'm in. I'm trying to figure out the local maximum of keeping my company afloat in a world where AI has kicked the PMF from under our feet to the other end of the playing field, and ensuring my team stays happy, curious, and engaged. And I'm not the only one in this spot, I suppose.
> It's not like working with AI is pure Yes-clicking review dread; there is joy to be found in materialising your ideas out of thin air
I think that's true for some developers, and not for others. My guess is that one subset of developers has more ideas than they have time/resources to implement, and they enjoy programming because they love seeing the finished product emerge. I think this subset is more likely to go into management, because it's a force multiplier for them. They're the ones getting joy out of seeing AI make their ideas into reality.
But there's another subset who enjoys programming not because they love to see a product emerge, but because they enjoy the process itself: the head-scratching, the getting past "why won't this work" to the moment when the build starts working again or the site comes back up or the UI snaps into place. It's the magic of finding, among all the possible wrong answers, the exact right combination of bits that solve the problem. This subset is not getting any joy from AI: they're seeing AI take away that whole process and turn it into the kind of work their managers and their project owners do. It's made even worse because their managers don't even understand why they're so unhappy. I think managers would do well to consider how they're going to keep these folks happy and engaged and productive, because they're the ones who are going to be fixing the production bugs introduced by their teammates' AI commits. If they've all gone off to retrain as electricians, we're going to have a problem as an industry.
I meant in the physical sense of muscles/physical adaptations. I haven't read/written C++ in 10 years - it would take me a month to get back up to speed. If you've never written C++, it would take you at least six months to a year to get to the same level (depending on what we're comparing here).
Likewise for physical exercise - it took me a year to hit a 100kg squat when I started getting into shape 10 years ago. I haven't been very physically active for years, but I'd hit 100x5 within a month of starting at the gym again.
The problem is that this is the difference between one or two obscure skills fading with disuse (normal) and potentially all ability to load programming information into your working memory being affected, because you never developed the neural pathways or knowledge of the codebase (not normal, and not desirable).
While there is a spectrum around when you choose to use AI, what seems increasingly common in my experience is people trying to go "all in": they feel frustration and burnout when relegated to babysitting an LLM; get angry when it makes a mistake, misinterprets something, or simply leaves something obvious out; then decide it's user error, that they didn't prompt well enough, that it's their fault. At the same time, they become increasingly cognitively blind to mistakes at the review stage, so they find out the hard way in production and enter a cycle of hypervigilance, distrust, and justifiable paranoia.
In those cases, it's a recipe for skills loss and depression over the long term and a vicious cycle.
Basically had the same urge to write about this problem, prompted by the exact same comments around mental fatigue this week. Only got to the research stage.
Here's some of the literature I dug up when looking at the potential risk to cognition when you don't enjoy what you are doing.
Working memory is "gated"; you selectively process information relevant to a goal - which is why you need to turn the radio off to reverse a car.
(Numerous papers take it as a given, can't find a specific one developing the exact model of gating)
I would argue that typing is better than just reading, and programming requires some extra elements - you cut and paste to rearrange, run tests, iterate, and spatially navigate to where various areas of your code are - so it is likely closer to the findings around handwriting than the study suggests. But I don't have specific studies on that.
"Participants performed a delayed-estimation orientation working memory (WM) task with reward cues indicating reward levels at the beginning of trials. The results revealed that motivational incentives significantly improved WM performance and increased pupillary dilation during maintenance. These findings provide evidence for the modulation of WM maintenance by reward through enhanced top-down cognitive control processes."
https://www.jneurosci.org/content/39/43/8549
> "During the task, the prospect of reward varied from trial to trial. Participants made faster, more accurate judgements on high-reward trials. Critically, high reward boosted neural coding of the active task rule, and the extent of this increase was associated with improvements in task performance"
You can also infer from their experiments that low reward = less care exercised.
I feel like a lot of these papers aren't really surprising, but they do measure something that many people have probably felt is true but can't prove.
While these papers don't talk about AI or decline in skills specifically, it's reasonable to say you don't get many of the benefits when it is low reward/passive task execution; where you are leaving review comments that are just reprompting a machine - you know it's not a person, so it feels even lower value to engage than a standard code review might.
I think overall, the rule of thumb around when to use AI should be closely linked to how painful / low reward a task is likely to be. Debugging something with a 10 minute build/test loop and a mystery problem that is not easy to control? AI party.
Writing a complex but fun set of business rules? Run it on your wetware while it is still giving you a sugar hit. An "easy" bug you have stuffed up fixing three times in a row? Push through a bit of discomfort and frustration, but fall back to tooling when you have invested reasonable effort and are starting to feel slightly fatigued.
Your best bet is probably to look for wikidata entries that are marked defunct; and match up to something like name-suggestion-index to get broad categories.
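As a rough sketch of what that lookup might involve, here is one hedged approach: Wikidata commonly records defunct entities via property P576 ("dissolved, abolished or demolished date"), queryable through the public SPARQL endpoint at query.wikidata.org. The property choice and the cross-referencing against name-suggestion-index are assumptions you would need to validate against your actual data; `defunct-check/0.1` is a made-up User-Agent string.

```python
# Sketch: find Wikidata entities marked as defunct, as candidates to
# cross-reference against something like name-suggestion-index.
# Assumes property P576 ("dissolved, abolished or demolished date") is how
# "defunct" is recorded; adapt to whatever marker your data actually uses.
import json
import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def defunct_query(limit=50):
    """Build a SPARQL query for entities with a dissolution date (P576)."""
    return f"""
    SELECT ?item ?itemLabel ?ended WHERE {{
      ?item wdt:P576 ?ended .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}
    """

def run_query(query):
    """Execute the query against the public endpoint (network required)."""
    url = SPARQL_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "format": "json"})
    req = urllib.request.Request(
        url, headers={"User-Agent": "defunct-check/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```

From there you would match each item's label against name-suggestion-index entries to bucket the results into broad categories.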
You might need to go back and read that one again, this is the faintest criticism of a lengthy screed in which the person you are replying to labels user-hostile behaviours as "acting like a jerk" and generally disapproves.
Your counter argument to this is to just be contrarian and imply they are a jerk... because, well, you don't agree with them. You didn't add substance to the discussion (facts, evidence, argument seeking middle ground), you just sought to set fire to someone because you were uncomfortable with the dim prospect you might be wrong/guilty of acting like this/be the subject of the criticism.
Do you see how this undermines your point of view and actually reinforces the validity of the criticism?
You make one really good birthday cake. Following the success of this, you went to your local school fete out of the goodness of your heart and set up a cake stall, had a complaints-and-suggestions box on the table, maybe even had a donation tin out. You know it's out of the goodness of your heart because everyone will SEE you doing this, and maybe you'll get hired by the local bakery.
But then it's a bit of a long day, and you start screaming at everyone who comes up to you for wasting your time, reject requests to not put broken glass fragments in the cakes, and get into a fistfight with the local health inspector who points out you need certain food-prep hygiene practices. You get big mad and leave your stall in a huff, where hapless strangers stumble across your cakes, only to find they are now covered in bugs, and get sick from eating them.
Would this be acceptable or unacceptable behaviour on your part?
Are you, as the cake stall operator, taking advantage of the commons in any way (donations, showing off your bake-folio)?
Are you damaging the commons or people visiting the commons?
Does your free speech expressed in cake form outweigh the rights of people to tell you to change what you are doing?
Does your freedom of expression mean you should never be accountable?
Should people be thankful that you let them have cakes covered in bugs, even if they get sick as a result?
Does the local health inspector who is an expert in a domain that overlaps with everything food have any standing?
This is a contrived thought exercise; obviously.
But I would bet that you can clearly identify that violating social norms isn't great; that access to a commons carries implied standards of behaviour for all parties; that you have expectations around quality versus general safety, and so on.
Now imagine I make a weird cake and I think it's interesting. I put up a poster with a photo and a recipe and say "thought this was cool, try it if you want." And then some nonce comes along and tells me off for a reckless disregard of other people's time and nerves. Compares it to an open manhole cover that could get somebody killed.
Throwing some interesting code onto a web site isn't like setting up a booth at a community event. It's not even really like putting up a poster, since posters get seen by whoever happens to come nearby whereas web sites only get seen by people who seek them out, but it's about the closest you'll get to a real-world analogy.
Why are you trusting data to some random open-source project with no documentation?
The search engine is only going to direct you to my open source repo if you're searching for whatever it does. It's as if you'd only see my cake recipe if you were searching for cake recipes. And just like cake recipes, your search results will contain everything from superb production-tested projects (if there are any) to random stuff people have put up that isn't really used.
If you're searching for software and you find some random project that isn't very well tested or maintained, and you put that project to use in a place where it can cause data loss, that again sounds like a you problem.
If I post an article about how drinking bleach makes your skin softer, I share responsibility when someone does it.
If I post an article about how to make your own bleach, and a reader says “that sounds tasty” and drinks some, that’s not my responsibility in any way.
If I put up some trash code with a README that says “this is solid, reliable code that you should use for storing all of your financial data and family photos,” I have responsibility for what happens when people do that. If I just put up some trash code and say, I thought this was interesting and wanted to share it, and some numbskull decided to use it for something critical without thoroughly evaluating it first, not my responsibility.
> README that says “this is solid, reliable code that you should use for storing all of your financial data and family photos,”
Show me a readme like that! I'll wait.
Everybody writes a legalese disclaimer that basically says it's trash software and the author has no responsibility, but here's the thing: everybody ignores it. This is the reality of FOSS software.
Nobody has the time to audit the code of every FOSS project they use. We all assume some basic quality, such as not deleting /var/db, and the responsibility is yours not to do that, or not to publish it, no matter what you wrote in the readme/disclaimer.
“SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured, SQL database engine. SQLite is the most used database engine in the world. SQLite is built into all mobile phones and most computers and comes bundled inside countless other applications that people use every day.”
You don’t have to fully audit what you use, but you’d better do some basic vetting. If there’s no web site, no documentation, no activity on the issue tracker, then maybe when you put all your precious data into the thing and lose it, that’s your problem.
- Don't publish a code of conduct and then be an absolute asshole to contributors (pick a lane and stick to it)
I feel there is a lot of performative policy published, which at the end of the day is lip service. Actual users or contributors come along and follow the guidance, expectations, etc? They then find themselves treated like a hostile entity and there is a weird prevailing attitude here that's "fine".
As others have expressed, sycophancy is not leadership.
How do you "safely push for change" in private if your executive leadership display sociopathic or narcissistic behaviors, where they expressly do not care about the harm they inflict on others?
Polls show that about a quarter of employees see something unethical at work, and half of those who do don't report it, because they think nothing will happen or that they will be retaliated against.
This means that individuals who are doing misdeeds perceive there are no consequences. Part of your role is to surface that there are consequences; and you bringing them up now is far less expensive than a lawsuit later.
While you can absolutely choose your battles - and there are some fights that are ultimately harmful to you and achieve no great outcome - you are not a leader if you do not advocate for your team when obviously unjust things occur.