I think it’s very important to be clear what studies like this are actually doing.
This study, although it has been produced by a computer science department, belongs more to the field of sociology or media studies than it does to computer science.
This is a study about the way in which human beings consume a particular media product - a consumer AI chatbot - not a study about the technological limitations or capabilities of LLMs.
The social impact of particular pieces of software is a legitimate field of study and I can see the argument that it belongs in the broadly defined field of computer science. But this sort of question is much more similar to ‘how does the adoption of spreadsheet software in finance impact the ease of committing fraud’ or ‘how does the use of presentation software to condense ideas down to bullet points impact organizational decision making’. Software has a social dimension and it needs to be examined.
But the question of which models were used is of much less relevance to such a study than that they used ‘whatever capability is currently offered to consumers who commonly use chat software’. Just as in a media studies investigation into how viewing cop dramas affects jury verdicts, the question is less ‘which cop dramas did they pick to study?’ than whether the ones they picked were representative of what typical viewers see.
Yes, but as soon as you start checking in and sharing access to a project with other developers, these things become shared.
Working out how to work on code on your own with agentic support is one thing. Working out how to work on it as a team where each developer is employing agentic tools is a whole different ballgame.
1. Provision of optional tools: I may use an AI agent differently from every other dev on a team, but it seems useful for me to have access to the same set of project-specific commands, skills, and MCP configs that my colleagues do. I'm not forced to use them, but I can choose to on a case-by-case basis.
2. Guardrails: it seems sensible to define a small subset of things you want to dissuade everyone's agents from doing to your code. This is like the agentic extension of coding standards.
Most people do; most people don't have wildly different setups, do they? I'd bet there's a lot in common between how you write code and how your coworkers do.
In my own group, agentic coding made sharing and collaboration go out the window, because Claude will happily duplicate a bunch of code in a custom framework.
I have two lines in almost every single one of my AGENTS.md files:
- Under no condition should you use emojis.
- Before adding a new function, method, or class, scan the project code base and attached frameworks to verify that nothing existing can be modified to fit the need.
I'm curious about the token usage when it scans across multiple repositories to find similar methods. As fast as our code base grows, is it sustainable?
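A rough back-of-envelope sketch of that cost, using the common heuristic of roughly four characters per token; the repo sizes and per-token price below are invented, illustrative numbers:

```python
# Back-of-envelope: what would it cost an agent to read a whole code base
# looking for similar methods before every new addition? Numbers are made up.

CHARS_PER_TOKEN = 4          # rough heuristic for source code
REPO_SIZE_MB = [5, 20, 80]   # hypothetical growth of the code base over time
PRICE_PER_MTOK = 3.00        # assumed input price in $ per million tokens

for mb in REPO_SIZE_MB:
    tokens = mb * 1_000_000 / CHARS_PER_TOKEN
    cost = tokens / 1_000_000 * PRICE_PER_MTOK
    print(f"{mb:>3} MB of code ≈ {tokens / 1e6:.1f}M tokens ≈ ${cost:.2f} per full scan")
```

In practice agents tend to grep for candidate names rather than reading everything, so real usage should sit well below this worst case; but the worst case scales linearly with repo size, which is exactly the sustainability worry.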
I think the idea is that by creating these shared .claude files, you tell the agent how to develop for everyone and set shared standards for design patterns/architecture so that each user's agents aren't doing different things or duplicating effort.
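A minimal sketch of what that shared, checked-in setup might look like, assuming Claude Code's project-level file conventions (other tools read AGENTS.md similarly); the file contents, the "docs" server, and the package name here are hypothetical:

```text
repo/
├── CLAUDE.md                # shared guardrails & standards the agent reads
├── .mcp.json                # project-scoped MCP servers, checked into version control
└── .claude/
    └── commands/
        └── review.md        # optional shared slash command: /review

# .mcp.json uses the standard MCP config shape; "docs" and the package are made up:
{
  "mcpServers": {
    "docs": { "command": "npx", "args": ["-y", "@example/docs-mcp"] }
  }
}
```

The point is that these files travel with the repo, so every developer's agent starts from the same commands, servers, and guardrails, while each person remains free to ignore them.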
The effects of the AI hyperscaling boom on the commodity hardware and energy markets are very much not like those of the dot com boom.
Beyond the obvious economic effect of the dot com boom - the creation of near-infinitely scalable, high-margin online businesses - there was a secondary effect on consumer electronics: massive growth in demand for networked devices. Hardware growth was therefore much more balanced between the network infrastructure and data center worlds on the one hand and desktop and mobile on the other.
The AI boom’s hardware impact is much more skewed, as this article details.
The fact that there’s been a massive expansion in the nonconsumer market means the consumer market makes up a smaller proportion of the overall market, but it doesn’t mean the consumer market is any smaller than it used to be.
Seems regionally biased. This map makes it look like the Americas barely see any ship traffic, while the South China Sea is paved with ships from shore to shore.
As I understand it, marinetraffic works by having volunteer-run AIS receivers near shore forward any received contacts to an API. If this map works the same way, then there are probably far fewer receivers in the Americas so far.
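A minimal sketch of that crowdsourced model, assuming a receiver that emits standard NMEA AIVDM sentences over UDP and a hypothetical HTTP ingestion endpoint (the URL and station ID are made up; real aggregators like MarineTraffic define their own feed formats):

```python
import socket
import requests  # third-party; pip install requests

# Hypothetical ingestion endpoint -- real AIS aggregators define their own.
API_URL = "https://example.com/ais/report"
STATION_ID = "station-123"  # made-up identifier for this receiver

# Many AIS receivers broadcast raw NMEA sentences (!AIVDM ...) on local UDP.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 10110))  # 10110 is the conventional NMEA-over-UDP port

while True:
    data, _ = sock.recvfrom(1024)
    sentence = data.decode("ascii", errors="ignore").strip()
    if sentence.startswith("!AIVDM"):  # AIS position/static reports
        # Forward the raw sentence; the aggregator decodes and plots it.
        requests.post(API_URL, json={"station": STATION_ID, "nmea": sentence})
```

Coverage then depends entirely on where volunteers happen to run receivers: VHF AIS reception is line-of-sight, so a coastline with no stations shows up as empty sea regardless of actual traffic.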
There is no meaning in converting a conventionally destructive, random, chaotic act into a directed, aesthetic, meaningful one?
The fact he has a portrait of Kamala Harris called “glass ceiling breaker” and one of the victims of the Beirut explosion called #weareunbreakable suggests that you don’t need to dig particularly deep to find meaningful subtext in the choice of material and technique.
This is what I was driving at; I should have been more specific and said not particularly meaningful or evocative to me. From the previews I've seen, it's all based around shattering and breaking. Where I will give credit, there's one piece, "Transformation", where natural light is reflected off the shattered glass to portray a face, which I find fascinating. The rest feel kitschy; they're not quite to my taste.
Outside my house right now it’s a cold, still evening with a high overcast. My expectation based on my years of experience living here and having seen these conditions before would be that it would likely clear out overnight, freeze hard, and be a beautiful day tomorrow.
In fact, though, a massive bomb cyclone is forming a few hundred miles away and it’s likely to dump over a foot of snow on us in the next 24 hours, accompanied by 50mph winds.
Weather forecasts are, not surprisingly, actually useful.
It's a numbers game. You only need one in twenty con artists to become wildly successful before they're caught, and your overall con artist portfolio is guaranteed to win out.
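A quick worked version of that numbers game; the portfolio size, stake, and 100x payoff are illustrative assumptions, not anyone's actual figures:

```python
# Expected value of a 'con artist portfolio': 19 of 20 bets go to zero,
# 1 of 20 returns 100x before anyone gets caught. Numbers are illustrative.

n_bets = 20
stake = 1.0         # invested per founder
hit_rate = 1 / 20   # one wildly successful exit in twenty
payoff = 100.0      # multiple on the one that works

expected = n_bets * stake * hit_rate * payoff  # 100.0 back on 20.0 in
print(f"Invested {n_bets * stake:.0f}, expect back {expected:.0f} "
      f"-> {expected / (n_bets * stake):.0f}x overall")
```

Under those assumptions the portfolio returns 5x even though 95% of the individual bets are worthless, which is why screening out the occasional fraud is not obviously in the investor's financial interest.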
And of course, there's no downside for the investors. If you backed a con artist, you're not culpable - you're a victim.