In case someone is unaware, 641A and Utah are both references to US mass surveillance systems in this context. Specifically, interceptors that a company wouldn't be able to prevent from saving your data during the few seconds the company needs to process and delete it.
I might be misremembering, but AFAIK, that kind of surveillance mostly worked because many companies didn't bother encrypting datacenter-to-datacenter traffic, thinking that those networks are trusted. That mistake has since been rectified though.
With almost everything going over TLS these days and HTTPS being the norm, even for server-to-server APIs, it's much harder to snoop on traffic without the collaboration of one of the endpoints, and the more companies you ask for that kind of collaboration, the higher your risk of an unhappy employee becoming a whistleblower.
That's also about US companies that can't refuse, or can't be bothered to challenge, a dragnet being set up in their systems.
ISPs themselves didn't save any data.
However, they gave the NSA interception rooms (so technically it was indeed not the ISPs doing the saving).
Nowadays ISPs aren't at the right scale to do it, for the reasons you mentioned. But the USA lowkey moved the dragnet to the main datacenters with PRISM, then made it mandatory for everyone with the CLOUD Act.
And if the threat is not coming from the USA, but some other country starts asking Discord to BCC them the IDs of their citizens, we can guess the odds of Discord challenging it.
Now I want to ask Discord: who is their third-party provider? Why don't they process IDs themselves?
Unless you use Cloudflare (or roughly any other DDoS protection system), in which case you're letting those companies MITM all requests on purpose. The traffic is protected between you and Cloudflare by PFS and whatever other acronym you like, but Cloudflare itself sees the plaintext.
I think the odds that Cloudflare hasn't been forced into data snooping by the government are approximately zero. It's by far the biggest, juiciest target.
Before bots automated vulnerability detection for hackers trying to breach networks, honeypots were made to trap people. Just because the honeypot's designer was also a person doesn't mean it couldn't trap another person.
Ahh, but don't you know? A picture is worth a thousand words, so by that measure your charts would be upping the word count, not reducing it by an order of magnitude.
In my case, I set some coursework where they have to log in to a Linux server at the university and process a load of data, get the results, and then write an essay about the process. Because the LLM hasn't been able to log in and see the data, or indeed the results, it doesn't have a clue what it's meant to talk about.
for most of the low-hanging fruit it's as easy as copy-pasting the question into multiple LLMs and logging the output
do it again from a different IP or two.
there will be some pretty obvious patterns in responses. the smart kids will do minor prompt engineering "explain like you're Peter Griffin from Family Guy" or whatever, but even then there will be some core similarities.
or follow the example of someone here and post a question with hidden characters that will show up differently when copy-pasted.
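Just to make the hidden-character idea concrete, here's a minimal sketch (my own illustration, not anyone's actual setup; the function names and the particular zero-width characters are assumptions):

```python
# Seed the assignment text with zero-width characters that survive copy-paste
# but are invisible on screen, then scan submissions for them.

ZERO_WIDTH = ("\u200b", "\u200c", "\u200d")  # zero-width space / non-joiner / joiner

def watermark(question: str, marker: str = "\u200b\u200c\u200b") -> str:
    """Insert an invisible marker after the first word of the question."""
    first, sep, rest = question.partition(" ")
    return f"{first}{marker}{sep}{rest}"

def looks_pasted(text: str) -> bool:
    """Text copied from the watermarked question may still carry the marker."""
    return any(ch in text for ch in ZERO_WIDTH)

q = watermark("Explain the difference between a process and a thread.")
print(looks_pasted(q))                                        # True
print(looks_pasted("A process has its own address space."))   # False
```

Of course a careful student (or a chat UI that strips control characters) defeats it, so it only catches the laziest copy-paste.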
I'm incredibly impressed that you managed to make that whole message without a single usage of the most frequently used letter, except in your quotations.
Such omission is a hobby of many WWW folk. I can, in fact, think back to finding a community on R*ddit known as "AVoid5", which had this trial as its main point.
I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".
It's not oft that you run into such alluring confirmation of your point.
My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.
No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)
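Same "look, don't count" idea outside the browser, as a one-liner sketch (the file name is made up):

```python
# An existence check is all you need; the exact count doesn't matter.
text = open("draft.txt", encoding="utf-8").read()
print("e" in text.lower())
```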
I felt like the article had a good argument for why the AI hype will similarly be unsuccessful at erasing developers.
> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.
What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
LLMs don't learn from their own mistakes in the same way that real developers and businesses do, at least not in a way that lends itself to RLVR.
Meaningful consequences of mistakes in software don't manifest themselves through compilation errors, but through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
> through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
That is, the problems are a) how to generate a training signal without formally verifiable results, b) hierarchical planning, c) credit assignment in a hierarchical planning system. Those problems are being worked on.
There are some preliminary research results that suggest that RL induces hierarchical reasoning in LLMs.
My argument would be that while some complexity remains, it might not require a large team of developers.
What previously needed five devs might be doable by just two or three.
In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. The research and thinking through the solution go much faster using AI, compared to before, when I had to look everything up.
In some cases, agentic AI tools are already able to ask the questions about architecture and edge cases, and you only need to select which option you want the agent to implement.
There are shortcuts.
Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.
> evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve
I think you are basing your reasoning on the current generation of models. But if future generations are able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that when they appear, they will actually displace developers rather than create more jobs for them.
The business problem will be specified by business people, and even if they get it wrong it won't matter because iteration will be quick and cheap.
> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of these things, what work will be left for developers?
It's not obvious at all. Some people believe that once AI can do the things I've listed, the role of developers will change rather than be replaced (because advances have always led to more jobs, not fewer).
A $3 calculator today is capable of doing arithmetic that would require superhuman intelligence to do 100 years ago.
It's extremely hard to define "human-level intelligence", but I think we can all agree that the definition changes with the tools available to humans. Humans seem remarkably well suited to adapting so that they operate at the edges of what the technology of the time can do.
> that would require superhuman intelligence to do 100 years ago
It required a ton of people of ordinary intelligence doing routine work (see Computer (occupation)). On the other hand, I don't think anyone has seriously considered replacing, say, von Neumann with a large collective of laypeople.
We are actually already at the level of a magic genie or some sci-fi-level device. It can't do everything, obviously, but what it can do is mind-blowing. And the basis of the argument is obviously right: potential possibility is a really low bar to pass, and AGI is clearly possible.
Trees are not static, unchanging things that pop into existence and can be forgotten about.
Trees that don't get regular "updates" of adequate sunlight, water, and nutrients die. In fact, too much light or water could kill them. Or soil that is not the right coarseness or acidity level could hamper or prevent growth. Now add "bugs": literal bugs, diseases, and even competing plants that could eat, poison, or choke the tree.
You might be thinking of trees that are indigenous to an area. Even these compete for resources and face the plagues of their area, but they are better adapted than trees accustomed to different environments, and even they go through the cycle of life.
I think his analogy was perfect, because this is the first time coding could resemble nature. We are just used to carefully curated, human-made code, as there has never before been such a thing as naturally occurring code with no human interaction.
I could be misinterpreting the parent myself, but I didn't bat an eye at the comment because I interpreted it similarly to "everything humans (or anything, really) do increases net entropy, which is harmful to some degree for the Earth". I wasn't considering the moral good vs. harm that you bring up, so I had been reading the discussion from the priorities of minimizing unnecessary computing scope creep, where LLMs are being pointed to as a major aggressor. While I don't disagree with you and those who feel that statement is anti-human (another responder said this), this is what I think the parent was conveying, not that all human action is immoral to some degree.
Yes, this is what I meant. I used the word "harmful" in the context of the argument that LLMs are harmful because they consume resources (i.e., increase entropy).
But everything humans do does that. Everything increases entropy. Sometimes we find that acceptable. So when people respond to Pike by pointing out that he, too, is part of society and thus cannot have the opinion that LLMs are bad, I do not find that argument compelling, because everybody draws that line somewhere.
The OS isn't really relevant to which Pixel you get. Compare the Pixels you like that are new (GrapheneOS drops support as models become older flagships, I think for security reasons) and get that one. IIRC, currently only Pixels are supported, because the bootloader can be unlocked without rooting the device.