Hacker News | norir's comments

Communism is neither the opposite of laissez-faire capitalism nor the only alternative.

Model competition does nothing to address monopoly consolidation of compute. If you have control over compute, you can exert control over the masses. It doesn't matter how good my open source model is if I can't acquire the resources to run it. And I have no doubt that the big players will happily buy legislation to both entrench their compute monopoly/cartel and control what can be done using their compute (e.g. making it a criminal offence to build a competitor).

Model competition means that users have multiple options to choose from, so if it turns out one of the models has biases baked in, they can switch to another.

Which incentivizes the model vendors not to mess with the models in ways that might lose them customers.


I don't think anyone considers biases more important than, say, convenience. The model that only suggests Coca-Cola brands will win over the one that's ten times slower because it runs on your computer.

> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.

I disagree. The code you wrote is a collaboration with the model you used. By framing it this way, you are taking credit for the work the model did on your behalf. There is a difference between "I wrote this code entirely by myself" and "I wrote the code with a partner." For me, it is analogous to the author of the score of an opera taking credit for the libretto because they gave the libretto author the rough narrative arc. If you didn't do it yourself, it isn't yours.

I generally prefer integrated works or at least ones that clearly acknowledge the collaboration and give proper credit.


The way I put it is: AI assistance in programming is a service, not a tool. It's like you're commissioning the code to be written by an outside shop. A lot of companies do this with human programmers, but when you commission OpenAI or Anthropic, the code they provide was written by machine.

Also it's not only the work of "the model" it's the work of human beings the model is trained on, often illegally.

Copyright infringement is a tort. “Illegal” is almost always used to refer to breaches of criminal law.

This seems like intentionally conflating them to imply that appropriating code for model training is a criminal offense, when, even in the most anti-AI, pro-IP view, it is plainly not.


> “Illegal” is almost always used to refer to breaking of criminal law.

This is false, at least in general usage. It is very common to hear about civil offenses being referred to as illegal behavior.



> There are four essential elements to a charge of criminal copyright infringement. In order to sustain a conviction under section 506(a), the government must demonstrate: (1) that a valid copyright; (2) was infringed by the defendant; (3) willfully; and (4) for purposes of commercial advantage or private financial gain.

I think it’s very much an open debate if training a model on publicly available data counts as infringement or not.


I'm replying to your comment about infringement being a civil tort versus a crime: it can be both.

Or for another analogy, just substitute the LLM for an outsourced firm. Instead of hiring a firm to do the work, you're hiring a LLM.

I was about to argue, and then I suddenly remembered some past situations where a project manager clearly considered the code I wrote to be his achievement and proudly accepted the company's thanks.

How many JavaScript libraries does the average Fortune 1000 developer invoke when programming?

That average Fortune 1000 developer is still expected to abide by the licensing terms of those libraries.

And in practice, tools like npm make sure to output all of the libraries' licenses.


Prompting the AI is indeed “do[ing] it yourself”. There’s nobody else here, and this code is original and never existed before, and would not exist here and now if I hadn’t prompted this machine.

Sure. But the sentence "I am a programmer" doesn't fit with prompting, just as much as me prompting for a drawing that resembles something doesn't make me a painter.

Exactly. He's acting as something closer to a technical manager (who can dip into the code if need be but mostly doesn't) than a programmer.

So, what's your take on Andy Warhol, or sampling in music?

The line gets blurrier the more auto-complete you use.

Agentic programming is, at the end of the day, a higher-level auto-complete with extremely fuzzy matching on English.

But when you write a block and let Copilot complete 3, 4, 5 statements, are you really writing the code?


Truth is the highest level of autocomplete

I consider LuaJIT a much better choice than bash if both maintainability and long-term stability are valued. It compiles from source in about 5 seconds on a seven-year-old laptop and only uses C99, which I expect to last basically indefinitely.

A precomputed lookup table would be about 1MB, covering all Unicode code points. The lookup code would first compute the code point (and could also do validation) and directly look up the class in the table. The lookup table would not need to be embedded directly in Go code and could just be stored in a binary file. But I'd imagine it could also be put in an array literal in its own file that would never be opened by an IDE, if the program needs to be distributed as a single binary.
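A minimal Go sketch of that table-driven approach (the names `classTable` and `classOf`, and the toy class data, are mine for illustration, not from the comment above):

```go
package main

import "fmt"

// Unicode defines code points 0 through 0x10FFFF, so one byte per
// code point gives a table of 1,114,112 bytes (~1MB).
const maxCodePoint = 0x110000

// classTable holds one class byte per code point. In a real program it
// would be generated offline and loaded from a binary file, or emitted
// as an array literal in its own (never-opened) source file.
var classTable [maxCodePoint]byte

func init() {
	// Toy data: mark ASCII letters as class 1 so the example runs.
	for r := 'A'; r <= 'Z'; r++ {
		classTable[r] = 1
	}
	for r := 'a'; r <= 'z'; r++ {
		classTable[r] = 1
	}
}

// classOf validates the code point and does a single array index:
// no range branching, no binary search.
func classOf(r rune) byte {
	if r < 0 || r >= maxCodePoint {
		return 0 // invalid code point; treat as the "unassigned" class
	}
	return classTable[r]
}

func main() {
	fmt.Println(classOf('x'), classOf('!')) // prints "1 0"
}
```

The trade is memory for speed and simplicity: one flat array replaces the generated range tables, at the cost of about 1MB of (mostly zero, hence well-compressing) data.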

It is depressing that our collective solution to the problem of excess boilerplate keeps moving towards auto-generation of it.

I personally find autocomplete to be detrimental to my workflow so I disagree that it is a universal productivity improvement.

Have you ever worried that by programming in this way, you are methodically giving Anthropic all the information it needs to copy your product? If there is any real value in what you are doing, what is to stop Anthropic or OpenAI or whomever from essentially one-shotting Zed? What happens when the model providers 10x their prices and also use the information you've so enthusiastically given them to clone your product and use the money that you paid them to squash you?


Zed's entire code base is already open source, so Anthropic has a much more straightforward way to see our code:

https://github.com/zed-industries/zed


That's what things like AWS Bedrock are for.

Are you worried about Microsoft stealing your codebase from GitHub?


Isn’t it widely assumed Microsoft used private repos for LLM training?

And even with a narrower definition of stealing, Microsoft’s ability to share your code with US government agencies is a common and very legitimate worry in plenty of threat model scenarios.


Ha, I did not see your post before making mine. You are correct in your assessment of the blame.

Moreover, I view optimization as an anti-pattern in general, especially for a low level language. It is better to directly write the optimal solution and not be dependent on the compiler. If there is a real hotspot that you have identified through profiling and you don't know how to optimize it, then you can run the hotspot through an optimizing compiler and copy what it does.


This problem is largely due to slow compilation. It is possible to write a direct-to-machine-code compiler that compiles at greater than one million lines per second. That is more code than I am likely to write in my lifetime. A fast compiler with no need for incremental compilation is a superior default and can always be adapted to add incremental compilation when truly needed.

