>> recently I realized that I read code, but almost never write it.
I think most engineers spend more time reading code than writing it. I find it very hard not to use Emacs when reading large codebases. Interestingly, it's mostly because of file navigation. I love using ido/ivy for file navigation, quickly filtering through buffers, and magit.
Emacs in the terminal is not an ideal experience though, so I can imagine it being multi-fold worse with a phone keyboard.
It has never been my explicit goal, but I have certainly enjoyed the rewards of recognition (e.g. I was able to lean on a successful project of mine to help land a nice consulting gig), and it would be silly to ignore that.
(edit: the comment i replied to was edited to be more a statement about themselves rather than a question about other developers, so my comment probably makes less sense now)
I don't dispute your own personal motives, but if it's never been a goal for most people, then CC0 would be more popular than the BSD or MIT license - it's simpler and much more legally straightforward to apply.
I've worked on several open source projects, both voluntarily and for work. The recognition doesn't really need to be financial. If people out there are using what you are building, contributing back, appreciating it -- it gives you motivation to continue working. It's human nature, and the lack of it is one of the reasons there are so many abandoned projects out there.
Emacs is my editor/IDE of choice and I consider myself a power user. However, I'm no expert in its internals or elisp. I understand that things were built with single-threaded execution in mind over decades. Still, I think things could be more async, where you offload heavy work to a separate thread and stream results back. E.g. Magit status doesn't need to block my entire editor: it could run what it needs in a separate thread and send the results back to the main thread just for rendering when they're ready. Same with, say, consult-ripgrep / consult-find-file / find-file-in-project etc. -- the main thread doesn't need to block rendering and event handling until the entire result set is ready (in this case results could be streamed). Maybe there is a way to make this much better via message passing/streaming instead of sharing state?
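For what it's worth, Emacs can already get part of the way there with asynchronous subprocesses: a process filter receives output in chunks on the main loop, so results can be streamed into a buffer without blocking until the whole set is ready. A minimal sketch (the function name `my-stream-rg` is hypothetical, not from any package; assumes lexical binding and `rg` on PATH):

```elisp
;; Stream ripgrep matches into a buffer as they arrive, instead of
;; blocking the UI until the full result set is ready.
(defun my-stream-rg (pattern dir)
  "Run ripgrep for PATTERN in DIR, streaming matches into a buffer."
  (let ((buf (get-buffer-create "*rg-stream*")))
    (with-current-buffer buf (erase-buffer))
    (make-process
     :name "rg-stream"
     :command (list "rg" "--line-number" pattern dir)
     ;; The filter still runs on the main thread, but only per small
     ;; chunk of output, so the UI stays responsive between chunks.
     :filter (lambda (_proc chunk)
               (with-current-buffer buf
                 (goto-char (point-max))
                 (insert chunk)))
     ;; The sentinel fires when the process exits.
     :sentinel (lambda (_proc event)
                 (when (string-prefix-p "finished" event)
                   (display-buffer buf))))))
```

This doesn't help with work that is elisp-heavy rather than subprocess-heavy, which is where the single-threaded design really bites.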
I love Emacs, but it just fails to be effective for me when I work on monorepos, and even more so when I'm on TRAMP.
Probably all true, what you say about magit and so on. Message passing values would be an idea, but in the current situation, when one concurrent execution unit (a process) finishes its job, how does its "private", potentially modified state get merged back into Emacs's global state? Let's say the concurrently running process creates some buffers to show, but in the meantime the user has rearranged their windows or split their view; the concurrent process doesn't know about that, since it happened after its creation. Or maybe the user has meanwhile changed an important Emacs setting.
I think the current solutions for running things in separate threads only cover external tools. I guess to do more, a kind of protocol would need to be invented that tells a process exactly what parts of the copied global state it may change; when it finishes, only those parts would be merged back into the main process's global state.
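One practical answer to the stale-state problem, short of a full merge protocol, is to have the worker return a plain value and let a merge callback re-check the current global state at merge time before applying it. A hypothetical sketch (`my-merge-result` is an illustrative name, not an existing API):

```elisp
;; The worker sends back a plain value (no shared state); this merge
;; function decides, against the *current* global state, whether the
;; result can still be applied as originally intended.
(defun my-merge-result (result target-window)
  "Insert RESULT into TARGET-WINDOW only if it still exists."
  (if (window-live-p target-window)
      (with-selected-window target-window
        (insert result))
    ;; The user rearranged their windows in the meantime; fall back
    ;; to something harmless instead of clobbering the new layout.
    (message "Result ready: %s" result)))
```

This sidesteps merging mutated copies of global state entirely: the worker never owns state, only data, which is roughly the message-passing idea from the comment above.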
Maybe I have this wrong and things are different than I understood them to be. I am not an Emacs core developer, just a user who watched a few videos.
TRAMP can be sped up a bit; I remember seeing some blog posts about it. I guess if you need to go via more than one hop, it can get slow though.
Yes, I totally agree that it's not always applicable. But I think there is still a lot of scope to offload some operations (e.g. magit operations like status and commit, or streaming search results into the minibuffer in ivy-mode). Having a dedicated protocol would of course be best (VSCode Remote works flawlessly for me).
>> What is the problem with mono repos?
Anything that depends on something like ivy/vertico -- find-file-in-project, projectile-find-file, ripgrep -- gets super slow (I think the reason is that they usually wait for the entire result set to be ready). LSP/Eglot gets slower. Similarly, you'll have to disable most VC-related stuff like diff highlighting in the fringe. Git is inherently slower on a monorepo, so magit will hang your UI more often. Of course you can disable all these plugins and use vanilla Emacs, but if you remove enough of them you're likely going to be more productive with VSCode at that point.
Just to clarify, this is my experience with a monorepo + TRAMP. Also, I'm not sure how much of it is just the plugins' fault. It's somewhat better if you use Emacs locally where the monorepo is, but that often means using Emacs in the terminal -- which usually means losing some of your keybindings.
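For anyone hitting the same wall, a few settings tend to come up in those "speed up TRAMP / big repos" posts. A sketch of the usual suspects (values are illustrative, not a recommendation; `remote-file-name-inhibit-locks` needs Emacs 28+):

```elisp
;; Disable the built-in VC backends, which probe git on every file
;; visit and are a common source of monorepo/TRAMP slowness.
(setq vc-handled-backends nil)

;; Skip lock files for remote files and quiet TRAMP's logging,
;; both of which cut down on round-trips over the connection.
(setq remote-file-name-inhibit-locks t
      tramp-verbose 1)
```

None of this fixes the underlying blocking design, but it trims the number of synchronous remote operations per action.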
From 2022. Funny that soon after that we figured out how to automate the Tactical Tornado programmer and collectively decided that they're the best thing ever and nobody needs other kinds of devs anymore.
I think people want multi-user because most people still need their laptops for work (or sometimes hobbies). Otherwise, I'd be on my phone (for casual messaging and media consumption). The iPad is mostly just sitting around most of the time, so it can quite easily be shared between people in the same household.
>> some teams are just not permitted to contribute to OSS in any way
My understanding is that by default you are not allowed to contribute to open source, even if it's your own project. Exceptions are made for teams whose function is to work on those open-source projects, e.g. Swift/LLVM/etc.
I talked to an apple engineer at a bar years ago and he said they aren’t allowed to work on _anything_ including side projects without getting approval first. Seemed like a total wtf moment to me.
I have never had a non-wtf moment talking to an Apple software engineer at a bar.
I can recall one explaining to me in the mid-2010s, with 100% confidence, that the next iPhone would be literally impossible to jailbreak in any capacity.
I could not understand how someone that capable (he was truly bright) could be that certain. That is pure '90s security arrogance. The only secure computer is one powered off in a vault, and even then I am not convinced.
Multiple exploits were eventually found anyway.
We never exchanged names. That’s the only way to interact with engineers like that and talk in real terms.
Every programming job I've ever had, I've been required at certain points to make open source contributions. Granted, that was always "we have an issue with this OSS library/software we use, your task this sprint is to get that fixed".
I won't say never, but it would take an exceedingly large comp plan for me to sign paperwork forbidding me from working on hobby projects. That's pretty Orwellian. I'm not allowed to work on hobby projects on company time, but that seems fair, since I also can't spend work hours on non-programming hobbies.
No, as far as I know, at Apple this is strict - you cannot contribute to OSS, period. Not from your own equipment nor your friend's, not even during a vacation. It may cost you your job. Of course, it's not universal for every team, but on the teams where I know a few people, that's what I heard. Some companies just don't give a single fuck about what you want or need, or where your ideals lie.
I suspect it's not just Apple. I have "lost" so many good GitHub friends - incredible artisans and contributors - who got well-paid jobs and then suddenly... not a single green dot on the wall since. That's sad. I hope they're getting paid more than enough.
Four patterns I've noticed on the open-source projects I've worked on:
1. AI slop PRs (sometimes giant), where the author responds to feedback with LLM-generated responses and shows little evidence of having given any thought of their own to design decisions or implementation.
2. (1) often leads me to believe they probably haven't tested it properly or thought about edge cases. As a reviewer you now have to be extra careful (or just reject it).
3. A rise in students looking for a job/internship. The expectation seems to be that untested LLM-generated code will earn them points for having dug into the codebase. (I've had cases where they admitted they hadn't tested the code, but it should "just work".)
4. People are now even lazier about cleaning up code.
Unfortunately, all of these issues come from humans. LLMs are fantastic tools and as almost everyone would agree they are incredibly useful when used appropriately.
> Unfortunately, all of these issues come from humans.
They do. They’ve always been there.
The problem is that LLMs are a MASSIVE force multiplier. That’s why they’re a problem all over the place.
We had something of a mechanism to gate the amount of trash on the internet: human availability. That no longer applies. SPAM, in the non-commercial sense of just noise that drowns out everything else, can now be generated thousands of times faster than real content ever could be. By a single individual.
It’s the same problem with open source. There was a limit to the number of people who knew how to program enough to make a PR, even if it was a terrible one. It took time to learn.
AI automated that. Now everyone can make massive piles of complicated plausible looking PRs as fast as they want.
To whatever degree AI has helped maintainers, it is not nearly as effective a tool at helping them as it is at helping others generate things that waste their time, intentionally or otherwise.
You can’t just argue that AI can be a benefit and therefore everything is fine. Its externalities, in the digital world, are destroying things. And even if we develop mechanisms to handle the incredible volume, will we have much of value left by the time we get there?
This is the reason I get so angry at every pro-AI post I see. They never seem to discuss the possible downsides of what they’re doing, or how it affects the whole instead of just the individual.
There are a lot of people dealing with those consequences today. This video/article is an example of it.
I've got a few open source projects out there, and I've almost never received any PRs for them until AI, simply because they were things I did for myself and never really promoted to anyone else. But now I'm getting obviously-AI PRs on a regular basis. Somehow people are using AI to find my unpromoted stuff and submit PRs to it.
My canned response now is to respond, "Can you link me to the documentation you're using for this?" It works like a charm, the clanker doesn't ever respond.
> Unfortunately, all of these issues come from humans.
I've been thinking about this recently. As annoying as all the bots on Twitter and Reddit are, it's not bots spinning up bots (yet!), it's other humans doing this to us.
If only I were lucky enough to get LLM generated responses, usually a question like "Did you consider if X would also solve this problem?" results in a flurry of force pushed commits that overwrite history to do X but also 7 other unrelated things that work around minor snags the LLM hit doing X.