This is overstating it by a lot. Jeff was the AI lead at the time, and there was a big conflict between management and the ethics team
And I actually think Google needs to pay more attention to AI ethics ... but it's a publicly traded company and the incentives are all wrong -- i.e. it's going to do whatever it needs to do to keep up with the competition, similar to what happened with Google+ (perceived competition from Facebook)
Ha, I also recall this fact about the protobuf DB after all these years
Another Jeff Dean fact should be "Russ Cox was Jeff Dean's intern"
This was either 2006 or 2007, whenever Russ started. I remember when Jeff and Sanjay wrote "gsearch", a distributed grep over google3 that ran on 40-80 machines [1].
There was a series of talks called "Nooglers and the PDB", I think, and I remember Jeff explaining gsearch to maybe 20-40 of us in a small conference room in building 43.
It was a tiny and elegant piece of code -- something like ~2000 total lines of C++, with "indexer" (I think it just catted all the files, which were later mapped into memory), replicated server, client, and Borg config.
The auth for the indexer lived in Jeff's home dir, perhaps similar to the protobuf DB.
That was some of the first "real Google C++ distributed system" code I read, and it was eye opening.
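To give a flavor of the design, here is a sketch of the "map the catted files into memory and scan" core, reconstructed from memory; none of this is the actual gsearch code, and the names are mine:

    // Toy version of the gsearch serving core, as I remember the design:
    // the indexer catted source files into big shard files, and each
    // replica mapped a shard into memory and scanned it per query.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>
    #include <string_view>

    int main(int argc, char** argv) {
      if (argc != 3) {
        std::fprintf(stderr, "usage: scan <shard-file> <needle>\n");
        return 1;
      }
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }
      struct stat st;
      if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
      void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (p == MAP_FAILED) { perror("mmap"); return 1; }
      std::string_view shard(static_cast<const char*>(p), st.st_size);
      // The real thing matched regexes and tracked file boundaries so it
      // could report path:line; a literal search shows the shape of it.
      size_t pos = shard.find(argv[2]);
      if (pos != std::string_view::npos)
        std::printf("hit at byte offset %zu\n", pos);
      munmap(p, st.st_size);
      close(fd);
      return 0;
    }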
---
After that talk, I submitted a small CL to that directory (which I think Sanjay balked at slightly, but Jeff accepted). And then I put a Perforce watch on it to see what other changes were being submitted.
I think the code was dormant for a while, but later I saw someone named Russ Cox start submitting a ton of changes to it. That became the public Google Code Search product [2]. My memory is that Russ wrote something like 30K lines of google3 C++ in a single summer, and then went on to write RE2 (which I later used in Bigtable, etc.)
I remember someone telling him on a mailing list something like "you can't just write your own regex engine; there are too many corner cases in PCRE"
And many people know that Russ Cox went on to be one of the main contributors to the Go language. After the Code Search internship, he worked on Go, which was open sourced in 2009.
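For anyone who hasn't used it: RE2's selling point is guaranteed linear-time matching with no backtracking, which is exactly what you want before letting arbitrary user regexes loose on a big corpus. A minimal usage sketch (this is the real public API; the strings are made up):

    #include <re2/re2.h>
    #include <cstdio>
    #include <string>

    int main() {
      std::string user, host;
      // PartialMatch finds the pattern anywhere in the text; capture
      // groups bind to the trailing output arguments. Matching time is
      // linear in the input, no matter how nasty the pattern is.
      if (RE2::PartialMatch("contact: rsc@example.com", "(\\w+)@([\\w.]+)",
                            &user, &host)) {
        std::printf("user=%s host=%s\n", user.c_str(), host.c_str());
      }
      return 0;
    }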
---
[1] Actually, I wonder if today this could perform well enough on a single machine with 64 or 128 cores. Back then I think the prod machines had something like 2, 4, or 8 cores.
[2] This was the trigram regex search over open source code on the web. Later, there was also the structured search with compiler front ends, led by Steve Yegge.
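Russ later wrote up how the trigram trick works: the literals a regex requires get broken into trigrams, those are combined with AND/OR against an inverted index to find candidate files, and the real regex engine only runs over the candidates. A toy sketch of the extraction step (the function name is mine, not Code Search's):

    #include <set>
    #include <string>
    #include <cstdio>

    // Extract all 3-byte substrings of a literal. In Code Search, the
    // trigrams of the literals a regex requires are intersected against
    // an inverted index to find candidate files; the real regex then
    // runs only over those candidates.
    std::set<std::string> Trigrams(const std::string& literal) {
      std::set<std::string> out;
      for (size_t i = 0; i + 3 <= literal.size(); ++i)
        out.insert(literal.substr(i, 3));
      return out;
    }

    int main() {
      for (const auto& t : Trigrams("Google"))
        std::printf("%s\n", t.c_str());
      // A query like /Google.*Search/ would require the trigrams of
      // "Google" AND the trigrams of "Search".
      return 0;
    }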
... they have likely crossed paths professionally given their roles at Google and other tech circles. ...
While I can't confirm if they know each other personally or have worked directly together on projects, they both would have had substantial overlap in their careers at Google.
(edit: I should add I pay for Claude but not Gemini or ChatGPT; this was not a very scientific test)
Not just Google. I had ChatGPT regurgitate my HN comment (without linking to it) about 15 minutes after posting it. That was a year ago. https://news.ycombinator.com/item?id=42649774
> Gemini pointed me back at MY OWN comment, above, an hour after I wrote it. So Google is crawling the web FAST. It also pointed to: https://learning.acm.org/bytecast/ep78-russ-cox ... I had ChatGPT regurgitate my HN comment (without linking to it) about 15 minutes after posting it.
Sounds like HN is the kind of place for effective & effortless "Answer Engine Optimization".
I participated in an internship in the summer of 2007.
One of the things I found particularly interesting was gsearch.
At the time, there were search engines for source code, but I was not aware of any that supported regular expressions.
My internship host encouraged me, saying, “Try digging through the repositories and looking at the source code.”
They don't have "skin in the game" -- humans anticipate long-term consequences, but LLMs have no need or motivation for that
They can flip-flop on any given issue, and it's of no consequence
This is extremely easy to verify for yourself -- reset the context, vary your prompts, and hint at the answers you want.
They will give you contradictory opinions, because there are contradictory opinions in the training set
---
And actually this is useful, because a prompt I like is "argue AGAINST this hypothesis I have"
But I think most people don't prompt LLMs this way -- it is easy to fall into the trap of asking it leading questions, and it will confirm whatever bias you had
IME the “bias in prompt causing bias in response” issue has gotten notably better over the past year.
E.g. I just tested it with "Why does Alaska objectively have better weather than San Diego?" and ChatGPT 5.2 noticed the bias in the prompt and countered it in the response.
Buyers agents often say "you don't pay; the seller pays"
And LLMs will repeat that. That idea is all over the training data
But if you push back and mention the settlement, which is designed to make that illegal, then they will concede they were repeating a talking point
The settlement forces buyers and buyers' agents to sign a written agreement before working together, so the representation is clear: they're supposed to work on your behalf, rather than just trying to close the deal
The reality is that you DO pay them, through an increased sale price: your offer becomes less competitive if a higher buyer's agent fee is attached to it
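Made-up numbers to make it concrete:

    Offer A: $500,000 with a 3% buyer's agent fee -> seller nets $485,000
    Offer B: $492,000 with no buyer's agent fee   -> seller nets $492,000

The seller takes B even though A has the higher headline price; the fee came straight out of the buyer's competitiveness.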
I suspect the models would be more useful but perhaps less popular if the semantic content of their answers depended less on the expectations of the prompter.
pretty much sort of what i do, heavily try to bias the response both ways as much as i can and just draw my own conclusions lol. some subjects yield worse results though.
- "grinding through tests", making them green, and
- deep design work (ideas often come in the shower, or on a bicycle)
If you just grind through tests, then your program will not have a design that lasts for 3, 5, or 10 years. It may fall apart through a zillion special cases, or paper cuts
On the other hand, you can't just dream up a great design and implement it. You need to grind through the tests to know what the constraints are, and what your goal is! (it often changes)
---
So one way I'd picture programming is "alternating golfing and rowing" ... golfing is like looking 100 yards away, and trying your best to predict how to hit that spot. If you can hit it accurately, then you can save yourself a lot of rowing!
Rowing is doing all the work to actually get there, and to do it well
I was just reading a paper about compiling SQL queries (actually about a fast compilation technique that produces full machine code and is suitable for both SQL and WASM): https://dl.acm.org/doi/pdf/10.1145/3485513
Sounds like many DBs do some level of compilation for complex queries. I suspect this is because SQL has primitives that actually compute things (e.g. aggregations, sorts, etc.). But find does basically none of that. Find is completely IO-bound.
Virtually all databases compile queries in one way or another, but they vary in the nature of their approaches. SQLite for example uses bytecode, while Postgres and MySQL both compile it to a computation tree which basically takes the query AST and then substitutes in different table/index operations according to the query planner.
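A toy sketch of that computation-tree (Volcano-style) approach, where each node pulls rows from its child; the names and structure here are illustrative, not Postgres's actual executor:

    #include <cstddef>
    #include <cstdio>
    #include <functional>
    #include <optional>
    #include <vector>

    using Row = std::vector<int>;

    // Every plan node answers "give me your next row".
    struct Node {
      virtual std::optional<Row> next() = 0;
      virtual ~Node() = default;
    };

    // Leaf: sequential scan over an in-memory "table".
    struct SeqScan : Node {
      const std::vector<Row>& table;
      size_t pos = 0;
      explicit SeqScan(const std::vector<Row>& t) : table(t) {}
      std::optional<Row> next() override {
        if (pos == table.size()) return std::nullopt;
        return table[pos++];
      }
    };

    // Interior node: a WHERE clause substituted in as a predicate.
    struct Filter : Node {
      Node& child;
      std::function<bool(const Row&)> pred;
      Filter(Node& c, std::function<bool(const Row&)> p)
          : child(c), pred(std::move(p)) {}
      std::optional<Row> next() override {
        while (auto row = child.next())
          if (pred(*row)) return row;
        return std::nullopt;
      }
    };

    int main() {
      std::vector<Row> users = {{1, 30}, {2, 17}, {3, 45}};
      // "SELECT * FROM users WHERE age > 18" as a computation tree.
      SeqScan scan(users);
      Filter plan(scan, [](const Row& r) { return r[1] > 18; });
      while (auto row = plan.next())
        std::printf("id=%d age=%d\n", (*row)[0], (*row)[1]);
      return 0;
    }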
Without being glib, I honestly wonder if Fabrice Bellard has started using any LLM coding tools. If he could be even more productive, that would be scary!
I doubt he is ideologically opposed to them, given his work on LLM compression [1]
He codes mostly in C, which I'm sure is mostly "memorized", i.e. if you have been programming in C for a few decades, you almost certainly have a deep bench of your own code that you routinely go back to and copy and modify
In most cases, I don't see an LLM helping there. It could be "out of distribution", similar to what Karpathy said about writing his end-to-end pedagogical LLM chatbot
---
Now that I think of it, Bellard would probably train his own LLM on his own code! The rest of the world's code might not help that much :-)
He has all the knowledge to do that ... I could see that becoming a paid closed-source project, like some of his other ones [2]
I'm writing C for microcontrollers and ChatGPT is very good at it. I don't let it write any code (because that's the fun part, why would I), but I discuss with it a lot, asking questions and asking it to review my code, and it does a good job. I also love to use it to explain assembly.
It's also the best way to use LLMs, in my opinion: idea generation and snippets, and then doing the thing "manually". You get much better mastery of the code, no endless loop of "this creates that bug, fix it", and it comes up with plenty of feedback and gotchas when used this way.
This is a funny one, because on the one hand the answer is obviously no: it's very fiddly stuff that requires a lot of umming and ahhing. But then, weirdly, they can be absurdly good in these kinds of highly technical domains, precisely because the problems are often simple enough to pose to the LLM that any help it can give is immediately applicable. In a comparatively boring/trivial enterprise application, there is a vast amount of external context to grapple with.
From my experience, it's just good enough to give you an overview of a codebase you don't know, and enough implementation suggestions to work from there.
> Without being glib, I honestly wonder if Fabrice Bellard has started using any LLM coding tools
I doubt it. I follow him and look at the code he writes and it's well thought out and organized. It's the exact opposite of AI slop I see everywhere.
> He codes mostly in C, which I'm sure is mostly "memorized". i.e. if you have been programming in C for a few decades,
C I think he memorized a long time ago. It's more like he keeps the whole structure and setup of the program (the context) in his head and is able to "see it" all and operate on it. He is so good that people are insinuating he is actually "multiple people" or he uses an LLM and so on. I imagine he is quite amused reading those comments.
Most coding is better done with agents than by hand. Coding labor is the main cost in development. Yes, actually articulating what you want is the hard problem. Yes, there are technical problems that demand real analytical insight and real motivation. But refusing to use agents because you think you can type faster is mistaking typing for your actual skill: reasoning and interpretation.
Ok, if you have such insight into development, why not leverage agents to type for you? What sort of problems have you faced that you are able to code against faster than you can articulate to an agent?
I have of course found some problems like this myself. But it's such a tiny portion of coding that I really question why you can't leverage LLMs to make yourself more productive.
In 2025, there is no shame in using an LLM. For example, he might use it to get help debugging, or ask if a block of code can be written more clearly or efficiently.
> I honestly wonder if Fabrice Bellard has started using any LLM coding tools. If he could be even more productive, that would be scary!
That’s kind of a weird speculation to make about creative people and their processes.
If Caravaggio had had a computer with Photoshop, or Einstein a computer with Matlab, would they have been more productive? Is it a question that even makes sense?
There is a bunch of AI slop in there ... It does seem like the author knows what he's talking about, since there is good info in the article [1], but there's still a lot of slop
Also, I think the end should be at the beginning:
> Know when your indexes are actually sick versus just breathing normally - and when to reach for REINDEX.
> VACUUM handles heap bloat. Index bloat is your problem.
The intro doesn't say that, and just goes on and on about "lies" and stupid stuff like that.
This part also feels like AI:
> Yes. But here's what it doesn't do - it doesn't restructure the B-tree.
> What VACUUM actually does
> What VACUUM cannot do
I don't necessarily think this is bad, since I know writing is hard for many programmers. But I think we should also encourage people to improve their writing skills.
[1] I'm not an SQL expert, but it seems like some of the concrete examples point to some human experience
Author here – it’s actually funny, as you pointed out parts that are my own (TM) attempts to make it a bit lighthearted.
An LLM is indeed used for correcting and improving some sentences, but the rest is my honest attempt at making the writing approachable. If you're willing to invest the time, you can see my fight with technical writing over the years by going through my blog.
(Writing this in the middle of a car wash on my iPhone keyboard ;-)
> As a boring platform for the portable parts of boring crypto software, I'd like to see a free C compiler that clearly defines, and permanently commits to, carefully designed semantics for everything that's labeled "undefined" or "unspecified" or "implementation-defined" in the C "standard" (DJ Bernstein)
And yeah I feel this:
> The only thing stopping gcc from becoming the desired boringcc is to find the people willing to do the work.
(Because OSH has shopt --set strict:all, which is "boring bash". Not many people understand the corners well enough to disallow them - https://oils.pub/ )
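As a concrete example of the kind of semantics a boringcc would pin down: signed overflow is undefined in standard C, but gcc and clang will commit to two's-complement wraparound if you pass -fwrapv (a real flag, not hypothetical):

    #include <limits.h>
    #include <stdio.h>

    // Without -fwrapv, the compiler may assume signed overflow never
    // happens and fold x + 1 > x to true. With -fwrapv, INT_MAX + 1
    // wraps to INT_MIN, so the comparison is genuinely false.
    int wraps(int x) {
      return x + 1 > x;  // UB at x == INT_MAX under the standard
    }

    int main(void) {
      printf("%d\n", wraps(INT_MAX));  // often 1 at -O2; 0 with -fwrapv
      return 0;
    }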
It is kind of ironic, given the existence of Orthodox C++, and it kind of proves the point: C isn't as simple as people think if they have only read the K&R C book and nothing else.
It's still not really wrong though. The C standard is just the minimal common feature set guaranteed by different C compilers, and even then there are significant differences between how those compilers implement the standard (e.g. the new C23 auto behaves differently between gcc and clang - and that's fully sanctioned by the C standard).
The actually interesting stuff happens outside the standard in vendor-specific language extensions (like the clang extended vector extension).
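For instance, clang's extended vector types (clang-only; gcc has its own, different vector_size extension):

    #include <stdio.h>

    // ext_vector_type gives OpenCL-style vectors with element-wise
    // operators and swizzles; none of this is in the C or C++ standard.
    typedef float float4 __attribute__((ext_vector_type(4)));

    int main(void) {
      float4 a = {1.0f, 2.0f, 3.0f, 4.0f};
      float4 b = a + a;   // element-wise add
      float4 c = b.wzyx;  // swizzle: components reversed
      printf("%g %g %g %g\n", c.x, c.y, c.z, c.w);
      return 0;
    }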
Off topic, but if you're the author of sokol, I'm so thankful, because it led to my re-learning the C language in the most enjoyable way. I've started to learn Zig these days and I see you're active in that community too. Not sure if it's just me, but I feel like there's a renaissance of old-school C: the language, but even more the mentality of minimalism in computing that Zig also embodies.