> In general, the bootstrap relies on a binary package of the previous version. This is unacceptable for an otherwise source-only, self-contained distribution like the NetBSD sources.
This does not paint the full picture. Rust can be bootstrapped with mrustc, which is written in C++:
https://github.com/thepowersgang/mrustc
Now, mrustc supports only Rust 1.74. To build Rust 1.92 you need almost 20 builds, but it can be done from source.
Guix has written about bootstrapping Rust from source (they care a lot about this). Here is how it looked in 2018:
https://guix.gnu.org/nb-NO/blog/2018/bootstrapping-rust/
For a good while this process was just straight up not possible for C and C++, until someone put in a lot of effort to recreate the bootstrap process for the sake of reproducible builds (and it's still a long and complicated process that I think only a few people have done). For decades people were just building from source with compilers that had extremely long and undocumented bootstrap chains; there might be documentation for how to bootstrap from a different C or C++ compiler, but there wasn't a chain that would start from scratch.
I laughed reading the comment above yours. I also laughed reading yours, you are so right on. Next, I am expecting someone to tell us that there is a JavaScript script on GitHub to automate that build process...
This is not what I was expecting computer science to become, 30 years ago...
Go needs a chainloading process from C/C++ too, albeit a smaller one. But the good thing is that there are prebuilt, self-contained binary ports from every OS to almost any OS, so you can bootstrap it with very little.
Just do for counter in <1, 5>.rev(), which would iterate in a reversed range.
IMO it's pointless to distinguish syntactically between iterating forwards and backwards, especially if you also support things like for counter in <1, 5>.map({ return args[1] * 2 }) to iterate over the even numbers (the double of each number), rather than having to define a fordoubled macro. I mean, adding methods like map and rev to ranges is more orthogonal and composes better. (See for example iterators in Rust)
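For reference, this is roughly what that composition looks like with Rust's standard range iterators (plain Rust, nothing specific to the language being discussed here):

    fn main() {
        // Reverse iteration is just an adapter on the range, not separate syntax.
        for counter in (1..5).rev() {
            println!("{counter}"); // 4, 3, 2, 1
        }
        // Mapping over the same range composes in exactly the same way.
        for doubled in (1..5).map(|x| x * 2) {
            println!("{doubled}"); // 2, 4, 6, 8
        }
    }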
Not that I don't like syntactic flexibility. I am a big fan of Ruby's unless, for example
“IMO it's pointless to distinguish syntactically between iterating forwards and backwards” — I completely agree. It’s really a compiler-macro limitation that’s preventing me from doing this.. though I don’t have to go that route.
I think what you’re suggesting would require the <a, b> syntax to produce a proper iterator type, which it doesn’t currently do. That’s definitely worth considering — then you could attach methods, etc.
Thanks for the suggestion! I’ll think about the best way to fix this..
You really really need to be upfront in the first paragraph of your docs that you are talking about the inner workings of LLMs and other machine learning stuff.
LLMs are probabilistic by nature. They’re great at producing fluent, creative, context-aware responses because they operate on likelihood rather than certainty. That’s their strength—but it’s also why they’re risky in production when correctness actually matters. What I’m building is not a replacement for an LLM, and it doesn’t change how the model works internally. It’s a deterministic gate that runs after the model and evaluates what it produces.
You can use it in two ways. As a verification layer, the LLM generates answers normally and this system checks each one against known facts or hard rules. Each candidate either passes or fails—no scoring, no “close enough.” As a governance layer, the same mechanism enforces safety, compliance, or consistency boundaries. The model can say anything upstream; this gate decides what is allowed to reach the user. Nothing is generated here, nothing inside the LLM is modified, and the same inputs always produce the same decision. For example, if the model outputs “Paris is the capital of France” and “London is the capital of France,” and the known fact is Paris, the first passes and the second is rejected—every time. If nothing matches, the system refuses to answer instead of guessing.
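A minimal sketch of what such a gate could look like, in Rust (my own illustration, not the author's implementation; the fact store and function names are made up):

    use std::collections::HashSet;

    // Deterministic pass/fail gate: a candidate either matches a known fact or is
    // rejected. Same inputs always produce the same decision; nothing is generated here.
    fn gate<'a>(candidates: &[&'a str], known_facts: &HashSet<&'a str>) -> Option<&'a str> {
        candidates.iter().copied().find(|c| known_facts.contains(*c))
        // None means: refuse to answer instead of guessing.
    }

    fn main() {
        let facts = HashSet::from(["Paris is the capital of France"]);
        let candidates = [
            "London is the capital of France",
            "Paris is the capital of France",
        ];
        match gate(&candidates, &facts) {
            Some(ok) => println!("pass: {ok}"),
            None => println!("refused: no candidate matched a known fact"),
        }
    }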
There’s no multithreading so race conditions don’t apply. That simplifies things quite a bit.
There’s actually no ‘free’, but in the (member -> variable data) ontology of Cicada there are indeed a few ways memory can become disused: 1) members can be removed; 2) members can be re-aliased; 3) arrays or lists can be resized. In those conditions the automated/manual collection routines will remove the disused memory, and in no case is there any dangling ‘pointer’ (member or alias) pointing to unallocated memory. Does this answer your question?
I agree that my earlier statement wasn’t quite a complete explanation.
Of course, since it interfaces with C, it’s easy to overwrite memory in the callback functions.
> There’s actually no ‘free’, but in the (member -> variable data) ontology of Cicada there are indeed a few ways memory can become disused: 1) members can be removed; 2) members can be re-aliased; 3) arrays or lists can be resized. In those conditions the automated/manual collection routines will remove the disused memory, and in no case is there any dangling ‘pointer’ (member or alias) pointing to unallocated memory. Does this answer your question?
Does this mean that Cicada will happily and wildly leak memory if I allocate short lived objects in a loop?
Why don't you just add some reference counting or a tracing GC like everybody else?
> 1) members can be removed;
Does this cause a use-after-free if somebody still has access to this member? Or will it give an error during access?
No, there are both referenced-based and tracing-based GC routines that will deallocate short-lived objects. Sorry, I was just trying to enumerate the ways memory goes out of scope to show that none of those ways results in an invalid pointer _within the scripting language_.
The safety comes because there is no way to access a pointer address within the scripting language. The main functionality of pointers is replaced by aliases (e.g. a = @b.c, a = @array[2], etc.). The only use of pointers is behind the scenes, e.g. when you write ‘b.c’ there is of course pointer arithmetic behind the scenes to find the data in member ‘b’.
Having said that, it is certainly possible for a C callback routine to store an internal pointer, then on a second callback try to use that pointer after it has fallen out of scope. This is the only use-after-free I can imagine.
Okay, this is the usual way to perform safe memory management in managed / high level programming languages.. it was just that your "alias" terminology threw me off
Note that you can add multithreading later if you adopt message passing / the actor model. Even JavaScript, which is famously single-threaded, gained workers with message passing at some point.
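For what it's worth, here's a minimal message-passing sketch in Rust (standard library channels; just an illustration of the model, nothing Cicada-specific):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // The worker owns its data and only communicates over the channel,
        // so there is no shared mutable state to race on.
        let (tx, rx) = mpsc::channel();
        let worker = thread::spawn(move || {
            for n in 1..=3 {
                tx.send(n * n).unwrap(); // send results instead of sharing memory
            }
        });
        for result in rx {
            println!("got {result}"); // 1, 4, 9
        }
        worker.join().unwrap();
    }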
Yes, multithreading seems to be a consistent theme among the comments.. so I should definitely look into that. Thanks for the comment. (I actually haven’t done much threaded programming myself so this would be a learning experience for me..)
Also, if someone else has access to the member, meaning that there is an alias to the member, then the reference count should reflect that. Here’s an example:
i :: int | 1 reference
a := @i | 2 references
remove i | 1 reference
The data originally allocated for ‘i’ should persist because its reference count hasn’t hit zero yet.
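For comparison, the same lifecycle expressed with Rust's Rc (my analogy for the counting behavior, not how Cicada is implemented):

    use std::rc::Rc;

    fn main() {
        let i = Rc::new(1);                   // 1 reference
        let a = Rc::clone(&i);                // 2 references (the alias)
        assert_eq!(Rc::strong_count(&a), 2);
        drop(i);                              // "remove i": back to 1 reference
        assert_eq!(Rc::strong_count(&a), 1);
        assert_eq!(*a, 1);                    // the data persists while `a` still points to it
    }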
You add all those things in a single .c or .cpp source, without manually authoring .h files (this may require language changes, maybe in C30 and C++30 or something).
Then whatever is relevant to the public interface gets generated by the compiler and put in a .h file. This file is not put in the same directory as the .c file, so as not to encourage people to check the generated .h into version control.
The C language is too complicated and too flexible to allow that. If you are starting from scratch and creating a new language, this could be a design goal from the beginning.
> The C language is too complicated and too flexible to allow that.
I disagree. In fact, I would expect the following could be a pretty reasonable exercise in a book like "Software Tools"[1]: "Write a program to extract all the function declarations from a C header file that does not contain any macro-preprocessor directives." This requires writing a full C lexer; a parser for function declarations (but for function and struct bodies you can do simple brace-matching); and nothing else. To make this tool useful in production, you must either write a full C preprocessor, or else use a pipeline to compose your tool with `cpp` or `gcc -E`. Which is the better choice?
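To make that concrete, here is a deliberately crude sketch of the brace-matching idea (mine, in Rust rather than 1970s C, and ignoring comments, strings and the preprocessor entirely): collect whatever precedes a top-level brace, and if it looks like a function header, emit it as a declaration.

    // Crude declaration extractor: no lexer, no preprocessor, just top-level
    // brace matching. Braces inside comments or string literals would break it;
    // a real tool needs the C lexer described above.
    fn extract_declarations(source: &str) -> Vec<String> {
        let mut decls = Vec::new();
        let mut head = String::new();
        let mut depth = 0usize;
        for ch in source.chars() {
            match ch {
                '{' => {
                    if depth == 0 {
                        let h = head.split_whitespace().collect::<Vec<_>>().join(" ");
                        if h.contains('(') {
                            // A '(' in the header is our crude test for a function definition.
                            decls.push(format!("{h};"));
                        }
                        head.clear();
                    }
                    depth += 1;
                }
                '}' => depth = depth.saturating_sub(1),
                ';' if depth == 0 => head.clear(), // skip globals and existing prototypes
                _ if depth == 0 => head.push(ch),
                _ => {}
            }
        }
        decls
    }

    fn main() {
        let src = "static int counter;\nint add(int a, int b) { return a + b; }\nvoid reset(void)\n{\n    counter = 0;\n}\n";
        for d in extract_declarations(src) {
            println!("{d}"); // int add(int a, int b);  then  void reset(void);
        }
    }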
However, I do see that the actual "Software Tools" book doesn't get as advanced as lexing/parsing; it goes only as far as the tools we today call `grep` and `sed`.
I certainly agree that doing the same for C++ would require a full-blown compiler, because of context-dependent constructs like `decltype(arbitrary-expression)::x < y > (z)`; but there's nothing like that in K&R-era C, or even in C89.
No, I think the only reason such a declaration-extracting tool wasn't disseminated widely at the time (say, the mid-to-late 1970s) is that the cost-benefit ratio wouldn't have been seen as very rewarding. It would automate only half the task of writing a header file: the other and more difficult half is writing the accompanying code comments, which cannot be automated. Also, programmers of that era might be more likely to start with the header file (the interface and documentation), and proceed to the implementation only afterward.
> Write a program to extract all the function declarations from a C header file that does not contain any macro-preprocessor directives
There you go. You just threw away the most difficult part of the problem: the macros. Even a medium-sized C library can have maybe 500 lines of dense macros with ifdef/endif/define which depend on the platform, the CPU architecture, as well as user-configurable options at ./configure time. Should you evaluate the macro ifdefs or preserve them when you extract the header? It depends on each macro!
And your tool would still be highly incomplete because it only handles function declarations, not struct definitions or the typedefs you expect users to use.
> the other and more difficult half is writing the accompanying code comments, which cannot be automated
Again disagree. Newer languages have taught us that it is valuable to have two syntaxes for comments, one intended for implementation and one intended for the interface. It’s more popularly known as docstrings but you can just reuse the comment syntax and differentiate between // and /// comments for example. The hypothetical extractor tool will work no differently from a documentation extractor tool.
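Rust happens to be a concrete example of exactly this split: // comments are implementation notes, /// comments are interface documentation that rustdoc extracts.

    /// Interface comment: extracted by the documentation tool and shown to
    /// users of the function, alongside its signature.
    fn area(width: u32, height: u32) -> u32 {
        // Implementation comment: stays in the source, never extracted.
        width * height
    }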
I interpreted OP's post to say that you take a C file after the preprocessor has translated it. That preprocessing can be done simply by passing the file to an existing C preprocessor, or you can implement one as well.
Implementing a C preprocessor is tedious work, but it's nothing remotely complex in terms of challenging data structures, algorithms, or requiring sophisticated architecture. It's basically just ensuring your preprocessor implements all of the rules, each of which is pretty simple.
And you had the same misunderstanding as OP. Because you have eliminated all macros during the preprocessing step, you can no longer have macro-based APIs: no function-like macros you expect library users to call, no #ifdef blocks where you expect user code to #define or #undef something, and no primitive way of maintaining API (but not ABI) compatibility for many things.
It’s a cute learning project for a student of computer science for sure. It’s not remotely a useful software engineering tool.
Our points of view are probably not too far off, really. Remember this whole thought-experiment is counterfactual: we're imagining what "automatic extraction of function declarations from a .c file" would have looked like in the K&R era, in response to claims (from 50 years later) that "No sane programming language should require a duplication in order to export something" and "The .h could have been a compiler output." So we're both having to imagine the motivations and design requirements of a hypothetical programmer from the 1970s or 1980s.
I agree that the tool I sketched wouldn't let your .h file contain macros, nor C99 inline functions, nor is it clear how it would distinguish between structs whose definition must be "exported" (like sockaddr_t) and structs where a declaration suffices (like FILE). But:
- Does our hypothetical programmer care about those limitations? Maybe he doesn't write libraries that depend on exporting macros. He (counterfactually) wants this tool; maybe that indicates that his preferences and priorities are different from his contemporaries'.
- C++20 Modules also do not let you export macros. The "toy" tool we can build with 1970s technology happens to be the same in this respect as the C++20 tool we're emulating! A modern programmer might indeed say "That's not a useful software engineering tool, because macros" — but I presume they'd say the exact same thing about C++20 Modules. (And I wouldn't even disagree! I'm just saying that that particular objection does not distinguish this hypothetical 1970s .h-file-generator from the modern C++20 Modules facility.)
[EDIT: Or to put it better, maybe: Someone-not-you might say, "I love Modules! Why couldn't we have had it in the 1970s, by auto-generating .h files?" And my answer is, we could have. (Yes it couldn't have handled macros, but then neither can C++20 Modules.) So why didn't we get it in the 1970s? Not because it would have been physically difficult at all, but rather — I speculate — because for cultural reasons it wasn't wanted.]
> What one should do about this? I mean, beside working on lowering that number.
Every business in Brazil has a WhatsApp to talk to their clients. Sometimes this WhatsApp goes to the phone or computer of a real human being. Other times, it's manned by a bot (usually a dumb choose-your-own-adventure bot - I don't see businesses using LLMs for this here).
Indeed I use food delivery apps (ifood here) only to check out the menus of delivery restaurants, then I search for them on Google so I can order directly from them through WhatsApp. This won't work for some dark kitchens, but other than that it's pretty reliable and avoids the middleman.
Using `` to interpolate command arguments is very clever! What's missing is a discussion of how you do quoting (for example, how to ls a directory with spaces in its name).
Anyway, what kills this for me is the need to add await before every command.