Huh? Homebrew supports and frequently uses dependencies between formulae. It’s a bit janky around upgrades in my experience, but you’re going to have to clarify what you mean.
> Your premise is that a YouTube video published Dec 26, 2025 somehow motivated a federal law enforcement action that commenced on Dec 4, 2025.
The surge of ~2,000 officers only happened around Jan 5-6, 2026. And this happened after the video in question was reposted by J.D. Vance and referenced by Kash Patel.
It's true that Operation Metro Surge as a whole started a month earlier. But, zooming out from that specific video, the operation was from the start motivated by the fraud scandal, according to reliable sources [1]. Official statements around this time of the later surge also referenced the fraud scandal [2]. Meanwhile, Trump delivered several speeches during this time complaining in racist terms about Somalis in general.
It's also true that the actual activity of Operation Metro Surge is mostly unrelated to the fraud. But that's the whole point! The administration seized on the fraud, and later seized on that specific video, as an excuse to send in ICE and CBP agents to do something completely different.

As the parent said, this probably did not "make it more [..] likely that we'll be able to investigate and resolve the fraud situation in MN". Or if it did, it did so very inefficiently compared to other possible approaches, such as getting the FBI to focus on it (which also happened).

The surge did focus public attention on the issue, which might encourage local officials to resolve it. But it also massively poisoned the well. How much you want to talk about the fraud issue is now a proxy for how much you _don't_ want to talk about what liberals see as the much larger issue of abuses by ICE/CBP.
At one point it says “fully pronated like we can, or bunnies can”, which sounds like a reference to actual rabbits, but some quick Googling suggests that rabbits don’t pronate? (I know nothing about the subject myself.)
I don't really understand what "pronating" is supposed to mean if you're not referring to human hands. This isn't a problem for the phrase "bunny hands", which refers to human hands.
But for, say, human feet, "pronation" would appear to refer to a position in which the soles of the feet face toward the ground, just as in hands it refers to a position in which the palms face toward the ground, or in humans overall it refers to a position in which the face and belly face toward the ground. That is the meaning of "prone" ("lying on your front"; it is the opposite of supine, "lying on your back"), and "pronation" just means "making something be prone".
But obviously all feet are always pronated in this sense. The article seems to have a model of the word which is more like "pronation [in the hands] involves a certain configuration of the bones in the arm, and I'm going to call that configuration pronation too". But then they also refer to rotating the forearm, which confuses bone configuration with yet another issue, the changeability of the configuration.†
So I'm left mystified as to how this single-or-possibly-manifold concept is supposed to apply to feet, human or otherwise. The article suggests that pronat_ed_ feet have the toes facing forward, parallel to the direction of the gaze, and also that pronat_ing_ feet requires the ability to rotate the lower part of the leg.
In humans, these claims cannot both be true. Toes are angled forward, but the lower leg doesn't rotate. Something else has happened.
So it's hard to say what I should conclude about the mammoth legs that the article also complains about.
† The article complains about a dinosaur skeleton in which the hands aren't pronated - they face inwards, in a pose we might call "karate chop hands". But it says that this pose requires "pronation" in what is presumably the arm-bones sense. In "bunny hands", the hands are pronated according to the normal definition of the word, facing the ground.
Looks like you need to be careful with the definitions of pronation and supination for feet. There are a lot of results for running where they use the terms dynamically, and that usage looks different from the original technical meaning.
For feet, the word pronating seems to also mean (perhaps colloquially) rolling the foot inwards at the ankle. It's not clear at all, though: some of the images show the shin twisting and some don't (toe in vs duck feet).
The downside is that you miss the chance to brush up on your math skills, skills that could help you understand and express more complicated requirements.
...This may still be worth it. In any case it will stop being a problem once the human is completely out of the loop.
edit: but personally I hate missing out on the chance to learn something.
That would indeed be the case if one had never learned the stuff. And I am all for not using AI/LLMs for homework/assignments. I don't know about others, but when I was in school, they didn't let us use calculators in exams.
Today, I know very well how to multiply 98123948 and 109823593 by hand. That doesn't mean I will do it by hand if I have a calculator handy.
Also, ancient scholars, most notably Socrates via Plato, opposed writing because they believed it would weaken human memory, create false wisdom, and stifle interactive dialogue. But hey, turns out you learn better if you write and practice.
In later classes in school, the calculator itself didn't help. If you didn't know the material well enough, you didn't know what to put into the calculator.
Not the OP, but it seems like you might be talking about different things.
Security can be about not adding certain things, or not making certain mistakes - like not building SQL queries with data inserted directly into the query string, and instead using parameter bindings or an ORM.
If you have an insecure raw query that you feed into an ORM you've added on top, that's not going to make the query any more secure.
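To make that concrete, a minimal sketch (Python's sqlite3 is just for illustration here; the table and the attacker string are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    name = "nobody' OR '1'='1"  # attacker-controlled input

    # Insecure: the input becomes part of the SQL text itself
    rows = conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()
    print(rows)  # [('admin',)] -- the injected OR clause matched every row

    # Secure: a binding passes the value separately from the query text
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)  # [] -- the literal string matches no name

Wrapping the first query in an ORM call afterwards wouldn't help; the damage is already in the query string.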
But on the other hand, when you're securing endpoints in an API, you do add things like authorization, input validation, and parsing.
So I think a lot depends on what you mean when you're talking about security.
Security is security - making sure bad things don't happen. In some cases that's a different approach in the code, in some cases additions to the code, and in some cases removing things from the code.
If you already know where the start of the opening tag is, then I think a regex is capable of finding the end of that same opening tag, even in cases like yours. In that sense, it’s possible to use a regex to parse a single tag. What’s not possible is finding opening tags within a larger fragment of HTML.
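For illustration, a rough sketch (Python just for convenience; the quoted-value alternatives are what let it skip over a `>` inside an attribute, and it ignores plenty of spec corner cases):

    import re

    # One opening (or self-closing) tag, given that we start at its '<'
    OPEN_TAG = re.compile(r"""
        <[a-zA-Z][\w-]*              # tag name
        (?: \s+ [^\s=>/]+            # attribute name
            (?: = (?: "[^"]*"        # double-quoted value (may contain >)
                    | '[^']*'        # single-quoted value
                    | [^\s>]+ ) )?   # unquoted value
        )* \s* /? >""", re.VERBOSE)

    s = '<a title="a > b" href=/x>link</a>'
    print(OPEN_TAG.match(s).group(0))  # <a title="a > b" href=/x>

The hard part, as said, is knowing that a given `<` actually starts a tag in the first place.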
For any given regex, an opponent can craft a string which is valid HTML but that the regex cannot parse. There are a million edge cases like:
<!—- Don't count <hr> this! -—> but do count <hr> this -->
and
<!-- <!-- Ignore <ht> this --> but do count <hr> this —->
Now your regex has to include balanced comment markers. Solve that.
You need a context-free grammar to correctly parse HTML with its quoting rules, and escaping, and embedded scripts and CDATA, etc. etc. etc. I don't think any common regex libraries are as powerful as CFGs.
Basically, you can get pretty far with regexes, but it's provably (like in a rigorous compsci kinda way) impossible to correctly parse all valid HTML with only regular expressions.
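To make the "provably impossible" part concrete: balanced nesting needs a counter, and finite automata (the machines behind true regular expressions) don't have one. A toy checker for plain `<b>` nesting, just to show what the counter is doing:

    # Recognizing <b>...</b> nested to arbitrary depth needs a counter;
    # by the pumping lemma, no regular expression can track that depth.
    def balanced(s: str) -> bool:
        depth = 0
        while s:
            if s.startswith("<b>"):
                depth += 1
                s = s[3:]
            elif s.startswith("</b>"):
                depth -= 1
                if depth < 0:
                    return False
                s = s[4:]
            else:
                s = s[1:]  # ordinary text
        return depth == 0

    print(balanced("<b><b>hi</b></b>"))  # True
    print(balanced("<b><b>hi</b>"))      # False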
If you're talking about tokenizers, then you're no longer parsing HTML with a regex. You're tokenizing it with a regex and processing it with an actual parser.
If you are talking about detecting tags, then you (and the person asking that SO question) are talking about tokenization, and everybody (like the one making that famous answer) bringing parsing into the discussion is just being an asshole.
Comments start with the string `<!--` and end with the string `-->`, generally with text in between. This text cannot start with the string `>` or `->`, cannot contain the strings `-->` or `--!>`, nor end with the string `<!-`, though `<!` is allowed. [...] The above is true for XML comments as well. In addition, in XML, such as in SVG or MathML markup, a comment cannot contain the character sequence `--`.
Meaning that you can recognize HTML comments with (one branch of) a RegEx—you start wherever you see `<!--` and consume everything up to one of the listed alternatives. No nesting required.
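For instance, a minimal sketch (Python, using plain ASCII hyphens; it takes the simple "consume up to the first terminator" reading, treats `--!>` as the error-tolerated terminator, and doesn't enforce the rules about how comment text may start):

    import re

    # Everything after '<!--', lazily, up to the first '-->' or '--!>'
    COMMENT = re.compile(r"<!--(.*?)--!?>", re.DOTALL)

    html = "<!-- <!-- Ignore <hr> this --> but do count <hr> this -->"
    for m in COMMENT.finditer(html):
        print(repr(m.group(1)))
    # prints ' <!-- Ignore <hr> this ' -- the inner '<!--' is plain
    # text; since comments don't nest, the first '-->' ends the comment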
Be it said that I find the precise rules too convoluted for what they do. Especially XML's prohibition on `--` in comments is ridiculous taken on its own. First you tell me that a comment ends with three characters `-->`, and then you tell me I can't use the specific substring `--`, either? And why can't I use `--!>`?
An interesting bit here is that AFAIK the `<!` syntax was used in SGML as one of the alternatives to write a 'lone tag', so instead of `<hr></hr>` or `<hr/>` (XHTML) or `<hr>` (HTML) you could write `<!hr>` to denote a tag with no content. We should have kept this IMO.
*EDIT* On the quoted HTML source you see things like `-—` (hyphen-minus, em-dash). This is how the Vivaldi DevTools render it; my text editor and HN comment system did not alter these characters. I have no idea whether Chrome's rendering engine internally uses these em-dashes or whether it's just a quirk in DevTool text output.
Personally I’ve handled this by just ignoring the gradual part and keeping everything strictly typed. This sometimes requires some awkwardness, such as declaring a variable for an expression I would otherwise just write inline as part of another expression, because Pyright couldn’t infer the type and you need to declare a variable in order to explicitly specify a type. Still, I’ve been quite satisfied with the results. However, this is mostly in the context of new, small, mostly single-author Python codebases; I imagine it would be more annoying in other contexts.
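A trivial sketch of the kind of awkwardness I mean (the function and the JSON shape are made up):

    import json

    def first_user(payload: str) -> str:
        # I'd rather inline this: return json.loads(payload)["users"][0]
        # But json.loads returns Any, so to stay strictly typed I declare
        # a variable just so there's somewhere to put the annotation.
        data: dict[str, list[str]] = json.loads(payload)
        return data["users"][0]

    print(first_user('{"users": ["alice", "bob"]}'))  # alice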
That's the third-best design they could have. Second-best would be having a toggle to turn on AI. Best would be going back to building a browser and leaving out the AI entirely, or putting it in some other product that they only consider funding after they get back to 50% market share for the browser.
Right, but I think that's what the question of "Why is the linker too late?" is getting at. With zig libc, the compiler can do it, so you don't need fat objects and all that.
---
expanding: so, this means that you can do cross-boundary optimizations without LTO and with pre-built artifacts. I think.
I will say first that C libc does this - the functions are defined inline in header files - but that is mainly a pre-LTO artifact.
Otherwise it has no particular advantage other than disk space; it's the equivalent of just catting all your source files together and compiling that.
If you think it's better to do this in the frontend, cool: you could make all the code get seen by the frontend by fake-compiling everything, writing the original source into a special section of each object file, and then making the linker call the frontend with all those special sections.
You can even do it without the linker if you want.
Now you have all the code in the frontend if that's what you want (I have no idea why you'd want this).
It has the disadvantage that it's the equivalent of catting everything together, with no way to opt out.
If you look far enough back, lots of C/C++ projects used to do this kind of thing when they needed performance in the days before LTO, or they just shoved the function definitions into header files, but they stopped because it has a huge forced footprint in memory and compilation time.
Then we moved to precompiled headers to fix the latter, then LTO to fix the former and the latter.
Everything old is new again.
In the end, you are also much better off improving the ability to take lots of random object files with IR in them and make them optimize well than trying to ensure that all possible source code will be present to the frontend in a single compile. Lots of languages and compilers went down this path, and it just doesn't work in practice for real users.
So doing stuff in the linker (and it's not really the linker - the linker is just calling the compiler with the code, whether that compiler is a library or a separate executable) is not a hack; it's the best compilation strategy you can realistically use, because the alternative is essentially a dreamland where nobody has third-party libraries they link, or subprojects that are libraries, or multiple compilation processes, and ....
Zig always seems to do this thing in blog posts and elsewhere where they add these remarks that often imply there is only one true way of doing it right and they are doing it.
It often comes off as immature and honestly a turnoff from wanting to use it for real.
As I understand it, compiling each source file separately and linking together the result was historically kind of a hack too, or at least a compromise, because early unix machines didn't have enough memory to compile the whole program at once (or even just hold multiple source files in memory at a time). Although later on, doing it this way did allow for faster recompilation because you didn't need to re-ingest source files that hadn't been changed (although this stopped being true for template-heavy C++ code).
But according to https://ircv3.net/software/clients, none of the clients you mentioned actually support emoji reactions (draft/react), and other features like multi-line messages and image uploads are likewise extremely limited in server/client support. So, for the time being, you can't use these features if you want to actually be interoperable with existing IRC users and their clients. Sounds like if you want decentralized, Matrix is still the better bet.