Hacker News | yowlingcat's favorites

I've called things shaped like this "polyentendre".

In my head I think of it as just really high linguistic compression. Minus intent, it is just superimposing multiple true statements into a small set of glyphs/phonemes.

It's always really context sensitive. Context is the shared dictionary of linguistic compression, and you need to hijack it to get more meanings out of words.

Places to get more compression in:

- Ambiguity of subject/object with vague pronouns (and membership in plural pronouns)

- Ambiguity of English word-meaning collisions

- Lack of specificity in word choice.

- Ambiguity of emphasis in written language or delivery. They can come out a bit flat verbally.

A group of people in a situation:

- A is ill

- B poisoned A

- C is horrified about the situation but too afraid to say anything

- D thinks A is faking it.

- E is just really cool

"They really are sick" is uttered by an observer and we don't know how much of the above they have insight into.

I just get a kick out of finding statements like this for fun in my life. Doing it with intent is more complicated.

What the author describes seems more like strategic ambiguity, just slightly more specific. I don't think the label they try to coin here is useful.


> the core rhetorical tactic of the progressive left in a nutshell

I live in Wyoming and have MAGA and ultra-progressive friends.

Multiple messaging is a hallmark of all elites. Sometimes it's functional: being able to say something sharp that becomes ambiguous if repeated is a skill. Anyone who has any power or authority wields it; it is so common as to suggest it's a requirement. (Other times, multiple messaging lets one apologise in a public setting without making things awkward.)

In many respects, it’s an essential feature of commanding language. Compressing multiple meanings into fewer words is the essence of poetry and literature.


Someone who works out every day will obviously have different metabolic and microRNA profiles; assuming that line of research holds up and those biomolecular profiles make it into the zygote, survive many replication cycles, and act as developmental signalling molecules affecting gene expression during embryonic and fetal development, there could be life-long effects.

What can't happen is inter-generational transmission of particular subjective experiences that aren't paired with specific, unique metabolic, hormonal, and gene-expression signatures. Only biomolecular-mediated phenotypes, the most general and obvious of which would be things like stress or exercise or diet, make sense to be transmitted that way.

For instance, someone who's chronically afraid might transmit some kind of stress/fear-modulating signals to offspring. Someone who's afraid of a specific thing, however, cannot transmit fear of that specific thing unless there's some incredible, entirely unexplored and undescribed cognition-to-biomolecular signalling mechanism. Therefore, I don't know why the article uses the term "lived experience", which is too broad a term to describe what the research suggests might be occurring.


I'm taking a moment to recognize once more the work that user @atdrummond (Alex Thomas Drummond) did for a couple years to help others here. I did not know him, don’t think I ever interacted with him, and I did not benefit from his generosity, but I admired his kindness. Just beautiful.

Ask HN: Who needs holiday help? (Follow up thread) - https://news.ycombinator.com/item?id=38706167 - Dec 2023 (9 comments)

Ask HN: Who needs help this holidays? - https://news.ycombinator.com/item?id=38492378 - Dec 2023 (210 comments)

Tell HN: Thank You - https://news.ycombinator.com/item?id=34140096 - Dec 2022 (42 comments)

Tell HN: Everyone should have a holiday dinner this year - https://news.ycombinator.com/item?id=34122118 - Dec 2022 (58 comments)

Unfortunately, Alex died a few months after his last round of holiday giving, about 1½ years ago now.

Tell HN: In Memory of Alexander Thomas Drummond - https://news.ycombinator.com/item?id=40508725 - May 2024 (5 comments)

If you read the comments in that last thread, know that @toomuchtodo followed through last year and kept the tradition alive. Amazing and magnificent.

Ask HN: Who needs help this holidays? - https://news.ycombinator.com/item?id=42291246 - Dec 2024 (46 comments)


Great job! I've achieved comparable results on my Android TV with Stremio[1] and the Torrentio[2] plugin. Being able to use the terminal for streaming would be a nice thing to have in Linux. It would also be cool to check for malicious files before downloading.

[1]: https://www.stremio.com

[2]: https://torrentio.org/


GPT-oss-120B was also completely failing for me, until someone on reddit pointed out that you need to pass back in the reasoning tokens when generating a response. One way to do this is described here:

https://openrouter.ai/docs/guides/best-practices/reasoning-t...

Once I did that it started functioning extremely well, and it's the main model I use for my homemade agents.

Many LLM libraries/services/frontends don't pass these reasoning tokens back to the model correctly, which is why people complain about this model so much. It also highlights the importance of rolling these things yourself and understanding what's going on under the hood, because there are so many broken implementations floating around.
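
For the curious, here's roughly what that looks like against an OpenAI-compatible endpoint. This is a hedged sketch: the "reasoning" field name follows OpenRouter's convention as I understand it, and other providers and libraries use different names, which is exactly where implementations go wrong.

  # Sketch of one agent turn that echoes reasoning tokens back to the model.
  # The "reasoning" field follows OpenRouter's convention; check your
  # provider's docs before relying on it.
  from openai import OpenAI

  client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-...")

  messages = [{"role": "user", "content": "What's the weather in Oslo?"}]

  resp = client.chat.completions.create(
      model="openai/gpt-oss-120b",
      messages=messages,
      # tools=[...]  # your tool schemas, if running an agent loop
  )
  msg = resp.choices[0].message

  # The crucial step: when appending the assistant turn, include its
  # reasoning, not just content/tool_calls. Silently dropping the
  # reasoning here is what breaks many frontends.
  turn = {"role": "assistant", "content": msg.content}
  if msg.tool_calls:
      turn["tool_calls"] = [tc.model_dump() for tc in msg.tool_calls]
  if getattr(msg, "reasoning", None):
      turn["reasoning"] = msg.reasoning
  messages.append(turn)
  # ...append any tool results, then call the API again with `messages`.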



SQLAlchemy has its own frozendict which we've had in use for many years; these days it's a pretty well-performing Cython implementation, and I use it ubiquitously. It would be a very welcome addition to the stdlib.

This proposal is important enough that I chimed in on the thread with a detailed example of how SQLAlchemy uses this pattern:

https://discuss.python.org/t/pep-814-add-frozendict-built-in...
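
For readers unfamiliar with the pattern, here's a minimal pure-Python sketch of what a frozendict is: an immutable, hashable mapping. This is an illustration, not SQLAlchemy's Cython implementation or the PEP's exact spec.

  # Minimal frozendict sketch: an immutable, hashable mapping.
  from collections.abc import Mapping

  class FrozenDict(Mapping):
      __slots__ = ("_d", "_hash")

      def __init__(self, *args, **kwargs):
          self._d = dict(*args, **kwargs)
          self._hash = None

      def __getitem__(self, key):
          return self._d[key]

      def __iter__(self):
          return iter(self._d)

      def __len__(self):
          return len(self._d)

      def __hash__(self):
          # Cache the hash; requires hashable values, like the real thing.
          if self._hash is None:
              self._hash = hash(frozenset(self._d.items()))
          return self._hash

  # Safe to share as a default, a cache key, or a class-level constant:
  opts = FrozenDict(autocommit=False, echo=True)
  cache = {opts: "compiled statement"}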


Indeed, one of the main selling points of DBOS. All the functionality of Temporal without any of the infrastructure.
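
For those who haven't seen it, the programming model is roughly this. A hedged sketch based on my reading of the dbos-transact-py docs; treat the exact config/decorator names as assumptions:

  # Durable workflow sketch in the DBOS style: steps checkpoint to
  # Postgres, so a crashed workflow resumes instead of restarting.
  from dbos import DBOS, DBOSConfig

  DBOS(config=DBOSConfig(name="order-app"))

  @DBOS.step()
  def charge_card(order_id: int) -> None:
      ...  # call the payment provider

  @DBOS.step()
  def send_receipt(order_id: int) -> None:
      ...  # email the customer

  @DBOS.workflow()
  def fulfill(order_id: int) -> None:
      # If the process dies between steps, DBOS resumes here from its
      # database checkpoint rather than charging the card twice.
      charge_card(order_id)
      send_receipt(order_id)

  DBOS.launch()
  fulfill(42)

There's no separate worker fleet or orchestration server to deploy, which is the "without any of the infrastructure" part.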

You are describing Windows 11 LTSC which is a product that exists because Microsoft knows people want to turn this crap off.

It is of course only available in volume licensing to keep it away from normal users. Only businesses get to control their computers.


Presto (a.k.a. AWS Athena) might be a faster/better alternative? I'd also like to see how it performs with the 650GB of data available locally.

Actually for this kind of workload 15Gbps is still mediocre. What you actually want is the `n` variant of the instance types, which have higher NIC capacity.

In the c6in and m6in families, and maybe the upper-end 5th-gen `n` types, you can get 100Gbps NICs, and if you look at the 8th-gen instances like the c8gn family, you can even get instances with 600Gbps of bandwidth.


If anyone wishes to use this study as a catalyst to shift one’s attitude, then I highly recommend dropping the dopaminergic doomloop apps like Reddit/Bluesky/X/tiktok/IG.

Your life will be better for it. Snapchat can stay…for reasons.


Why not just use DuckLake?[1] That reduces complexity[2] since only DuckDB and PostgreSQL with pg_duckdb are required.

[1] https://ducklake.select/

[2] DuckLake - The SQL-Powered Lakehouse Format for the Rest of Us by Prof. Hannes Mühleisen: https://www.youtube.com/watch?v=YQEUkFWa69o
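
For a sense of how small the setup is, the whole lakehouse reduces to a couple of statements. A hedged sketch from my reading of the DuckLake docs; the connection string and data path are placeholders, so verify the syntax against ducklake.select before relying on it:

  # Attaching a DuckLake catalog backed by Postgres, via DuckDB's Python API.
  import duckdb

  con = duckdb.connect()
  con.sql("INSTALL ducklake; LOAD ducklake;")
  con.sql("""
      ATTACH 'ducklake:postgres:dbname=lake host=localhost' AS lake
          (DATA_PATH 's3://my-bucket/lake/')
  """)
  con.sql("CREATE TABLE lake.events (id BIGINT, payload VARCHAR)")
  con.sql("INSERT INTO lake.events VALUES (1, 'hello')")
  print(con.sql("SELECT * FROM lake.events").fetchall())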


We at https://github.com/tensorchord/VectorChord solved most of the pgvector issues mentioned in this blog:

- We're IVF + quantization, and can support 15x more updates per second compared to pgvector's HNSW. Inserting or deleting an element in a posting list is a super light operation compared to modifying a graph (HNSW)

- Our main branch can now index 100M 768-dim vectors in 20 minutes with 16 vCPUs and 32GB of memory. This lets users index/reindex very efficiently. We'll have a detailed blog post about this soon. The core idea is that KMeans is just a description of the distribution, so we can use lots of approximation to accelerate the process.

- For reindexing, PostgreSQL actually supports `CREATE INDEX CONCURRENTLY` and `REINDEX CONCURRENTLY`. Users won't experience any data loss or inconsistency during the whole process.

- We support both pre-filtering and post-filtering. Check https://blog.vectorchord.ai/vectorchord-04-faster-postgresql...

- We support hybrid search with BM25 through https://github.com/tensorchord/VectorChord-bm25

The author glosses over the complexity of synchronizing between an existing database and a specialized vector database, as well as how to perform joint queries across them. This is also why we see most users choosing a vector solution on PostgreSQL.
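
To make the reindexing point concrete, this is plain PostgreSQL DDL. A sketch using pgvector's documented index type and operator class purely for illustration; a VectorChord index would slot into the same statements:

  # No-downtime (re)index of a vector column, via psycopg 3.
  import psycopg

  # CONCURRENTLY cannot run inside a transaction block, hence autocommit.
  with psycopg.connect("dbname=app", autocommit=True) as conn:
      conn.execute(
          "CREATE INDEX CONCURRENTLY IF NOT EXISTS items_embedding_idx "
          "ON items USING hnsw (embedding vector_cosine_ops)"
      )
      # Rebuild later without blocking reads or writes:
      conn.execute("REINDEX INDEX CONCURRENTLY items_embedding_idx")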


I've worked in about 40 languages and have a Ph.D. in the subject. Every language has problems; some I like, some I'm not fond of.

There is only one language that I have an active hatred for, and that is Julia.

Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?

In Julia, this is a hard problem, and you can wind up getting crashes deep in someone else's code.

The reason is that this causes modules that don't import the new file to have different implementations of the same generic function in scope. Julia features the ability to run libraries on data types they were never designed for. But unlike civilized languages such as C++, this is done by randomly overriding a bunch of functions to do things they were not designed to do, and then hoping the library uses them in a way that produces the result you want. There is no way to guarantee this without reading the library in detail. There's also no kind of semantic versioning that can tell you whether a library change is breaking, as almost any kind of change becomes potentially breaking when you code like this.

This is a problem unique to Julia.

I brought up to the Julia creators that methods of the same interface should share common properties. This is a very basic principle of generic programming.

One of them responded with personal insults.

I'm not the only one with such experiences. Dan Luu wrote this piece 10 years ago, but the appendix shows the concerns have not been addressed: https://danluu.com/julialang/


Fun fact: suppressed/hidden/lost memories due to trauma that appear to re-surface through therapy are not a real thing, contrary to what was previously thought (and is still believed by some psychotherapists). Nowadays it's understood by psychology that any memories "re-surfacing" in therapy are in fact newly created, although the patient themselves cannot tell the difference. Allegedly, whole accusations of childhood abuse may have been created out of thin air, without the victim realizing.

https://en.wikipedia.org/wiki/Recovered-memory_therapy (see research section)


I had a good impression of "Montessori" from hearing that Larry/Sergey/Bezos went to one. When I put my kid in it at 3 years old, he hated it. As I looked into it more, it seems to me that it is actually very rigid, with kids being able to play with just a small set of toys that don't really exercise their creativity, and with little opportunity for group play. We switched him to a Reggio Emilia school where the kids are constantly doing group projects and art and he enjoys it a lot more. I recommend parents observe what's actually happening in classrooms and think about what's best for their kid in the early years instead of assuming "Montessori" is the best path.

I'm a tedious broken record about this (among many other things) but if you haven't read this Richard Cook piece, I strongly recommend you stop reading this postmortem and go read Cook's piece first. It won't take you long. It's the single best piece of writing about this topic I have ever read and I think the piece of technical writing that has done the most to change my thinking:

https://how.complexsystems.fail/

You can literally check off the things from Cook's piece that apply directly here. Also: when I wrote this comment, most of the thread was about root-causing the DNS thing that happened, which I don't think is the big story behind this outage. (Cook rejects the whole idea of a "root cause", and I'm pretty sure he's dead on right about why.)


I think it's somewhat tribal webdev knowledge that if you host user-generated content you need to be on the PSL, otherwise you'll eventually end up where Immich is now.

I'm not sure how people who haven't already hit this very issue are supposed to know about it beforehand, though. It's one of those things you don't really come across until you're hit by it.
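
To make the mechanics concrete: browsers treat PSL entries as registrable-domain boundaries, which is what isolates one user's subdomain (cookies and so on) from another's. A hedged sketch with the tldextract library; it's a real library, but this is an illustration, not Immich's actual setup:

  # What a PSL entry changes for a user-generated-content host.
  import tldextract  # pip install tldextract

  # Include the PSL's "private domains" section, where UGC hosts live.
  extractor = tldextract.TLDExtract(include_psl_private_domains=True)

  for host in ["alice.github.io", "bob.github.io", "example.com"]:
      ext = extractor(host)
      print(f"{host}: registrable domain = {ext.registered_domain!r}")

  # github.io is on the PSL, so alice.github.io and bob.github.io are
  # distinct "sites": alice can't set a cookie scoped to *.github.io
  # that bob's pages would receive. A UGC host missing from the PSL has
  # no such boundary between its users' subdomains.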


It's probably due to the Electron bug[1]. A lot of common apps haven't patched up yet.

I also have an M2 Pro with 32GB of memory. When I A/B test with Electron apps running vs without, the lag disappears when all the unpatched Electron apps are closed out.

1. https://avarayr.github.io/shamelectron/

Here's a script I got from somewhere that shows unpatched Electron apps on your system:

Edit: HN nerfed the script. Found a direct link: https://gist.github.com/tkafka/e3eb63a5ec448e9be6701bfd1f1b1...
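
Since the original script got eaten, here's a hedged sketch of what such a check can look like on macOS. This is not the gist's script, and the assumption that the bundled framework's CFBundleVersion tracks the Electron version is mine:

  # List apps that bundle Electron, with the framework's reported version.
  import plistlib
  from pathlib import Path

  for app in sorted(Path("/Applications").glob("*.app")):
      fw = app / "Contents/Frameworks/Electron Framework.framework"
      if not fw.exists():
          continue  # not an Electron app
      plist = fw / "Resources" / "Info.plist"
      try:
          with open(plist, "rb") as f:
              version = plistlib.load(f).get("CFBundleVersion", "?")
      except FileNotFoundError:
          version = "?"
      print(f"{app.name}: Electron {version}")

You'd then compare each reported version against the patched releases listed on the shamelectron page.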


I don't have accessibility issues, but even so I've been a fan of these settings for a few iOS versions now:

  Settings > Accessibility > Display & Text Size > Reduce Transparency
  Settings > Accessibility > Display & Text Size > Increase Contrast
  Settings > Accessibility > Display & Text Size > Differentiate Without Colour
  Settings > Accessibility > Motion > Reduce Motion
  Settings > Accessibility > Motion > Prefer Cross-Fade Transitions
To try and make my phone less interesting so I spend less time on it, I also use Settings > Accessibility > Display & Text Size > Colour Filters > Greyscale with Intensity turned up to max so it's black and white. If you set Settings > Accessibility > Accessibility Shortcut to Colour Filters, you can toggle this with a triple click of the side button, in case you want to show someone a photo or something.

This is a great opportunity to get HN's take on these tools: systems to streamline the management of containerized services deployed on self-managed hardware.

We've been running both CapRover and Coolify for a couple of years. We quite like renting real dedicated servers (Hetzner, OVH); it is so much cheaper than the cloud, with only a minor management burden. These tools make it easy to bridge the gap and treat these physical servers like a PaaS.

We have dozens of apps and services deployed on a couple of large-ish servers with backups. Most modern back-ends do so little computationally that lots of containers can comfortably live together. 128GB of RAM and 64 cores go a long way and are surprisingly cheap at Hetzner, and having that fixed monthly cost removes a mental burden. It is cheap and simple, and availability issues are much rarer than people expect: maybe a couple of mishaps a year that are easy to recover from and don't really have a meaningful impact for a startup.

Coolify feels more complete and mature, but frankly, after using both a lot, we now steer more towards the simplicity of CapRover. I see that Dokploy is also a major alternative to Coolify, don't know much about it.

How does /dev/push compare? Do you have any other recommendations in this vein? Or differing opinions on the tools I mentioned?


> Porsche 919 Hybrid EVO did it in 5:19

If anyone hasn't seen this, I highly recommend it, even if you're not a car fan.

https://www.youtube.com/watch?v=PQmSUHhP3ug


In the 2010s, we had a similar situation but it wasn’t illegal.

I used to work for a large drug distributor both pre and during the opioid epidemic.

At the time (pre-SUPPORT Act), distributors weren't required to notify the DEA about anomalous ordering, so we didn't provide data to law enforcement unless they sent a subpoena.

To increase profits, we identified our best customers of opioids and updated our inventory tracking system to send rebates and early warning notifications to providers so they’d buy more earlier.

Each provider had a sales rep (territory) mapped, so we could figure out bonuses easily.

We, the software engineering team, were paid well for it, but not as much as the sales reps, who got a percentage of the buy.


> WARNING - these components don't fit if you try to copy this build. The bottom GPU is resting on the Arctic p12 slim fans at the bottom of the case and pushing up on the GPU.

I built a dual 3090 rig, and this point was why I spent a long time looking for a case where the GPUs could fit side by side with a little gap for airflow.

I eventually went with a SilverStone GD11 HTPC case, which is meant for building a media centre, but it's huge inside, has a front fan that takes up 75% of the width of the case, and allows the GPUs to stand upright so they don't sag and pull on their thin metal supports.

Highly recommend for a dual GPU build! If you can get dual 5090s instead of 3090s (good luck!) you'd even be able to get "good" airflow in this case.


This is complete bullshit. If you find a proper download, you'll usually see something like "NFLX.WEB-DL" in the file name. That means it got ripped and downloaded from Netflix.

The DRM decryption isn't the hard bit - it's actually mostly a standard thing, and there are plenty of tools on GitHub that will decrypt it for you if you have a key, e.g. Devine.

The issue is mostly around getting a key, but those are easy enough to get if you know where to look (e.g. TV firmware dumps).

Once you have this, though - and any piracy group will - it's so much easier than screen recording, and it gives you the original quality as well.


> The implications of these geometric properties are staggering. Let's consider a simple way to estimate how many quasi-orthogonal vectors can fit in a k-dimensional space. If we define F as the degrees of freedom from orthogonality (90° - desired angle), we can approximate the number of vectors as [...]

If you're just looking at minimum angles between vectors, you're doing spherical codes. So this article is an analysis of spherical codes… that doesn't reference any work on spherical codes… seems to be written in large part by a language model… and has a bunch of basic inconsistencies that make me doubt its conclusions. For example: in the graph showing the values of C for different values of K and N, is the x axis K or N? The caption says the x axis is N, the number of vectors, but later they say the value C = 0.2 was found for "very large spaces," and in the graph we only get C = 0.2 when N = 30,000 and K = 2, that is, 30,000 vectors in two dimensions! On the other hand, if the x axis is K, then this article is extrapolating a measurement done for 2 vectors in 30,000 dimensions to the case of 10^200 vectors in 12,288 dimensions, which obviously is absurd.
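
The quasi-orthogonality phenomenon itself is easy to check empirically, independent of the article's curve fit. A quick numpy sketch (my own, not the article's code):

  # Minimum pairwise angle among n random unit vectors in k dimensions.
  import numpy as np

  rng = np.random.default_rng(0)

  def min_pairwise_angle_deg(n: int, k: int) -> float:
      v = rng.standard_normal((n, k))
      v /= np.linalg.norm(v, axis=1, keepdims=True)
      cos = np.abs(v @ v.T)          # |cosine| of every pair
      np.fill_diagonal(cos, 0.0)     # ignore self-pairs
      return float(np.degrees(np.arccos(cos.max())))

  print(min_pairwise_angle_deg(1000, 12288))  # high-dim: roughly 87-88 degrees
  print(min_pairwise_angle_deg(1000, 2))      # 2-d: a fraction of a degree

In high dimensions random vectors really are nearly orthogonal; in two dimensions they obviously are not, which is why extrapolating between those regimes needs far more care than the article gives it.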

I want to stay positive and friendly about people's work, but the amount of LLM-driven stuff on HN is getting really overwhelming.


First of all, if negative thinking is associated with cognitive decline, and if what you say is also generally true, then humans would also, in general, be in cognitive decline.

Humans all being generally in a state of cognitive decline doesn't make sense from an evolutionary perspective, because natural selection will weed out degraded cognitive performance. So most people won't be in this state. Anecdotally, you likely don't see all your friends in cognitive decline, so most of them likely don't have a negative bias.

So your conclusion is likely not true. In fact, I'm being generous here: your conclusion is startling and obviously wrong from both a scientific perspective and an anecdotal one.

In fact, the logic from this experiment, and from many other psychological studies, points to the opposite. Humans naturally have a positive bias for things. People lie to themselves to stay sane.

Anecdotally, what I've observed is that people don't like to be told they are wrong. They don't like to be told they are fat, overweight slobs. Additionally, stupid people by all objective standards exist, but practically every culture on earth has rules against directly calling someone a dumbass even if it's the truth.

This is not a minor thing: if I violate these positive cognitive biases with hard truths, it will indeed cause a visceral and possibly violent reaction from most people, who want to maintain that positive cognitive bias.

For example, racial equality. Black people in America are, in general, taller and stronger than, say, Asians. It's a general truth; you can't deny it. Strength and height have an obvious genetic basis, which makes equality from a physical standpoint untrue. It is objective reality that genetics makes Asians weaker and smaller than black people in America.

So genetics affects things like size between races; it even affects things like size between species… black people are bigger than mice. But you know what else? It affects intelligence between species. So mice are genetically less intelligent than black people, and black people are genetically more intelligent than fish. So what am I getting at here?

Genetics affects hair color, physicality, height, and skin color between races. Genetics also affects intelligence between species (you are more intelligent than a squirrel), but by some black magic this narrow area of intelligence between races, say Asians and black people… it doesn't exist. Does this make sense to you? Is this logical? Genetics changes literally everything between species and races, but it just tiptoes around intelligence, leaving it completely equal? Is all intelligence really just from the environment when everything else isn't?

I mean, at the very least the logic points to something that can be debated and discussed, but this is not an open topic because it violates our cognitive biases.

Some of you are thinking you're above it. You see what I'm getting at and you think you can escape the positive bias. I assure you that you can't; likely you're only able to entertain this because you're not black. If you were black, there's no way what I said would be acceptable.

But I'm Asian. How come I can accept the fact that I'm shorter and weaker than black people? Maybe it's because height is too obvious a metric for us to escape it, while intelligence isn't as obvious, in the sense that I can't just look at someone and know how smart he is.

But let's avoid the off-topic tangent here about racial intelligence and get back to my point. I know this post will be attacked, but that was not my intention. I needed to trigger a visceral reaction in order for people to realize how powerful positive cognitive bias is. That's my point. It is frighteningly powerful, and it's also frighteningly evident, but mass delusion causes us to be blind to it. Seriously, don't start a debate on racial intelligence. Stick to the point: positive cognitive bias.

Humans are a species that viscerally and violently biases in the cognitively positive direction.

The parent poster could not be more wrong. We are delusional, and we lie to ourselves to shield ourselves from the horrors of the real world. The bias is so powerful that we will resort to attacks and even violence to maintain our cognitively positive delusions.


https://accountinginsights.org/how-to-abbreviate-million-in-...

  The abbreviation “M” for million can lead to confusion in finance. Historically, “M” derives from the Latin word “mille,” meaning “thousand.” As such, it has traditionally been used in some accounting or construction contexts to denote a thousand. For example, $5M could historically represent $5,000, creating ambiguity.

  To represent one million in finance, the abbreviation “MM” is widely used. This notation originates from “mille mille,” meaning “thousand thousands” in Latin, equating to one million. This clarity makes “MM” a preferred choice in financial statements and reports. 

  Other abbreviations for million, such as “mn” or “mln,” are also encountered. The Financial Times, for example, adopted “mn” for millions to improve accessibility for text-to-speech software. While these alternatives exist, “MM” remains a prevalent and widely understood abbreviation for million in American finance.
