fdsjgfklsfd's comments

Reporting spam on GitHub requires you to click a link, specify the type of ticket, write a description of the problem, solve multiple CAPTCHAs of spinning animals, and press Submit. It's absurd.


The current biggest problem in the US is that the President is violating the Constitution with impunity


The Constitution and Founding Fathers are pretty great compared to what we have now.

"At this point, Elbridge Gerry objected to Butler’s earlier-raised proposition that the clause be shifted to a presidential power. Gerry remarked that he never expected to hear in a republic a motion to empower the Executive alone to declare war."

"What is called a republic is not any particular form of government. It is wholly characteristical of the purport, matter or object for which government ought to be instituted, and on which it is to be employed … in this sense it is naturally opposed to the word monarchy, which … means arbitrary power in an individual person; in the exercise of which, himself, and not the res-publica, is the object."



"Overplayed"? Did you see the actual footage of the event? The event in which people attacked Capitol Police and broke into the Capitol and tried to take power by force?


Yes, I saw a lot of footage and kept doing so as more was released. Not sure what power you thought they were trying to take by force; it was like the Dark Knight scene on Wall Street. There's no "power" in that building. A guy took Pelosi's podium. Is that the power you're referring to?

I saw non-rioters open doors for them, the calm and polite lines of rioters walking out, a person asking where they should be giving their speech, and a few people taking funny pictures like they were in their elementary school classroom on a weekend.

I also saw two BBC executives resign because they purposefully doctored footage to fit their narrative. I saw how selective the footage shown was, and how carefully the phrasing was chosen to incite rage and/or fear. If there weren't so much manipulation and so many lies around it, I wouldn't have to question the integrity of the people pushing the narrative.


They aren't actually trying to solve any real problem.


Feel free to cite some sources. I have plenty of anecdotes suggesting the problem exists, although I've not looked for data to prove it either way. However, if you would like to suggest it's not real, you should prove it.


What's openai/gpt-5 vs openai/gpt-5-chat?


I think they're just reaching the limits of this architecture, and when a new type is invented it will be a much bigger step.


Working in the theory, I can say this is incredibly unlikely. At scale, once appropriately trained, all architectures begin to converge in performance.

It's not architectures that matter anymore, it's unlocking new objectives and modalities that open another axis to scale on.


Do we really have the data on this? I mean, it does happen on a smaller scale, but where's the 300B version of RWKV? Where's the hybrid symbolic/LLM? Where are the other experiments? I only see larger companies making relatively small tweaks to the standard transformer, where the context size still explodes the memory use; they're not even addressing that part.
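
For a sense of scale on the memory point, here's a rough back-of-the-envelope sketch of KV-cache growth in a standard transformer decoder. All the dimensions below are illustrative assumptions, not any particular model's config:

    # Rough KV-cache size for a standard transformer decoder.
    # n_layers / n_kv_heads / head_dim are made-up, plausible values.
    def kv_cache_bytes(context_len, n_layers=80, n_kv_heads=8,
                       head_dim=128, bytes_per_elem=2):
        # Each cached token stores one key and one value vector per layer.
        per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
        return context_len * per_token

    for ctx in (8_192, 131_072, 1_048_576):
        print(f"{ctx:>9} tokens -> {kv_cache_bytes(ctx) / 2**30:.1f} GiB")

With these assumed dimensions that's about 2.5 GiB at 8K tokens but 320 GiB at 1M tokens, and that's before the quadratic attention compute on top.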


True, we can't say for certain. But there is a lot of theoretical evidence too: the leading theoretical models for neural scaling laws suggest that finer properties of the architecture class play a very limited role in the exponent.

We know that transformers have the smallest constant in the neural scaling laws, so it seems irresponsible to scale another architecture class to extreme parameter sizes without a very good reason.
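
To make the constant-vs-exponent distinction concrete, here's a toy numerical sketch; every number in it is made up for illustration:

    import numpy as np

    # Two hypothetical architecture classes sharing a scaling exponent
    # but differing in the multiplicative constant.
    alpha = 0.076                       # shared exponent, illustrative
    a_transformer, a_other = 1.0, 1.6   # architecture-dependent constants

    N = np.logspace(6, 12, 7)           # parameter counts, 1e6 .. 1e12
    loss_tf = a_transformer * N ** -alpha
    loss_other = a_other * N ** -alpha

    # The gap is a constant factor at every scale: the curves run
    # parallel on a log-log plot and never cross.
    print(loss_other / loss_tf)         # ~1.6 at every N

If the exponents really do match, the architecture with the smaller constant stays ahead at every scale, which is the argument for not betting extreme compute on a different class.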


Do you mean "all variants of the same stacked transformer architecture converge in performance"? Or do you know of tests against some other architecture? The diffusion-based LLMs?


Could you elaborate with a few more paragraphs? What do you mean by “working in the theory?”


People often talk in terms of performance curves or "neural scaling laws". Every model architecture class exhibits a very similar scaling exponent because the data and the training procedure play the dominant role (every theoretical model that replicates the scaling laws exhibits this property). There are some discrepancies across model architecture classes, but there are hard limits on this.

Theoretical models for neural scaling laws are still preliminary of course, but all of this seems to be supported by experiments at smaller scales.
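
For reference, the parametric form people usually fit is the one from the Chinchilla paper (Hoffmann et al. 2022): a constant plus two power laws, one in parameter count N and one in training tokens D. A minimal sketch, with coefficients roughly in the ballpark of that paper's fitted values (treat them as illustrative, not authoritative):

    # Chinchilla-style fit: L(N, D) = E + A / N**alpha + B / D**beta
    # Coefficients approximate the paper's reported fit; illustrative only.
    def scaling_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
        """Predicted loss for N parameters trained on D tokens."""
        return E + A / N**alpha + B / D**beta

    # E.g. a 70B-parameter model trained on 1.4T tokens:
    print(round(scaling_loss(N=70e9, D=1.4e12), 2))

The exponents alpha and beta are what stay stable across setups; the constants E, A, B absorb the architecture- and data-specific differences.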


When I've had Grok evaluate images and dug into how it perceives them, it seemed to just have an image labeling model slapped onto the text input layer. I'm not sure it can really see anything at all, like "vision" models can.

It was giving bounding-box coordinates and confidence scores against generic classifications for each object:

    - *Positions*:
      - Central cluster: At least five bugs, spread across the center of the image (e.g., x:200-400, y:150-300).
      - Additional bugs: Scattered around the edges, particularly near the top center (x:300-400, y:50-100) and bottom right (x:400-500, y:300-400).
    - *Labels and Confidence*:
      - Classified as "armored bug" or "enemy creature" with ~80% confidence, based on their insect-like shape, spikes, and clustering behavior typical of game enemies.
      - The striped pattern and size distinguish them from other entities, though my training data might not have an exact match for this specific creature design.

    - *Positions*:
      - One near the top center (x:350-400, y:50-100), near a bug.
      - Another in the bottom right (x:400-450, y:350-400), near another bug.
    - *Labels and Confidence*:
      - Classified as "spider" or "enemy minion" with ~75% confidence, due to their leg structure and body shape.

