Hi HN,

I asked GPT-5 to count the characters in this essay [1]. Here's what happened:

- JavaScript: correct in 1ms

- GPT-5 with Python tool: correct in 26 seconds

- GPT-5 pure reasoning: wrong after 3.2 minutes

So why can't we let AI reason in Python or JavaScript instead of English? In this essay, I explore this question through graph transformers that preserve code structure rather than flattening it into token sequences. Thoughts?

[1] https://github.com/ManiDoraisamy/devforever/blob/c41f69283a8...
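
For reference, the deterministic baseline really is a one-liner. A minimal Python sketch of what the tool call boils down to ("essay.md" is a placeholder for the file in [1]):

    # Count characters deterministically; "essay.md" stands in for the essay in [1].
    with open("essay.md", encoding="utf-8") as f:
        essay = f.read()
    print(len(essay))  # exact code-point count in one pass, no token-level guessing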


From the paper: "sleep insufficiency was significantly associated with lower life expectancy when controlling for traditional predictors of mortality, with only smoking displaying a stronger association."


Both the US and China rely on uranium-based fission, but they seem to be diverging on their next bet. China is exploring thorium-based fission, while the US is leaning toward fusion [1].

[1] Trump Media’s merger with the fusion company TAE - https://www.globenewswire.com/news-release/2025/12/18/320754...


China's doing fusion stuff too.


Not as much as the US. Everyone from Trump to Altman is betting on fusion. But China is more pragmatic, focusing on making fission resistant to supply-chain shocks in uranium. Since they are a fast follower, their plan might be to catch up once fusion is viable for practical use.


Their government is chipping in a lot. There's a CNBC video about it (https://youtu.be/nyn0HUqluVM) with footage of a lot of what's going on there and in the US. It says China has 10x as many fusion PhDs and more patents. It'll be interesting to see how it pans out. They kind of overtook the US in batteries, solar, and EVs by doing the 10x-as-many-engineers thing.


University of Florida scientists use light to perform AI image convolutions with near-zero energy, making AI 10–100× more power-efficient. [1]

[1] https://news.ufl.edu/2025/09/optical-ai-chip/
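
For context, the operation being moved into optics is an ordinary image convolution. A minimal NumPy sketch of the math the chip performs (illustrative only; the paper's actual encoding and kernels differ):

    import numpy as np

    def conv2d(img, kernel):
        # Slide the kernel over the image, summing elementwise products.
        kh, kw = kernel.shape
        h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    edge = np.array([[1, 0, -1]] * 3)                # simple edge-detect kernel
    print(conv2d(np.random.rand(8, 8), edge).shape)  # (6, 6)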


Tsinghua scientists use light to do AI math extremely fast (12.5 GHz, trillionths of a second), enabling real-time decisions like trading or robotics. [1]

[1] https://scitechdaily.com/this-chip-computes-with-light-break...


Hi HN,

Ever since "Attention Is All You Need", I've been reading research papers directly instead of waiting for tech news coverage. My information supply chain has evolved from news sites as explainer to following experts on Twitter to ChatGPT these days. I'm experimenting with one more step: what if the papers themselves were memes?

For example, mapping AlexNet's 50-year journey to the Pirates of the Caribbean sinking ship scene [1]. Or using Sheldon's milking stool argument to explain transformer architecture [2]. The absurdity seems to make the concepts more memorable. Each meme has a quiz to dig deeper into the paper.

What do you think? Is humor a legitimate tool for learning about research papers, or does it undermine the seriousness of the work?

[1] https://near.tl/tech/post/CKANRc66UN8majA3prVu

[2] https://near.tl/tech/post/gzibcV5d6RQI6PlukYuM


Me too, but it gets harder during winter. The high-intensity yoga in this paper seems interesting. I'm planning to try it over the next few months.


Three years ago, when we started making a profit as a bootstrapped startup, I was stunned by how little money I could reinvest in my own company compared to a funded competitor. We paid ourselves 20% of the profit, paid 40% in taxes, and reinvested the remaining 40% into our business. Meanwhile, a VC-backed competitor could show losses and invest 100% of its revenue, plus the $10 million or $50 million raised from investors. In this essay, I explain how screwed up the incentives are for bootstrapped companies and propose a solution to fix this:

Think Stripe Atlas for bootstrapped companies, a service that incorporates you in countries with favorable tax treatment for reinvestment, where you only pay taxes on founder distributions, not on profits you reinvest.
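
A back-of-the-envelope sketch of the gap, using the numbers above (illustrative Python; real tax treatment varies by country):

    # Bootstrapped: profit is taxed before it can compound.
    profit = 1_000_000
    founder_pay = 0.20 * profit            # 20% to founders
    taxes = 0.40 * profit                  # 40% to taxes
    reinvested = profit - founder_pay - taxes
    print(reinvested / profit)             # 0.4 -> only 40% of profit compounds

    # A VC-backed competitor showing losses reinvests roughly 100% of revenue,
    # plus the $10M or $50M raised on top.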


Hi HN,

OP here. In my previous post [1], I argued that code generation is the kingpin behind reasoning models. The bottleneck is that LLMs generate code lossily, due to tokenization fragmentation and treating code as natural language instead of structured graphs. In this post I propose:

1. Parsing user prompts into input graphs (using controlled English like ACE)

2. Parsing code into output graphs (ASTs; see the sketch after this list)

3. Using graph transformers to map input graphs → output graphs
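
To make step 2 concrete: Python's own ast module already exposes the tree structure that a flat token sequence throws away. A minimal sketch:

    import ast

    # Parse code into a tree instead of a linear token stream (step 2).
    tree = ast.parse("def ladlen(lad):\n    return len(lad)")
    print(ast.dump(tree, indent=2))  # FunctionDef -> Return -> Call(len, lad)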

Core hypothesis: eliminating tokenization fragmentation ("ladlen" → ["lad", "len"]) and preserving tree structure could improve FrontierMath accuracy from 26% to 35-40%. No benchmarks yet. Just theory and a plan to test the improvement.
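
The fragmentation itself is easy to reproduce with an off-the-shelf tokenizer (a sketch assuming the tiktoken library; the exact split depends on the vocabulary):

    import tiktoken

    # Show how a BPE vocabulary fragments an identifier into subword pieces.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("ladlen")
    print([enc.decode([t]) for t in tokens])  # e.g. a split like ["lad", "len"]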

I've built compilers, not transformers, so I'd love technical feedback on:

- Is tokenization & linear structure really the bottleneck in code generation, or am I missing bigger issues?

- Is 35-40% improvement plausible, or overly optimistic?

- For those working on graph transformers: what approaches look promising?

Thanks in advance!

[1] Previous post - https://manidoraisamy.com/reasoning-not-ai.html

HN thread - https://news.ycombinator.com/item?id=45683113

