Hacker News | fchollet's comments

It is 100% ARC-AGI-3 specific though, just read through the prompts https://github.com/symbolica-ai/ARC-AGI-3-Agents/blob/symbol...

What a dick move. Making that prompt open source will probably mean it gets scraped into training data, so even models that don't want to cheat will accidentally cheat in the next generation.

(disclaimer: i worked on early versions of agentica_sdk; but wasn't involved in recent developments and the ARC solver)

As other comments point out, this is about harness development and harness efficiency. Agentica SDK is a sort of meta-harness that makes things easy: plug any "internal API" (as defined natively in your codebase) directly into your agent. Agentica SDK itself is not application specific; but the APIs of your application are... application specific.

Re: the linked prompt. A harness is a set of tools and descriptions of how to best use those tools, and sometimes some external control flow based on the outcome of using those tools. How to "best use the tools" should always be part of the prompt (like in this case).

So this work tries to answer: "short of telling the agent any solutions, make a simple but efficient API to play the games, hand it to the agent, and see how it does". In the world of harness development I think that's an interesting question to answer!


>In the world of harness development I think that's an interesting question to answer!

The challenge isn't about harness development though, and a sufficiently complex harness can solve these tasks rather easily.

And presenting it as if you've made a novel development for solving ARC-AGI-3 leads me to believe you're willing to waste all of our time for your benefit at every step in the future.


> a sufficiently complex harness can solve these tasks rather easily.

I claim this is not so easily done, and earlier iterations of ARC-AGI did not have the constraint in the first place. You want something that generalizes across all puzzles (hopefully even the private ones), and these puzzles are extremely diverse ... and hard; telling the model the controls and some basic guidelines for the game is the only "obvious" thing you can do.

The other point of my reply was efficiency, both in terms of creating and using the harness; the discussed solution is something that anyone (in fact, likely even an LLM itself) can cook up in a few minutes; it's not much more than a game control wrapper so the agent can play around with the game in live python and some generalities as laid out in the prompt.
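A hypothetical sketch of the kind of thin "game control wrapper" described above. `FakeClient`, `GameWrapper`, and the action names are illustrative assumptions, not the actual Agentica SDK or ARC-AGI-3 API:

```python
class FakeClient:
    """Stand-in for the real game API client (hypothetical)."""
    def execute(self, game_id, action, **kwargs):
        # A real client would return the next frame: grid, score, state...
        return {"game_id": game_id, "action": action, "args": kwargs,
                "state": "RUNNING"}

class GameWrapper:
    """Exposes the game as plain Python calls an agent can poke at live."""
    def __init__(self, client, game_id):
        self.client = client
        self.game_id = game_id

    def act(self, action, **kwargs):
        return self.client.execute(self.game_id, action, **kwargs)

    def reset(self):
        return self.act("RESET")

    def click(self, x, y):
        return self.act("CLICK", x=x, y=y)

game = GameWrapper(FakeClient(), "ls20")
print(game.reset()["state"])   # RUNNING
print(game.click(3, 7)["args"])  # {'x': 3, 'y': 7}
```

The point is the thinness: nothing game-specific lives in the wrapper itself, only plumbing so the agent can experiment interactively.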

(But I'm always happy to be proven wrong. What harnesses did you have in mind?)


This is so disingenuous on Symbolica's part. These insincere announcements just make it harder for genuine attempts and novel ideas.

Um, yes, this is an extremely specific benchmark harness. It has a ton of knowledge encoded about the tasks at hand. The tweet is dishonest even in the best light.

The hard part of these tests isn't purely reasoning ability ffs.


Francois here. The scoring metric design choices are detailed in the technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf - the metric is meant to discount brute-force attempts and to reward solving harder levels instead of the tutorial levels. The formula is inspired by the SPL metric from robotics navigation, it's pretty standard, not a brand new thing.

We tested ~500 humans over 90-minute sessions in SF, with a $115-$140 show-up fee (then +$5/game solved). A large fraction of testers were unemployed or under-employed. It's not like we tested Stanford grad students. Many AI benchmarks use experts with Ph.D.s as their baseline -- we hire regular folks as our testers.

Each game was seen by 10 people. They were fully solved (all levels cleared) by 2-8 of them, most of the time 5+. Our human baseline is the second best action count, which is considerably less than an optimal first-play (even the #1 human action count is much less than optimal). It is very achievable, and most people on this board would significantly outperform it.

Try the games yourself if you want to get a sense of the difficulty.

> Models can't use more than 5X the steps that a human used

These aren't "steps" but in-game actions. The model can use as much compute or tools as it wants behind the API. Given that models are scored on efficiency compared to humans, the cutoff makes basically no difference on the final score. The cutoff only exists because these runs are incredibly expensive.

> No harness at all and very simplistic prompt

This is explained in the paper. Quoting: "We see general intelligence as the ability to deal with problems that the system was not specifically designed or trained for. This means that the official leaderboard will seek to discount score increases that come from direct targeting of ARC-AGI-3, to the extent possible."

...

"We know that by injecting a high amount of human instructions into a harness, or even hand-crafting harness configuration choices such as which tools to use, it is possible to artificially increase performance on ARC-AGI-3 (without improving performance on any other domain). The purpose of ARC-AGI-3 is not to measure the amount of human intelligence that went into designing an ARC-AGI-3 specific system, but rather to measure the general intelligence of frontier AI systems.

...

"Therefore, we will focus on reporting the performance of systems that have not been specially prepared for ARC-AGI-3, served behind a general-purpose API (representing developer-aware generalization on a new domain as per (8)). This is similar to looking at the performance of a human test-taker walking into our testing center for the first time, with no prior knowledge of ARC-AGI-3. We know such test takers can indeed solve ARC-AGI-3 environments upon first contact, without prior training, without being briefed on solving strategies, and without using external tools."

If it's AGI, it doesn't need human intervention to adapt to a new task. If a harness is needed, it can make its own. If tools are needed, it can choose to bring out those tools.


Suppose you construct a Mechanical Turk AI who plays ARC-AGI-3 by, for each task, randomly selecting one of the human players who attempted it, and scoring them as an AI taking those same actions would be scored. What score does this Turk get? It must be <100% since sometimes the random human will take more steps than the second best, but without knowing whether it's 90% or 50% it's very hard for me to contextualize AI scores on this benchmark.
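The Turk's expected score can be computed directly once a scoring rule is fixed. A sketch, assuming the quadratic efficiency scoring described elsewhere in the thread and an entirely made-up spread of per-task human action counts:

```python
def expected_turk_score(human_counts):
    # Baseline = second-best human action count; a random human attempt is
    # scored like an AI would be: full credit at/under the baseline,
    # quadratic penalty beyond it (assumed formula, for illustration only).
    baseline = sorted(human_counts)[1]
    scores = [1.0 if c <= baseline else (baseline / c) ** 2
              for c in human_counts]
    return sum(scores) / len(scores)

# Hypothetical action counts for 10 humans on one task:
counts = [40, 45, 50, 60, 80, 90, 120, 150, 200, 300]
print(round(expected_turk_score(counts), 3))  # 0.424
```

With this made-up spread the random-human Turk lands around 42%, i.e. well below 100% but far from zero; the real answer depends entirely on the unpublished distribution of human attempts.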

The people recruited weren’t experts. I can imagine it’s straightforward to find humans (such as those that play many video games) that can score >100% on this benchmark.

So, if you look at the way the scoring works, 100% is the max. For each task, you get full credit if you solve in a number of steps less than or equal to the baseline. If you solve it with more steps, you get points off. But each task is scored independently, and you can't "make up" for solving one slowly by solving another quickly.

Like suppose there were only two tasks, each with a baseline score of solving in 100 steps. You come along and you solve one in only 50 steps, and the other in 200 steps. You might hope that since you solved one twice as quickly as the baseline, but the other twice as slowly, those would balance out and you'd get full credit. Instead, your scores are 1.0 for the first task, and 0.25 (scoring is quadratic) for the second task, and your total benchmark score is a mere 0.625.
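The two-task example above, sketched under the assumption that the per-task score is simply min(1, baseline/steps) squared (the real SPL-inspired formula may differ in details):

```python
def task_score(agent_steps, baseline_steps):
    # Full credit at or under the human baseline; quadratic penalty beyond it.
    if agent_steps <= baseline_steps:
        return 1.0
    return (baseline_steps / agent_steps) ** 2

def benchmark_score(results):
    # Tasks are scored independently and averaged: a fast solve on one task
    # cannot "make up" for a slow solve on another.
    return sum(task_score(a, b) for a, b in results) / len(results)

# Two tasks, baseline 100 steps each; solved in 50 and 200 steps:
print(benchmark_score([(50, 100), (200, 100)]))  # 0.625
```

Solving one task twice as fast earns no bonus (capped at 1.0), while solving the other twice as slow costs 75% of that task's credit, hence the 0.625 total.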


The purpose is to benchmark both generality and intelligence. "Making up for" a poor score on one test with an excellent score on another would be the opposite of generality. There's a ceiling based on how consistent the performance is across all tasks.

>"Making up for" a poor score on one test with an excellent score on another would be the opposite of generality.

Really? This happens plenty with human testing. Humans aren't general?

The score is convoluted and messy. If the same score can say materially different things about capability then that's a bad scoring methodology.

I can't believe I have to spell this out but it seems critical thinking goes out the window when we start talking about machine capabilities.


Just because humans are usually tested in a particular way that allows them to make up for a lack of generality with an outstanding performance in their specialization doesn't mean that is a good way to test generalization itself.

Apparently someone here doesn't know how outliers affect a mean. Or, for that matter, have any clue about the purpose of the ARC-AGI benchmark.

For anyone who is interested in critical thinking, this paper describes the original motivation behind the ARC benchmarks:

https://arxiv.org/abs/1911.01547


>Apparently someone here doesn't know how outliers affect a mean.

If the concern is that easy questions distort the mean, then the obvious fix is to reduce the proportion of easy questions, not to invent a convoluted scoring method to compensate for them after the fact. Standardized testing has dealt with this issue for a long time, and there’s a reason most systems do not handle it the way ARC-AGI 3 does. Francois is not smarter than all those people, and certainly neither are you.

This shouldn't be hard to understand.


How do you define "easy question" for a potential alien intelligence? The solution, like most solutions when dealing with outliers, in my opinion, is to minimize the impact of outliers.

I mean, presumably that's what the preview testing stage would handle, right? It should be clear if there is a class of obviously easy questions. And if that's not clear, then it makes the scoring even worse.

And in some sense, all of these benchmarks are tied and biased for human utility.

I don't think ARC would be designed and scored the way it is if giving consideration for an alien intelligence was a primary concern. In that case, the entire benchmark itself is flawed and too concerned with human spatial priors.

There are many ways to deal with a problem. Not all of them are good. The scoring for 3 is just bad. It does too much and tells too much.

5% could mean it only answered a fraction of problems, or that it answered all of them but with more game steps than the best human score. These are wildly different outcomes with wildly different implications. A scoring methodology that allows for such ambiguity is simply not a good one.


Thanks, I mostly agree with your approach except for one thing: eyesight feels like a "harness" that humans get to use and LLMs do not.

I'm guessing you did not pass the human testers JSON blobs to work with, and suspect they would also score 0% without the eyesight and visual cortex harness to their reasoning ability.


I'm all for testing humans and AI on a fair basis; how about we restrict testing to robots physically coming to our testing center to solve the environments via keyboard / mouse / screen like our human testers? ;-)

(This version of the benchmark would be several orders of magnitude harder wrt current capabilities...)


This counterpoint doesn't address the issue, and I would argue that it is partially bad faith.

Yes, making it to the test center is significantly harder, but in fact the humans could have solved it from their home PC instead, and performed the exact same. However, if they were given the same test as the LLMs, forbidden from input beyond JSON, they would have failed. And although buying robots to do the test is unfeasible, giving LLMs a screenshot is easy.

Without visual input for LLMs in a benchmark that humans are asked to solve visually, you are not comparing apples to apples. In fact, LLMs are given a different and significantly harder task, and in a benchmark that is so heavily weighted against the top human baseline, the benchmark starts to mean something extremely different. Essentially, if LLMs eventually match human performance on this benchmark, this will mean that they in fact exceed human performance by some unknown factor, seeing as human JSON performance is not measured.

Personally, this hugely decreased my enthusiasm for the benchmark. If your benchmark is to be a North star to AGI, labs should not be steered towards optimizing superhuman JSON parsing skills. It is much more interesting to steer them towards visual understanding, which is what will actually lead the models out into the world.


I just realized that this also means the benchmark is in practice unverified by third parties, as not all tasks are verified to be solvable through the JSON interface. Essentially there is no guarantee that it is even possible to understand how to complete every task optimally through the JSON interface alone.

I assume you did not develop the puzzles by visualizing JSON yourselves, and so there might be non-obvious information that is lost in translation to JSON. Until humans optimally solve all the puzzles without ever having seen the visual version, there is no guarantee that this is even possible to do.

I think the only viable solution here is to release a version of the benchmark with a vision only harness. Otherwise it is impossible to interpret what LLM progress on this benchmark actually means.


Oookay. I actually tried the harness myself, and there was a visual option. It is unclear to me if that is what the models are using on the official benchmark, but it probably is. This probably means that much of my critique is invalid. However, in the process of fiddling with the harness, building a live viewer to see what was happening, and playing through the agent API myself, I might have found 3-4 bugs with the default harness/API. Dunno where to post it, so of all places I am documenting the process on HN.

Bug 1: The visual mode "diff" image is always black, even if the model clicked on an interactive element and there was a change. Codex fixed it in one shot, the problem was in the main session loop at agent.py (line 458).

Bug 2: Claude and ChatGPT can't see the 128x128 pixel images clearly, and cannot accurately place clicks on them either. Scaling up the images to 1024x1024 pixels gave the best results; Claude dropped off hard at 2048 for some reason. Here are the full test results when models were asked to hit specific (manually labeled) elements on the "vc 33" level 1 (upper blue square, lower blue square, upper yellow rectangle, lower yellow rectangle):

Model                  | 128   | 256   | 512   | 1024  | 2048
claude-opus-4-6        | 1/10  | 1/10  | 9/10  | 10/10 | 0/10
gemini-3-1-pro-preview | 10/10 | 10/10 | 10/10 | 10/10 | 10/10
gpt-5.4-medium         | 4/10  | 8/10  | 9/10  | 10/10 | 8/10
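The scaling trick behind the table can be reproduced with a plain nearest-neighbor upscale. A sketch on a generic 2D grid of cell values (the actual harness frames are RGB images, and this is not the code I used, just the idea):

```python
def upscale(grid, scale=8):
    # Nearest-neighbor: repeat each cell `scale` times along both axes so
    # each game pixel becomes a crisp block (128 * 8 = 1024) instead of a
    # blurry interpolated smear.
    out = []
    for row in grid:
        wide = [cell for cell in row for _ in range(scale)]
        out.extend(list(wide) for _ in range(scale))
    return out

big = upscale([[1, 2],
               [3, 4]], scale=2)
print(big)  # [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Nearest-neighbor matters here: smoothing filters blur the 1-pixel features the models already struggle to see.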

Bug 3: "vc 33" level 4 is impossible to complete via the API. At least it was when I made a web viewer to navigate the games from the API side. The "canal lock" required two clicks instead of one to transfer the "boat" when the water levels were equilibrated, and after that any action whatsoever would spontaneously pop the boat back to the first column, so you could never progress.

"Bug" 4: This is more of a complaint on the models' behalf. A major issue is that the models never get to know where they clicked. This is a bit unfair, since humans get a live update of their cursor position at no extra cost (even a preview of the square their cursor highlights in the human version), but if models fuck up the coordinates they often think they hit their intended targets even though they whiffed. When that happens they note down "I hit the blue square but I guess nothing happened", and for the rest of the run they are fucked, because they conclude the element is not interactive even though they got it right on the first try. The combination of an intermediary harness layer that let the models "preview" their cursor position before they "confirmed" their action, plus the 1024x1024 resolution, caused a major improvement in their intended action ("I want to click the blue square") actually resulting in that action. Even then, though, unintended mis-clicks often spell the end of a run (Claude 4.6 made it the furthest, to level 2 of the "vc 33" stages, and got stuck when it missed a button and spent too much time hitting other things).

After I tried to fix all of the above issues, and tried to set up an optimal environment for models to get a fair shake, the models still mostly did very badly even when they identified the right interactive elements...except for Claude 4.6 Opus! Claude had at least one run where it made it to level 4 on "vc 33", but then got stuck because the blue squares it had to hit became too small, and it just couldn't get the cursor in the right spot even with the cursor preview functionality (the guiding pixel likely became too small for it to see clearly). When you read through the reasoning for the previous stages though, it didn't truly fully understand the underlying logic of the game, although it was almost there.


Well, yes, and would hand even more of an advantage to humans. My point is that designing a test around human advantages seems odd and orthogonal to measuring AGI.

The whole point of AGI is "general" intelligence, and for that intelligence to be broadly useful it needs to exist within the context of a human-centric world.

General intelligence does not require owning retinas.

Denying a proper eyesight harness is like trying to construct a speech-to-text model that makes transcripts from air-pressure values measured 16k times per second, while the human ear does frequency-power measurement and frequency binning due to its physical construction.
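To make the analogy concrete, the cochlea hands the brain something like binned frequency power, not raw pressure samples. An illustrative sketch (the sample rate, bin count, and test tone are arbitrary choices, not anything from a real STT pipeline):

```python
import numpy as np

def frequency_power_bins(samples, n_bins=32):
    # Rough analogue of the ear's mechanical preprocessing: raw air-pressure
    # samples -> power spectrum -> a handful of coarse frequency bins.
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

rate = 16000                         # 16k pressure values per second
t = np.arange(rate) / rate           # one second of "audio"
tone = np.sin(2 * np.pi * 440 * t)   # a pure 440 Hz tone
bins = frequency_power_bins(tone)
print(int(bins.argmax()))  # the bin covering ~440 Hz dominates
```

A model fed `bins` gets the structure for free; a model fed `tone` has to rediscover Fourier analysis first. That's the gap between a visual frame and a raw JSON blob.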


Does this mean blind people are not intelligent?

Blind people do function within the context of a human-centric world, though, so they would qualify as intelligent.

Yes, but they use various "harnesses" to do so (dog guides, text to speech software, assistance of other humans when needed..). Why can't AI?

Assistance of other humans? You do realise we're talking about an intelligence test, right? At that point, what are you even testing for? I'm sure you've taken exams where you couldn't bring your own notes, use Google, or get help from someone, even though real life doesn't have those constraints.

Then why deny it a harness it can also use in a human centric world?

There is no general purpose harness.

The human testers were provided with their customary inputs, as were the LLMs. I don't see the issue.

I guess it could be interesting to provide alternative versions that made available various representations of the same data. Still, I'd expect any AGI to be capable of ingesting more or less any plaintext representation interchangeably.


The issue is that ARC AGI 3 specifically forbids harnesses that humans get to use.

So what? Are you suggesting that an agent exhibiting genuine AGI will be tripped up by having to ingest json rather than rgb pixels? LLMs are largely trained on textual data so json is going to be much closer to whatever native is for them.

But by all means, give the agents access to an API that returns pixel data. However I fully expect that would reduce performance rather than increase it.


Because it is. Opus 4.6 jumps from 0.0% to 97.1% when given visual input

Source? I haven't seen anything like that for ARC-AGI performance.

Also, if it makes that big of a difference, then make a renderer for your agent that looks like the web page and have it solve them in the graphical interface and funnel the results to the API. I guarantee you won't get better performance, because the AGI is going to have to "understand" that the raw data can be represented as a 2D matrix regardless of whether it gets a 2D matrix of pixels or a 2D matrix of enumerations in JSON. If anything, that makes it a more difficult problem for an AI system that "speaks" in tokens.


That score is in the arc technical paper [1]. It's the full benchmark score using this harness [2] (which is just open code with read, grep, bash tools).

This is already a solved benchmark. That's why scoring is so convoluted and a self proclaimed Agent benchmark won't allow basic agent tools. ARC has always been a bit of a nothing burger of a benchmark but this takes the cake.

[1] https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

[2] https://blog.alexisfox.dev/arcagi3


> For example, in a variant of environment TR87, Opus 4.6 scores 0.0% with no harness and 97.1% with the Duke harness (12), yet in environment BP35, Opus 4.6 scores 0.0% under both configuration

This is with a harness that has been designed to tackle "a small set of public environments: ls20, ft09, and vc33" (of the arc-agi-3 challenge), yet it looks like it does not solve the full arc-agi-3 benchmark, just some of the environments.


The harness was designed with the preview, but no it was still tested on the full public set in that environment. You can run the benchmark in different 'environments' though it's unclear what the difference between them is.

>We then tested the harnesses on the full public set (which researchers did not have access to at the time)


It may have been tested on the full set, but the score you quote is for a single game environment. Not the full public set. That fact is verbatim in what you responded to and vbarrielle quoted. It scored 97% in one game, and 0% in another game. The full prelude to what vbarrielle quoted, the last sentence of which you left out, was:

> We then tested the harnesses on the full public set (which researchers did not have access to at the time). We found extreme bimodal performance across the two sets, controlling for the same frontier model...

The harness only transfers to like-environments and the intelligence for those specific games is baked into the harness by the humans who coded it for this specific challenge.

The point of ARC-AGI is to test the intelligence of AI systems in novel, but simple, environments. Having a human give it more powerful tools in a harness defeats the purpose. You should go back and read the original ARC-AGI paper to see what this is about+. Are you upset about the benchmark because frontier LLM models do so poorly exhibiting the ability to generalize when the benchmarks are released?

+ https://arxiv.org/abs/1911.01547


> intelligence for those specific games is baked into the harness

This is your claim but the other commenter claims the harness consists only of generic tools. What's the reality?

I also encountered confusion about this exact issue in another subthread. I had thought that generic tooling was allowed but others believed the benchmark to be limited to ingesting the raw text directly from the API without access to any agent environment however generic it might be.


1) Pointing out what tools to use is part of the intelligence that LLMs aren't great at.

2) one of the tools is a path finding algorithm. A big improvement/crutch over a regular LLM that has no such capability.

You'd think if LLMs are intelligent they'd be able to determine that a path finding algorithm is necessary and have a sub agent code it up real quick. But apparently they just can't do that without humans stepping in to make it a standard tool for them.

Here's the paper on what they did for the Duke Harness:

https://blog.alexisfox.dev/arcagi3


>You'd think if LLMs are intelligent they'd be able to determine that a path finding algorithm is necessary and have a sub agent code it up real quick.

ARC 3 doesn't allow that so.

>Here's the paper on what they did for the Duke Harness: https://blog.alexisfox.dev/arcagi3

Yeah, and the tools are general, not 'baked into the harness by the humans who coded it for this specific challenge.'


Adding a path-finding algorithm and environment-transform tools to a supposed "AGI" sure does seem like cheating to me. The sad part is, it's a cheat that only works on environments where pathfinding is a major part. And when it doesn't have those clues it bombs on everything.

I guess you really want to love the current SOTA LLMs. It's a shame they're dumb af.

Have a great day.


>Adding a path finding algorithm and environment transform tools to a supposed "AGI", sure does seem like cheating to me.

You would need all of that if you, a human, wanted any chance of solving this benchmark in the format LLMs are given. The funny thing about this benchmark is that we don't even know how solvable it is, because the baseline is tested with radically different inputs.

>I guess you really want to love the current SOTA LLMs. It's a shame they're dumb af.

I guess you really don't want to think critically. Yeah good day lol.


Really tired of you making up stuff about this. The baseline and entire benchmark evaluation is clearly defined, with a statistically sound number of participants for the baseline using the same consistent deterministic environments to perform evaluation. The fact you don't like where the "human performance" line was drawn or how the scale is derived is not the same as the benchmark being tested with "radically different inputs". Clearly you would rather hype AI than critically advance it. So I won't waste time with someone who is clearly not posting in good faith.

Byebye now.


Humans and LLMs are not seeing the benchmark in the same format. What's made up about that? Can you solve this in the JSON format?

Look man, don't reply if you don't want to.


>The point of ARC-AGI is to test the intelligence of AI systems in novel, but simple, environments.

The point is whatever Francois wants it to be.

>Having a human give it more powerful tools in a harness defeats the purpose.

Why does it defeat the purpose? Restricting the tools available is an arbitrary constraint. The Duke harness is a few basic tools. What's the problem? In what universe would any AI agent worth its salt not have access to read, grep and bash? If his benchmark were as great and the difference as wide as he claimed, then it simply wouldn't matter whether those tools were available. Francois removed access to tools because his benchmark falls apart with them. Simple as.

>You should go back and read the original ARC-AGI paper to see what this is about+.

>Are you upset about the benchmark because frontier LLM models do so poorly exhibiting the ability to generalize when the benchmarks are released?

I’m not upset about anything. I do not care about ARC, and I never have. I think it is a nothingburger of a benchmark: lots of grand claims about AGI, but very little predictive power or practical utility.

When models started climbing FrontierMath, that benchmark actually told us something useful: their mathematical capabilities were becoming materially stronger. And now state-of-the-art systems have helped with real research and even contributed to solving open problems. That is what a good benchmark is supposed to do.

ARC? It has zero utility on its own and manages to tell you nothing at the same time.

Unsaturated benchmarks matter because they help show where the state of the art actually is. The value is not "look, the score is low," but whether the benchmark tells you something real and useful about capability. ARC has always struggled on that front, but 3 has taken that to a new level of uselessness.


That's impressive. I'm also a bit surprised - I wouldn't have expected it to be trained much at all on that sort of visual input task. I think I'd be similarly surprised to learn that a frontier model was particularly good at playing retro videogames or actuating a robot for example.

However, if it can't figure out to render the json to a visual on its own does it really qualify as AGI? I'd still say the benchmark is doing its job here. Granted it's not a perfectly even playing field in that case but I think the goal is to test for progress towards AGI as opposed to hosting a fair tournament.


> However, if it can't figure out to render the json to a visual on its own does it really qualify as AGI? I'd still say the benchmark is doing its job here.

Can you render a serialized JSON text blob to a visual with your brain only? The model can't do anything better than this - no harness means no tools at all, no way to e.g. implement a visualizer in whatever programming language and run it.

Why don't human testers receive the same JSON text blob and no visualizer? It's like giving human testers a harness (a playable visualizer) but deliberately crippling it for the model.


Huh. I thought it wasn't supposed to receive any instructions tailored to the task but I didn't understand it to be restricted from accessing truly general tools such as programming languages. To do otherwise is to require pointless hoop jumping as frontier models inevitably get retrained to play games using a json (or other arbitrary) representation at which point it will be natural for them and the real test will begin.

This is my understanding as well; I thought tools were allowed.

My sense is that a powerful enough AI would have the sense to think something like "ah, this sounds like a video game! Let me code up an interactive GUI, test it for myself, then use it to solve these puzzles..." and essentially self-harness (the way you would if you were reading a geometry problem, by drawing it out on paper).
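A minimal version of that self-harness idea, assuming (hypothetically) that a frame arrives as a 2D JSON array of small integer color codes; the palette here is made up, not the actual ARC-AGI-3 color scheme:

```python
import json

# Made-up index -> glyph palette for illustration.
PALETTE = {0: ".", 1: "#", 2: "o"}

def render_ascii(frame_json):
    # Turn the raw grid into something spatially legible, the way a human
    # might sketch a geometry problem on paper before reasoning about it.
    grid = json.loads(frame_json)
    return "\n".join("".join(PALETTE.get(c, "?") for c in row)
                     for row in grid)

frame = json.dumps([[0, 1, 1, 0],
                    [0, 2, 2, 0]])
print(render_ascii(frame))
# .##.
# .oo.
```

An agent that writes something like this for itself has effectively built its own eyesight; the open question is whether current models spontaneously do so.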

Yeah, but that's literally above ASI, let alone AGI. The average human scores <1% on this bench; Opus scores 97.1% when given actual vision access, which means AGI was achieved long ago.

> opus scores 97.1% when given an actual vision access

Do you have a source for this? I would be very curious to see how top models do with vision.


No, there is no source for this. Opus is scoring around 1%, just like all the other frontier models. It would be fairly trivial to add a renderer intermediary, and if that improved it to 97+%... then you would get a huge cut of $2 million. The assertion that Opus gets 97% if you just give it a GUI is completely bogus.


I tried ls20 and it was surprisingly fun! Just from a game design POV, these are very well made.

Nit: I didn't see a final score of how many actions I took to complete 7 levels. Also didn't see a place to sign in to see the leaderboard (I did see the sign in prompt).


Agree 100%. I want to be able to see how many actions it took me. And it would be good if it were possible to see how well I'm doing compared to other humans, i.e. what is my percentile.

While I think all of your design choices are defensible, I do think you should release the full human baseline data. The second best action count is fine, but other choices are reasonable as well.

Something that I don't understand after reading the technical report is: Why is having access to a python interpreter as part of the harness not allowed (like the Duke harness), but using one hidden behind the model API (as a built-in tool) considered kosher?

The Duke harness was specifically designed for these puzzles, that's why they don't want to measure it.

My reading of that part in the technical report (models "could be using their own tools behind the model’s API, which is a blackbox"), is that there's no way to prevent it.

But from fchollet's comment here, using tools and harnesses is encouraged, as long as they are generic and not arc-agi specific. In that case, the models should be benchmarked by prompting through claude code and codex, rather than the through API (as from the api we only expect raw LLM output, and no tool use).


OpenAI does have Python execution behind its general-purpose API, but it has to be enabled with a flag, so I don't think it was used.

There's a very simple solution to this problem here. Instead of wink-wink-nudge-nudge implying that 100% is 'human baseline', calculate the median human score from the data you already have and put it on that chart.

It's below 1% lmao

where did you get this 1%?

> If a harness is needed, it can make its own. If tools are needed, it can chose to bring out these tools.

If I understand correctly, the model can carry only very limited memory between tests, so it's not really possible for the model to specialize itself under these assumptions.


Don't you see the massive problem with requiring visual input? Are blind people not intelligent because they cannot solve ARC-AGI-3 without a "harness"?

A theoretical text-only superintelligent LLM could prove the Riemann hypothesis but fail ARC-AGI-3 and won't even be AGI according to this benchmark...


Well, it would be AGI if you could connect a camera to it to solve it, similar to how blind people would be able to solve it if you restored their eyesight. But if the lack of vision is a fundamental limitation of their architecture, then it seems more fair not to call them AGI.

People blind from birth literally lack the neural circuits to comprehend visual data. Are they not intelligent?

I think I can confidently say they are not visually intelligent at all.

If you were phrasing things to quantify intelligence, you would have a visual intelligence pillar. And they would not pass that pillar. It doesn't make them dysfunctional or stupid, but visual intelligence is a key part of human intelligence.


Visual intelligence is a near meaningless term, as it's almost entirely dependent on spatial intelligence. The visually impaired do have high spatial intelligence; I wouldn't be surprised if their spatial intelligence is actually higher on average than that of those without visual impairment.

I think they don't actually lack them, or lack only a small fraction (their brains are ≈99% like a normal human brain), such that if they were an AI model, they could be fairly trivially upgraded with vision capability.

Think of it as spatial input, not visual. Blind people do have spatial inputs, and high spatial intelligence.

New benchmark idea: 20 questions of guess the number 1-10, with different answers. We run this on 10,000 humans and take the best score. Then we take 50 AI attempts, but take the worst attempt for "worst-case scenario robustness" or so. We also discard questions where a human failed but the AI passed because uhhh reasons... Then we also take the final relative score to the power of 100 so that the benchmark punishes bad answers or sum. Good benchmark?

This is a gross misrepresentation of the scoring process.

Maybe this is a neither can confirm or deny thing, but are there systems in place or design decisions made that are meant to surface attempts at benchmark optimizing (benchmaxxing), outside of just having private sets? Something like a heuristic anti-cheat I suppose.

Or perhaps the view is that any gains are good gains? Like studying for a test by leaning on brute memorization is still a non-zero positive gain.


There are no tricks. Our approach to reducing the impact of targeting (without fully eliminating it) is described in the paper.

Are you prompting the models through their APIs, which are not designed to use tools or harnesses? Or do the "system prompt" results come from prompting into the applications (i.e. claude code, or codex, or even the web front-ends)?

Off topic, but I have been following your Twitter for a while, and your posts specifically about the nature of intelligence have been a great read.

One interesting observation is that French-derived words in English tend to be fancier -- formal, sophisticated, higher-class -- while Germanic ones tend to be more casual, everyday vocabulary.


Many of these words transferred during the Norman Conquest. During that time, England was ruled by French speakers. The upper class and nobility in England were French (and French speakers).

When someone in the upper class wanted boeuf, they wanted the meat of a cow - not the cow itself. And so beef entered the English language as the meat. This extended to other animals. In general, the word for the meat in English is the French word for the animal, and the word for the animal is derived from the Germanic word.

https://www.etymonline.com/word/beef and https://www.etymonline.com/word/cow

This also extended to the language of law and other things that the upper classes (rather than the commoners) dealt with. When the common English (Germanic) speakers did have to deal with those topics, they used the French words, and those words were brought into English.


I believe this is because the Normans were wealthier than the native Brits


My rough estimate is that words of two syllables or less are mostly Germanic and words of three syllables or more are mostly Romance.


ça je ne crois pas ("that, I don't believe")


Um I meant words in English. Sorry..


Only the peasants spoke Old English; the nobility spoke French. Eventually the two languages merged into modern English.


You can easily convert these tasks to token strings. The reason why ARC does not use language as part of its format is that it seeks to minimize the amount of prior knowledge needed to approach the tasks, so as to focus on fluid intelligence as opposed to acquired knowledge.

All ARC tasks are built entirely on top of "Core Knowledge" priors, the kind of elementary knowledge that a small child has already mastered and that is possessed universally by all humans.
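The conversion the parent comment mentions is straightforward. Below is a minimal sketch of serializing an ARC-style grid task into a flat token string; the separator tokens and prompt layout here are illustrative choices, not an official format.

```python
# Flatten ARC-style grids (2D arrays of color indices 0-9) into a token
# string, marking row boundaries so the 2D structure stays recoverable.

def grid_to_tokens(grid):
    """Render a grid as space-separated cells with '|' between rows."""
    return " | ".join(" ".join(str(cell) for cell in row) for row in grid)

def task_to_prompt(train_pairs, test_input):
    """Render the demonstration pairs plus the test input as one string."""
    parts = []
    for inp, out in train_pairs:
        parts.append(f"INPUT: {grid_to_tokens(inp)} OUTPUT: {grid_to_tokens(out)}")
    parts.append(f"INPUT: {grid_to_tokens(test_input)} OUTPUT:")
    return "\n".join(parts)

example = task_to_prompt(
    train_pairs=[([[1, 0], [0, 1]], [[0, 1], [1, 0]])],
    test_input=[[2, 0], [0, 2]],
)
print(example)
```

Any such encoding preserves the task exactly; the point is that nothing about the format requires natural-language knowledge.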


Can you explain to me? Would the token strings be as easy to solve for humans as well?

Or let me ask differently. Can we still design text questions that are easy for humans and tough for AI?


The reason these tasks require fluid intelligence is because they were designed this way -- with task uniqueness/novelty as the primary goal.

ARC 1 was released long before in-context learning was identified in LLMs (and designed before Transformer-based LLMs existed), so the fact that LLMs can't do ARC was never a design consideration. It just turned out this way, which confirmed our initial assumption.


Is there any other confirmation of the assumptions, other than the LLM behaviour, because that still feels like circular reasoning.

I think a similar claim could be levelled against other benchmarks or LLM evaluation tasks. One could say that the Turing test was designed to assess human intelligence, and LLMs pass it, therefore LLMs have human intelligence. This is generally considered to be false now, because we can plainly see that LLMs do not have intelligence in the same way as humans (yet? debatable, not the point), and instead we concluded that the Turing test was not the right benchmark. That's not to diminish its importance, it was hugely important as a part of AI education and possibly even AI development for decades.

ARC does seem to be pushing the boundaries, I'm just not convinced that it's testing a provable step change.


I'm not sure that's quite correct about the Turing test. From Wikipedia:

"Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward."


>> The reason these tasks require fluid intelligence is because they were designed this way -- with task uniqueness/novelty as the primary goal.

That's in no way different than claiming that LLMs understand language, or reason, etc, because they were designed that way.

Neural nets of all sorts have been beating benchmarks since forever, e.g. there's a ton of language understanding benchmarks pretty much all saturated by now (GLUE, SUPERGLUE ULTRASUPERAWESOMEGLUE ... OK I made that last one up) but passing them means nothing about the ability of neural net-based systems to understand language, regardless of how much their authors designed them to test language understanding.

Failing a benchmark also doesn't mean anything. A few years ago, at the first Kaggle competition, the entries were ad-hoc and amateurish. The first time a well-resourced team tried ARC (OpenAI) they ran roughshod over it and now you have to make a new one.

At some point you have to face the music: ARC is just another benchmark, destined to be beat in good time whenever anyone makes a concentrated effort at it and still prove nothing about intelligence, natural or artificial.


I mostly agree with what you are saying, but…

> passing them means nothing about the ability of neural net-based systems to understand language, regardless of how much their authors designed them to test language understanding.

Does this implicitly suggest that it is impossible to quantitatively assess a system’s ability to understand language? (Using the term “system” in the broadest possible sense)

Not agreeing or disagreeing or asking with skepticism. Genuinely asking what your position is here, since it seems like your comment eventually leads to the conclusion that it is unknowable whether a system external to yourself understands language, or, if it is possible, then only in a purely qualitative way, or perhaps purely in a Stewart-style-pornographic-threshold-test - you’ll know it when you see it.

I don’t have any problem if that’s your position- it might even be mine! I’m more or less of the mindset that debating whether artificial systems can have certain labels attached to them revolving around words like “understanding,” “cognition,” “sentience” etc is generally unhelpful, and it’s much more interesting to just talk about what the actual practical capabilities and functionalities of such systems are on the one hand in a very concrete, observable, hopefully quantitative sense, and how it feels to interact with them in a purely qualitative sense on the other hand. Benchmarks can be useful in the former but not the latter.

Just curious where you fall. How would you recommend we approach the desire to understand whether such systems can “understand language” or “solve problems” etc etc… or are these questions useless in your view? Or only useful in as much as they (the benchmarks/tests etc) drive the development of new methodologies/innovations/measurable capabilities, but not in assigning qualitative properties to said systems?


>> Does this implicitly suggest that it is impossible to quantitatively assess a system’s ability to understand language? (Using the term “system” in the broadest possible sense)

I don't know and I don't have an opinion. I know that tests that claimed to measure language understanding, historically, haven't. There's some literature on the subject if you're curious (sounds like you are). I'd start here:

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data

Emily M. Bender, Alexander Koller

https://aclanthology.org/2020.acl-main.463/

Quoting the passage that I tend to remember:

>> While large neural LMs may well end up being important components of an eventual full-scale solution to human-analogous NLU, they are not nearly-there solutions to this grand challenge. We argue in this paper that genuine progress in our field — climbing the right hill, not just the hill on whose slope we currently sit —depends on maintaining clarity around big picture notions such as meaning and understanding in task design and reporting of experimental results.


The first time a top lab spent millions trying to beat ARC was actually in 2021, and the effort failed.

By the time OpenAI attempted ARC in 2024, a colossal amount of resources had already been expended trying to beat the benchmark. The OpenAI run itself cost several million dollars in inference compute alone.

ARC was the only benchmark that highlighted o3 as having qualitatively different abilities compared to all models that came before. o3 is a case of a good approach meeting an appropriate benchmark, rather than an effort to beat ARC specifically.


>> The first time a top lab spent millions trying to beat ARC was actually in 2021, and the effort failed.

Which top lab was that? What did they try?

>> ARC was the only benchmark that highlighted o3 as having qualitatively different abilities compared to all models that came before.

Unfortunately observations support a simpler hypothesis: o3 was trained on sufficient data about ARC-1 that it could solve it well. There is currently insufficient data on ARC-II to solve it therefore o3 can't solve it. No super magickal and mysterious qualitatively different abilities to all models that came before required whatsoever.

Indeed, that is a common pattern in machine learning research: newer models perform better on benchmarks than earlier models not because their capabilities increase with respect to earlier models but because they're bigger models, trained on more data and more compute. They're just bigger, slower, more expensive- and just as dumb as their predecessors.

That's 90% of deep learning research in a nutshell.


I'm sorry, but what observations support that hypothesis? There were scores of teams trying exactly that - training LLMs directly on ARC-AGI data - and by and large they achieved mediocre results. It just isn't an approach that works for this problem set.

To be honest your argument sounds like an attempt to motivate a predetermined conclusion.


In which case what is the point of your comment? I mean what do you expect me to do after reading it, reach a different predetermined conclusion?


Provide some evidence for your claims? This empty rhetoric stuff in every AI thread on HN wears me out a bit. I apologise for being a little aggressive in my previous comment.


There have been some human studies on ARC 1 previously, I expect there will be more in the future. See this paper from 2021, which was one of the earliest works in this direction: https://arxiv.org/abs/2103.05823


ARC 3 is still spatially 2D, but it adds a time dimension, and it's interactive.


I think a lot of people got discouraged, seeing how OpenAI solved ARC-AGI-1 by what seems like brute forcing and throwing money at it. Do you believe ARC was solved in the "spirit" of the challenge? Also, all the open-sourced solutions seem super specific to solving ARC. Is this really leading us to human-level AI at open-ended tasks?


Strong emphasis on "seems".

I'd encourage you to review the definition of "brute force", and then consider the absolutely immense combinatoric space represented by the grids these puzzles use.

"Brute force" simply cannot touch these puzzles. An amount of understanding and pattern recognition is strictly required, even with the large quantities of test-time compute that were used against arc-agi-1.
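A back-of-the-envelope calculation makes the point concrete. ARC grids go up to 30x30 cells with 10 possible colors per cell (the grid dimensions are from the public ARC format); the count of candidate output grids is hypergeometric in neither sense — it is a plain exponential:

```python
# Size of the output space a literal brute-force solver would face.
# ARC grids can be up to 30x30 cells, each taking one of 10 colors.

cells = 30 * 30
colors = 10
search_space = colors ** cells   # 10^900 candidate output grids

# Even a tiny 5x5 grid already admits 10^25 candidates, astronomically
# more than could ever be enumerated, and ARC provides no oracle to
# check candidates against anyway.
small_space = colors ** (5 * 5)

print(f"30x30 grid: 10^{cells} candidates")
print(f"5x5 grid: {small_space} candidates")
```

So any successful solver must narrow the space with some form of pattern recognition; enumeration alone is a non-starter.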


Also, there's no clear way to verify the solution. There could easily be multiple rules that work on the same examples.


It's useful to know what current AI systems can achieve with unlimited test-time compute resources. Ultimately though, the "spirit of the challenge" is efficiency, which is why we're specifically looking for solutions that are at least within 1-2 orders of magnitude of cost from being competitive with humans. The Kaggle leaderboard is very resource-constrained, and on the public leaderboard you need to use less than $10,000 in compute to solve 120 tasks.


Efficiency sounds like a hardware problem as much as a software problem.

$10000 in compute is a moving target, today's GPUs are much much better than 10 years ago.


> $10000 in compute is a moving target

And it's also irrelevant in some fields. If you solve a "protein folding" problem that was a blocker for a pharma company, that 10k is peanuts now.

Same for coding. If you can spend $100/hr on a "mid-level" SWE agent but you can literally spawn 100 today and 0 tomorrow and reach your clients faster, again the cost is irrelevant.


Are you in the process of creating tasks that behave as an acid test for AGI? If not, do you think such a task is feasible? I read somewhere in the ARC blog that they define AGI as the point when creating tasks that are hard for AI but easy for humans becomes virtually impossible.


If you aren't joking, that will filter most humans.


They said at least two people out of 400 solved each problem so they're pretty hard.


I don't think that's correct. They had 400 people receive some questions, and only kept the questions that were solved by at least 2 people. The 400 people didn't all receive 120 questions (they'd have probably got bored).

If you go through the example problems you'll notice that most are testing the "aha" moment. Once you do a couple, you know what to expect, but with larger grids you have to stay focused and keep track of a few things to get it right.


> Who would be buying bitcoin right now?

Well, maybe the US government? What if the US starts dedicating 10-15% of yearly federal receipts to serve as exit liquidity for Bitcoin holders?


What all top models do is recombine at test time the knowledge they already have. So they all possess Core Knowledge priors. Techniques to acquire them vary:

* Use a pretrained LLM and hope that relevant programs will be memorized via exposure to text data (this doesn't work that well)

* Pretrain a LLM on ARC-AGI-like data

* Hardcode the priors into a DSL

> Which is to say, a data augmentation approach

The key bit isn't the data augmentation but the TTT. TTT is a way to lift the #1 issue with DL models: that they cannot recombine their knowledge at test time to adapt to something they haven't seen before (strong generalization). You can argue whether TTT is the right way to achieve this, but there is no doubt that TTT is a major advance in this direction.

The top ARC-AGI models perform well not because they're trained on tons of data, but because they can adapt to novelty at test time (usually via TTT). For instance, if you drop the TTT component you will see that these large models trained on millions of synthetic ARC-AGI tasks drop to <10% accuracy. This demonstrates empirically that ARC-AGI cannot be solved purely via memorization and interpolation.
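To make the TTT idea concrete, here is a toy sketch of the control flow: instead of predicting directly, the solver first fits task-specific parameters on that task's own demonstration pairs, then applies them to the held-out test input. Real TTT fine-tunes neural network weights (usually with augmented copies of the demos); this sketch fits the simplest possible "model," a color-to-color lookup table, purely to keep the idea self-contained.

```python
# Toy test-time training: per-task adaptation before prediction.

def fit_on_demonstrations(train_pairs):
    """'Train' at test time: learn a color mapping from the demo pairs."""
    mapping = {}
    for inp, out in train_pairs:
        for row_in, row_out in zip(inp, out):
            for a, b in zip(row_in, row_out):
                mapping[a] = b
    return mapping

def predict(mapping, test_input):
    """Apply the task-adapted parameters to the held-out test input."""
    return [[mapping.get(c, c) for c in row] for row in test_input]

# Hypothetical task: every demo pair swaps colors 1 and 2.
demos = [([[1, 2], [2, 1]], [[2, 1], [1, 2]])]
params = fit_on_demonstrations(demos)      # adaptation happens per task
result = predict(params, [[1, 1], [2, 2]])  # -> [[2, 2], [1, 1]]
print(result)
```

The key structural point is that `params` is rebuilt from scratch for every task, which is what lets the solver handle tasks it has never seen before.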


>> So they all possess Core Knowledge priors.

Do you mean the ones from your white paper? The same ones that humans possess? How do you know this?

>> The key bit isn't the data augmentation but the TTT.

I haven't had the chance to read the papers carefully. Have they done ablation studies? For instance, is the following a guess or is it an empirical result?

>> For instance, if you drop the TTT component you will see that these large models trained on millions of synthetic ARC-AGI tasks drop to <10% accuracy.


>This demonstrates empirically that ARC-AGI cannot be solved purely via memorization and interpolation

Now that the current challenge is over, and a successor dataset is in the works, can we see how well the leading LLMs perform against the private test set?


I think the "semi-private" numbers here already measure that: https://arcprize.org/2024-results

For example, Claude 3.5 gets 14% in semi-private eval vs 21% in public eval. I remember reading an explanation of "semi-private" earlier but cannot find it now.


It is correct that the first model that will beat ARC-AGI will only be able to handle ARC-AGI tasks. However, the idea is that the architecture of that model should be able to be repurposed to arbitrary problems. That is what makes ARC-AGI a good compass towards AGI (unlike chess).

For instance, current top models use TTT, which is a completely general-purpose technique that provides the most significant boost to DL models' generalization power in recent memory.

The other category of approach that is working well is program synthesis -- if pushed to the extent that it could solve ARC-AGI, the same system could be redeployed to solve arbitrary programming tasks, as well as tasks isomorphic to programming (such as theorem proving).
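A minimal sketch of that program-synthesis approach: enumerate compositions of primitives from a hand-written DSL until one is consistent with every demonstration pair. Real systems use far richer DSLs and much smarter search; the four primitives and the depth limit here are illustrative choices.

```python
# Brute-force program search over a tiny DSL of grid transforms.
from itertools import product

PRIMITIVES = {
    "identity": lambda g: g,
    "flip_rows": lambda g: g[::-1],
    "flip_cols": lambda g: [row[::-1] for row in g],
    "transpose": lambda g: [list(r) for r in zip(*g)],
}

def synthesize(train_pairs, max_depth=2):
    """Return the first primitive sequence mapping every input to its output."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(g, names=names):
                for name in names:
                    g = PRIMITIVES[name](g)
                return g
            if all(run(inp) == out for inp, out in train_pairs):
                return names
    return None

# The demo pair is a 180-degree rotation: flip rows, then flip columns.
program = synthesize([([[1, 2], [3, 4]], [[4, 3], [2, 1]])])
print(program)
```

The found sequence is itself a program, which is why the same machinery generalizes to ordinary programming tasks once the DSL is rich enough.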


"However, the idea is that the architecture of that model should be able to be repurposed to arbitrary problems"

From a mathematical perspective, this doesn't sound right. All NNs are universal approximators and in theory can all learn the same thing to equal ability. It's more about the learning algorithm than the architecture, IMO.


François, have you coded and tested a solution yourself that you think will work best?


Hey, he's the visionary. You come up with the nuts and bolts.


is keras nuts and bolts enough?


Keras is a good abstraction model but poorly implemented.

