Hacker News | vidarh's comments

Probably a lot easier, but the moon loses a major selling point of data centres in space, namely reasonable latency. To be clear, I don't think it's a good idea. But I think that, specifically the way Musk is trying to position it, the moon would be an even harder sell.
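For scale, a back-of-envelope sketch (average distances, ignoring processing, queuing, and routing hops):

```python
# Back-of-envelope light-lag comparison. Distances are rough averages;
# real latency adds processing, queuing, and ground-segment hops.
C_KM_S = 299_792     # speed of light in vacuum, km/s
MOON_KM = 384_400    # average Earth-Moon distance, km
LEO_KM = 550         # typical LEO altitude (Starlink-like), km

moon_rtt_s = 2 * MOON_KM / C_KM_S
leo_rtt_s = 2 * LEO_KM / C_KM_S

print(f"Moon round trip: ~{moon_rtt_s:.2f} s")        # ~2.56 s
print(f"LEO round trip:  ~{leo_rtt_s * 1000:.1f} ms")  # ~3.7 ms
```

A two-and-a-half-second floor on every round trip rules out most interactive workloads, which is the point being made above.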

> But I think that specifically the way Musk is trying to position it, the moon would be an even harder sell.

I agree. It would be quite a moonshot.


The ISS has giant radiators[1]. Those radiators are necessary for just the modest heat generated on the ISS, and should give an idea of what a satellite full of GPUs might require...
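A rough Stefan-Boltzmann sketch gives a feel for the scale (the radiator temperature and emissivity here are assumed values, and sunlight on the radiator is ignored):

```python
# Rough radiator sizing for a satellite full of GPUs. Idealized:
# steady state, purely radiative heat rejection, radiator kept out
# of direct sunlight. Temperature and emissivity are assumptions.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9    # assumed radiator emissivity
T_RAD = 300.0       # assumed radiator temperature, K

def radiator_area_m2(heat_w: float, two_sided: bool = True) -> float:
    """Radiator area needed to reject heat_w watts at T_RAD."""
    flux = EMISSIVITY * SIGMA * T_RAD ** 4   # ~413 W/m^2 per face
    return heat_w / (flux * (2 if two_sided else 1))

print(f"1 MW of GPUs -> ~{radiator_area_m2(1e6):.0f} m^2 of radiator")
```

So even a single megawatt of compute wants on the order of a thousand square metres of two-sided radiator under these assumptions, before any margin.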

[1] https://en.wikipedia.org/wiki/External_Active_Thermal_Contro...


For agent, read sub-agent, e.g. the contents of your .claude/agents directory. When Claude Code spins up a sub-agent, it provides it with a prompt that combines the agent's own prompt with information composed by Claude from the outer context, based on what Claude thinks needs to be communicated to the sub-agent. Claude Code can either continue, with the sub-agent running in the background, or wait until it is complete. In either case, by default, Claude Code effectively gets to "check in" on messages from the sub-agent without seeing the whole thing (e.g. tool call results etc.), so only a small proportion of what the sub-agent does will make it into the main agent's context.

So if you want to do this, the current workaround is basically to have a sub-agent carry out tasks you don't want to pollute the main context.

I have lots of workflows that get farmed out to sub-agents, which then write reports to disk and produce a summary for the main agent, which will then selectively read parts of the report instead of having to process the full source material or even the whole report.
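That pattern is roughly the following (a hypothetical sketch; call_llm is a stand-in for whatever model API you use, not anything from Claude Code itself):

```python
# Hypothetical sketch of the "report to disk" pattern. call_llm() is
# a placeholder for any chat-completion client, not a Claude Code API.
from pathlib import Path

def call_llm(prompt: str) -> str:
    # Placeholder: wire in your actual model client here.
    return f"[model output for: {prompt[:40]}...]"

def run_subagent(task: str, report_path: Path) -> str:
    """Run a task in an isolated context: persist the full report to
    disk, return only a short summary to the main agent's context."""
    report = call_llm(f"Do this task and write a detailed report:\n{task}")
    report_path.parent.mkdir(parents=True, exist_ok=True)
    report_path.write_text(report)
    summary = call_llm(f"Summarise this report in three bullets:\n{report}")
    return summary  # the main agent sees this, not the full report

print(run_subagent("Audit the auth module", Path("reports/auth.md")))
```

The main agent can then grep or selectively read reports/auth.md only if the summary suggests it needs to.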


OK, so you are essentially using sub-agents as summarizing tools for the main agent, something you could implement with specialized tools that wrap independent LLM calls with the prompts of your sub-agents.

A bit of caution: the model is perfectly able to look up and read the slash-command, so while it may be true that it technically can't "invoke" a slash-command via TaskTool, it most certainly can execute all of the steps in it if the slash-command is somewhere you grant it read access, and it will tend to try to do so if you tell it to invoke a slash-command.

Agents add a docs index to the context for skills, so this is really just a finding that the current specific implementation of skills in Claude Code is suboptimal.

Their reasoning about it is also flawed. E.g. "No decision point. With AGENTS.md, there's no moment where the agent must decide "should I look this up?" The information is already present." - but this is exactly the case for skills too. The difference is just where in the context the information is, and how it is structured.

Having looked at their article, ironically I think the reason it works is that they likely force more information into context by giving the agent less information to work with:

Instead of having a description, which might convince the agent a given skill isn't relevant, their index is basically a list of vague filenames, forcing the agent to make a guess and potentially read the wrong thing.

This is basically exactly what skills were added to avoid. But it will break if the description isn't precise enough. And it's perfectly possible that current tooling isn't aggressive enough about pruning detail that might tempt the agent to ignore relevant files.


The current tooling isn't aggressive enough in that it's not the first thing the agent checks when it is prompted, at least for Claude Code. More often than not, I remind the agent that the skill exists before it does anything. It's very rare that it will pick a skill unprompted, which to me kind of defeats the purpose of skills. I mean, if I have to tell the thing to go look somewhere, I'll just make any old document folder in any format and tell it to look there.

Agreed. I think being overly formal about what can be in the frontmatter would be a mistake, but the beauty of doing this with an LLM is that you can pretty much emulate skills in any agent by telling it to start by reading the frontmatter of each skill file and use that to decide when to read the rest. Given that as a fallback, it's hardly a massive burden to standardise it a bit.
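That fallback is only a few lines to emulate (a sketch; it assumes each SKILL.md starts with a simple `---`-delimited frontmatter block, and the directory layout is made up):

```python
# Sketch: build a skills index from frontmatter alone, so any agent
# can decide which skill file to read in full. Assumes each SKILL.md
# starts with a '---' ... '---' frontmatter block.
from pathlib import Path

def read_frontmatter(path: Path) -> str:
    lines = path.read_text().splitlines()
    if not lines or lines[0].strip() != "---":
        return ""
    front = []
    for line in lines[1:]:
        if line.strip() == "---":
            break
        front.append(line)
    return "\n".join(front)

def skills_index(skills_dir: Path) -> dict:
    """Map relative skill path -> frontmatter; inject into the prompt."""
    return {str(p.relative_to(skills_dir)): read_frontmatter(p)
            for p in sorted(skills_dir.glob("**/SKILL.md"))}
```

Feed the resulting index into the system prompt, and tell the model to read the full file only when the frontmatter says the skill applies.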

The description "just" needs to be excruciatingly precise about when to use the skill, because the frontmatter is all the model will see in context.

But on the other hand, in Claude Code, at least, the skill "foo" is accessible as /foo, as the generalisation of the old commands/ directory, so I tend to favour being explicit that way.


I mean, the entire name of Mechanical Turk plays on "packaging up humans as technology", given the original Mechanical Turk was a "machine" where the human inside did the work.

Yes, but isn't that pretty much the point of the person you replied to? We know that a lot of inventions were motivated by that, and so it is incredibly myopic to not pause and try to think through the likely far broader implications.

OK, so what are they?

Scaling photovoltaic production doesn't seem likely to have many broader implications on its own. At best, it makes it easier to change the grid to renewable power, if you ignore the intermittency problem that still exists even at huge scales. PV fabs aren't really reusable for other purposes though, and PV tech is pretty mature already, so it's not clear what scaling that up will do.

Scaling rocketry has several fascinating implications but Elon already covered many of them in his blog post.

Scaling AI - just read the HN front page every day ;)

What are we missing here? Some combinatoric thing?


Scaling up PV production to the point where we could convert the entire Earth's electricity generation to solar is incredibly significant.

Yes there's the problem of intermittency, varying sun availability and so forth - which is why solar will never provide 100% of our power and we'll also need grid-scale storage facilities and domestic batteries and all sorts of stuff - but just imagine being able to make that many panels in the first place! Literally solar on every roof, that's transformative.

But sure, let's send it all to space to power questionable "AI" datacentres so we can make more fake nudes.


> doesn't seem likely to have many broader implications on its own

Considering how foundational energy is to our modern economy, energy several orders of magnitude cheaper seems quite likely to have massive implications.

Yes it might be intermittent, but I'm quite confident that somebody will figure out how to effectively convert intermittent energy costing millicents into useful products and services.

If nothing else, incredibly cheap intermittent energy can be cheaply converted to non-intermittent energy inefficiently, or to produce the enablers for that.


> Scaling photovoltaic production doesn't seem likely to have many broader implications on its own

Musk is suggesting manufacture at a scale sufficient to keep the Earth's entire land area tiled in working PV.

If the maths I've just looked at is correct (a first glance said yes, but I wouldn't swear to it), that much PV on the ground would warm the Earth by 22 C just by being darker than soil; in the right orbit it would cool the Earth by 33 C by blocking sunlight.
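I haven't verified those figures, but the kind of calculation involved is a zero-dimensional radiative balance like this (the albedo values are illustrative assumptions, and greenhouse feedbacks are ignored, so the numbers are the mechanism, not a prediction):

```python
# Zero-dimensional radiative balance: T_eq = ((1-a)*S / (4*sigma))**0.25
# Ignores the greenhouse effect and all feedbacks; the albedo values
# below are made-up illustrations of the direction of the effect.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 1361.0     # solar constant, W/m^2

def t_eq(albedo: float) -> float:
    return ((1 - albedo) * SOLAR / (4 * SIGMA)) ** 0.25

baseline = t_eq(0.30)   # Earth's current planetary albedo, ~255 K
darker = t_eq(0.25)     # assumed darker planet (PV-covered land)
print(f"warming from albedo 0.30 -> 0.25: ~{darker - baseline:.1f} K")
```

Darkening the planet raises the equilibrium temperature; blocking sunlight in orbit effectively lowers S and does the opposite.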


Just scratching the surface: assuming the increase in production capacity is only realistically possible if you can bring prices down (otherwise this "project" would start to consume an implausibly large proportion of economic output), you can address the intermittency problem in several ways:

Driving down the cost makes massive overprovisioning a way of reducing intermittency, because you will be able to cover demand even when output is proportionally far lower, which also means you'll be able to cover demand in far larger areas, even before looking at storage.

But lower solar costs would also make storage more cost-effective, since power cost will be a lower proportion of the amortised cost of the total system. The same goes for increasing transmission investments to smooth load. Every cost drop for solar will let it cover a larger proportion of total power demand, and we're nowhere near maximising viable total capacity even at current costs.
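The storage point is just amortised-cost arithmetic (all numbers below are illustrative placeholders, not forecasts):

```python
# Toy model: cost of a kWh delivered *through* storage is the solar
# input cost (inflated by round-trip losses) plus the storage cost
# per kWh cycled. All prices are illustrative placeholders.
RTE = 0.90        # assumed storage round-trip efficiency
STORAGE = 0.10    # assumed storage cost per kWh cycled, $

def firmed_cost(solar_per_kwh: float) -> float:
    return solar_per_kwh / RTE + STORAGE

for solar in (0.04, 0.02, 0.01):
    print(f"solar ${solar:.2f}/kWh -> firmed ${firmed_cost(solar):.3f}/kWh")
```

Cheaper solar directly cheapens every kWh that passes through storage, so the solar-plus-storage combination becomes competitive for a larger share of demand even if storage itself doesn't get any cheaper.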

A whole lot of industrial costs are also affected by energy prices. Drive this down, and you should expect price drops in other areas, as well as new industrial uses where energy-intensive processes are not cost-effective today.

The geopolitical consequences of a dramatic acceleration of the drop in dependency on oil and gas would also take decades to play out.

At the same time, if you can drive down the cost of energy by making solar so much cheaper, you also make earth-bound data centres more cost-competitive, and the cost advantage of space-based data centres would be accordingly lower.

I think it's an interesting idea to explore (though there's the whole issue of cooling being far harder in space), but I also think the effects would be far broader. By all means, if Musk wants to pour resources into making solar cheap enough for this kind of project to be viable, he should go ahead - maybe it'll consume enough of his time to leave him less time to play teenage edgelord - because I think the societal effects of driving down energy costs would generally be positive, AI or not. It just screams of being a justification for an xAI purchase done mostly for his personal financial engineering, though.


Going public also brings with it a lot of pesky reporting requirements and challenges. If it wasn't for the benefit of liquidity for shareholders, "nobody" would go public. If the bigger shareholders can get enough liquidity from private sales, or have a long enough time horizon, there's very little to be gained from going public.

