
As the focus here is solely on the US, and the comments dwell too much on the impossibility of heat dissipation, I want to include some information to broaden the perspective.

- In the EU, the ASCEND study conducted in 2024 by Thales Alenia Space found that data centers in space could be feasible by 2035. Data centers in space could contribute to the EU's Net-Zero goal by 2050 [1]

- Heat dissipation could be greatly enhanced with micro droplet technology, thereby reducing the required radiator surface area by a factor of 5-10 (a rough sizing sketch is below)

- Data centers in space could provide advantages for processing space data in place, instead of sending it all to Earth.

- The Lonestar project proved that data storage and edge processing in space (Moon, cislunar) is possible.

- A hybrid architecture could dramatically change the heat budget:
  + optical connections reduce heat
  + photonic chips (Lightmatter and Q.ANT)
  + processing-in-memory might reduce energy requirements by 10-50 times

I think the hybrid architecture could provide decisive advantages, especially when designed for AI inference workloads.
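
To make the radiator claim concrete, here is a minimal back-of-the-envelope sketch in Python. Everything in it is an assumption chosen for illustration: a 1 MW waste-heat load, a 350 K radiator with emissivity 0.9, and the 5-10x droplet-radiator factor is the claim above, not something derived here.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

    def radiator_area(heat_load_w, temp_k=350.0, emissivity=0.9):
        # Area needed to reject heat_load_w by thermal radiation to deep space,
        # ignoring solar and albedo back-loading for simplicity.
        return heat_load_w / (emissivity * SIGMA * temp_k**4)

    heat_load = 1e6  # 1 MW of waste heat (assumed)
    conventional = radiator_area(heat_load)
    print(f"conventional panel radiator: ~{conventional:,.0f} m^2")
    for factor in (5, 10):
        print(f"claimed {factor}x droplet improvement: ~{conventional / factor:,.0f} m^2")

Under those assumptions a conventional panel needs on the order of 1,300 m^2 per megawatt, or a few hundred m^2 if the droplet-radiator factor holds.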

[1] https://ascend-horizon.eu/


> Data center in space could contribute to the EU's Net-Zero goal by 2050

How unbelievably crass. "Let's build something out of immense quantities of environmentally-destructive-to-extract materials and shoot it into space on top of gargantuan amounts of heat and greenhouse gas emissions; since it won't use much earth-sourced energy once it's up there, that nets out to a win!"

Insane.


Blue Origin at least runs its rockets on hydrogen whose exhaust is only water.

Where do they get the hydrogen without putting a load of CO2 into the atmosphere just to manufacture the hydrogen to begin with?

One thing to think about is debt that is not measured in money.

People are becoming more familiar with "technical debt" since otherwise it comes due by surprise.

With hamsterwheels in space you've got energy debt.

Separate from all other forms of debt that are involved.

Like financial debt, which is only a problem if you can't really afford to do the project so you have to beg, borrow, and/or steal to get it going.

On that point I think I'd be a little skeptical if the richest known person can't actually afford this easily. Especially if he really wants it with all his heart, and has put in any worthwhile effort so far.

Anyway, solar cells are kind of weak when you think about it: they don't produce the high output of a suitable chemical reaction, like the kind that launches the rockets themselves. A launch releases so much energy so fast that it's always going to take a serious amount of time for the "little" solar cells to produce an equal amount of energy before a net positive can begin to accrue.

Keeping the assets safely on the home planet simply provides a jump-start that can not be matched.

All other things being unequal or not.
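
To put a rough number on that payback time, here is a back-of-the-envelope sketch; every figure in it is an assumption chosen only for illustration (on the order of 5 TJ of propellant chemical energy for a medium-lift launch, a 100 kW array delivered to orbit, near-continuous sunlight).

    # Energy-payback sketch: how long until an orbital solar array has produced
    # as much energy as its launch consumed. Every number is a rough assumption.
    launch_chemical_energy_j = 5e12   # ~5 TJ of propellant chemical energy (assumed)
    array_power_w = 100e3             # 100 kW array delivered to orbit (assumed)
    sunlight_fraction = 0.95          # near-continuous sun in a suitable orbit (assumed)

    seconds_per_year = 365.25 * 24 * 3600
    payback_years = launch_chemical_energy_j / (array_power_w * sunlight_fraction * seconds_per_year)
    print(f"energy payback: ~{payback_years:.1f} years")

Under those made-up numbers the payback is on the order of a year or two, which is indeed a serious head start for anything that stays on the ground.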


And heat and pressure. Negligible amounts in terms of the biosphere, but not in terms of flora and fauna in proximity to launch sites.

Protecting flora and fauna near launch sites is not a battle you are going to win. The next space race will need more launch sites, not fewer, and we're gonna have to accept a negative impact around those sites.

> micro droplet technology

Intentionally causing Kessler Syndrome?

> A hybrid architecture could dramatically change the heat budget: + optical connections reduce heat + photonic chips (Lightmatter and Q.ANT) + processing-in-memory might reduce energy requirement by 10-50 times

It would also make ground-based computation more efficient by the same amount. That does nothing to make space datacenters make sense.


Kessler syndrome is only a problem if the satellites are in LEO. They don't have to be.

They do have to be if they want to be approved by the FCC.

And btw Kessler syndrome applies to any orbital band. You've got the logic backwards. Kessler syndrome is usually only considered a threat for LEO because that's where most of the satellites are. But if you're throwing million(s) of satellites into orbit, it becomes an issue at whatever orbital height you pick.
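
To see why the altitude barely matters, here is a crude kinetic-theory sketch. It assumes uncontrolled, randomly distributed objects (the debris scenario Kessler described, not a coordinated, maneuvering constellation), and the object size, shell thickness, and relative velocity are all assumed round numbers.

    import math

    # Collision rate for N uncontrolled objects spread through a thin spherical
    # shell: rate per object ~ density * cross_section * velocity. All inputs assumed.
    EARTH_RADIUS_KM = 6371.0

    def collisions_per_object_per_year(n_objects, altitude_km,
                                       shell_thickness_km=50.0,
                                       object_radius_km=0.005,   # ~5 m effective radius (assumed)
                                       rel_velocity_km_s=10.0):  # typical crossing speed (assumed)
        r = EARTH_RADIUS_KM + altitude_km
        shell_volume = 4.0 * math.pi * r**2 * shell_thickness_km   # km^3
        density = n_objects / shell_volume                         # objects per km^3
        cross_section = math.pi * (2.0 * object_radius_km)**2      # km^2, center-to-center hit
        seconds_per_year = 365.25 * 24 * 3600
        return density * cross_section * rel_velocity_km_s * seconds_per_year

    for alt_km in (550, 1200, 8000):
        rate = collisions_per_object_per_year(1_000_000, alt_km)
        print(f"altitude {alt_km:>5} km: ~{rate:.2f} collisions per object per year")

Moving the shell outward only grows its volume as r^2, so even far higher bands only buy a factor of a few; with a million uncontrolled objects the per-object collision rate is absurd at any altitude.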


Sorry I misspoke, you're totally correct. What I meant to say was it's only a problem if they're orbiting around the Earth. I've heard sun orbits mentioned as a possibility for data centers.

It would still be a space junk problem. Space is big, but amazingly not that big. If you start ejecting little hot BBs at interplanetary speeds, you are creating a broad swath of buckshot that will eventually impact something with the force of a missile. Put millions of these satellites into solar orbits (I’m ignoring the huge increase in launch cost this would require, and all the other issues like latency and comms), and you could very well make trips to other planets impossible.

It wouldn’t be Kessler syndrome as you would not have a chain reaction of collisions, but the end result would be the same.


Yeah if you leave enough junk in any orbit it'll become a problem, but I don't think that's necessarily an argument not to put things in that orbit. You'd just need to not hit that critical limit where things become untenable.


It sort of handles 10,000 by enabling them to make orbital adjustments more quickly. By the time you have a million, you will run out of prop way too quickly.

> reduce energy requirement by 10-50 times

This is only relevant to compute productivity (how much useful work it can produce); it's irrelevant to the heat dissipation problem. The energy income is fundamentally limited by the sun-facing area (x 1361 W/m^2), so the energy output cannot exceed it, regardless of whether it ends up as useful signal or waste heat. Even if we just put a stone there, the equilibrium temperature wouldn't be any better or worse.
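
A minimal sketch of that energy balance, assuming an idealized flat plate with one face toward the Sun and both faces radiating; the absorptivity/emissivity values are assumptions.

    SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2*K^4)
    SOLAR_CONSTANT = 1361.0 # W/m^2 at 1 AU

    def equilibrium_temp_k(absorptivity=1.0, emissivity=1.0):
        # Flat plate: one sunlit face of area A absorbs, both faces radiate.
        # Balance: absorptivity * S * A = 2 * emissivity * SIGMA * A * T^4
        return (absorptivity * SOLAR_CONSTANT / (2.0 * emissivity * SIGMA)) ** 0.25

    print(f"black plate (a=e=1):        ~{equilibrium_temp_k():.0f} K")
    print(f"white paint (a=0.2, e=0.9): ~{equilibrium_temp_k(0.2, 0.9):.0f} K")

The surface properties and the radiating area set the temperature; whether the absorbed watts pass through a processor first changes nothing in the balance.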


The danger is that these nuanced, legitimate use cases get rhetorically stretched to support much bigger claims. That's where skepticism kicks back in.

Is it so hard to understand that the foreign staff are now afraid for their safety and their lives?

After the killing of Pretti (execution is probably the more correct word), I guess even some US staff cannot be so sure about what would happen to them.

"But are there not many fascists in your country?"

"There are many who do not know – but will find it out when the time comes."


My general take on any AI/ML in medicine is that without proper clinical validation, it is not worth trying. Also, AI Snake Oil is worth reading.

Clinical validation, proper calibration, ethnic, community, and population variants, questioning technique and more ...

Exactly. There's a lot of potential, but it needs to be done right, otherwise it is worse than useless.

What a wonderful teacher! I wish all teachers were like him.

Regarding the collaboration before the exam, it's really strange. In our generation, asking or exchanging questions was perfectly normal. I got an almost perfect score in physics thanks to that. I guess the elegant solution was already in me, but I might not have been able to come up with it in such a stressful situation. 'Almost' because the professor deducted one point from my score for being absent too often :)

However, oral exams in Europe are quite different from those at US universities. In an oral exam, the professor can interact with the student to see if they truly understand the subject, regardless of the written text. Allowing a chatbot during a written exam today would be defying the very purpose of the exam.


Pyret, a teaching language for CS in the vein of Racket, does require writing tests alongside functions.

https://pyret.org/docs/latest/testing.html


My experience is exactly the opposite. With AI, it's better to start small and simplify as much as possible. Once you have working code, refactor and abstract it as you see fit, documenting along the way. Not the other way around. In a world abounding with imitations and perfect illusions, code is the crucial reality to which you need to anchor yourself, not documents.

But that’s just me, and I'm not trying to convince anyone.


In my experience you do both: small AI spike demos to prove a specific feature or piece of logic, then top-down assemble them into a superstructure. The difference is that I do the spikes on pure vibes, while reserving my design planning for the big system.

Or at least they could cache the results for a while and update them periodically, so they can compare the answers over time and not waste the planet's energy due to their dumb design.
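
A minimal sketch of that idea, with made-up names and a made-up refresh window: keep each answer with a timestamp, serve it until it goes stale, and keep the old answers around so they can be compared over time.

    import time

    class TTLCache:
        # Tiny cache that serves a stored answer until it expires, keeping a
        # history of past answers so they can be compared over time.
        def __init__(self, ttl_seconds=24 * 3600):
            self.ttl = ttl_seconds
            self.entries = {}   # query -> (timestamp, answer)
            self.history = {}   # query -> list of (timestamp, answer)

        def get_or_compute(self, query, compute):
            now = time.time()
            cached = self.entries.get(query)
            if cached and now - cached[0] < self.ttl:
                return cached[1]                      # still fresh: no recompute
            answer = compute(query)                   # stale or missing: recompute once
            self.entries[query] = (now, answer)
            self.history.setdefault(query, []).append((now, answer))
            return answer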


Reading the comments here about lawyering to make the prediction seem accurate, I have to say that for me the value of a prediction is not primarily its binary accuracy, but the insights I get from the prediction and adjustment process. As our language is always limited, forcing everything into a binary yes/no will almost always result in some dissatisfaction. We never learn much from binary values; we learn from the gray values and from how we must change our judgements to adapt. Perhaps that's why punishment can provoke a reaction but not deep learning, and why good educators always strive for insight and self-correction.

Useful predictions should also not be black or white but should be presented with uncertainty, a percentage of confidence if one can. That lets one adjust one's prediction and confidence when new facts come along, and I argue every serious predictor should do that.
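
One concrete way to do that is to score probabilistic forecasts instead of yes/no calls, for example with the Brier score (a standard metric; the forecasts and outcomes below are invented numbers).

    # Brier score: mean squared error between forecast probabilities and outcomes
    # (0 is perfect; always saying 50% scores 0.25). All numbers below are invented.
    def brier_score(forecasts, outcomes):
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    hedged = [0.8, 0.7, 0.2, 0.6]   # probabilistic predictions
    binary = [1.0, 1.0, 0.0, 1.0]   # the same calls forced into yes/no
    actual = [1,   1,   0,   0]     # what actually happened

    print(f"hedged forecaster: {brier_score(hedged, actual):.3f}")   # ~0.133
    print(f"binary forecaster: {brier_score(binary, actual):.3f}")   # 0.250

The honest 60% call that missed costs 0.36 instead of a full 1.0, so the score rewards stating your uncertainty and updating it as new facts arrive.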


To protect innocent people, for example, or to avoid revealing certain secrets.


Some of these don't feel like they fall into those categories. For example, in [1] on page 41, I can't imagine how that redaction fits either.

1: https://drive.google.com/drive/u/0/folders/1HFqpFLOJgYLiAgjT...


What is it in this particular case that requires outdated tools? If it's code, surely you can write it in VS Code or whatever you like, and only need to compile and load it with the original tools, can't you?


It's more the library and language side. Typically you are years behind, and once a version has proven to be working, the reluctance to upgrade is high. It's getting really interesting with the rise of package managers and small packages: validating all of them is a ton of effort. It was easier with larger frameworks.


Sometimes it's because you need to support ancient esoteric hardware that isn't supported by any other tools, or because you've built so much of your own tooling around a particular tool that it resembles an application platform in its own right.

Other times it's just because there are lots of other teams involved in validation, architecture, requirements, and document management, and for everyone except the developers, changing anything about the process is extra work for no benefit.

At one time I worked on a project with two compiler suites, two build systems, two source control systems and two CI systems all operating in parallel. In each case there was "officially approved safe system" and the "system we can actually get something done with".

We eventually got rid of the duplicate source control, but only because the central IT team who hosted it declared it EOL, and thus the non-development teams were forced, kicking and screaming, to accept the system the developers had been using unofficially for years.


That's what we often do: develop with one set of non-validated tools, but in the end put everything into the validated system for submission.


You need traceability from requirements down to lines of code. It's a very painstaking process.


Painstaking and often done with terrible tools and badly written requirements.

