Hacker News: ElectronCharge's comments

I agree that if the best we can do is something that can't be self-sustaining, Mars should wait until that changes.

I disagree with KSR's main points. Perchlorates are solvable, the effects of Martian gravity are not known (and are solvable if there is a problem), and finally radiation is a non-issue for those living in the only sane place on Mars, underground.

Whether or not Mars is a target in the near term, we need to proceed with our current plan of establishing a permanent base on the Moon. The only way to improve on Earth's resource limitations is to exploit the virtually unlimited riches available beyond her atmosphere, and the Moon is the first step. It's also a great place for heavy industry, not to mention astronomy!


LLMs are amazing technology. It's remarkable to interact with something that knows a lot about effectively everything that's ever been written, and that mimics human cognition to a large degree.

What LLMs are NOT is intelligent in the same way as a human, which is to say they are not "AGI". They may be loosely AGI-equivalent for certain tasks, software development being the poster child. LLMs have no equivalent of "judgement", and they lie ("hallucinate") with impunity if they don't know the answer. Even with coding, they'll often do the wrong thing, such as writing tests that don't test anything.

It seems likely that LLMs will be one component of a truly conscious AI (AGI+), in the same way our subconscious facility to form sentences is part of our intelligence. We'll see how quickly the other pieces arrive, if ever.


According to current theory AIUI, cosmic inflation greatly influenced the CMB. It ended approximately 10^-32 seconds after the Big Bang:

"Cosmic inflation is believed to have occurred in an incredibly brief, rapid, and exponential expansion phase lasting from approximately 10^-37 to 10^-32 seconds after the Big Bang. During this minute interval, the universe expanded by a factor of at least 10^26, and potentially as much as 10^50."

Quite a theory, cosmic inflation...


Any Faraday cage, bag or otherwise, will block surveillance of the phone until it comes back online. Physics wins again.


Sadly, I’m old enough to remember that Ada was the result of a US initiative to standardize a capable language for embedded development.

A good friend worked on the well regarded Telesoft compiler…


Total nonsense. Just avoid Minnesota…


I'm surprised the author of this article thinks Go is a "system language".

Go uses GC, and therefore can't be used for hard real-time applications. That's disqualifying as I understand it.

C, C++, Rust, Ada, and Mojo are true system languages IMO. It's true that GC-enabled languages can be used as long as you pre-allocate your data structures and disable GC at runtime. However, many of them rely on GC in their standard libraries.


The Go creators declared it a systems language and the label has stuck for some reason.

Their definition was not the one most people would have used (which leads to C, C++, Rust, Ada, etc., as you listed) but "systems" as in server systems, distributed services, and so on. That is, it's a networked-systems language, not a low-level systems language.


I think the broad consensus (and I agree with it) is that a systems language cannot have a mandatory GC. The issue with GCs isn’t just latency-optimized applications like hard real-time. GCs also reduce performance in throughput-optimized applications that are latency insensitive, albeit for different reasons.

Anything that calls itself a “systems language” should support performance engineering to the limits of the compiler and hardware. The issue with a GC is that it renders entire classes of optimization impossible even in theory.


You can preallocate your data structures and control memory layout in Go.

Also, despite GC there’s a sizeable amount of systems programming already done in Go and proven in production.

Given the importance deservedly being placed on memory safety, Go should be a top candidate: a memory-safe language that is also easy to be productive in.


I think there are some major problems with this thinking. How does this relate to human artists who studied prior art and then produced something?

I’ll grant you that AI isn’t actually intelligent, but I’ve seen many images and videos that exhibited a good bit of originality, and were at a minimum a derived work…


… TBC


Ayn Rand had many insightful ideas; however, she took them to an extreme.

