halfmatthalfcat's comments | Hacker News

Welcome to Slopworld

I loved the idea of Swift adopting actors, but the implementation seems shoehorned. I wanted something more like Akka or QP/C++...

I feel the reverse. I can see how one could claim Swift has everything but the kitchen sink, but its actors, to me, don’t look shoehorned in.

Reading https://docs.swift.org/swift-book/documentation/the-swift-pr..., their first example is:

  actor TemperatureLogger {
      let label: String
      var measurements: [Int]
      private(set) var max: Int

      init(label: String, measurement: Int) {
          self.label = label
          self.measurements = [measurement]
          self.max = measurement
      }
  }
Here, the ‘actor’ keyword provides a strong hint that this defines an actor. The code to call an actor in Swift is also clean, and clearly signals “this is an async call” by using await:

  await logger.max
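To make that concrete, here is a minimal sketch of driving that actor from an async context (the Task wrapper and the sample values are my own illustration, not from the Swift book):

  let logger = TemperatureLogger(label: "Outdoors", measurement: 25)

  Task {
      // Reading actor state from outside the actor is asynchronous,
      // so the suspension point is visible right at the call site.
      print(await logger.max)
  }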
I know Akka is a library, and one cannot expect all library code to look as nice as code that has actual support from the language, but the simplest Akka example seems to be something like this (from https://doc.akka.io/libraries/akka-core/current/typed/actors...):

  object HelloWorld {
    final case class Greet(whom: String, replyTo: ActorRef[Greeted])
    final case class Greeted(whom: String, from: ActorRef[Greet])

    def apply(): Behavior[Greet] = Behaviors.receive { (context, message) =>
      context.log.info("Hello {}!", message.whom)
      message.replyTo ! Greeted(message.whom, context.self)
      Behaviors.same
    }
  }
I have no idea how naive readers of that would easily infer that’s an actor. I also would not have much idea about how to use this (and I _do_ have experience writing Scala; that is not the blocker).

And that gets worse when you look at Akka HTTP (https://doc.akka.io/libraries/akka-http/current/index.html). I have debugged code using it, but still find it hard to figure out where it has suspension points.

You may claim that’s because Akka HTTP isn’t good code, but I think the point still stands that Akka allows writing code that doesn’t make it obvious what is an actor.


> I wanted something more like Akka

https://github.com/apple/swift-distributed-actors is more like Akka, but with better guarantees from the underlying platform because of the first-class nature of actors.
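For a taste, a minimal sketch of a distributed actor (hedged: this assumes the DistributedCluster module and ClusterSystem type from that package, whose names have shifted between versions):

  import Distributed
  import DistributedCluster

  // 'distributed actor' gets the same first-class keyword treatment as
  // local actors; distributed funcs may be called across process or
  // network boundaries.
  distributed actor Greeter {
      typealias ActorSystem = ClusterSystem

      distributed func greet(whom: String) -> String {
          "Hello \(whom)!"
      }
  }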


Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.

- Robert Virding


This is my feeling as well. Based on the current product, it feels like Swift had two different designers: one who felt Swift needed to be a replacement for Objective-C, and therefore had to feel like a spiritual successor to that language, meaning fundamentally OOP, imperative, and familiar to iOS devs; and another who wanted it to be a modern functional, concurrent language for writing dynamic user interfaces, with an advanced type checker, static analysis, and reactive updates for dynamic variables.

The end result is a language that brings the worst of both worlds while not really bringing the benefits. An example I will give is SwiftUI, which I absolutely hate. You'd think this thing would be polished, because it's built by Apple for use on Apple devices, so they've designed the full stack from editor to language to OS to hardware. Yet when writing SwiftUI code, it's very common for the compiler to keel over and complain it can't infer the types of the system, and components which are ostensibly "reactive" are plagued by stale data issues.
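As a hypothetical illustration of the inference complaint (my own sketch of the failure pattern, not a specific Apple example): pack enough generic chaining into a single ViewBuilder expression and the compiler tends to bail with "unable to type-check this expression in reasonable time" rather than a useful diagnostic:

  import Foundation
  import SwiftUI

  struct Item: Identifiable {
      let id = UUID()
      let price: Double
      let qty: Int
  }

  struct TotalsView: View {
      let items: [Item]

      var body: some View {
          // Long generic chains inside one ViewBuilder expression are
          // the classic trigger for the compiler giving up on inference.
          Text("Total: \(items.map { $0.price * Double($0.qty) }.reduce(0, +), specifier: "%.2f")")
      }
  }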

Seeing that Chris Lattner has moved on from Swift to work on his own language, I'm left to wonder how much of this situation will actually improve. My feeling on Swift at this point is that it's not clear what it's supposed to be. It's the language for the Apple ecosystem, but they also want it to be a general-purpose thing. My feeling is that it's always going to be explicitly tied to and limited by Apple, so it's never really going to take off as a general-purpose programming language even if they eventually solve the design challenges.


The thing I often ask or mention in discussions about SwiftUI is: if SwiftUI is so good, so easy to use, and made for cross-platform, why did it take Apple themselves so long to port their Journal app to macOS, for example? This is a trivial application, something you'd have found as an example project in a beginner programming book 10 or 20 years ago.

I get all the points about Swift and SwiftUI in theory, I just don't see the results in practice, especially with Apple's first-party applications.


Journal has a lot of extra features where it autogenerates suggestions based on what you've done lately.

Most of these extra features are still pretty trivial. What does it do? Suggest entries based on recent walks, music you listened to, and photos. Nothing that would justify a multi-year porting effort, IMHO.

And these features don't even work across devices, or rather, from what I can tell, they don't exist at all yet in the macOS Tahoe version.

Similar things could be said about, for example, the Passwords app. It works, it's functional, sure. But compared to apps like 1Password it's really, really barebones. You can't even change the way it generates passwords if you need to comply with a specific policy, for example.


It's an unpopular opinion, but my belief is that trying to go all-in on one paradigm is the actual mistake. There are several kinds of awkwardness that arise when a UI library is strictly declarative, for example.

On Apple platforms, I've had a lot of success with a hybrid model where the "bones" of the app are imperative AppKit/UIKit and declarative SwiftUI is used where it's a good fit, which gives you the benefits of both wherever they're needed, as well as an escape hatch for otherwise unavoidable contortions. Swift's nature as something of a hodgepodge enables this.
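A minimal sketch of that hybrid shape (UIHostingController is the real UIKit-to-SwiftUI bridge; the view controller around it is hypothetical):

  import SwiftUI
  import UIKit

  final class SettingsViewController: UIViewController {
      override func viewDidLoad() {
          super.viewDidLoad()

          // Imperative UIKit "bones" hosting a declarative SwiftUI island.
          let host = UIHostingController(rootView: Text("Declarative where it fits"))
          addChild(host)
          host.view.frame = view.bounds
          host.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
          view.addSubview(host.view)
          host.didMove(toParent: self)
      }
  }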


> the implementation seems shoehorned.

Because it's extremely hard to retrofit actors (or, really, any type of concurrency and/or parallelism) onto a language that wasn't explicitly designed to support it from scratch.


This sounds like my complete and utter nightmare. No art or finesse in building the thing, only an exercise in torturing language to someone who, at a fundamental level, doesn't understand a thing.

Nothing is stopping you from hand-sculpting software like we did in the before times.

Mass production, however, won’t stop. It barely started a couple of months ago, and it’s the slowest and worst it’ll ever be.


I'm not viewing AI tooling as the extinction of the art of programming, only as illuminating how telling an AI how to create programs isn't in the same universe as programming: the technical skill needed to do such a thing is on par with punching in how long my microwave should nuke my popcorn.

This isn't my experience. It's more like discussing with another skilled developer on my team how we should code the solution, what APIs we should use, what techniques, what algorithms. Firing ideas back and forth until we settle on a reasonable plan of attack. That plan usually consists of a mix of high level ideas and chunks of example code.

I keep hearing "it's the slowest and worst it'll ever be" as though software ability and performance only ever increase, and yet mass-produced software is slower and enshittier than it was 10-15 years ago, and we're all complaining about it. And you can't say "but it does so much more," because I never asked for 90% of the "more" and just want to turn most of it off.

I’m also not convinced that any of these models are going to stick around at the same level once the financial house of cards they’re built on comes tumbling down. I wonder what the true cost of running something like Claude Opus is; it’s probably unjustifiably expensive. If that happens, I don’t think this stuff is going to completely disappear, but at some point companies are going to have to decide which parts are valuable and jettison the rest.

It definitely feels like we're living in the golden time when all the LLMs are getting massively subsidized. You could just tab between all the free accounts all day right now and still get some amazing code results without paying a dime.

I can think of a few things that could happen to sink "it's the slowest and worst it'll ever be". Even ignoring things that could happen, I think in general we're hitting a ceiling with LLMs. All the annoyances and bugs and frankly incompetence with the current models are not going away soon, despite $tn of investments. At this point it is now just about propping up this bubble so the USA doesn't have another big recession.

I don’t really understand how you got that from my post. I can and do drop in to refactor or work on the interesting parts of a project. At every checkpoint where I require a review, I can and do make modifications by hand.

Are you complaining about code formatters or auto-fix linters? What about codegen based on API specs? A code agent can do all of those and more. It can do all the boring parts while I get to focus on the interesting bits. It’s great.

Here’s another fantastic use case: have an agent generate the code, think about its prototype, delete it, and then rewrite it. I did that on a project with huge success: https://github.com/neurosnap/zmx


Not really like this at all; more like being a tech lead for a team of savants who are simultaneously great at some parts of software engineering and limited at others. Though that latter category is slimmer than a year ago…

The point is, you can get lots of quality work out of this team if you learn to manage them well.

If that sounds like a “complete and utter nightmare”, then don’t use AI. Hopefully you can keep up without it in the long run.


Let go of your AI gods and embrace the abyss. We've survived for decades without them and will survive in spite of them.

Wow - can we coin "Slopbrain" for people who are so far gone into AI eventualism that they can no longer function? Like "cooked", but "slopped" or something. Good grief lol. Talk about getting lost in the sauce...

WSJ has been writing increasingly about "AI Psychosis" (here's their most recent piece [0]).

I'm increasingly seeing that this is the real threat of AI. I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new. While not as dramatic, the normalization of the use of "AI as therapist" is equally concerning. I know tons of people that rely on LLMs to guide them in difficult family decisions, career decisions, etc on an almost daily basis. If I'm honest, I myself have had times where I've leaned into this too much. I've also had times where AI starts telling me how clever I am, but thankfully a lifetime of low self worth signals warning flags in my brain when I hear this stuff! For most people, there is real temptation to buy into the praise.

Seeing Karpathy claim he can't keep up was shocking. It also immediately raises the question for anyone with a clear head: "Wait, if even Karpathy cannot use these tools effectively... just what is so useful about AI?" Isn't the entire point of AI that I can merely describe my problem and have a solution in a fraction of the time?

The fact that so many true believers in AI seem to forever be just a few more tricks away from really unleashing this power, starts to make it feel very much like magical thinking on a huge scale.

The real danger of AI is that we're entering into an era of mass hallucination across multiple fields and areas of human activity.

0. https://www.wsj.com/tech/ai/ai-chatbot-psychosis-link-1abf9d...


> I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new.

Cryptoboys did it first, please recognize their innovation ty


That's NOT AI psychosis, which is real, and which I've seen close-up.

AI psychosis is getting lost in the sauce and becoming too intimate with your ChatGPT instance, or believing it's something it's not.

Skepticism, or a fear of being outside the core loop, is the exact opposite, and that's what Karpathy is talking about here. If anything, this kind of post is an indicator that you're absolutely NOT in AI psychosis.


"the core loop"? What is this?

Cyberpunk was right!

I would really like to hear more about these acquaintances who think they are evolving.

WSJ is Fox News Platinum; I wouldn't overthink it

I feel Karpathy is smart enough to deserve a less dismissive response than this.

A mix of "too clever by half" and "never meet your heroes".

Why do you feel that way?

You think we should appeal to authority rather than address the ideas on their own merits?

How is saying the author has “slopbrain” “addressing the idea on its own merits”? It’s just name calling.

They aren't addressing my comment (which is obviously an overreaction to the tweet); they're asking why we should appeal to authority rather than evaluate whether Karpathy is completely overreacting and in way too deep.

The intent of my comment was to state that you should write something more substantive than dismissing Karpathy as “slopbrain”. I wasn’t appealing to authority by saying that he was correct — just that he deserves more than name calling in a response.

Evidently, with "LLM/AI psychosis" coming into the mainstream zeitgeist, "slopbrain" isn't too far off.

Now you're just saying "AI psychosis exists" (true) and then saying Karpathy has it. That is, again, essentially name calling, like saying someone is insane rather than addressing their points.

If you really think Karpathy is psychotic you should explain why, but I don't think anything in the Tweet suggests that. My read of his tweet is that there is a lot of churn and new concepts in the software engineering industry, and that doesn't seem like a very psychotic thing to say.


I call it being "oneshot" by the AI.

Twitter folks call this LLM or AI Psychosis.

We could call it "Hacker News syndrome"

Slopbrain is interesting because Karpathy's fallacious argumentation mirrors the glib argumentation of an LLM/AI; it's cognitively recursive, one feeding the other in a self-selecting manner.

Slippery slop?



This is what I keep hearing: "You just need something more agentic," "if you had the context length you could've fixed that," etc. Yeah, sure. I'll believe it when I see it. For me it's parsing 3000-page manuals for relevant data. I can do it fairly competently from experience, but I see a lot of people not familiar with them struggle to extract the info they need, and AIs just cannot hold all that context, in my experience.

SPAs are early 2010s, homie. SSR has been dominant for the past 5 years.

They usually are, by the CFTC, but Polymarket was able to skate by until the last couple of years as an unregulated derivatives market. Polymarket bought their way back into the US by buying an already-existing CFTC-approved exchange (QCX).

Because nobody outside of the HN-sphere cares about HTML purism, nor should they.

It's not HTML purism. It's simply recognizing that HTML and CSS have evolved a lot, and many things don't need (or are close to not needing) JS anymore. This shouldn't be taken as an anti-JS article; everyone benefits from these gradual improvements, especially our users, who can now get a uniform experience.

Not all high school educations are created equal - see Carmel High School (Carmel, IN), New Trier High School (Winnetka, IL), or any other high school in a dense, high-wealth area.

While Pasadena is a relatively wealthy city, historically there has been significant avoidance of its public schools by affluent residents: https://southerneducation.org/in-the-news/new-polling-data-f...

The Pasadena school district spent about $28K per student ($390M in total expenditures across ~14k students) in the 2023-2024 school year. I would bet dollars to doughnuts it's $30K+ per high school student, since they are more expensive.

It’s not even the languages or runtimes that inhibit embedded adoption, but the software-to-hardware tooling: loader scripts, HAL/LL/CMSIS, flashing, etc. They all suck.
