azath92's comments

And to separate my thoughts from the info blob:

I think the culture war point is also super true of the game design industry, not just the consumers, where the already ultra-competitive nature of the work means that the creatives and the industry as a whole have taken a veeeery strong stance against genAI. That's a reckon, and I don't know if it's good or bad.

It does feel a little counter to the march of progress, but in a medium where high effort can be enjoyed by many, I'm personally cool with artisanal handmade games.


Rather than an affinity for artisanal stuff or there being some bias against AI itself, I think it's simply that most stuff that's going to be made with AI is going to be very derivative. Even before AI you'd read posts from people, including on here, like 'I made a highly competent knockoff of [popular indie game] but got no sales. Woe is me.' But games aren't commodities. If people like a game, that doesn't mean they want to play, let alone buy, a complete knockoff of it.

The biggest barrier to success has always been having a good idea and AI is just going to make that ever more apparent, because you'll be able to cook up knockoffs ever more rapidly.


Only because it is something I find fascinating:

> There are those Orcs in that one Lord of the Rings game who hold grudges against you.

This is referring to the Nemesis system in Middle-earth: Shadow of Mordor and Shadow of War. It's an amazing set of interlocking procedural systems that genuinely feel like AI, but it's AI in the sense games have always used the term (the rules the game follows to govern NPCs and the world), not AI in the sense of modern LLMs or even other generative systems. This video is a great look at what it is and why it's great, IMO: https://www.youtube.com/watch?v=Lm_AzK27mZY

I think a system like this could really work well with some modern LLM stuff, but it certainly feels like magic without it.
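To make the distinction concrete, here's a toy Python sketch (all names invented, nothing from the actual game) of "AI" in that classic games sense: persistent state plus hand-written rules and canned dialogue, no generative model anywhere. The real Nemesis system layers dozens of interlocking rules like this.

    import random

    class Orc:
        # One procedurally generated captain with persistent state
        def __init__(self, name, rng):
            self.name = name
            self.rank = 1
            self.traits = rng.sample(["fears fire", "hates pain", "berserker"], 2)
            self.memory = None  # outcome of the last fight with the player

        def resolve_fight(self, player_won):
            # The nemesis trick: the outcome mutates the NPC and is remembered
            if player_won:
                self.memory = "was humiliated by"
            else:
                self.rank += 1  # captains that beat you get promoted
                self.memory = "killed"

        def taunt(self):
            # Dialogue is canned lines keyed off stored state, which is why it
            # feels personal without any text generation at all
            if self.memory is None:
                return f"{self.name} ({', '.join(self.traits)}): 'Fresh meat!'"
            return f"{self.name} (rank {self.rank}): 'I {self.memory} you once already!'"

    rng = random.Random(7)
    orc = Orc("Ratbag", rng)
    print(orc.taunt())                   # no history yet
    orc.resolve_fight(player_won=False)
    print(orc.taunt())                   # now he remembers, and got promoted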


> I think a system like this could really work well

Too bad, it's patented! https://www.eurogamer.net/shadow-of-mordors-brilliant-nemesi...

Fuck software patents and every single person who has ever filed one.


It's not, really; the patent is very narrow:

https://old.reddit.com/r/Games/comments/1iyrbzd/clearing_up_...

https://patents.google.com/patent/US10926179B2/en

The patent was filed in 2016 and granted in 2021. If the system was so useful we would’ve seen it in another game before then, or a very similar system after.


Ah. Well, maybe we could improve the patent system instead? If most property is going digital, and we agree as a society that idea generators and executors deserve compensation for their effort, then I think the answer would be better, more evolved compensation systems. And I agree that patents that are clearly trolling deserve to be, as you put it, fucked.

> maybe we could improve the patent system instead?

The patent system benefits the uber-wealthy at the expense of everyone else, so no, that won't be happening.


> idea generators and executors deserve compensation for their effort

To be fair, in this specific example executors of the idea were already compensated by selling a well-received game with a cool new mechanic.


Climatealigned | Onsite | London

We've spent the last three years producing climate finance data at a fraction of the usual time and cost using AI. We started pre-GPT and have continued to evolve and rebuild our stack with the times. These days we are fully agentic, powered by Opus/Haiku and using Python/JS as best fits the job, but we ride the wave and aren't attached to the past. We are a small, focused, in-person, and technically capable team with deep industry connections.

We are looking for an early-career builder to take ownership of end-to-end data creation, who can stay on their toes with a rapidly changing tech stack and ways of working as the models continue to evolve.

Reach out to our CEO at aleksi[at]climatealigned[dot]co to discuss, or drop me (founding engineer) a line.

Check out some of our past work at https://climatealigned.co, or touch base with any of the team on LinkedIn with questions.


This whole comment thread is really echoing and adding to some thoughts I've had lately on the shift from thinking of LLMs as replacing the engineering that goes into making software (much of which is about integration, longevity, and customization of a general system) to thinking of LLMs as replacing the buying of software.

If most software is just used by me to do a specific task, then making software for me to do that task will become the norm. Following that thought, we are going to see a drastic reduction in SaaS solutions, as many people who were buying a flexible toolbox for one occasional use case instead just get an LLM to make them the script/software to do that task as and when they need it, without any concern for things like security, longevity, or ease of use by others (for better or for worse).

I guess what I'm circling around is this: define engineering as building the complex tools that have to interact with many other systems, persist, and be generally useful and understandable to many people. Much of that complexity arises from a system needing to serve its purpose at huge scale over time, and many people don't actually need it for their own use of the system. Then maybe there will be less need for engineers, but first and foremost because the problems engineering is required to solve become fewer once focused, bespoke solutions to people's problems are available on demand.

As an engineer I have often felt threatened by LLMs and agents of late, but I find that if I reframe it from agents replacing me to agents shifting which types of problems are even valuable to solve, it feels less threatening for some reason. I'll have to mull it over more.


Taking it further, imagine a traditional desktop OS but it generates your programs on the fly.

Google's weird AI browser project is kind of a step in this direction. Instead of starting with a list of programs and services and customizing your work to that workflow, you start with the task you need accomplished and the operating system creates an optimized UI flow specifically for that task.


But bringing it back: first you need to pitch this idea to investors to liberate the money to cover the Sahara desert in servers to meet these sci-fi needs /s

It's hard to swallow. I'm a software engineer with 14 YOE working in an office of about 40 people, with five on the software team. We could cut the software team to 3 people, and then maybe 2 after a couple of years. The rest of the office could be skimmed down to maybe 5 or 10 people. The engineers would babysit the systems and the other personnel would handle the face-to-face. With these systems developing at the OS level over the last year or so, it seems as though everything can be automated... Everyone has an X on their back, not just engineers.

Luckily my org has a bit of a pushback attitude towards AI systems, but it will only be a matter of time before we have to compete and adapt. It's kind of depressing, and only the strong will survive.


I am continually surprised by "voluntary actions taken by companies" being brought up in discussions of the risks of AI without some nuance given to why they would do that. The paragraph on surgical action goes into about 5-10 times more detail on the potential issues with gov't regulation, implying to me that voluntary action is better. Even from someone at Anthropic, I would hope that they would discuss it further.

I am genuinely curious to understand the incentives for companies that have the power to mitigate risk to actually do so. Are there good examples in the past of companies taking action that is harmful to their bottom line in order to mitigate the societal harm of their products? My premise is that their primary motive is profit/growth, dictated by revenue for mature companies and investment for growth companies respectively (collectively, "bottom line").

I'm only in my mid-30s, so I don't have much perspective on past examples of voluntary action of this sort from tech or pre-tech corporates where there was concern of harm. Probably too late in this thread for replies, but I'll think about it for the next time this comes up.


Major incentives currently in play are "PR fuckups are bad" and "if we don't curb our shit, regulators will", which often leads to things like "AI safety is when our AI doesn't generate porn when asked and refuses to say anything the media would be able to latch on to".

The rest is up to the companies themselves.

Anthropic seems to walk the talk, and has supported some AI regulation in the past. OpenAI and xAI don't want regulation to exist and aren't shy about it. OpenAI tunes very aggressively against PR risks, xAI barely cares, Google and Anthropic are much more balanced, although they lean towards heavy-handed and loose respectively.

China is its own basket case of "alignment is when what AI says is aligned to the party line", which is somehow even worse than the US side of things.


My understanding is that modern mobile phone cameras do heaps of "stacking" across multiple axes (focus, exposure, time, etc.) to compose the photo that gets saved onto your phone. I believe it's one of the reasons for the multiple cameras on most flagship phones, and each of them might take many "photos", or runs of data from their sensors, per "photo" you take. I'd love to see a good writeup of the process, but my gut says exactly what they do under the hood would be pretty trade-secret-y.
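To make "stacking" concrete in at least its simplest form, here's a toy numpy sketch of temporal (burst) stacking: averaging many quick frames to beat down sensor noise. Real pipelines also align frames for hand shake, bracket exposures, and use robust merges, and those details are exactly the secret-sauce part.

    import numpy as np

    def stack_burst(frames):
        # Average N noisy frames of the same scene. Averaging cuts random
        # sensor noise by roughly sqrt(N), which is one reason a phone
        # "photo" is really many exposures merged together.
        stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
        return stack.mean(axis=0)

    # Simulate a burst: one clean scene plus independent noise per frame
    rng = np.random.default_rng(0)
    scene = rng.uniform(0, 200, size=(4, 4))
    burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(16)]

    merged = stack_burst(burst)
    single_err = np.abs(burst[0] - scene).mean()
    merged_err = np.abs(merged - scene).mean()
    print(f"one frame error: {single_err:.1f}, stacked error: {merged_err:.1f}")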


Almost an esolang, but Orca is an amazing example of spatial programming for music production (GH: https://github.com/hundredrabbits/Orca, and a video to see it in action: https://www.youtube.com/watch?v=gSFrBFBd7vY)


Is this the one from the hippie (non-pejorative) group living off a boat?

If it's the same one, it's something that, if I won the lottery, I'd spend my time learning, along with these gloves from Imogen Heap: https://mimugloves.com/

I don’t think I’d ever produce something worth listening to, but if I won the lottery, why would I care beyond my own enjoyment?


‘Your own enjoyment’ is a rich reward. My unsolicited advice: try making a mess with it every day for a week / month / year and see if you don't start to appreciate something in what you make. Orca is a brilliant piece of work.


My own enjoyment was predicated on the money side. If I were independently wealthy, I'd be splitting my time between this and gem faceting as hobbies.



That screenshot is super interesting, never seen anything like it.

It's giving me some ideas for a TUI video editor using that grid interface. What a cool project.


For small models this is for sure the way forward; there are some great small datasets out there. Check out the TinyStories dataset, which limits vocabulary to what a young child would know but keeps the core reasoning inherent in even simple language: https://huggingface.co/datasets/roneneldan/TinyStories https://arxiv.org/abs/2305.07759

I have fewer concrete examples, but my understanding is that dataset curation is for sure where many improvements are gained at any model size. Unless you are building a frontier model, you can use a better model to help curate or generate that dataset; TinyStories was generated with GPT-4, for example.
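If anyone wants to poke at TinyStories, pulling it down is a couple of lines with the HuggingFace datasets library (a minimal sketch; assumes you've installed the datasets package):

    from itertools import islice
    from datasets import load_dataset

    # Stream the train split rather than downloading the whole corpus up front
    ds = load_dataset("roneneldan/TinyStories", split="train", streaming=True)

    # Peek at a few stories; each record has a single "text" field
    for story in islice(ds, 3):
        print(story["text"][:200])
        print("---")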


OP here: one thing that surprised me in this experiment was that the model trained on the more curated FineWeb-Edu dataset was worse than the one trained on FineWeb. That is very counterintuitive to me.


Totally agree. One of the most interesting podcasts I have listened to in a while was a couple of years ago, on the TinyStories paper and dataset (the author used that dataset), which focuses on stories that only contain simple words and concepts (like bedtime stories for a 3-year-old) but which can be used to train smaller models to produce coherent English, with grammar, diversity, and reasoning.

The podcast itself with one of the authors was fantastic for explaining and discussing the capabilities of LLMs more broadly, using this small controlled research example.

As an aside: I don't know what the dataset is in the biological analogy, maybe the agar plate: a super simple and controlled environment in which to study simple organisms.

For ref:

- Podcast ep: https://www.cognitiverevolution.ai/the-tiny-model-revolution...
- TinyStories paper: https://arxiv.org/abs/2305.07759


I like the agar plate analogy. Of course, the yeast is the star of the show, but so much work goes into prepping the plate.

As someone in biotech, 90% of the complaints I hear over lunch are not about bad results but about bad mistakes during the experiment, e.g. someone didn't cover their mouth while pipetting and the plates are unusable now.


Ha! I remember where I was when I listened to that episode (Lakeshore Drive almost into Chicago for some event or other) — thanks for triggering that memory — super interesting stuff


I'm not sure how this translates to React Native; AFAICT the build chains for apps are less optimized. But using Vercel for deployment, and Neon for the DB if needed, I've really been digging the ability for any branch/commit/PR to be deployed to a live site I can preview.

Coming from the Python ecosystem, I've found the commit -> deployed-code toolchain very easy, which for this kind of vibe coding really reduces friction when you are using it to explore functional features, many of which you will discard.

It moves the decision surface on what the right thing to build is to _after_ you have built it, which is quite interesting.

I will caveat this by saying this flow only works seamlessly if the feature is simple enough for the LLM to one-shot it, but for the right thing it's an interesting flow.

