... Did you just complain about modern technology taking power away from users only to post an AI generated song about it? You know, the thing taking away power from musicians and filling up all modern digital music libraries with garbage?
There's some cognitive dissonance on display there that I'm actually finding it hard to wrap my head around.
> Did you just complain...only to post an AI generated song about it?
Yeah, I absolutely did. Only I wrote the lyrics and AI augmented my skills by giving it a voice. I actually put significant effort into that one; I spent a couple hours tweaking it and increasing its cohesion and punchiness, iterating with ideas and feedback from various tools.
I used the computer like a bicycle for my mind, the way it was intended.
It didn't augment your skills; it replaced skills you lack. If I generate art with DALL-E or Stable Diffusion and then edit it in Krita/Photoshop/etc., that doesn't cover up the fact that I was unable to draw, paint, or photograph the initial concept. It didn't augment my skills, it replaced them. If you generate "music" like that, it's not augmenting the poetry you want to use as lyrics - which may or may not be good in its own right - it replaced your ability to make music with it.
Computers are meant to be tools that expand our capabilities. You didn't do that; you replaced them. You didn't ride a bike - you called an Uber, because you never learned to drive or were too lazy to do it this time.
AI can augment skills by enabling creative expression - be it AI stem separation, neural-network-based distortion effects, etc. But the difference is that those are tools used together with other tools to craft a thing. A tool can be fully automated - but then, if it is, you are no longer an artist. No more than someone who knows how to operate a CNC machine but can't design the parts.
This is hard for some people to understand, especially those with an engineering or programming background, but there is a point to philosophy here: there is innate, valuable knowledge in how a thing was produced. If I find a stone arrowhead buried under the dirt on land I know was once used for hunting by Native Americans, that arrowhead has intrinsic value to me because of its origin - because I know it wasn't made as a replica, and because I found it. There is a sliding scale here, shades of gray. An arrowhead I had verified was actually old, but which I did not find, is still more valuable than one I know is a replica. Similarly, you can, I agree, slowly un-taint an AI work with enough input, but not fully. Likewise, if a digital artist painted something by hand and then had Stable Diffusion inpaint a small region as part of their process, that still bothers many people; it adds the taint of that tool, because the artist did not take the time to do what the tool did and mentally weigh each pixel and each line.
By using Suno, you're firmly in the "This was generated for me" side of that line for most people, certainly most musicians. That isn't riding a bike. That's not stretching your muscles or feeling the burn of the creative process. It's throwing a hundred dice, leaving the 6's up, and throwing again until they're all 6's. Sure, you have input, but I hardly see it as impressive. You're just a reverse centaur: https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
And for the record, I could write a multi-page rant about how Suno is not actually what I want; its shitty UI (which will no doubt change soon) and crappy reinvention of the DAW are absolutely underpowered for tweaking and composing songs how I want. We should instead be integrating these new music-generation models into professional tools, and making the AI tools themselves less of a push-button one-stop shop - giving real control rather than meekly pawing in the direction of what you want with prompts.
Because none of these AI tech bros give a damn about music. I thought with AI we would be able to put all the "timbres" of instruments into a vector database and create truly new instrument sounds. Like making a new color for the first time.
But no, we get none of that. We get mega shitty corporate covers. I would rather hear music that's a little bad than artificially perfect sounding.
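For what it's worth, a crude version of that "timbre as vector" idea is easy to sketch: represent each instrument's timbre as a vector of harmonic amplitudes, then interpolate between two such vectors to additively synthesize a hybrid sound. The spectra below are made-up illustrations, not measurements of real instruments - a real system would learn embeddings from audio.

```python
import numpy as np

SR = 44100   # sample rate
F0 = 220.0   # fundamental frequency (A3)
DUR = 1.0    # seconds

# Hypothetical harmonic-amplitude "embeddings" for two timbres:
# a clarinet-ish spectrum (odd harmonics dominate) and a
# string-ish spectrum (amplitudes fall off smoothly).
clarinet_like = np.array([1.0, 0.05, 0.6, 0.04, 0.35, 0.03, 0.2, 0.02])
string_like   = np.array([1.0, 0.7, 0.5, 0.35, 0.25, 0.18, 0.12, 0.08])

def interpolate_timbre(a: np.ndarray, b: np.ndarray, alpha: float) -> np.ndarray:
    """Linear interpolation in 'timbre space'; alpha=0 -> a, alpha=1 -> b."""
    return (1 - alpha) * a + alpha * b

def synth(harmonics: np.ndarray, f0: float = F0, dur: float = DUR) -> np.ndarray:
    """Additive synthesis: sum sine partials weighted by the harmonic vector."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for k, amp in enumerate(harmonics, start=1):
        out += amp * np.sin(2 * np.pi * f0 * k * t)
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

# A "new instrument" halfway between the two spectra.
hybrid = interpolate_timbre(clarinet_like, string_like, 0.5)
audio = synth(hybrid)
```

Write `audio` out as a WAV and you get a tone that is neither instrument; the interesting research question is doing this with learned embeddings rather than hand-picked harmonic vectors.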
I had a Seaboard. They didn't catch on because the surface isn't very consistent, it's hard to hit a note without bending it unless you set the "dead zone" pretty large, and the surface itself just isn't a great texture to play on.
The ExpressiveE Osmose is proving to be quite popular. I have one, as do 3 other musicians I know personally. It's a very similar idea, but a lot more mechanical.
There are other options too. The Ableton Push 3, LinnStrument, Haken Continuum, and a few other MPE synths/controllers all do a better job than the Seaboard by miles. The Osmose is my recommendation for most people currently, based on the half dozen or so MPE controllers I've had my own hands on and its price, but I'd love to get my hands on a Continuum.