I personally feel that goes against what this community should be about, but I’d like feedback, especially on the wording.

I don’t feel generative “AI” audio is a good fit here, but a Eurorack/modular/generative-sequencer type of thing would be welcome.

Given a recent post, I believe the general community agrees, but I need help with the wording.

Any thoughts are appreciated.

Edit: not the focus, but I do think stem-splitting models are fine; I would be interested in discussion about that too.

  • Leon@pawb.social · 1 month ago

    I think there’s a difference between, say, a sequencer that has a machine learning model behind it, and a model trained on stolen music that takes words as input and outputs slop.

    If you look at Synthesizer V for example, the voice providers were compensated and aware of what they were signing up for. The providers from the earlier concatenative-synthesis era who didn’t feel comfortable with a machine learning voice library haven’t been given one (e.g. Cangqiong). It’s also a much smaller company operating on a different scale. You won’t find Kanru Hua or Dreamtonics lobbying the American government or bribing the U.S. president so they can be above the law. Sure, nowadays I’m sure they rent a server farm for their training, but I recall Hua training the initial models on his own PC in the late 2010s.

    For me personally, that’s where the sweet spot is: machine learning used as a helpful tool in a creative process. Whether it replicates a voice, or an instrument, or helps filter noise, it can be done in a manner that doesn’t involve theft or the ruination of communities.

      • Leon@pawb.social · 5 days ago

        I’m not sure who he is or what he uses, so I can’t say.

        I don’t oppose machine learning. It’s a tool, and it’s quite handy for things. For example, I make ample use of RNNoise. If you were to, say, record a bunch of samples from an instrument and train a model on that, I don’t really see the problem with that.
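
        To give a sense of what I mean by “tool”: using RNNoise is roughly this simple. This is just a minimal sketch of its C API from memory (480-sample frames of 48 kHz mono, floats scaled to the 16-bit range per the library’s docs), so double-check against the headers:

        ```c
        #include <stdio.h>
        #include "rnnoise.h"

        /* RNNoise processes 480-sample frames of 48 kHz mono audio,
         * passed as floats scaled to the 16-bit PCM range. */
        #define FRAME_SIZE 480

        int main(void) {
            DenoiseState *st = rnnoise_create(NULL);   /* NULL = built-in model */
            float frame[FRAME_SIZE] = {0};             /* fill with real samples in practice */

            /* Denoise in place; the return value is a voice-activity probability. */
            float vad = rnnoise_process_frame(st, frame, frame);
            printf("VAD probability: %f\n", vad);

            rnnoise_destroy(st);
            return 0;
        }
        ```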

        My problem lies more in the underhanded tactics big corporations are using right now: stealing masses of data, meddling hard in politics, destroying communities, etc., all for what’s ultimately a rather underwhelming and extremely expensive end product no one even asked for.