cross-posted from: https://lemmy.ml/post/19683130

The ideologues of Silicon Valley are in model collapse.

To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.
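The dynamic is easy to simulate. Here's a toy sketch (an illustration of the concept, not anything from the article): stand in for "a model" with a Gaussian fitted to its training data, and assume each generation slightly under-represents the tails of what it saw, which is the commonly described failure mode. Trained generation after generation on its own output, the distribution narrows until almost no variety is left.

```python
# Toy sketch of model collapse (illustrative assumptions, not the article's code).
# Each "generation" is trained only on samples from the previous generation's
# model. Because each model under-represents rare/tail content (mimicked here
# by clipping at 2 sigma), the distribution narrows every round.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0 trains on real data: a standard normal distribution.
mu, sigma = 0.0, 1.0

for gen in range(1, 11):
    # Sample "AI-generated output" from the current model...
    samples = rng.normal(mu, sigma, size=5000)
    # ...which drops the tails, the assumed failure mode.
    samples = samples[np.abs(samples - mu) < 2 * sigma]
    # The next generation is fit to that synthetic output alone.
    mu, sigma = samples.mean(), samples.std()
    print(f"generation {gen:2d}: std = {sigma:.3f}")

# Under these assumptions the spread shrinks roughly 12% per generation;
# after ten rounds most of the original diversity is gone.
```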

We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting to and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.

  • RubberDuck@lemmy.world · 21 days ago

    Rofl… excellent comparison between AI and loons like Elmo on the self-reinforcement that leads to collapse…

    • Andy@slrpnk.net · 21 days ago

      I was trying to explain AI alignment to my mom, and I ended up using the behavior of companies like OpenAI, and how they’re distorted by the profit motive, as an example of a misaligned decision-making system. And I realized that late-stage capitalism is basically the paperclip maximizer made real.

      This is a very good article. I think AI models have more to teach us about epistemology than people want to believe right now.