Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 1 Post
  • 5 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • It’d be one thing if X didn’t actively promote disinformation but they are doing that. They’re picking what and who to promote via their algorithm.

    If they had a hands-off approach to free speech (like any given Mastodon instance) I’d agree with you. Since that’s not the case, I can’t see how it’s a “slippery slope.” They’re actively promoting disinformation in order to push a political agenda that actively hurts the Australian people.

    It’s basic liability, not really related to freedom of speech. You can say whatever you want but there can also be legal consequences for what you say. It’s always been like that. Even in the US.


  • Just a point of clarification: Copyright is about the right of distribution. So yes, a company can just “download the Internet”, store it, and do whatever TF they want with it as long as they don’t distribute it.

    That’s the key: distribution. That’s why no one gets sued for downloading. They only ever get sued for uploading. Furthermore, the damages (if found guilty) are based on the number of copies that get distributed. It’s because copyright law hasn’t been updated in decades and 99% of it predates computers (especially all the important case law).

    What these lawsuits against OpenAI are claiming is that OpenAI is making a derivative work of the authors’/owners’ works. Which is kinda what’s going on, but also not really. Let’s say that someone asks ChatGPT to write a few paragraphs of something in the style of Stephen King… His “style” isn’t even copyrightable, so as long as it didn’t copy his works word-for-word, is it even a derivative? No one knows. It’s never been litigated before.

    My guess: No. It’s not going to count as a derivative work. Because it’s no different than a human reading all his books and performing the same, perfectly legal function.