I think it’s an AI summary (if you read just the highlighted part)
It baffles me that these types of jobs exist in the same area as mine. My company doesn’t care what hours I work as long as I get things done, has gone fully remote and is never going back, encourages people not to burn themselves out and to take time off, and we have actual unlimited PTO (i.e. nobody coming after me for using too much). I always thought that’s just the Silicon Valley mentality, but I keep seeing news of big tech companies doing all kinds of crazy backwards things and I don’t get it. All the perks I get are not because my company is run by angels; it’s because they understand we’re actually more productive that way.
He didn’t get arrested for AI-generated music. He got arrested for faking multiple accounts to upload music and using bots to generate fake listens, thus stealing millions of dollars. If he had done the same thing with music he actually wrote and played, he would still have been arrested.
For me it’s just convenience. It’s not that vim is better; it’s that it works in any terminal. I don’t depend on a particular IDE setup, so I can jump on any computer and start working. And since I’ve been using it for so many years, I’m very fast in it. The best tool is often the one you know best.
Not sure why you’d remember the ones you rarely need. I just memorized the things I use. Remembering stuff you use is much easier than learning a programming language. I’ve been programming for over 30 years and I’ve been using vim as my only “IDE” for the last 14 years. It would take me significantly less time to teach someone vim than to teach them programming.
And vim/emacs are rated just as difficult as a programming language
Deep learning did not shift any paradigm; it’s just more advanced programming. And gen AI is not intelligence, it’s just really well-trained ML. ChatGPT can generate text that looks true and relevant, and that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there; it’s just advanced ML models taking an input and guessing the most likely output.
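To make "guessing the most likely output" concrete, here’s a toy sketch of the underlying statistical idea using a bigram model. This is obviously not how ChatGPT is implemented (real LLMs use neural networks over tokens, and the corpus here is made up), but the objective is analogous: predict a plausible continuation, with no notion of whether it’s true.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word in the corpus.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` seen in training."""
    return next_counts[word].most_common(1)[0][0]

# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the model predicts "cat" — not because it's true, but because
# it's the statistically most likely continuation.
print(most_likely_next("the"))
```

The model never checks whether "the cat" is a correct statement about anything; it only reproduces what was frequent in its training data, which is the point being made above.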
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing, which is why they so often produce self-contradictory results. The algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT to play chess, or a new language, or music. The same kind of model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.
See the sources above and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start for AGI. There’s a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive leaps in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve; the two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies.
Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
As of 2023, complete forms of AGI remain speculative.
Boucher, Philip (March 2019). How artificial intelligence works
Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
AGI represents a level of power that remains firmly in the realm of speculative fiction as on date
That would be a danger if real AI existed. We are very far away from that, and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.
If you read the whole text and interpret the highlights as emphasis, then it’s just annoying and hard to read (sort of like those people who add random commas everywhere). If you read just the highlighted text, then it sounds like a summary, but there are mistakes in it, which is why I assumed AI.