pirat@lemmy.world to Technology@lemmy.world • After charger, Apple removes USB-C cable from the box
No, that would make it too easy to do third-party screen repairs. Apple wouldn’t allow that…
How can I become such a fungi?
In conclusion, the conclusion most often concludes by concluding that we can conclude that the answer is the answer.
To summarize, the summary most often summarizes the problem and its nuances, and concludes by summarizing the conclusion as the correct answer.
Therefore, the correct answer is correct.
Let me know if there is anything else I can help you with today.
It’s published as an audio podcast, with email, video and Morse code versions.
Any chance of an SSTV version? 😂
Altering the prompt will certainly give a different output, though. OK, maybe “think about this problem for a moment” is a weird prompt; I see how it actually doesn’t make much sense.
However, including something along the lines of “think through the problem step-by-step” in the prompt really makes a difference, in my experience. The LLM will then, to a higher degree, include sections of “reasoning”, thereby arriving at an output that’s more correct or of higher quality.
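As a minimal sketch of what that looks like in practice (the helper name and wording are purely illustrative, and no real API is called here):

```python
def build_prompt(question: str, step_by_step: bool = True) -> str:
    """Wrap a question with a 'reason step-by-step' instruction.

    This is a hypothetical helper: it only builds the prompt string,
    which you would then send to whatever LLM client you use.
    """
    if step_by_step:
        return (
            "Think through the problem step-by-step, showing your "
            "reasoning before giving the final answer.\n\n"
            f"Question: {question}"
        )
    return f"Question: {question}"


prompt = build_prompt("Is 1001 a prime number?")
```

Prepending the instruction this way tends to make the model emit intermediate reasoning before its answer, which is the effect described above.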
This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works: it “thinks” about the prompt behind the scenes, presenting the user with only the resulting output and a generated summary of the raw “thinking”, which is hidden by default.
Of course, it’s unnecessary, maybe even stupid, to include nonsense or small talk in LLM prompts (unless it has been shown to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning step-by-step) can definitely make a great difference.