Non-Obvious Prompt Engineering Guide
LLMs are autoregressive, meaning they generate content by predicting the next word fragment (called a token) based on the preceding text. The process is similar to speaking in syllables:

This
This is
This is an
This is an ex
This is an exam
This is an example.

The problem with this kind of generation is that once a token is chosen, it cannot be removed. Worse still, its presence influences how every subsequent token is selected. Here's proof: I showed this using Completion mode and the GPT-3.5-t...
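To make the mechanism concrete, here is a minimal sketch of an autoregressive decoding loop. It assumes the Hugging Face transformers library and the small gpt2 checkpoint as a freely available stand-in for the GPT-3.5-turbo Completion setup referenced above; the model choice and greedy sampling are illustrative assumptions, not the exact setup from the demonstration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM works here; gpt2 is just a small, openly available stand-in.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("This is an", return_tensors="pt").input_ids

for _ in range(5):
    with torch.no_grad():
        logits = model(input_ids=ids).logits   # scores for every vocabulary token
    next_id = logits[0, -1].argmax()           # greedy pick: the single most likely next token
    # The chosen token is appended to the context. There is no taking it back,
    # and it now conditions every later prediction: the point made above.
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))            # the prefix grows one token at a time
```

Each print shows the prefix one token longer than before, mirroring the "This / This is / This is an..." progression above; a poorly chosen early token would stay in ids and skew every subsequent pick.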