Fundamentals of LLMs
The Power of Context in the Prompt
Vectors, Embeddings, and N-Dimensional Spaces
Tokenization
The Attention Mechanism and Reasoning in AI Models
The OpenAI Playground
Types of Prompts and Their Applications
Zero-Shot Prompting and Self-Consistency
Techniques for Refining a Zero-Shot Prompt
Few-Shot Prompting
Chain of Thought and Prompt Chaining
Meta-Prompting
Advanced Prompt Engineering Techniques
Prompt Iteration
Least-to-Most Prompting
Prompt Chaining
Using Constraints and Response Formats
Optimization and Applications of Prompt Engineering
Image Generation with GPT-4o and Audio Generation
Adjusting Temperature and Top P
Did you know that your brain and models like ChatGPT use similar principles to anticipate responses? This similarity comes from something called priming and the attention mechanism, central concepts in understanding how language models like ChatGPT work.
Priming is a psychological phenomenon that causes the brain to respond automatically after receiving certain stimuli, even if you are not aware of it.
For example, after answering a series of simple addition questions such as 1+1 or 2+2, your immediate answers to unrelated questions that follow can be subtly influenced by those first interactions.
In similar classes or experiments, many people tend to think of the same vegetable after answering simple questions about sums, a clear effect of priming.
Language models (LLMs) like ChatGPT use a similar mechanism called attention. It lets the model assign a weight to each word according to its importance within the given context.
For example, in the sentence "the black cat is sleeping", each word receives a different attention weight depending on how much it contributes to the meaning of the whole phrase.
This is what differentiates ChatGPT from simpler methods like your cell phone's predictive keyboard, which usually considers only the last word typed and ignores the broader context.
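The idea above can be sketched in a few lines of Python. This is a toy illustration only: the 3-dimensional "embeddings" and the query vector are made-up numbers chosen so that the content words score higher, not values from any real model. The core of attention is the same, though: score each word against a query, then softmax the scores into weights that sum to 1.

```python
import math

# Toy sketch of attention: score every word against a query, then turn
# the scores into weights that sum to 1 using a softmax.
# The "embeddings" are invented 3-dimensional vectors, not real model values.

sentence = ["the", "black", "cat", "is", "sleeping"]
embeddings = {
    "the":      [0.1, 0.0, 0.1],
    "black":    [0.6, 0.2, 0.1],
    "cat":      [0.9, 0.8, 0.3],
    "is":       [0.1, 0.1, 0.0],
    "sleeping": [0.7, 0.3, 0.9],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_weights(query, words):
    scores = [dot(query, embeddings[w]) for w in words]
    exps = [math.exp(s) for s in scores]     # softmax: exponentiate...
    total = sum(exps)
    return {w: e / total for w, e in zip(words, exps)}  # ...and normalize

# A query vector loosely "aligned" with the content words of the sentence.
query = [1.0, 0.5, 0.5]
weights = attention_weights(query, sentence)
for word, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {w:.2f}")
```

With these toy numbers, "cat" and "sleeping" end up with the largest weights, while function words like "the" and "is" get the smallest, which is the behavior the lesson describes.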
The context window is the total amount of information a model can hold during your interaction with it. In GPT-4o, this window reaches 128,000 tokens.
To give you an idea, since a token is roughly three quarters of an English word, 128,000 tokens is on the order of 96,000 words, about the length of a full-length novel.
This concept is vital: ChatGPT uses the entire previous conversation, not just the last interaction, to generate accurate responses. It also explains why the model sometimes seems to "forget" earlier information; it has not lost it, it has simply reprioritized which parts of the context it attends to.
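A minimal sketch of how a finite context window pushes older messages out. The `estimate_tokens` heuristic (about 4 characters per token for English) is an assumption for illustration; real systems count tokens with an actual tokenizer. The tiny 15-token budget is also artificial, chosen so the effect is visible; GPT-4o's real window is 128,000 tokens.

```python
# Sketch: keep only the most recent messages that fit a token budget.
# estimate_tokens is a rough heuristic (~4 characters per token in English);
# production code would use a real tokenizer instead.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_window(messages, budget_tokens):
    """Keep the most recent messages whose estimated total fits the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):        # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:
            break                         # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

conversation = [
    "Hi! Let's do some quick sums.",
    "What is 1 + 1?", "2.",
    "What is 2 + 2?", "4.",
    "Now name a vegetable.",
]
window = fit_to_window(conversation, budget_tokens=15)
print(window)
```

With this small budget, the opening greeting falls outside the window while the recent sums and the vegetable request stay in, which is exactly the "forgetting" behavior described above: the oldest context is deprioritized, not misremembered.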
If you run the same experiment with the model (a series of simple sums followed by a request to name a vegetable), priming also shows up in the responses it generates.
This clearly reflects how attention and context act on artificial intelligence and make its responses resemble human reactions.
What was your experience? Did you agree with other people? Share it in the comments; it will be interesting to see how many thought of the same vegetable.