📚 Relevant Resources and Papers
This document gathers research, academic papers, repositories, and useful links associated with each prompt engineering technique, grouped by technique for easy reference.
🥇 Zero-Shot / One-Shot / Few-Shot Prompting
- Brown et al. (2020) – Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165
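As a quick illustration of few-shot prompting, the minimal sketch below builds a prompt with a handful of labeled in-context examples before the target input. The `call_llm` helper is a hypothetical placeholder for whatever model client you use, not part of the cited paper.

```python
# Few-shot prompting: prepend labeled examples so the model infers the task format.
# `call_llm` is a hypothetical stand-in for a real model client (API call, local model, etc.).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM call.")

EXAMPLES = [
    ("I loved this movie, a total masterpiece.", "positive"),
    ("The plot was dull and the acting was worse.", "negative"),
    ("It was fine, nothing memorable.", "neutral"),
]

def few_shot_prompt(text: str) -> str:
    lines = ["Classify the sentiment of each review as positive, negative, or neutral.", ""]
    for review, label in EXAMPLES:
        lines.append(f"Review: {review}\nSentiment: {label}\n")
    lines.append(f"Review: {text}\nSentiment:")
    return "\n".join(lines)

# answer = call_llm(few_shot_prompt("Great soundtrack, weak ending."))
```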
🧠 Chain of Thought (CoT)
- Wei et al. (2022) – Chain of Thought Prompting Elicits Reasoning in LLMs https://arxiv.org/abs/2201.11903
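A minimal sketch of chain-of-thought prompting in the style of Wei et al. (2022): the in-context demonstration includes intermediate reasoning steps, which nudges the model to produce its own step-by-step rationale before the final answer. `call_llm` is again a hypothetical placeholder.

```python
# Chain-of-thought prompting: the demonstration shows intermediate reasoning,
# so the model is encouraged to reason step by step before answering.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM call.")

COT_PROMPT = """Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls are 6 balls. 5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?
A: Let's think step by step."""

# answer = call_llm(COT_PROMPT)
```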
🔁 Self-Consistency Prompting
- Wang et al. (2022) – Self-Consistency Improves Chain of Thought Reasoning https://arxiv.org/abs/2203.11171
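Self-consistency samples several chain-of-thought completions at non-zero temperature and keeps the most frequent final answer. The sketch below assumes a hypothetical `sample_llm(prompt, temperature)` client and a naive answer extractor; both are placeholders, not the paper's code.

```python
# Self-consistency: sample multiple reasoning paths, then majority-vote the final answers.
from collections import Counter
import re

def sample_llm(prompt: str, temperature: float = 0.7) -> str:
    # Hypothetical placeholder for a sampling LLM call.
    raise NotImplementedError("Replace with a real LLM call.")

def extract_answer(completion: str) -> str:
    # Naive extraction: take whatever follows "The answer is".
    match = re.search(r"The answer is\s*([^.\n]+)", completion)
    return match.group(1).strip() if match else completion.strip()

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    answers = [extract_answer(sample_llm(prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```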
🌳 Tree of Thought Prompting (ToT)
- Jieyi Long (2023) – Large Language Model Guided Tree-of-Thought https://arxiv.org/abs/2305.08291
- Official repository (Sudoku solver): https://github.com/jieyilong/tree-of-thought-puzzle-solver
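A heavily reduced sketch of the tree-of-thought idea: the model proposes several candidate next steps from each partial solution, a scorer prunes the tree, and the search keeps only the most promising branches. The `propose_thoughts` and `score_state` helpers are hypothetical LLM calls, not the official solver's API.

```python
# Tree-of-thought (sketch): breadth-first search over partial solutions,
# where an LLM both proposes candidate next steps and scores partial states.
from typing import List

def propose_thoughts(state: str, k: int = 3) -> List[str]:
    # Hypothetical: ask the LLM for k candidate next steps given the partial solution.
    raise NotImplementedError("Replace with a real LLM call.")

def score_state(state: str) -> float:
    # Hypothetical: ask the LLM (or a heuristic) how promising this partial solution is.
    raise NotImplementedError("Replace with a real LLM call.")

def tree_of_thought(problem: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [problem]
    for _ in range(depth):
        candidates = [f"{s}\n{t}" for s in frontier for t in propose_thoughts(s)]
        # Keep only the `beam` highest-scoring partial solutions.
        frontier = sorted(candidates, key=score_state, reverse=True)[:beam]
    return frontier[0]
```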
⚡ ReAct (Reason + Act)
- Yao et al. (2022) – ReAct: Synergizing Reasoning and Acting in Language Models https://arxiv.org/abs/2210.03629
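ReAct interleaves free-text reasoning ("Thought") with tool calls ("Action") and their results ("Observation"). The loop below is a minimal sketch that assumes a hypothetical `call_llm` client and a toy tool registry; it is not the authors' implementation.

```python
# ReAct (sketch): alternate Thought / Action / Observation turns until a final answer.
import re

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for an LLM call that stops after emitting one step.
    raise NotImplementedError("Replace with a real LLM call.")

TOOLS = {
    "search": lambda q: f"(stub) search results for: {q}",  # toy tool for illustration
    "calculate": lambda e: str(eval(e)),                    # demo only; never eval untrusted input
}

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript + "Thought:")
        transcript += f"Thought:{step}\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if match:
            tool, arg = match.group(1), match.group(2)
            observation = TOOLS.get(tool, lambda _: "unknown tool")(arg)
            transcript += f"Observation: {observation}\n"
    return transcript  # ran out of steps without a final answer
```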
🤖 Automatic Prompt Engineering (APE)
- Zhou et al. (2022) – Large Language Models Are Human-Level Prompt Engineers https://arxiv.org/abs/2211.01910
- Lu et al. (2022) – Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning (PromptPG) https://arxiv.org/abs/2209.14610
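The core APE loop: an LLM proposes candidate instructions from input-output demonstrations, each candidate is scored on a small held-out set, and the best instruction wins. The sketch assumes hypothetical `call_llm` and `score_instruction` helpers; it is not the authors' code.

```python
# Automatic Prompt Engineering (sketch): generate candidate instructions from
# demonstrations, score them on held-out examples, and keep the best one.
from typing import List, Tuple

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM call.
    raise NotImplementedError("Replace with a real LLM call.")

def propose_instructions(demos: List[Tuple[str, str]], n: int = 5) -> List[str]:
    shown = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    prompt = f"{shown}\n\nThe instruction that maps these inputs to outputs was:"
    return [call_llm(prompt) for _ in range(n)]

def score_instruction(instruction: str, heldout: List[Tuple[str, str]]) -> float:
    # Fraction of held-out examples answered correctly with the candidate instruction.
    hits = sum(call_llm(f"{instruction}\nInput: {x}\nOutput:").strip() == y for x, y in heldout)
    return hits / len(heldout)

def ape(demos: List[Tuple[str, str]], heldout: List[Tuple[str, str]]) -> str:
    candidates = propose_instructions(demos)
    return max(candidates, key=lambda ins: score_instruction(ins, heldout))
```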
💻 Code Prompting
- GitHub Copilot – Docs https://docs.github.com/copilot
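Code prompting with assistants like Copilot usually amounts to writing a precise comment or docstring plus a signature and letting the model complete the body. The snippet below illustrates that prompting style; the completed body is just one plausible result, not actual Copilot output.

```python
# Prompting style for a code assistant: a descriptive docstring and type hints
# act as the "prompt"; the body is the kind of completion the tool would suggest.
import re

def slugify(title: str) -> str:
    """Convert a title to a URL slug: lowercase, alphanumeric only,
    words separated by single hyphens, no leading or trailing hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# slugify("Hello, World!")  ->  "hello-world"
```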
This file will be updated as new techniques and publications appear.