From Fundamentals to Action – Automatic & Code Prompting
Overview
This lesson teaches learners how to apply advanced prompt engineering techniques to practical tasks such as code generation, debugging, translation, and multimodal interactions. We also introduce automatic prompt engineering (APE), in which an LLM is used to refine its own prompts.
Concept Explanation
1. Automatic Prompt Engineering (APE)
- LLMs can self-optimize prompts through iterative improvement.
- Steps:
- Provide an initial prompt.
- Ask the LLM to analyze and improve it.
- Generate outputs using the improved prompt.
- Benefits:
- Reduces manual trial-and-error.
- Improves accuracy and output quality, especially on repetitive tasks.
Example:
Original Prompt: "Explain recursion in Python."
Automatic Refinement: "Explain recursion in Python with a real-world analogy and a small code example."
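To make the loop concrete, here is a minimal Python sketch of automatic prompt refinement. It assumes the OpenAI Python SDK; the model name, helper names, and number of refinement rounds are illustrative choices, and any chat-completion API could be substituted.

    # Minimal sketch of an automatic prompt refinement loop (assumptions noted above).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        """Send a single-turn prompt and return the model's reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def refine_prompt(prompt: str, rounds: int = 2) -> str:
        """Ask the model to rewrite the prompt, then reuse the rewritten version."""
        for _ in range(rounds):
            prompt = ask(
                "Improve the following prompt so it is clearer and more specific. "
                "Return only the improved prompt.\n\n" + prompt
            )
        return prompt

    improved = refine_prompt("Explain recursion in Python.")
    answer = ask(improved)  # generate the final output with the improved prompt

The same pattern scales to repetitive tasks: refine the prompt once, then reuse the improved version many times.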
2. Code Prompting
LLMs excel at code-related tasks. Techniques include:
a) Writing Code
- Generate new code from natural language descriptions.
- Example:
Prompt: "Write a Python function to reverse a linked list."
b) Explaining Code
- Ask the LLM to explain code line-by-line.
- Example:
Prompt: "Explain the following Python code: def factorial(n): ..."
c) Translating Code
- Convert code between languages.
- Example:
Prompt: "Convert this Python function to JavaScript."
d) Debugging & Reviewing Code
- Identify errors and suggest improvements.
- Example:
Prompt: "Find and fix bugs in this Python function."
3. Multimodal Prompting
- LLMs increasingly support text + images or other inputs.
- Examples:
- Image → text: captioning, i.e., describing the contents of an image.
- Image + text → question answering about the image.
- Use cases: AI assistants, design generation, documentation analysis.
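As a sketch, a text + image request might look like this; it assumes the OpenAI Python SDK's image_url message format, and the model name and image URL are placeholders.

    # Sketch of a multimodal (text + image) prompt; assumptions noted above.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe any objects you see in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/kitchen.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)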
4. Combining Techniques
- For complex tasks, combine:
- Automatic prompt engineering.
- Few-shot or chain-of-thought reasoning.
- System/role instructions.
- Combining these helps keep outputs accurate, structured, and aligned with user goals, as the sketch below illustrates.
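The sketch shows one way to put the pieces together in a single request: a system/role instruction, one few-shot example, and an explicit chain-of-thought cue. All wording and examples are illustrative.

    # Sketch: system role + one few-shot example + a chain-of-thought cue in one request.
    messages = [
        {"role": "system",
         "content": "You are a senior Python reviewer. Give a short explanation, then corrected code."},
        # Few-shot example showing the expected answer format.
        {"role": "user",
         "content": "Review: def mean(xs): return sum(xs) / len(xs)"},
        {"role": "assistant",
         "content": "Explanation: fails on an empty list. Corrected code: guard against len(xs) == 0."},
        # The real task, with an explicit reasoning cue.
        {"role": "user",
         "content": "Review this function and think step by step before answering:\n"
                    "def median(xs): return sorted(xs)[len(xs) // 2]"},
    ]
    # messages can be passed to any chat-completion API (see the APE sketch above).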
Practical Examples / Prompts
- Automatic Prompt Engineering
Prompt: "Summarize this article."
Refinement Request: "Improve this prompt for clarity, conciseness, and bullet-point output."
- Code Writing
Prompt: "Write a Python function to sort a list of integers using merge sort."
- Code Translation
Prompt: "Convert this Python function to Java: def add(a, b): return a + b"
- Multimodal Prompt
Prompt: "Analyze this image and describe any objects you see."
Input: [Image of a kitchen]
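For the merge-sort prompt above, one plausible model response might look like this sketch.

    # One plausible answer to the merge-sort prompt above.
    def merge_sort(values):
        """Return a new list containing the values in ascending order."""
        if len(values) <= 1:
            return list(values)
        mid = len(values) // 2
        left = merge_sort(values[:mid])
        right = merge_sort(values[mid:])
        # Merge the two sorted halves.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged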
Hands-on Project / Exercise
Task: Build a mini AI assistant that can write, explain, and debug code.
Steps:
- Choose a simple programming problem (e.g., calculator or data parser).
- Write a base prompt for generating code.
- Use few-shot examples for explanations or translations.
- Apply automatic prompt refinement to improve outputs.
- Test multiple completions and document which prompt yields the best results.
Goal: Produce an LLM workflow that reliably creates, explains, and debugs code.
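One possible skeleton for the exercise is sketched below. It assumes the OpenAI Python SDK; the template wording, model name, and helper names are all illustrative and can be swapped for your own stack.

    # Sketch of the exercise workflow (assumptions noted above).
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        """Single-turn helper around a chat-completion call."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    TEMPLATES = {
        "write":   "Write a Python function that does the following:\n{task}",
        "explain": "Explain this Python code line by line:\n{code}",
        "debug":   "Find and fix any bugs in this Python code, then show the corrected version:\n{code}",
    }

    def mini_assistant(mode: str, **kwargs) -> str:
        """Fill the chosen template, refine it once automatically, then answer it."""
        base_prompt = TEMPLATES[mode].format(**kwargs)
        improved = ask("Improve this prompt for clarity and specificity. "
                       "Return only the improved prompt.\n\n" + base_prompt)
        return ask(improved)

    # Example run for the suggested calculator problem.
    generated = mini_assistant("write", task="a simple calculator that adds and subtracts two numbers")
    explained = mini_assistant("explain", code=generated)

Run the same task several times with different prompt variants and document which one yields the best completions.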
Tools & Techniques
- APIs & frameworks: OpenAI GPT, Google Vertex AI, Anthropic Claude; LangChain for chaining prompts.
- Automatic Prompt Optimization: Let the model refine its own prompts.
- Few-shot + CoT: Guide reasoning in code generation.
- Multimodal support: Explore text + image inputs for richer applications.
Audience Relevance
- Developers: Automate code writing, translation, and debugging.
- Students: Learn AI-assisted programming and reasoning.
- Business Users: Create internal tools for reporting, analysis, or documentation.
Summary & Key Takeaways
- Automatic prompt engineering reduces manual trial-and-error.
- LLMs are powerful for writing, explaining, translating, and debugging code.
- Multimodal inputs expand AI applications beyond text.
- Combining techniques (APE + CoT + system prompts) produces robust and structured outputs.
- Hands-on experimentation is key to mastering practical prompt engineering.


