#9
🦄 Cracking the Prompting Interview
Ready to level up your prompting skills? Join us for a deep dive into advanced prompting techniques that separate good prompt engineers from great ones. We'll cover systematic prompt design, testing tools and inner loops, and tackle real-world prompting challenges. Perfect prep for becoming a more effective AI engineer.
Project Details
Video (1h23m - Available June 13, 2025 8 AM PST)
🎯 Key Takeaways
- Use Indexes for URLs & Citations: Provide content with simple IDs (e.g., [SOURCE_1]) and have the LLM output these IDs. Map them back programmatically to improve accuracy and reduce token load.
- Index-Based Diarization: For tasks like speaker diarization, have the LLM output the index of the dialogue turn and the identified speaker (e.g., {"dialogue_idx": 0, "speaker": "Nurse"}).
- Context & "Escape Hatches" for Classification: Provide relevant context upfront and include an "Other" or "Unknown" category to handle ambiguity.
- Reasoning via "Busted" JSON/Comments: Include LLM reasoning as comments or non-standard fields in structured output for easier debugging.
- Natural Code Generation (in JSON): Generate code within Markdown-style backticks as a string field in JSON for higher quality output.
- RTFP (Read The...Prompt!): Carefully review prompts for potential ambiguities that might confuse the LLM.
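The indexed-citation takeaway can be sketched as follows. This is an illustrative sketch, not code from the session: the URLs, the `[SOURCE_n]` ID format, and the simulated model output are all assumptions, and the core idea is that the prompt and the model only ever carry short IDs while your own code maps them back to real URLs.

```python
import re

def build_source_index(urls):
    # Assign a short ID to each URL so the prompt (and the model's
    # output) never has to carry or reproduce a full URL.
    return {f"SOURCE_{i + 1}": url for i, url in enumerate(urls)}

def resolve_citations(text, index):
    # Map [SOURCE_n] markers in the model's output back to real URLs.
    # Unknown IDs are left untouched so they are easy to spot in review.
    return re.sub(r"\[(SOURCE_\d+)\]",
                  lambda m: index.get(m.group(1), m.group(0)),
                  text)

sources = build_source_index([
    "https://example.com/report-a",
    "https://example.com/report-b",
])
# Simulated LLM output: it cites by ID, never by URL.
llm_output = "Latency fell by 40% after the change [SOURCE_2]."
print(resolve_citations(llm_output, sources))
```

Because the model only emits a few characters per citation, it cannot mistype a URL, and the token cost of citing stays constant regardless of URL length.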
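The index-based diarization takeaway follows the same pattern: the model labels turns by index instead of repeating the dialogue text. A minimal sketch, where the transcript and the simulated model output are invented for illustration:

```python
import json

turns = [
    "Please take a seat, the doctor will see you shortly.",
    "Thanks. Is my test result back yet?",
]

# Simulated LLM output: only indices and speaker labels, never the
# dialogue text itself, matching the {"dialogue_idx": 0, "speaker": ...}
# shape from the takeaway above.
llm_output = json.dumps([
    {"dialogue_idx": 0, "speaker": "Nurse"},
    {"dialogue_idx": 1, "speaker": "Patient"},
])

def attach_speakers(turns, llm_json):
    # Join the model's labels back onto the original turns by index;
    # fall back to "Unknown" for any turn the model skipped.
    labels = {d["dialogue_idx"]: d["speaker"] for d in json.loads(llm_json)}
    return [(labels.get(i, "Unknown"), text) for i, text in enumerate(turns)]

for speaker, text in attach_speakers(turns, llm_output):
    print(f"{speaker}: {text}")
```

Keeping the transcript on your side of the boundary both saves output tokens and guarantees the dialogue text is never paraphrased or corrupted by the model.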
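The classification "escape hatch" takeaway might look like this in practice. The category names and prompt wording here are assumptions for illustration; the point is that the prompt offers an explicit "other" option and the parsing code enforces it as a fallback:

```python
CATEGORIES = ["billing", "technical_support", "sales", "other"]

def build_prompt(ticket):
    # The "other" category is the escape hatch: the model is told up
    # front that it is allowed to decline the main categories.
    return (
        "Classify the support ticket into exactly one category.\n"
        f"Categories: {', '.join(CATEGORIES)}\n"
        "If none clearly applies, answer 'other'.\n\n"
        f"Ticket: {ticket}\n"
        "Category:"
    )

def validate_label(raw):
    # Normalize the model's answer; anything off-list becomes "other"
    # rather than silently polluting downstream data.
    label = raw.strip().lower()
    return label if label in CATEGORIES else "other"
```

Without the escape hatch, models tend to force ambiguous inputs into the nearest-sounding category, which is exactly the failure mode this pattern avoids.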
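The "busted JSON" takeaway can be handled with a small cleanup step before parsing. A sketch under the assumption that the prompt asks the model to put its reasoning in `//`-style comment lines (only full-line comments are stripped here, so `//` inside string values is left alone):

```python
import json
import re

# Simulated LLM output: reasoning lives in a comment line, which makes
# debugging easy but is not legal JSON.
llm_output = """{
  // The user asks for a refund, so this is a billing issue.
  "category": "billing",
  "confidence": 0.9
}"""

def parse_commented_json(text):
    # Strip lines that consist solely of a //-comment, then parse what
    # remains as ordinary JSON.
    cleaned = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(cleaned)

print(parse_commented_json(llm_output))
```

The reasoning stays visible in logs and transcripts, but never reaches the typed data your application consumes.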
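The natural-code-generation takeaway needs a matching extraction step on the consuming side. A sketch with an invented payload: the model writes code inside Markdown-style backticks within a JSON string field (where it tends to produce higher-quality code), and the caller unwraps the fence:

```python
import json
import re

# Simulated LLM output: structured JSON, but the code itself is written
# naturally inside a fenced Markdown block.
llm_output = json.dumps({
    "explanation": "Adds two numbers.",
    "code": "```python\ndef add(a, b):\n    return a + b\n```",
})

def extract_code(payload):
    # Pull the body out of the Markdown fence; if the model skipped the
    # fence, fall back to the raw field.
    field = json.loads(payload)["code"]
    match = re.search(r"```(?:\w+)?\n(.*?)```", field, re.DOTALL)
    return match.group(1) if match else field

print(extract_code(llm_output))
```

This keeps the overall response machine-parseable while letting the model generate code in the format it was most heavily trained on.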