Session 1

Industry Standards & Real-World Application — from ad-hoc prompting to systematic approaches

Joey Lopez · Prompt Engineering Bootcamp · 60 minutes

Session Agenda

Time        Activity                       Type
0–5 min     Problem & Solution Overview    Lecture
5–15 min    Three Approaches Framework     Lecture
15–20 min   Foundational Patterns          Lecture
20–25 min   Demo: Priority Builder         Hands-on
25–45 min   Your Turn: Build Priorities    Hands-on
45–55 min   Compare: Three Approaches      Hands-on
55–60 min   Wrap & Session 2 Preview       Lecture

Ad-Hoc AI Prompting Doesn't Scale

Most professionals use AI tools the same way they'd use a search engine — one-off questions, generic context, inconsistent results. Here's what that looks like:

Professional: "Hey AI, help me write my annual priorities"
AI: [generates generic priorities]
Professional: "These don't capture my impact... try again"
AI: [generates different generic priorities]
Professional: "Still missing key metrics..."

The problems with ad-hoc prompting:

  • Generic output: nothing anchors the AI to your role, metrics, or context
  • Inconsistent results: each retry produces a different answer
  • Wasted iteration: you re-explain the same context every session
  • No reuse: good prompts vanish when the chat ends

Key insight: Patterns matter more than format. Choose based on team needs, not dogma.

Three Valid Approaches

There is no single "right" way to do systematic prompting. Three approaches have emerged in industry practice, each suited to different team contexts:

ADRs + Config

Architecture Decision Records plus configuration files. Example: .github/copilot-instructions.md that any AI tool reads automatically.
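A minimal copilot-instructions.md might look like the sketch below. The stack, conventions, and ADR number are invented for illustration; a real file would encode your team's actual decisions:

```markdown
# Copilot Instructions (illustrative example)

## Tech stack
- Java 17, Spring Boot 3, Maven

## Conventions
- Use constructor injection, never field injection
- All public methods need Javadoc
- Follow ADR-012 for error handling (hypothetical ADR reference)

## When generating code
- Prefer existing utility classes over new helpers
- Include a unit test with each new class
```

Because the file lives in the repository, every AI session starts with the same context, which is what makes the approach low-overhead.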

Best for most teams. Low setup overhead.

Structured Files

Multi-file workflows: knowledge-base → specification → implementation-plan. Each file has a specific purpose and builds on the previous.

Best for complex, repeated tasks and learning.
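As a sketch, the three files in such a workflow might divide responsibilities like this (the annotations are illustrative, not a prescribed schema):

```markdown
knowledge-base.md        # Stable facts: domain terms, constraints, style rules
specification.md         # What to build this time: scope, acceptance criteria
implementation-plan.md   # Ordered steps the AI executes, one at a time
```

Each file feeds the next, so long-lived knowledge is written once and only the specification changes per task.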

Tool-Assisted

Platform-native workflows like Windsurf's cascade system (290 lines). The tool manages context and sequencing automatically.

Best for teams committed to one IDE or platform.

Tier Framework for Evaluation

Use this to assess any prompt engineering approach you encounter:

Tier 1 — Proven (10+ years)

Architecture Decision Records, Few-shot prompting, Chain-of-Thought. Used at scale by Microsoft, AWS, Google, Netflix. These patterns have survived real production use.

Tier 2 — Production Ready (1–3 years)

.github/copilot-instructions.md, ReAct pattern. Growing enterprise adoption. Reasonably safe to build on.

Tier 3 — Experimental (<2 years)

Spec-kit workflows, structured prompt files, tool-specific approaches. Interesting and useful, but unproven at scale. Use deliberately.

Four Foundational Patterns

These patterns are research-backed and appear in all serious prompt engineering work. A well-constructed prompt uses all four.

Persona

"You are an expert [role] with [specific expertise]..."

Establishes the lens the AI uses for all responses.

Few-shot

Provide 2–3 example input/output pairs before your actual request.

Shows the desired format and quality level.
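For instance, a few-shot prompt for priority writing might include pairs like these (examples invented for illustration):

```markdown
Input: "Improve team delivery"
Output: "Reduce sprint carryover from 30% to under 10% by Q3 by introducing
WIP limits and weekly backlog refinement."

Input: "Get better at stakeholder comms"
Output: "Ship a monthly one-page status brief to all program stakeholders,
reaching 90%+ read acknowledgement by end of Q2."

Now write a priority for: "Grow AI adoption on my team"
```

The examples do the work a format description can't: they show the expected specificity and the presence of a metric and a deadline.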

Template

"Respond in this format: [structure]"

Constrains output shape for reuse and consistency.

Chain-of-Thought

"Show your reasoning step by step before giving the answer."

Forces deliberate reasoning, catches errors earlier.

The 325-Line Priority Builder

The Priority Builder Agent prompt uses all four patterns together. It demonstrates what a production-grade prompt looks like compared with a quick one-off request.

The result: persona-driven specificity, structured questioning that ensures completeness, and consistent output format — versus the ad-hoc alternative that generates something generic every time.
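The full 325-line prompt isn't reproduced here, but a compressed sketch shows how the four patterns slot together (all wording invented for illustration):

```markdown
# Persona
You are an expert performance coach with 10 years of experience
writing measurable annual priorities for technology professionals.

# Few-shot
Example priority:
"Cut release lead time from 10 days to 3 by automating the QA gate."

# Chain-of-Thought
Ask me questions one at a time about my role, metrics, and scope,
reasoning aloud about what is still missing before each question.

# Template
Output every priority in this format:
| Category | Priority | Metric | Target date |
```

The real prompt expands each section: more examples, a full question bank, and CSV export rules, but the four-pattern skeleton is the same.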

Hands-On: Build Priorities

Exercise 1 — Freestyle First (10 minutes)

Before using any systematic approach, attempt to create one priority for a demo persona using your normal prompting style.

Choose a demo persona

  • Option A: Delivery Lead, Financial Services (8 years, team management)
  • Option B: Tech Lead, Banking Automation (5 years, AI/ML specialist)
  • Option C: Associate Manager, Digital Strategy (6 years, transformation)

Goal: 1 priority in any category with basic reflection. Use your AI tool however you normally would.

Notice what's hard: specificity, metrics, avoiding generic language that could apply to anyone.

Exercise 2 — Priority Builder Template (15 minutes)

Now use the complete Priority Builder Agent with a different persona than Exercise 1.

  1. Load the 325-line Priority Builder prompt into your AI tool
  2. Choose a different persona than the freestyle round
  3. Let the agent guide you through its 20-question process — answer as your persona
  4. Select a version: Conservative, Balanced, or Aspirational
  5. Export the CSV output (ready for submission format)

Success criteria: Complete ABCD reflections with specific metrics, CSV output ready.

Exercise 3 — Compare Three Approaches (10 minutes)

Same Spring Boot 2→3 migration task, three different approaches. Observe the difference in structure and maintenance overhead — not the result.

Approach A — ADRs + Config

"Following .github/copilot-instructions.md,
migrate UserController to Spring Boot 3"

Approach B — Structured Files

Load: knowledge-base.md → specification.md → implementation-plan.md

Approach C — Tool-Assisted

Windsurf cascade workflow (290-line systematic methodology with built-in validation steps).

Same result, different maintenance overhead. Which fits your team?

What You Accomplished

Cross-Domain Application

The patterns you practiced in this session work across domains:

Business Applications

  • Strategic planning documents
  • Performance reviews and goal setting
  • Client presentations and proposals
  • Training material development

Technical Applications

  • Code migration and refactoring
  • Architecture documentation
  • Troubleshooting workflows
  • System design patterns

Same systematic thinking, different domain. Session 2 demonstrates this across the full range.

Session 2 Preview

  • ReAct pattern (Think → Act → Observe) for multi-step reasoning
  • Tree of Thoughts for decision points with real tradeoffs
  • Interview prep workflow — 4-file systematic approach
  • Live technical demo — Spring Boot migration using same patterns

Bring a job description you're interested in, or use the samples in the participant materials.
