Wednesday, May 6, 2026
12:00 PM - 1:00 PM
Webinar 1: AI Literacy for Legal Professionals
 

Generative AI has crossed the threshold from novelty to professional necessity. Seventy percent of law firm attorneys now use it at least weekly, and demand for AI-fluent associates is up 106 percent. But 2025 also brought sanctioned filings, withdrawn opinions, and a steady stream of cautionary tales involving lawyers and judges who knew the law but did not adequately supervise what AI produced for them. This first webinar in the AI Explorer Series builds the foundation: what generative AI actually does, why it fails the way it does, and why competent use is a supervisory practice grounded in Louisiana Rules of Professional Conduct 5.1 and 5.3 and the Louisiana Code of Professionalism.
You will leave this webinar able to:

•    Explain how large language models generate output, and why hallucination is structural
•    Recognize the fluency trap that makes polished AI output dangerous
•    Anchor responsible AI use in the supervision duties of Louisiana Rules 5.1 and 5.3 and your Louisiana professionalism commitments

First webinar in the three-part AI Explorer Series. Required for the AI Navigator Workshop on June 5, 2026.
 

Wednesday, May 13, 2026
12:00 PM - 1:00 PM
Webinar 2: AI Platforms, Tasks, and Knowing What Fits
 

Every AI platform handles your data differently, and not every task belongs in an AI tool. This webinar gives Louisiana legal professionals two practical frameworks for matching the right tool to the right task: the Learn / Develop / Polish task categories, which describe how AI engages with different kinds of work, and the AI Decision Grid, a 2x2 matrix that sorts tasks by time saved and oversight required. The session covers the current AI landscape across three platform categories (free consumer tools, upgraded paid tiers, and legal-specific platforms), the confidentiality differences among them, and why platform choice itself is a supervisory decision under Louisiana Rules of Professional Conduct 1.6, 5.1, and 5.3.
You will leave this webinar able to:

•    Distinguish how free tools (ChatGPT, Claude, Copilot, Gemini, NotebookLM), their upgraded paid tiers, and legal-specific platforms (Lexis+ AI, Westlaw AI, CoCounsel) actually handle your data
•    Categorize legal tasks as Learn, Develop, or Polish
•    Use the AI Decision Grid to scope what supervision each task will require

Second webinar in the three-part AI Explorer Series. Required for the AI Navigator Workshop. 
 

Wednesday, May 27, 2026
12:00 PM - 1:00 PM
Webinar 3: Basic Prompting Strategies for Lawyers
 

Effective AI use is not a single skill. It is a four-stage workflow. This webinar introduces DIAL, a supervision framework for AI-augmented legal work, and covers the two input-side stages: Deliberate (building prompts with purpose) and Iterate (refining through structured technique). Participants learn how to construct prompts that supply the context AI cannot infer, why confidentiality must be addressed before the prompt is sent, and five practical iteration techniques. The output-side stages of DIAL, Audit and Log, are covered in the AI Navigator Workshop, along with a specific legal prompt architecture grounded in rhetorical situation theory.
You will leave this webinar able to:

•    Place your AI use within the DIAL supervision framework
•    Construct prompts that pair clarity with context, constraints, and role assignment
•    Apply five practical iteration techniques to refine AI output
•    Run an input-stage confidentiality check consistent with Louisiana Rules 1.6, 5.1, and 5.3

Third webinar in the three-part AI Explorer Series. Required for the AI Navigator Workshop. 
 

Friday, June 5, 2026
8:00 AM - 8:30 AM
Check-in/Conference Opening
 
 
8:30 AM - 9:30 AM
AI Fluency + Ethics Case Studies
 

AI Fluency Foundations (approximately 20 minutes)
•    Building on Explorer concepts: moving from literacy to fluency
•    The fluency framework — what separates competent AI use from casual experimentation
•    The three task categories: Learn, Develop, Polish — matching your approach to task complexity

Ethics Case Studies (approximately 40 minutes)
•    Participants arrive having already encountered Rules 1.1, 1.6, 3.3, and 5.3 in the Explorer Series; this session applies them to real failures
•    Case study analysis: Recent cases showing sanctions, disciplinary actions, and malpractice exposure
•    Full Model Rules treatment: 1.1, 1.6, 3.3, 5.1, 5.3, 5.5 — applied, not introduced
•    State-specific considerations (Louisiana focus)
•    Attorney-client privilege and AI
•    The throughline: how AI fluency principles could have prevented each violation

9:30 AM - 9:40 AM
Break
 
 
9:40 AM - 10:40 AM
GATEPAS Prompt Architecture
 

•    The GATEPAS framework: Genre, Author, Tone, Exigence, Purpose, Audience, Subject
•    Why systematic prompting outperforms ad hoc prompting — evidence and demonstration
•    Walking through each element with legal examples
•    Worked examples progressing from simple to complex
•    How GATEPAS connects to the Learn/Develop/Polish task categories: not every task needs every element
•    Live demonstration: building a complete GATEPAS prompt for a realistic legal task

10:40 AM - 11:40 AM
Lab: Designing Reusable Prompt Modules
 

•    Participants build GATEPAS-based prompt templates for common legal tasks:
–    Drafting client correspondence
–    Summarizing lengthy documents
–    Generating research outlines
–    Creating case chronologies
–    Drafting routine motions
•    Practice matching prompt complexity to task type (Learn/Develop/Polish)
•    Peer exchange: participants share and critique each other’s templates
•    Facilitated troubleshooting of common construction errors
•    Deliverable: Participants leave with 2–3 reusable prompt modules they built themselves

11:40 AM - 12:40 PM
Lunch
 
 
12:40 PM - 1:40 PM
Verification Protocols
 

•    The verification framework: Accuracy, Bias, Confidentiality, Judgment
•    Why verification is a system, not a glance — the difference between checking and verifying
•    Each element in depth:
–    Accuracy: verifying facts, law, citations, and reasoning — what to check and how
–    Bias: detecting prejudicial content, stereotyping, improper assumptions in AI output
–    Confidentiality: confirming no protected information was exposed in the process
–    Judgment: ensuring AI assistance preserved (not replaced) professional discretion
•    Verification documentation: what to record and why it matters for malpractice protection
•    Tailoring verification intensity to task risk level

1:40 PM - 1:50 PM
Break
 
 
1:50 PM - 2:50 PM
Lab: Building Your Verification Workflow
 

•    Participants create verification checklists tailored to their practice area
•    Apply verification protocols to sample AI outputs (pre-generated examples with embedded errors for participants to catch):
–    Hallucinated citations
–    Subtle bias in legal analysis
–    Confidentiality leakage
–    Outputs that overstep professional judgment boundaries
•    Practice documentation: participants complete verification records for their reviewed outputs
•    Peer review: exchange outputs and checklists with a partner to test each other’s verification systems
•    Deliverable: Personalized verification checklist and documentation template

2:50 PM - 3:50 PM
Lab: Integrated Capstone Exercise
 

•    Full-cycle exercise: participants work through a complete AI-assisted legal task from start to finish:
–    Select a realistic legal task from their own practice
–    Design a GATEPAS prompt
–    Generate output using AI (live, if technology permits; from pre-generated examples if not)
–    Apply their verification workflow
–    Document the entire process
•    Peer review and feedback: small groups review each other’s complete cycle
•    Group debrief: common challenges, insights, and strategies that emerged
•    Forward-looking: connecting workshop skills to Architect Series opportunities
•    Deliverable: One complete, documented AI-assisted work product participants can use as a model going forward