Tag: claude-prompts

  • The 5-Prompt Sequence for First-Pass Contract Review with Claude or GPT

    Five prompts, run in order, will get you a structured first-pass review of a third-party contract before you touch a redline — here’s the exact sequence, verbatim, with notes on where it holds and where it falls apart.

    Third-party paper is the friction point most solo and small-firm lawyers handle the same way they always have: read the whole thing, flag as you go, hope you caught everything. That works. It also takes two to four hours on a mid-length MSA. This sequence hands the first pass to Claude or GPT-4o, extracts structured outputs at each step, and leaves you doing the one thing AI still can’t do — judging what the risk actually means for your specific client. The prompts were built for contracts in the 10–40 page range. Above 40 pages, read the context window notes at the end.

    How these prompts were chosen

    Each prompt in the sequence produces a discrete output that feeds the next one. They’re ordered to mirror what an experienced contracts lawyer actually does: orient (what does this contract require?), diagnose (what’s missing or sloppy?), triage risk (what can hurt the client?), calibrate (how bad is it given the client’s position?), then document (what do I tell the client and opposing counsel?). Skipping steps or running them out of order produces muddier results. Run them in a single long conversation thread so each prompt inherits the prior context — don’t start a new chat between steps.
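    If you eventually run this sequence through the API instead of a chat window, the same inherit-the-context rule applies: accumulate every turn in one message list. This is a minimal sketch of that threading pattern, with the completion function injected so the sketch isn't tied to any one SDK; the prompt strings stand in for the five prompts below.

```python
from typing import Callable

def run_sequence(prompts: list[str],
                 complete: Callable[[list[dict]], str]) -> list[dict]:
    """Run the prompts in order inside ONE conversation thread.

    `complete` takes the full message history and returns the model's
    reply text. In practice it would be a thin wrapper around the
    Anthropic or OpenAI chat API; it is injected here so the pattern
    stays visible without assuming a specific client library.
    """
    messages: list[dict] = []
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        reply = complete(messages)  # the model sees every prior turn
        messages.append({"role": "assistant", "content": reply})
    return messages
```

    Resetting the `messages` list between prompts is the API equivalent of opening a new chat: prompt 4's profile comparison would have nothing to compare against.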

    1. Extract obligations and deadlines

    Run this first. Before you can assess risk, you need a clean inventory of what the contract actually obligates each party to do and when. Paste the full contract text immediately after this prompt in the same message.

    You are a contract analysis assistant. I am going to paste a contract below. Read the entire contract carefully.
    
    Your task: Extract every obligation, right, and deadline from this contract. Organize your output as follows:
    
    1. OBLIGATIONS — MY CLIENT: A bulleted list of every affirmative obligation placed on [PARTY A / insert your client's role, e.g., "the Vendor" or "the Licensee"]. For each obligation, note the section number and any triggering condition.
    
    2. OBLIGATIONS — COUNTERPARTY: Same format for the other party.
    
    3. DEADLINES AND NOTICE PERIODS: A separate table with three columns — Event, Deadline or Notice Period, Section Reference. Include payment terms, renewal windows, termination notice periods, cure periods, and any other time-sensitive triggers.
    
    4. UNCLEAR OR AMBIGUOUS OBLIGATIONS: List any obligation where the responsible party, timing, or scope is not clearly defined. Quote the relevant language.
    
    Do not summarize the contract overall. Do not give legal advice. Output only the structured lists above.
    
    [PASTE CONTRACT TEXT HERE]

    Expect 400–800 words of output on a typical 20-page SaaS or services agreement. If the model starts summarizing instead of listing, add “Do not write prose summaries. Use only bullet points and tables” to the top of the prompt. On Claude 3.5 Sonnet, the table formatting holds well. On GPT-4o, you may get a looser structure — add “Use markdown tables” if you’re in a canvas or interface that renders them.

    2. Identify boilerplate gaps and missing definitions

    Standard third-party paper often omits definitions that matter to your client’s specific situation, or uses defined terms inconsistently. This prompt catches that before you get to substantive risk review.

    Now review the same contract for structural and definitional issues.
    
    Your task:
    
    1. MISSING DEFINITIONS: List every capitalized term that is used in the contract body but is not defined in the Definitions section (or anywhere in the contract). For each, quote one sentence where the term appears.
    
    2. INCONSISTENT USAGE: Identify any term that appears to be used with different meanings or scope in different sections. Quote both instances.
    
    3. MISSING STANDARD PROVISIONS: Flag any of the following that are absent from the contract: governing law clause, dispute resolution clause, entire agreement / integration clause, amendment procedure, assignment restriction, force majeure, notice provision with contact details, counterparts / electronic signature clause. State clearly which are missing and which are present.
    
    4. INTERNALLY INCONSISTENT TERMS: Flag any place where two clauses appear to conflict with each other. Quote both clauses and identify the section numbers.
    
    Output only the structured lists above. Do not summarize the contract.

    This prompt runs in the same thread — the model already has the contract text from prompt 1. You don’t need to re-paste. If you’re hitting context limits on a long MSA, paste only the definitions section and the first 10 pages before running this prompt, then run it again on the back half. You’ll lose cross-document comparison on the second run, which matters most for item 4 above.
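    If you split long contracts regularly, a rough page-based chunker saves the manual copy work. The 3,000-characters-per-page figure is an assumption for plain-text legal prose, not a measured constant; adjust it against your own documents.

```python
CHARS_PER_PAGE = 3000  # rough assumption for plain-text legal prose

def split_contract(text: str, pages_per_chunk: int = 20) -> list[str]:
    """Split contract text into roughly page-sized chunks, breaking
    only on blank-line paragraph boundaries so no clause is cut
    mid-sentence."""
    limit = pages_per_chunk * CHARS_PER_PAGE
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > limit:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

    Run the sequence on the first chunk, then prompts 1 and 3 on the rest, as described above.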

    3. Flag risk-shifting clauses

    This is the prompt that does the heaviest lifting. It pulls every clause that shifts financial or legal exposure between the parties, with no editorial spin — just extraction and quotation. You apply the judgment in the next step.

    Now analyze the contract for risk-shifting provisions. Cover each of the following categories. For each clause you identify, quote the exact language, cite the section number, and write one neutral sentence describing what the clause does. Do not characterize the clause as good or bad. Do not advise.
    
    1. INDEMNIFICATION: All indemnification obligations — who indemnifies whom, for what triggers, and whether there are carve-outs for the indemnifying party's own negligence or willful misconduct.
    
    2. LIMITATION OF LIABILITY: Any cap on damages (include the cap amount or formula if stated), any exclusion of consequential or indirect damages, and any carve-outs to those limitations (e.g., IP infringement, fraud, gross negligence).
    
    3. IP OWNERSHIP AND ASSIGNMENT: Any clause addressing ownership of work product, deliverables, inventions, or improvements. Note whether the clause is a present assignment ("hereby assigns") or an agreement to assign ("agrees to assign"). Note any carve-outs for pre-existing IP or background IP.
    
    4. REPRESENTATIONS AND WARRANTIES: List all representations and warranties made by each party. Flag any that are qualified by "knowledge" or "materiality" and any that are conspicuously absent (e.g., no warranty of fitness for purpose, no warranty of non-infringement).
    
    5. TERMINATION RIGHTS: All termination triggers for each party — termination for cause, termination for convenience, termination for insolvency. Note any asymmetry (e.g., one party can terminate for convenience, the other cannot).
    
    6. AUTO-RENEWAL AND PRICE ESCALATION: Any automatic renewal provision, any price escalation formula, and the notice window required to prevent auto-renewal.
    
    Output structured lists only. Quote the contract text directly for each item.

    This is the prompt where Claude 3.5 Sonnet earns the extra cost over GPT-4o-mini. On a heavily negotiated enterprise software agreement with layered carve-outs and cross-references, Claude reads the inter-clause relationships more accurately. GPT-4o-mini will catch the obvious caps and indemnity triggers, but it misses carve-outs buried in definitions or in exhibit terms. For a straightforward services agreement, the cheaper model is fine. For a software license with a complex SLA exhibit, use Claude.

    4. Compare against a stated risk profile

    This is where you tell the model which side your client is on and what level of exposure they can absorb. The three profiles below are starting points — edit them to match what you actually know about the client and matter.

    Based on your analysis above, evaluate the contract from the perspective of [INSERT PROFILE — choose one or write your own]:
    
    PROFILE A — FAVORED CLIENT: My client has significant leverage and expects the contract to be tilted in their favor. Flag any provision that is less favorable than market standard for the stronger party. Flag any missing protections a stronger party would normally demand.
    
    PROFILE B — BALANCED: My client is seeking a market-standard, balanced agreement. Flag any provision that materially departs from balanced allocation of risk — in either direction. Note whether the departure favors or disfavors my client.
    
    PROFILE C — VENDOR-FAVORABLE PAPER: My client is the vendor and this is the vendor's own form. My client wants to understand what concessions may be necessary to close deals. Flag any provision that a sophisticated counterparty's counsel is likely to push back on, and note the likely pushback.
    
    For each flagged item:
    - Identify the section number and quote the relevant language.
    - State specifically how it departs from the chosen profile.
    - Rate the priority: HIGH (likely to affect deal economics or create significant exposure), MEDIUM (worth negotiating if time permits), LOW (standard ask that may not move the needle).
    
    Do not give legal advice. Do not recommend specific contract language. Output a prioritized list only.

    The priority ratings are the most useful output here — they let you scope your redline conversation with the client before you spend time drafting. In practice, HIGH items on a balanced-profile review track closely with what experienced contracts lawyers flag first. MEDIUM and LOW ratings are noisier; treat them as a checklist to eyeball, not a final word. Swap in your own profile language if the client’s situation is more specific — for example, a regulated entity with insurance constraints, or a startup with no revenue that can’t backstop an uncapped indemnity.

    5. Generate a redline rationale memo

    The final prompt turns the structured outputs from prompts 3 and 4 into a working document you can hand to a client or use as your own drafting notes before opening the redline. This is not a memo you send without reading — it’s a first draft that saves you the blank-page problem.

    Using the risk analysis and profile comparison above, draft a contract review memo structured as follows. Address the memo to [CLIENT NAME / "the client"] from [YOUR NAME / "reviewing counsel"]. Date it [DATE].
    
    SECTION 1 — OVERVIEW: Two to three sentences summarizing the nature of the agreement, the parties, and the general posture of the draft (e.g., "This is a vendor-favorable SaaS agreement. The draft contains several provisions that would require revision before execution under a balanced risk profile."). Do not be conclusory about legal enforceability.
    
    SECTION 2 — KEY ISSUES FOR CLIENT DECISION: A numbered list of the HIGH-priority items from the risk profile comparison. For each item: state what the current contract says (in plain language, not legal jargon), state why it is flagged as high priority for this client's profile, and state what outcome the client should decide on before redlining begins. Do not draft contract language. Do not advise the client what to decide.
    
    SECTION 3 — NEGOTIATION TARGETS: A numbered list of MEDIUM-priority items formatted the same way as Section 2.
    
    SECTION 4 — ITEMS TO NOTE BUT NOT PRIORITIZE: A brief list of LOW-priority items. One sentence each.
    
    SECTION 5 — QUESTIONS FOR CLIENT BEFORE REDLINE: List any factual questions that need answers before the redline can be completed (e.g., "Does the client have existing IP that should be carved out of the assignment clause?", "What is the client's insurance coverage for the indemnification obligation in Section 9.2?").
    
    Write in plain, direct language. No legal conclusions. No recommendations about what the client should sign or not sign. Format for a professional memo — not bullet-heavy, use short paragraphs within each numbered item.

    Section 5 — the questions list — is often the most valuable output of the entire sequence. It surfaces the gaps between what the contract says and what you don’t yet know about the client’s situation. On one test run against a 28-page IT services agreement, the model generated nine factual questions, seven of which were legitimate blockers to completing the redline. Two were redundant. That’s a usable signal-to-noise ratio for a first draft.

    Notes on using these prompts

    Model choice

    Claude 3.5 Sonnet (claude-3-5-sonnet-20241022 in the API, or Claude.ai Pro at $20/month) handles the nuance in prompts 3 and 4 better than any other model I’ve run this sequence against. It tracks cross-references between clauses and definitions more reliably, and it’s less likely to collapse carve-outs into the main clause when summarizing. GPT-4o is a close second for the same price tier. GPT-4o-mini at roughly one-fifteenth the API cost is useful for running prompt 1 against a stack of contracts in parallel — obligation extraction is mechanical enough that the cheaper model performs acceptably. Don’t use GPT-4o-mini for prompts 3 and 4 on anything above moderate complexity.

    Context window limits

    Claude 3.5 Sonnet’s 200k-token context window handles most MSAs without issue. GPT-4o’s 128k window is adequate for contracts up to roughly 60–70 pages of plain text. Problems start when you add exhibits. A master services agreement with three SOWs, a data processing addendum, and an acceptable use policy can push 100k tokens of contract text alone, leaving little room for the model to hold its own outputs across five prompts. If the contract exceeds 40 pages, split it: run the sequence on the core agreement, then run prompts 1 and 3 separately on each exhibit, and combine the outputs manually before running prompt 5.
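    A quick way to check whether a contract packet will fit before you start pasting: estimate tokens with the common four-characters-per-token heuristic. This is an approximation; real tokenizers vary noticeably on legal prose, so the reserve below is deliberately generous.

```python
def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a common rule of thumb for English prose
    return len(text) // 4

def fits(text: str, window: int, reserve: int = 20_000) -> bool:
    """Check a contract against a context window, leaving `reserve`
    headroom for the model's own outputs, which stay in the same
    conversation across all five prompts."""
    return estimate_tokens(text) + reserve <= window

CLAUDE_WINDOW = 200_000  # Claude 3.5 Sonnet
GPT4O_WINDOW = 128_000   # GPT-4o
```

    If the full packet fails the check, split it per the guidance above rather than hoping the model holds the tail end.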

    Where this workflow breaks

    Heavily amended drafts are the main failure mode. If you paste a contract that has already been through two rounds of negotiation — tracked changes accepted, bracketed alternatives still in, comments embedded — the model will misread the document. It will sometimes treat bracketed alternatives as agreed text, and it will occasionally flag a provision as present when it was in fact struck in a prior round. Clean the document before you paste it: accept all tracked changes you want the model to see, delete all comments, and remove all bracketed alternatives except the current working version. This is a fifteen-minute task that prevents a half-hour of bad output.

    The other break point is contract types the models haven’t seen much of. Highly specialized agreements — certain energy contracts, bespoke financing structures, niche IP licenses with industry-specific custom terms — produce weaker results on prompts 3 and 4 because the model’s sense of “market standard” is thinner. The obligation extraction in prompts 1 and 2 still works on unusual contract types; the risk calibration in prompt 4 gets shakier. Treat the profile comparison output with more skepticism on unfamiliar paper.

    Finally: this sequence reviews a contract. It does not replace judgment about whether specific terms are acceptable for a specific client in a specific transaction. The memo from prompt 5 is a drafting aid, not a deliverable. Read every flagged item against the actual contract text before you act on it.

    Run the sequence once on a low-stakes matter you already know well. Compare the model’s output against your own notes. That calibration exercise — run once — will tell you exactly how much to trust each prompt’s output on your practice area’s typical paper.

    Related reading

  • How to Cut Billable-Hour Friction with AI Time Tracking (No New Software Required)

    You are already doing the work. This workflow makes sure you get paid for it.

    Most solo lawyers aren’t losing billable time because they’re lazy about tracking — they’re losing it because logging happens hours after the work, memory compresses a 40-minute call into a six-word entry, and back-to-back matters blur into a single undifferentiated afternoon. Studies on attorney time capture consistently land in the same neighborhood: 15–30% of billable work never makes it onto an invoice. This workflow fixes that without adding a dedicated time-tracking app to your monthly overhead. You need a transcription tool you may already have (Otter.ai or Fireflies.ai), your existing calendar, and a single Claude prompt you run once at end of day. The output drops into whatever practice management software you already use — Clio, MyCase, PracticePanther, or a spreadsheet if that’s where you are.

    What You’ll Need

    • Otter.ai (Pro plan, $16.99/month) or Fireflies.ai (Pro plan, $18/month) — either works; Fireflies has slightly better Zoom/Teams auto-join, Otter is easier for in-person dictation via phone
    • Claude (claude.ai, Pro plan at $20/month, or API access if you want to automate later) — the prompt below was written and tested on Claude 3.5 Sonnet
    • Your existing calendar (Google Calendar or Outlook) — you’ll export or copy today’s event list
    • Your existing practice management software’s time entry screen — open it to receive the output
    • A matter-code list: a simple text list of your active matters and their billing codes, which you’ll paste into the prompt

    Step 1: Get Transcription Running in the Background

    The entire workflow depends on raw transcript text. Nothing fancy happens here — you are just making sure something is capturing words while you work.

    For calls and video meetings

    Connect Fireflies to your Google Meet, Zoom, or Teams calendar so it auto-joins every meeting. The first time it appears as “Notetaker,” alert participants that the meeting is being transcribed — check your state’s consent rules before you do this at all. One-party consent states give you more latitude on internal calls; two-party states mean you need explicit verbal acknowledgment before the bot stays in the room. Fireflies lets you configure a custom bot name (I use “LFB Notetaker”) so it looks less like a surveillance tool and more like a deliberate choice.

    For in-person work, research, and drafting time

    Open the Otter mobile app and hit record at the start of a drafting session or in-person client meeting. You don’t need to narrate every keystroke. Talking through what you’re doing — “starting review of the indemnification clause in the Smith MSA, flagging the liability cap” — gives Claude enough context to write a real time entry later. Even a 30-second verbal summary at the end of a task (“done with that, probably 45 minutes”) is enough. Otter’s transcripts are available in the app and exportable as plain text.

    Collect transcripts at end of day

    From Fireflies: go to Meetings, select each transcript from today, copy the full text or use the “Export as TXT” option. From Otter: open each conversation, hit the three-dot menu, and export as text. Paste all of today’s transcripts into a single plain-text document. Label each block with a rough time — “10:15 AM — Zoom call” — if the export doesn’t include timestamps. This takes under five minutes once it’s habitual.
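    If your exports land as .txt files in one folder per day, the labeling step can be scripted. The folder layout and filenames here are assumptions; the time label comes from each file's modification time, which is close enough for attribution.

```python
from pathlib import Path
from datetime import datetime

def build_daily_block(folder: str) -> str:
    """Concatenate a day's exported .txt transcripts into one labeled
    plain-text document, stamping each block with the file's
    modification time and name."""
    parts = []
    for f in sorted(Path(folder).glob("*.txt")):
        stamp = datetime.fromtimestamp(f.stat().st_mtime).strftime("%I:%M %p")
        parts.append(f"--- {stamp} - {f.stem} ---\n{f.read_text()}")
    return "\n\n".join(parts)
```

    Paste the returned string straight into the transcripts section of the end-of-day prompt.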

    Step 2: Pull Your Calendar for the Day

    In Google Calendar, click the day view and copy the text of your appointments. In Outlook, use the “Today” view and do the same. You want event names, times, and any notes you added. This is the skeleton the AI uses to attribute time chunks to matters when transcripts are thin or missing. A calendar entry that reads “Garcia deposition prep — 2:00 PM – 4:00 PM” gives Claude a two-hour anchor even if you didn’t record anything during that block.

    Do not skip this step on days when you recorded everything. Calendar entries catch the gaps: the 20-minute call you took off-app, the courthouse run you forgot to narrate, the email sprint that never got a recording started.

    Step 3: Run the End-of-Day Claude Prompt

    Open Claude and paste the following prompt. Fill in the bracketed sections before sending. The prompt is long on purpose — Claude performs substantially better on time-entry tasks when it has explicit formatting rules and examples to follow rather than open-ended instructions.

    You are a legal billing assistant helping a solo attorney draft time entries for the day. You do NOT give legal advice. Your job is to convert raw transcript text and calendar entries into properly formatted billable time entries.
    
    ACTIVE MATTERS (billing codes and short names):
    [PASTE YOUR MATTER LIST HERE — e.g.:
      - 2024-047 / Garcia v. Hendricks (litigation)
      - 2024-061 / Patel Business Formation (transactional)
      - 2024-058 / Nguyen Estate Plan (estate)
      - ADMIN / Non-billable internal tasks]
    
    TODAY'S CALENDAR:
    [PASTE YOUR CALENDAR ENTRIES HERE — include event name, start time, end time, and any notes]
    
    TODAY'S TRANSCRIPTS:
    [PASTE ALL TRANSCRIPT TEXT HERE — label each block with approximate time if possible]
    
    ---
    
    INSTRUCTIONS:
    1. Review the calendar entries and transcripts together.
    2. For each identifiable block of work, draft one time entry in this exact format:
       - Matter: [billing code / matter name]
       - Date: [today's date]
       - Time (hours): [round to nearest 0.1]
       - Description: [one sentence, active voice, specific — what was done, not just "worked on matter"]
    3. If a transcript block clearly belongs to a specific matter, assign it. If you are not certain, flag it as [ATTRIBUTION UNCERTAIN] and explain briefly why.
    4. Do not invent work that is not supported by the calendar or transcripts.
    5. Do not combine entries from different matters into one entry.
    6. After the entries, add a section called "Gaps and Flags" that lists: (a) any calendar blocks with no transcript support, (b) any transcript content you could not attribute to a matter, and (c) any entries where the time estimate feels imprecise.
    7. Keep descriptions under 20 words. Write them in the style used in legal billing — e.g., "Reviewed indemnification clause; drafted revision and sent to client for approval."
    
    OUTPUT FORMAT:
    Return a numbered list of draft time entries followed by the Gaps and Flags section. Do not add commentary between entries.

    Claude will return a numbered list of draft entries and a flags section. The flags section is the part most lawyers skip — don’t. It surfaces the gaps where time walked out the door.

    Step 4: Review, Edit, and Enter

    Claude’s draft entries are a starting point, not a finished product. Plan for a five-to-ten minute review pass. What you’re checking: matter attribution accuracy, time estimates that feel off, and descriptions vague enough to draw a billing dispute.

    Fix attribution errors first

    On multi-matter days, Claude occasionally assigns work to the wrong matter when two clients share an industry or a topic — “reviewed contract clause” can land on the wrong billing code if both your open matters involve contract review. The [ATTRIBUTION UNCERTAIN] flag catches the obvious ones, but scan all entries. You know your matters; Claude doesn’t.

    Adjust time estimates

    Claude derives time from transcript timestamps and calendar blocks. If a calendar block says 60 minutes but you wrapped in 35, change it. If a transcript from a “30-minute check-in” runs 47 minutes of actual content, adjust upward. The AI is giving you a scaffolding, not an invoice.

    Enter into your practice management software

    Copy each approved entry into Clio, MyCase, PracticePanther, or wherever you track time. Most practice management platforms have a quick-add time entry screen that takes under 30 seconds per entry when the description is already written. You are not re-drafting from memory — you are pasting and confirming. That is the entire efficiency gain.

    Step 5: Build the Habit Loop

    This workflow produces diminishing results if transcripts are inconsistent. The reliable version runs every single day, not just on busy ones. Set a recurring calendar event at 5:00 PM — “Time entry review, 10 min.” That block keeps the habit alive. After two weeks it compresses to six minutes. After a month the transcript collection is reflexive and the prompt run takes under four minutes of active attention.

    If you want to reduce the manual copy-paste, Fireflies has a Zapier integration that can push transcripts to a Google Doc automatically. You can then keep a running daily doc and paste the whole thing into Claude at end of day rather than exporting individual transcripts. That setup takes about 20 minutes to configure once and saves two to three minutes daily.
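    The prompt assembly itself is also scriptable once the pieces live in files. This sketch assumes you have saved a variant of the Step 3 prompt with `{matters}`, `{calendar}`, and `{transcripts}` placeholders; those placeholder names are an assumption, not part of the prompt as written above.

```python
from pathlib import Path

def build_billing_prompt(template_path: str, matters_path: str,
                         calendar_text: str, transcripts_text: str) -> str:
    """Fill a saved billing-prompt template with today's inputs.

    str.format raises KeyError on a missing placeholder, which is the
    failure mode you want: a silently blank section would let the
    model guess instead of flagging a gap.
    """
    template = Path(template_path).read_text()
    return template.format(
        matters=Path(matters_path).read_text(),
        calendar=calendar_text,
        transcripts=transcripts_text,
    )
```

    From there, sending the result to the API instead of pasting it into claude.ai is a one-call change.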

    Where This Breaks

    Phone calls without recording consent. This is the most common gap. If you practice in a two-party consent state and forget to get acknowledgment before a call, you get no transcript for that call. The calendar entry will show up in the Gaps and Flags section, but Claude can only estimate — it has no content to work from. The fix is a verbal habit: “Just so you know, I may be recording this call for my notes — is that okay?” said in the first 15 seconds. If a client declines, take a 30-second voice memo immediately after you hang up describing what was covered.

    Multi-matter days with thin context. When you have five active matters and a day full of short, topic-overlapping calls, Claude’s attribution guesses degrade. “Discussed indemnification” does not uniquely identify a matter when you have three open transactional files. Narrating the client name or matter reference number into your voice notes at the start of each session eliminates most of this. “Starting Garcia call” at the top of a transcript is enough context for reliable attribution.

    Transcription errors on legal terms. Both Otter and Fireflies occasionally garble case names, statute citations, and proper nouns. “Promissory estoppel” becomes “promissory a stopple.” This matters less than you might think for time entries — the AI is extracting intent and duration, not quoting the transcript verbatim — but it can confuse attribution when a client name gets mangled. Scan the flags section; that’s where garbled attributions surface.

    Privacy and confidentiality obligations. Transcripts contain client information. Otter and Fireflies store data on their servers. Before you run this workflow, check your state bar’s guidance on cloud storage of client data and review each vendor’s data processing terms. Claude processes data through Anthropic’s API; the Pro plan’s privacy settings default to not training on your inputs, but confirm that in your account settings before pasting client-identifying information. Some attorneys use matter codes rather than client names in transcripts specifically to reduce exposure — a reasonable precaution.

    What This Saves You

    The honest estimate: for a solo billing 25–30 hours per week, recovering 15–20% of previously lost time means three to five additional billable hours per week. At $250/hour, that is $750–$1,250 per week that was already earned but never invoiced. The workflow costs under $40/month in tools (if you don’t already have Otter or Fireflies) and roughly 10 minutes per workday once the habit is set. The math is not subtle.

    Beyond revenue, the descriptions Claude drafts are longer and more specific than what most attorneys write under time pressure. Better descriptions mean fewer billing disputes, faster client approval, and a cleaner paper trail if a fee is ever challenged. That is a secondary benefit, but it compounds over a full year of billing files.

    This workflow will not work for every attorney on every day. Phone-heavy practices in two-party consent states will see smaller gains without the voice-memo habit. Attorneys with highly irregular schedules who forget to start recordings will get patchy transcripts and patchy output. But for a solo who runs a relatively consistent day of calls, drafting, and client meetings, this is the most direct path from “I think I billed about six hours today” to “I have eight verified entries in my practice management software and I know exactly what they cover.”

    Related reading

  • Document Automation with Claude and Microsoft Word: A Walkthrough for Small Firms

    This workflow turns a single Word clause library and a Claude prompt template into a repeatable drafting machine — cutting 10 to 20 minutes off every engagement letter, NDA, demand letter, and fee agreement you produce.

    The workflow is built for solo attorneys and firms of two to ten lawyers who are drafting the same five to eight document types repeatedly and doing it mostly by hand. You don’t need a document automation platform, a monthly SaaS subscription, or a developer. You need a Claude account (the Pro tier works fine), Microsoft Word, and about two hours to set it up the first time. After that, each document runs in under five minutes of AI time, plus your review pass.

    What you’ll need

    • Claude Pro ($20/month at claude.ai) — the claude.ai web interface is sufficient; API access is optional but speeds things up if you want to go further later.
    • Microsoft Word — desktop version, Microsoft 365 or a perpetual license. The workflow works with the web version but the macro/style features described below require the desktop app.
    • One master clause library document — a single .docx file you’ll build in Step 1.
    • A plain-text intake form — a short list of matter variables (client name, jurisdiction, date, deal type, etc.) you fill out before each run. A Word table or a Notepad file both work.

    Step 1: Build your clause library in one Word document

    The clause library is the foundation. Without it, Claude drafts from general training data — which produces serviceable but generic language you’ll spend time rewriting anyway. With it, Claude assembles from clauses you’ve already vetted.

    Create a new Word document called CLAUSE_LIBRARY_MASTER.docx. Organize it with Word Heading 1 styles for document type and Heading 2 for each clause. A minimal starting library looks like this:

    • Engagement Letters: scope of representation, fee structure (hourly / flat / contingency variants), billing cycle and invoice terms, communication expectations, termination by client, termination by firm, conflict waiver carve-out, file retention notice.
    • NDAs: definition of confidential information, exclusions, permitted disclosures, term and termination, return/destruction of materials, remedies clause, governing law placeholder.
    • Demand Letters: opening statement of representation, factual background placeholder, legal basis paragraph (tort / contract / statutory variants), demand and deadline paragraph, reservation of rights, closing.
    • Fee Agreements: scope reference, rate schedule, retainer mechanics, billing increment, late payment, lien notice, dispute resolution over fees.

    Each clause entry should be the actual text you’d use — not a description of it. Pull from your best current templates. Flag jurisdiction-specific language with a bracketed tag like [JURISDICTION: CA only] or [JURISDICTION: TX only]. That tag will matter in Step 3.

    Keep every clause under 150 words. Long multi-part clauses should be split. When you paste library content into a Claude prompt later, shorter chunks give the model cleaner assembly instructions.
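    If you're comfortable running a short script, here's a minimal sketch (Python, standard library only) that flags over-length clauses in a plain-text export of the library. The "## " heading marker is an assumption — adjust it to match however you export headings from Word.

```python
import re

# Hypothetical plain-text export of the clause library, with each
# Heading 2 clause title prefixed by "## " (an assumption -- match
# this to your own export format).
LIBRARY_TEXT = """\
## Fee structure (hourly)
The Firm will bill CLIENT_NAME at an hourly rate of FEE_STRUCTURE...
## Termination by client
CLIENT_NAME may terminate this engagement at any time upon written notice...
"""

def clause_word_counts(text):
    """Split the export on '## ' headings and count words per clause."""
    counts = {}
    # Each chunk begins with the heading line, followed by the clause body.
    for chunk in re.split(r"^## ", text, flags=re.M)[1:]:
        heading, _, body = chunk.partition("\n")
        counts[heading.strip()] = len(body.split())
    return counts

for name, words in clause_word_counts(LIBRARY_TEXT).items():
    flag = "  <-- over 150 words, split it" if words > 150 else ""
    print(f"{words:4d}  {name}{flag}")
```

    Run it whenever you add clauses; anything flagged gets split before it goes into a prompt.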

    Step 2: Create your intake variable sheet

    Before you run any prompt, fill out a matter intake sheet. This is just a list of variables. Keep it in a Word table at the top of each new matter folder, or in a pinned Notepad file you overwrite each time. The fields below cover all four document types — you’ll only use the relevant subset per document run.

    • CLIENT_NAME
    • CLIENT_ENTITY_TYPE (individual / LLC / corporation / etc.)
    • MATTER_TYPE (engagement letter / NDA / demand letter / fee agreement)
    • JURISDICTION (state)
    • GOVERNING_LAW_STATE (if different from jurisdiction)
    • ATTORNEY_NAME
    • FIRM_NAME
    • DATE
    • FEE_STRUCTURE (hourly at $X / flat fee of $X / contingency at X%)
    • BILLING_INCREMENT (e.g., 0.1 hour)
    • RETAINER_AMOUNT
    • OPPOSING_PARTY (demand letters only)
    • CLAIM_SUMMARY (demand letters only — two to four sentences, plain language)
    • DEMAND_AMOUNT (demand letters only)
    • RESPONSE_DEADLINE (demand letters only)
    • NDA_PARTIES (both party names and entity types)
    • NDA_PURPOSE (one sentence)
    • NDA_TERM (months/years)
    • SPECIAL_INSTRUCTIONS (anything that overrides standard clauses)

    Filling this out takes two to three minutes. That time investment is what makes the Claude output usable on the first pass rather than the third.
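    If you keep the intake sheet as plain KEY: value pairs, a few lines of Python can render it into the exact MATTER VARIABLES block the Step 3 prompt expects. This is a sketch — the field values below are placeholders, not real matter data.

```python
# Intake sheet kept as simple KEY: value pairs. Field names mirror
# the list above; the values here are illustrative placeholders.
intake = {
    "CLIENT_NAME": "Jane Doe",
    "CLIENT_ENTITY_TYPE": "individual",
    "MATTER_TYPE": "engagement letter",
    "JURISDICTION": "CA",
    "FEE_STRUCTURE": "hourly at $350",
    "SPECIAL_INSTRUCTIONS": "none",
}

def matter_variables_block(fields):
    """Render the fields exactly as the prompt template expects them."""
    return "\n".join(f"{key}: {value}" for key, value in fields.items())

print("MATTER VARIABLES:")
print(matter_variables_block(intake))
```

    The payoff is consistency: every run feeds Claude the same field names in the same format, which makes the variable-replacement rule in the prompt more reliable.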


    Step 3: The Claude prompt template

    This is the core of the workflow. The prompt does three things: it tells Claude what document to produce, it feeds in your vetted clause library text, and it tells Claude exactly how to handle anything the library doesn’t cover. Run this at claude.ai with Claude 3.5 Sonnet (the default model as of mid-2025). Paste the filled-in intake variables where indicated.

    You are a document drafting assistant for a law firm. Your job is to assemble a [MATTER_TYPE] using the clause library I provide below. Follow these rules exactly:
    
    1. Use ONLY the clauses I provide in the CLAUSE LIBRARY section. Do not invent new legal language.
    2. Where a clause contains a [JURISDICTION: XX only] tag, include that clause ONLY if the jurisdiction in the matter variables matches. Otherwise, omit it and note the omission in brackets at the end of the document.
    3. Wherever you see a variable in ALL_CAPS in the clause text (e.g., CLIENT_NAME, FEE_STRUCTURE), replace it with the corresponding value from the MATTER VARIABLES section below.
    4. If the clause library does not contain a clause needed for this document type, insert a bracketed placeholder: [CLAUSE NEEDED: describe what's missing] — do not write the clause yourself.
    5. Output the assembled document in clean paragraph form, ready to paste into Word. Use clear section headings. Do not include commentary, footnotes, or explanations in the body — those go in a separate DRAFTING NOTES section at the end.
    6. After the document body, include a DRAFTING NOTES section listing: (a) any jurisdiction-specific clauses that were omitted because the library didn't have a match, (b) any [CLAUSE NEEDED] placeholders you inserted, (c) any variable fields you could not fill because the intake form was incomplete.
    
    ---
    
    MATTER VARIABLES:
    CLIENT_NAME: [paste value]
    CLIENT_ENTITY_TYPE: [paste value]
    MATTER_TYPE: [paste value]
    JURISDICTION: [paste value]
    GOVERNING_LAW_STATE: [paste value]
    ATTORNEY_NAME: [paste value]
    FIRM_NAME: [paste value]
    DATE: [paste value]
    FEE_STRUCTURE: [paste value]
    BILLING_INCREMENT: [paste value]
    RETAINER_AMOUNT: [paste value]
    [add or remove fields as relevant to this document type]
    SPECIAL_INSTRUCTIONS: [paste value or "none"]
    
    ---
    
    CLAUSE LIBRARY:
    [Paste the relevant section(s) of your CLAUSE_LIBRARY_MASTER.docx here. For an engagement letter, paste the Engagement Letters section. For an NDA, paste the NDA section. Keep the Heading 2 labels so Claude can reference them.]
    
    ---
    
    Produce the [MATTER_TYPE] now.

    A few notes on this prompt. The “do not invent new legal language” instruction is the most important line. Without it, Claude will helpfully fill gaps with plausible-sounding clauses that you haven’t reviewed. The bracketed placeholder approach — [CLAUSE NEEDED: describe what's missing] — surfaces those gaps cleanly so your review pass catches them immediately.

    The DRAFTING NOTES section at the end is genuinely useful. It acts as a checklist. If Claude flags “omitted retainer lien notice — no California version in library,” that’s your signal to add a California version to the clause library before the next matter.

    Step 4: Move the output into Word and run your review pass

    Claude outputs clean plain text. Copy the document body (everything above the DRAFTING NOTES section) and paste it into a new Word document using the Keep Text Only paste option (Paste Special → Unformatted Text does the same thing), then apply your firm’s styles. This takes about 90 seconds.

    If your firm uses a branded Word template (.dotx file), open that template first, paste into it, and your headers, fonts, and margins apply automatically. For firms that haven’t built a .dotx template yet: build one. It takes an hour once and saves formatting time on every document you produce.

    Your review pass has three specific jobs. First, read every clause that carries a jurisdiction tag and confirm it’s correct for this matter — Claude can mis-match these if your intake variables are ambiguous. Second, read every line that replaced a variable and confirm the substitution makes grammatical and substantive sense. “The firm shall bill CLIENT_NAME at an hourly rate” assembles correctly; “The client, a LLC, agrees” does not and needs a quick fix. Third, work through Claude’s DRAFTING NOTES checklist and resolve every item before the document leaves your desk.

    Do not skip the review pass. This workflow does not produce final documents. It produces reviewed-ready drafts — which is still a meaningful time compression compared to starting from scratch or hunting through old files for the right template version.
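    The second review-pass job — checking variable substitutions — can be partly mechanized. Here's a sketch that scans the draft text for intake variable names that never got replaced; the variable set and draft string are illustrative, and it only catches verbatim leftovers, not grammatical misfits like "a LLC."

```python
import re

# Variables from your intake sheet that should never survive into a
# finished draft. Extend this set to match your own field list.
VARIABLE_NAMES = {"CLIENT_NAME", "FEE_STRUCTURE", "RETAINER_AMOUNT", "BILLING_INCREMENT"}

def unfilled_variables(draft_text):
    """Return any intake variables still present verbatim in the draft."""
    tokens = set(re.findall(r"\b[A-Z][A-Z_]{3,}\b", draft_text))
    return sorted(tokens & VARIABLE_NAMES)

draft = "The Firm shall bill Jane Doe at hourly at $350, with a retainer of RETAINER_AMOUNT."
print(unfilled_variables(draft))  # -> ['RETAINER_AMOUNT']
```

    An empty list doesn't mean the substitutions are correct — only that none were skipped outright. The grammatical check is still yours.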

    Step 5: Version control without extra software

    Version control for this workflow is low-tech by design. In your matter folder, save each Claude-generated draft with a filename convention: CLIENTNAME_DOCTYPE_v1_YYYYMMDD.docx. When you complete your review pass and make edits, save as v1-reviewed. When the document goes out, save as v1-final. If you revise after client feedback: v2.

    Also save the prompt you ran — paste it into a _PROMPT_LOG.txt file in the same folder. This takes ten seconds and gives you a complete record of what input generated what output. If you ever need to explain why a clause appeared in a document, you have the paper trail.

    For the clause library itself, treat it like source code. Save a dated backup whenever you add or change a clause: CLAUSE_LIBRARY_MASTER_20250610.docx. Keep the last three versions. You’ll want to know what the library looked like when you drafted a document six months ago.
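    The dated-backup convention is simple enough to script. This sketch derives the backup filename from the master file and today's date; the path is illustrative, so point it at wherever the real file lives before uncommenting the copy line.

```python
from datetime import date
from pathlib import Path
import shutil

# Illustrative path -- replace with the real location of your library.
MASTER = Path("CLAUSE_LIBRARY_MASTER.docx")

def backup_name(master, on=None):
    """CLAUSE_LIBRARY_MASTER.docx -> CLAUSE_LIBRARY_MASTER_20250610.docx"""
    stamp = (on or date.today()).strftime("%Y%m%d")
    return master.with_name(f"{master.stem}_{stamp}{master.suffix}")

# shutil.copy2(MASTER, backup_name(MASTER))  # uncomment on a real file
print(backup_name(MASTER, on=date(2025, 6, 10)))  # CLAUSE_LIBRARY_MASTER_20250610.docx
```

    Pruning to the last three versions stays a manual step — deliberately, since deleting backups is the one thing you don't want automated.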

    Where this breaks

    The single biggest failure mode is jurisdiction-specific clause gaps. Claude will follow the instruction to omit unmatched clauses and flag them — but only if your library tagged them correctly in the first place. If you built the library from California templates and you’re running a Texas matter, Claude may not know that what it’s assembling is California-only language unless you tagged it. Starting with a 50-state scope is unrealistic. Start with your primary jurisdiction and explicitly mark everything else as out-of-scope until you’ve added it.

    Variable substitution breaks on edge cases. A client who is a single-member LLC doing business under a DBA will confuse a simple variable replacement in ways you’ll catch only on review. Entities with long names break sentence flow. Joint representation (two clients, one engagement letter) requires a clause the basic library probably doesn’t handle. Build variants for those scenarios rather than expecting the prompt to figure them out.

    The prompt length has a ceiling. Claude 3.5 Sonnet handles a context window large enough for this workflow comfortably, but if you paste an entire 40-clause library plus a long intake form, output quality starts to slip — Claude may skip clauses or truncate the DRAFTING NOTES. Keep each library section to the clauses actually relevant to the document type you’re drafting. Don’t paste the whole library for an NDA run.
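    If trimming the library by hand before each run feels error-prone, a short script can pull just one section out of a plain-text export. This sketch assumes top-level document types are marked with "# " and clauses with "## " — both assumptions to adjust to your export format.

```python
# Hypothetical plain-text export of the library: "# " marks a
# document-type section, "## " marks a clause within it.
LIBRARY = """\
# Engagement Letters
## Scope of representation
...clause text...
# NDAs
## Definition of confidential information
...clause text...
## Term and termination
...clause text...
"""

def section_for(library_text, section_name):
    """Return the lines of one '# '-level section, headings included."""
    keep, out = False, []
    for line in library_text.splitlines():
        if line.startswith("# ") and not line.startswith("## "):
            keep = line[2:].strip() == section_name
        if keep:
            out.append(line)
    return "\n".join(out)

print(section_for(LIBRARY, "NDAs"))  # NDA section only, Engagement Letters excluded
```

    Paste the result — not the whole file — into the CLAUSE LIBRARY section of the prompt.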

    Copy-paste friction is real. This workflow is faster than starting from scratch, but it’s not as fast as a purpose-built automation tool like HotDocs or Documate. If you’re producing more than 30 templated documents per month, the manual copy-paste steps add up and you should evaluate a proper document assembly platform instead. This workflow is the right fit for firms producing five to twenty templated documents per month who don’t want to pay platform fees or manage a separate system.

    Finally: Claude hallucinates. It does so less when constrained to a provided clause library, but it can still produce grammatically smooth sentences that say something slightly different from what your clause says. The review pass is not optional.

    What this saves you

    For a straightforward engagement letter — one client, one matter type, your primary jurisdiction — expect to spend two to three minutes filling out the intake form, one minute running the prompt, and five to eight minutes on the review pass. That’s under twelve minutes total, compared to a realistic twenty-five to thirty-five minutes pulling an old template, editing it, catching holdover text from the prior client, and formatting it correctly. The savings are in the middle: no hunting for the right old file, no manually replacing every instance of the prior client’s name, no re-applying styles.

    Demand letters save the most time because the factual background section — which you write yourself in the intake form as a plain-language summary — feeds directly into a structured output, replacing the blank-page problem. NDA runs are the fastest because the clause count is low and variable substitution is clean. Fee agreements run close to engagement letters in time.

    Over a week of ten to fifteen templated documents, the workflow realistically returns ninety minutes to two hours. That’s not dramatic. It’s also not nothing — it’s a full billing hour or more, every week, recovered from administrative drafting.

    The clause library also has a side benefit that doesn’t show up in time savings: it forces you to standardize. Firms that run this workflow for two months typically discover they had four slightly different versions of their termination clause floating across old templates. Consolidating them into the library is the kind of housekeeping that improves every document going forward, not just the ones produced by this workflow.

    Start with one document type, build out the library section for it, and run ten matters through it before you add the next type. The setup investment is real; spread it out and you’ll actually finish it.

    Related reading

  • The AI-Powered Client Intake Workflow Every Solo Lawyer Should Steal

    The AI-Powered Client Intake Workflow Every Solo Lawyer Should Steal

    A 30-minute intake call produces a structured matter file in under five minutes of editing — if you wire up the right three tools before the call starts.

    This workflow is built for solo lawyers and firms of two to five attorneys who are personally running their own intake. You take the call, you open the matter, you chase the conflict check. Every step is manual and each one costs time you don’t have. The workflow below connects a structured intake form, a call transcription tool, and a Claude prompt to collapse that 30-minute process into about five minutes of cleanup. I’ve written it so you can implement it in an afternoon. The total recurring cost runs between $20 and $40 per month depending on which tools you already pay for.

    What You’ll Need

    • Intake form tool: Typeform (free tier works; paid plans start at $25/month) or Jotform (free tier available). Either gives you a shareable link you send before the call.
    • Transcription tool: Otter.ai (Pro plan, $16.99/month) or Fireflies.ai (Pro plan, $18/month). Both join video calls automatically and produce a searchable transcript within minutes of the call ending.
    • Claude: Claude.ai Pro ($20/month) or API access via Anthropic. Claude 3.5 Sonnet handles long transcripts without truncating the way shorter-context models do.
    • Practice management software: Clio Manage or MyCase. You’ll paste the output of the Claude prompt into a new matter note. No native integration required — this is copy-paste, not automation.

    Step 1: Build Your Pre-Call Intake Form

    Send a form link 24 hours before the scheduled call. The form does two things: it primes the prospective client to think clearly before you talk, and it gives you structured data that the Claude prompt will pull from directly.

    Fields to include

    • Full legal name
    • Date of birth
    • Phone and email
    • Adverse party name(s) — this is your conflict-check input
    • Matter type (dropdown: family, estate planning, business formation, real estate, employment, other)
    • Brief description of the situation (open text, 500-character limit)
    • Relevant dates (incident date, deadlines, filing dates they’re aware of)
    • Prior attorneys on this matter (yes/no + name field if yes)
    • How did you hear about us

    Keep the form under ten fields. Longer forms get abandoned. The goal is names, adverse parties, and a rough description — everything else comes out in the call.

    In Typeform, turn on email notifications so the completed response lands in your inbox before the call starts. In Jotform, the same setting lives under Settings → Emails. Export the response as a PDF and have it open during the call.

    Step 2: Record and Transcribe the Call

    If you’re on Zoom or Google Meet, Otter.ai and Fireflies.ai both join as a bot participant and record automatically once you connect your calendar. For phone calls, Otter’s mobile app records locally and transcribes after the fact. Fireflies handles phone recording through its dial-in number, which is slightly more friction.

    Tell the prospective client at the start of the call that you’re recording for your notes. One sentence is enough: “I record intake calls so I can focus on listening — the recording is just for my internal file.” Most clients don’t object. Check your state bar’s rules on recording consent before you run this call the first time; several states, California among them, require all-party consent for recorded calls.

    After the call ends, Otter delivers a transcript and summary to your inbox within five to ten minutes. Fireflies is slightly faster. Either one produces a searchable text file — that transcript is what you feed to Claude.

    One thing to check: both tools include speaker labels, but they’re imperfect. Otter labels speakers as “Speaker 1” and “Speaker 2” unless you manually assign names. Fireflies does the same. The Claude prompt handles unlabeled speakers fine — just note in the prompt which speaker is the attorney.


    Step 3: Run the Claude Prompt

    Open Claude.ai Pro (or your API interface) and paste the following prompt, then paste the full transcript below it. Do not summarize the transcript yourself first — give Claude the raw text. The prompt is designed to pull structure out of unstructured conversation.

    You are a legal intake assistant helping a solo attorney organize information from a new client intake call. You do not provide legal advice or legal analysis. Your job is to extract and organize factual information from the transcript below into a structured intake brief.
    
    Using only the information in the transcript, produce the following sections:
    
    1. CLIENT INFORMATION
       - Full name
       - Contact information (phone, email) if mentioned
       - Date of birth if mentioned
    
    2. ADVERSE PARTIES
       - List every person, company, or entity the client mentioned as an opposing or adverse party
       - Include any names the attorney should check for conflicts
    
    3. MATTER TYPE AND DESCRIPTION
       - Practice area (as stated or clearly implied)
       - Neutral factual summary of the client's situation in 3-5 sentences. Do not characterize fault, liability, or legal merit. Report what the client described.
    
    4. KEY DATES AND DEADLINES
       - Any specific dates mentioned (incident dates, contract dates, filing dates, court dates)
       - Any deadlines the client is aware of
    
    5. DOCUMENTS MENTIONED
       - Any documents the client referenced (contracts, court filings, notices, deeds, etc.)
    
    6. PRIOR REPRESENTATION
       - Any prior attorneys the client mentioned in connection with this matter
    
    7. OPEN QUESTIONS
       - Information that appears missing or unclear from this intake that the attorney will likely need before opening the matter (do not suggest legal strategy — list informational gaps only)
    
    8. CONFLICT CHECK NAMES
       - A clean list of every proper name and entity name pulled from sections 1 and 2, formatted one per line, ready to copy into a conflict-check search
    
    Format each section with a clear header. Use bullet points within sections. If a section has no information from the transcript, write "Not mentioned in call."
    
    Do not add information not found in the transcript. Do not offer legal opinions. Do not speculate about outcomes.
    
    TRANSCRIPT:
    [paste full transcript here]

    The prompt takes about 90 seconds to run on Claude 3.5 Sonnet with a standard 30-minute transcript. The output is typically 400 to 600 words of clean, structured text.

    Tuning the prompt for your practice area

    If you run a family law practice, add a line to Section 3: “Note any minor children mentioned, their ages, and current custody arrangements as described by the client.” If you do transactional work, add a section for “Entities and Ownership” to capture business names, EINs, or ownership structures the client mentions. The base prompt above is practice-area neutral by design — specialize it once and save the modified version as a text file you reuse.

    Step 4: Merge Form Data With the AI Summary

    Claude’s output covers what was said on the call. Your Typeform or Jotform response covers what the client submitted before the call. These two documents sometimes disagree — the client wrote one adverse party name on the form and mentioned two others on the call. That gap is worth catching before you open the matter.

    Spend two to three minutes reading both documents side by side. Look specifically at: adverse party names (conflict-check section), dates (do the form dates match what was discussed), and matter type. Where they conflict, note it in the Open Questions section of the Claude output before you file it.

    Then copy the combined, lightly edited intake brief into your practice management software. In Clio, open a new Matter, go to the Notes tab, and paste it as a pinned note titled “Initial Intake Brief — [Date].” In MyCase, the equivalent is a new Case Note marked Internal. Either way, the structured brief is now searchable and attached to the matter from day one.

    Run your conflict check using the “Conflict Check Names” list from the Claude output. In Clio, that’s a global search across contacts. In MyCase, use the Conflicts search under the Contacts menu. Because the prompt formats each name on its own line, you can move through the list quickly without reformatting anything.
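    Before running the search, it's worth deduplicating the names list, since Claude will list a party every way the client said it. This sketch normalizes case and punctuation so "ACME CORP" and "Acme Corp." collapse into one entry; the names are illustrative.

```python
# "Conflict Check Names" output, one name per line -- illustrative.
raw_names = """\
Jane Doe
Acme Corp.
ACME CORP
John Q. Smith
jane doe
"""

def normalized_conflict_list(block):
    """Dedupe on a case- and punctuation-insensitive key,
    keeping the first-seen spelling of each name."""
    seen, out = set(), []
    for line in block.splitlines():
        name = line.strip()
        if not name:
            continue
        key = "".join(ch for ch in name.lower() if ch.isalnum())
        if key not in seen:
            seen.add(key)
            out.append(name)
    return out

print(normalized_conflict_list(raw_names))  # -> ['Jane Doe', 'Acme Corp.', 'John Q. Smith']
```

    Note this only collapses duplicates — it won't catch a mis-transcribed name, which is the separate problem discussed below.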

    Where This Breaks

    The prompt fails predictably in one category: emotionally complex matters where the most important facts are what the client didn’t say clearly. A caller describing a contentious divorce who is guarded, interrupted, or inconsistent will produce a transcript full of fragmentary sentences and topic shifts. Claude will dutifully summarize the fragments — and the summary will read as coherent when the underlying situation is not. You’ll get a clean-looking brief that papers over real ambiguity.

    The fix is partial, not complete. Add this to the Section 7 (Open Questions) prompt instruction: “Note any topics where the client gave contradictory or incomplete information, even if you cannot resolve the contradiction.” That surfaces the gaps, but it doesn’t replace your own read of the transcript for anything emotionally charged — grief, trauma, estrangement, or financial desperation. Read the raw transcript for those matters. The brief is a starting point, not a substitute.

    A second failure mode is proper noun recognition. Otter and Fireflies both mis-transcribe uncommon names — a client named “Dzhokhar” becomes “Joker” in the transcript, which flows through to the conflict-check list. Scan the names list before you run the conflict search. One missed name in a conflict check is a genuine problem; catching it takes 60 seconds.

    Third: this workflow assumes the client completed the pre-call form. When they don’t — which happens with roughly one in four prospective clients in my observation — the merge step in Step 4 collapses to just the Claude output, which is still useful, but the conflict-check list is thinner. You can prompt the client to complete the form during the call or ask for the adverse party names directly. Either way, note in the file that the pre-call form was not received.

    What This Saves You

    The honest estimate: 20 to 25 minutes per new matter. The manual version of this process — handwritten notes, typed summary, conflict-check name assembly — runs 25 to 35 minutes after a 30-minute call for most solo practitioners. The automated version runs five to seven minutes (three minutes reading and editing the Claude output, two minutes on the conflict-check list, two minutes pasting into Clio or MyCase).

    If you take 10 new matters per month, that’s three to four hours returned to billable work or to leaving the office earlier. It also reduces the most common intake error: forgetting to run a conflict check on every name the client mentioned, not just the obvious adverse party. The structured output makes that step harder to skip.

    The pre-call form adds a side benefit that doesn’t show up in time estimates: clients who complete it arrive at the call more organized. The call itself often runs shorter.

    This workflow costs roughly $37 per month in new tool spend if you don’t already pay for any of the components (Typeform free tier + Otter Pro at $16.99 + Claude Pro at $20). If you already have a transcription tool through your video conferencing plan, or you’re already on Claude, the incremental cost is lower. At 10 new matters a month, the math on three reclaimed hours isn’t complicated.

    Build it once on a slow afternoon. Run it on the next intake call. Adjust the prompt after the first three uses when you see what it misses for your specific practice area. The structure is there from day one; the tuning takes a week.

    Related reading