Category: Prompts

Tested prompts for legal-adjacent AI tasks. Copy, paste, refine.

  • The 5-Prompt Sequence for First-Pass Contract Review with Claude or GPT

    Five prompts, run in order, will get you a structured first-pass review of a third-party contract before you touch a redline — here’s the exact sequence, verbatim, with notes on where it holds and where it falls apart.

    Third-party paper is the friction point most solo and small-firm lawyers handle the same way they always have: read the whole thing, flag as you go, hope you caught everything. That works. It also takes two to four hours on a mid-length MSA. This sequence hands the first pass to Claude or GPT-4o, extracts structured outputs at each step, and leaves you doing the one thing AI still can’t do — judging what the risk actually means for your specific client. The prompts were built for contracts in the 10–40 page range. Above 40 pages, read the context window notes at the end.

    How these prompts were chosen

    Each prompt in the sequence produces a discrete output that feeds the next one. They’re ordered to mirror what an experienced contracts lawyer actually does: orient (what does this contract require?), diagnose (what’s missing or sloppy?), triage risk (what can hurt the client?), calibrate (how bad is it given the client’s position?), then document (what do I tell the client and opposing counsel?). Skipping steps or running them out of order produces muddier results. Run them in a single long conversation thread so each prompt inherits the prior context — don’t start a new chat between steps.
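If you run the sequence through the API instead of a chat interface, "one long thread" just means one growing messages list. A minimal Python sketch of that loop, with the actual API call abstracted behind a `send` callable you supply (in practice a thin wrapper around the Claude or GPT client of your choice; the function names here are illustrative, not any SDK's API):

```python
def run_sequence(prompts, contract_text, send):
    """Run a list of prompts as one conversation so each step
    inherits the prior context. `send` is any callable that takes
    the full message history and returns the assistant's reply --
    in practice a thin wrapper around the Claude or GPT API."""
    messages = []
    outputs = []
    for i, prompt in enumerate(prompts):
        # The contract text rides along with the first prompt only;
        # every later prompt sees it via the shared history.
        content = f"{prompt}\n\n{contract_text}" if i == 0 else prompt
        messages.append({"role": "user", "content": content})
        reply = send(messages)
        messages.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    return outputs
```

The key point the code makes explicit: nothing is re-pasted between steps, so if you start a fresh `messages` list mid-sequence, the later prompts lose the contract entirely.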

    1. Extract obligations and deadlines

    Run this first. Before you can assess risk, you need a clean inventory of what the contract actually obligates each party to do and when. Paste the full contract text immediately after this prompt in the same message.

    You are a contract analysis assistant. I am going to paste a contract below. Read the entire contract carefully.
    
    Your task: Extract every obligation, right, and deadline from this contract. Organize your output as follows:
    
    1. OBLIGATIONS — MY CLIENT: A bulleted list of every affirmative obligation placed on [PARTY A / insert your client's role, e.g., "the Vendor" or "the Licensee"]. For each obligation, note the section number and any triggering condition.
    
    2. OBLIGATIONS — COUNTERPARTY: Same format for the other party.
    
    3. DEADLINES AND NOTICE PERIODS: A separate table with three columns — Event, Deadline or Notice Period, Section Reference. Include payment terms, renewal windows, termination notice periods, cure periods, and any other time-sensitive triggers.
    
    4. UNCLEAR OR AMBIGUOUS OBLIGATIONS: List any obligation where the responsible party, timing, or scope is not clearly defined. Quote the relevant language.
    
    Do not summarize the contract overall. Do not give legal advice. Output only the structured lists above.
    
    [PASTE CONTRACT TEXT HERE]

    Expect 400–800 words of output on a typical 20-page SaaS or services agreement. If the model starts summarizing instead of listing, add “Do not write prose summaries. Use only bullet points and tables” to the top of the prompt. On Claude 3.5 Sonnet, the table formatting holds well. On GPT-4o, you may get a looser structure — add “Use markdown tables” if you’re in a canvas or interface that renders them.

    2. Identify boilerplate gaps and missing definitions

    Standard third-party paper often omits definitions that matter to your client’s specific situation, or uses defined terms inconsistently. This prompt catches that before you get to substantive risk review.

    Now review the same contract for structural and definitional issues.
    
    Your task:
    
    1. MISSING DEFINITIONS: List every capitalized term that is used in the contract body but is not defined in the Definitions section (or anywhere in the contract). For each, quote one sentence where the term appears.
    
    2. INCONSISTENT USAGE: Identify any term that appears to be used with different meanings or scope in different sections. Quote both instances.
    
    3. MISSING STANDARD PROVISIONS: Flag any of the following that are absent from the contract: governing law clause, dispute resolution clause, entire agreement / integration clause, amendment procedure, assignment restriction, force majeure, notice provision with contact details, counterparts / electronic signature clause. State clearly which are missing and which are present.
    
    4. INTERNALLY INCONSISTENT TERMS: Flag any place where two clauses appear to conflict with each other. Quote both clauses and identify the section numbers.
    
    Output only the structured lists above. Do not summarize the contract.

    This prompt runs in the same thread — the model already has the contract text from prompt 1. You don’t need to re-paste. If you’re hitting context limits on a long MSA, paste only the definitions section and the first 10 pages before running this prompt, then run it again on the back half. You’ll lose cross-document comparison on the second run, which matters most for item 4 above.
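If you would rather split programmatically than eyeball page counts, here is a rough helper that breaks the contract at blank lines so no clause is cut mid-sentence. The 4-characters-per-token figure is a heuristic for English legal prose, not a real tokenizer count:

```python
def split_contract(text, max_tokens=80_000, chars_per_token=4):
    """Split contract text into sequential chunks that each fit a
    token budget, breaking only at blank lines so no clause is cut
    mid-sentence. chars_per_token is a rough heuristic, not an
    exact tokenizer count."""
    budget = max_tokens * chars_per_token
    chunks, current, size = [], [], 0
    for para in text.split("\n\n"):
        if size + len(para) > budget and current:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += len(para) + 2
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Run the sequence on the first chunk, then prompts 1 and 3 on the remainder, as described above.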

    3. Flag risk-shifting clauses

    This is the prompt that does the heaviest lifting. It pulls every clause that shifts financial or legal exposure between the parties, with no editorial spin — just extraction and quotation. You apply the judgment in the next step.

    Now analyze the contract for risk-shifting provisions. Cover each of the following categories. For each clause you identify, quote the exact language, cite the section number, and write one neutral sentence describing what the clause does. Do not characterize the clause as good or bad. Do not advise.
    
    1. INDEMNIFICATION: All indemnification obligations — who indemnifies whom, for what triggers, and whether there are carve-outs for the indemnifying party's own negligence or willful misconduct.
    
    2. LIMITATION OF LIABILITY: Any cap on damages (include the cap amount or formula if stated), any exclusion of consequential or indirect damages, and any carve-outs to those limitations (e.g., IP infringement, fraud, gross negligence).
    
    3. IP OWNERSHIP AND ASSIGNMENT: Any clause addressing ownership of work product, deliverables, inventions, or improvements. Note whether the clause is a present assignment ("hereby assigns") or an agreement to assign ("agrees to assign"). Note any carve-outs for pre-existing IP or background IP.
    
    4. REPRESENTATIONS AND WARRANTIES: List all representations and warranties made by each party. Flag any that are qualified by "knowledge" or "materiality" and any that are conspicuously absent (e.g., no warranty of fitness for purpose, no warranty of non-infringement).
    
    5. TERMINATION RIGHTS: All termination triggers for each party — termination for cause, termination for convenience, termination for insolvency. Note any asymmetry (e.g., one party can terminate for convenience, the other cannot).
    
    6. AUTO-RENEWAL AND PRICE ESCALATION: Any automatic renewal provision, any price escalation formula, and the notice window required to prevent auto-renewal.
    
    Output structured lists only. Quote the contract text directly for each item.

This is the prompt where Claude 3.5 Sonnet earns the extra cost over GPT-4o-mini. On a heavily negotiated enterprise software agreement with layered carve-outs and cross-references, Claude reads the inter-clause relationships more accurately. GPT-4o-mini will catch the obvious caps and indemnity triggers, but it misses carve-outs buried in definitions or in exhibit terms. For a straightforward services agreement, the cheaper model is fine. For a software license with a complex SLA exhibit, use Claude.

    4. Compare against a stated risk profile

    This is where you tell the model which side your client is on and what level of exposure they can absorb. The three profiles below are starting points — edit them to match what you actually know about the client and matter.

    Based on your analysis above, evaluate the contract from the perspective of [INSERT PROFILE — choose one or write your own]:
    
    PROFILE A — FAVORED CLIENT: My client has significant leverage and expects the contract to be tilted in their favor. Flag any provision that is less favorable than market standard for the stronger party. Flag any missing protections a stronger party would normally demand.
    
    PROFILE B — BALANCED: My client is seeking a market-standard, balanced agreement. Flag any provision that materially departs from balanced allocation of risk — in either direction. Note whether the departure favors or disfavors my client.
    
    PROFILE C — VENDOR-FAVORABLE PAPER: My client is the vendor and this is the vendor's own form. My client wants to understand what concessions may be necessary to close deals. Flag any provision that a sophisticated counterparty's counsel is likely to push back on, and note the likely pushback.
    
    For each flagged item:
    - Identify the section number and quote the relevant language.
    - State specifically how it departs from the chosen profile.
    - Rate the priority: HIGH (likely to affect deal economics or create significant exposure), MEDIUM (worth negotiating if time permits), LOW (standard ask that may not move the needle).
    
    Do not give legal advice. Do not recommend specific contract language. Output a prioritized list only.

    The priority ratings are the most useful output here — they let you scope your redline conversation with the client before you spend time drafting. In practice, HIGH items on a balanced-profile review track closely with what experienced contracts lawyers flag first. MEDIUM and LOW ratings are noisier; treat them as a checklist to eyeball, not a final word. Swap in your own profile language if the client’s situation is more specific — for example, a regulated entity with insurance constraints, or a startup with no revenue that can’t backstop an uncapped indemnity.
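If you run many reviews, the HIGH/MEDIUM/LOW labels are easy enough to pull out of the raw output mechanically. A small sketch, assuming the model kept the rating labels the prompt asked for:

```python
import re

def bucket_by_priority(model_output):
    """Sort the model's flagged items into HIGH/MEDIUM/LOW buckets
    so you can scope the redline conversation. Assumes the model
    kept the rating labels from the prompt; lines without a label
    land in UNRATED for manual review."""
    buckets = {"HIGH": [], "MEDIUM": [], "LOW": [], "UNRATED": []}
    for line in model_output.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.search(r"\b(HIGH|MEDIUM|LOW)\b", line)
        buckets[m.group(1) if m else "UNRATED"].append(line)
    return buckets
```

A non-empty UNRATED bucket usually means the model drifted from the output format and the run is worth a second look.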

    5. Generate a redline rationale memo

    The final prompt turns the structured outputs from prompts 3 and 4 into a working document you can hand to a client or use as your own drafting notes before opening the redline. This is not a memo you send without reading — it’s a first draft that saves you the blank-page problem.

    Using the risk analysis and profile comparison above, draft a contract review memo structured as follows. Address the memo to [CLIENT NAME / "the client"] from [YOUR NAME / "reviewing counsel"]. Date it [DATE].
    
    SECTION 1 — OVERVIEW: Two to three sentences summarizing the nature of the agreement, the parties, and the general posture of the draft (e.g., "This is a vendor-favorable SaaS agreement. The draft contains several provisions that would require revision before execution under a balanced risk profile."). Do not be conclusory about legal enforceability.
    
    SECTION 2 — KEY ISSUES FOR CLIENT DECISION: A numbered list of the HIGH-priority items from the risk profile comparison. For each item: state what the current contract says (in plain language, not legal jargon), state why it is flagged as high priority for this client's profile, and state what outcome the client should decide on before redlining begins. Do not draft contract language. Do not advise the client what to decide.
    
    SECTION 3 — NEGOTIATION TARGETS: A numbered list of MEDIUM-priority items formatted the same way as Section 2.
    
    SECTION 4 — ITEMS TO NOTE BUT NOT PRIORITIZE: A brief list of LOW-priority items. One sentence each.
    
    SECTION 5 — QUESTIONS FOR CLIENT BEFORE REDLINE: List any factual questions that need answers before the redline can be completed (e.g., "Does the client have existing IP that should be carved out of the assignment clause?", "What is the client's insurance coverage for the indemnification obligation in Section 9.2?").
    
    Write in plain, direct language. No legal conclusions. No recommendations about what the client should sign or not sign. Format for a professional memo — not bullet-heavy, use short paragraphs within each numbered item.

    Section 5 — the questions list — is often the most valuable output of the entire sequence. It surfaces the gaps between what the contract says and what you don’t yet know about the client’s situation. On one test run against a 28-page IT services agreement, the model generated nine factual questions, seven of which were legitimate blockers to completing the redline. Two were redundant. That’s a usable signal-to-noise ratio for a first draft.

    Notes on using these prompts

    Model choice

    Claude 3.5 Sonnet (claude-3-5-sonnet-20241022 in the API, or Claude.ai Pro at $20/month) handles the nuance in prompts 3 and 4 better than any other model I’ve run this sequence against. It tracks cross-references between clauses and definitions more reliably, and it’s less likely to collapse carve-outs into the main clause when summarizing. GPT-4o is a close second for the same price tier. GPT-4o-mini at roughly one-fifteenth the API cost is useful for running prompt 1 against a stack of contracts in parallel — obligation extraction is mechanical enough that the cheaper model performs acceptably. Don’t use GPT-4o-mini for prompts 3 and 4 on anything above moderate complexity.
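Because every step re-reads the whole thread, input tokens compound across the five prompts, which is where the cost difference between models actually shows up. A back-of-envelope estimator; the default per-million-token rates are illustrative placeholders, so check current pricing before relying on them:

```python
def sequence_cost(contract_tokens, output_tokens_per_step, steps=5,
                  in_rate=3.00, out_rate=15.00):
    """Rough API cost in dollars for one multi-prompt run. Rates
    are per million tokens and are illustrative placeholders, not
    current pricing. Each step re-reads the whole thread, so input
    tokens grow with every prompt."""
    total_in = total_out = 0
    context = contract_tokens
    for _ in range(steps):
        total_in += context
        total_out += output_tokens_per_step
        context += output_tokens_per_step  # prior output joins the context
    return (total_in * in_rate + total_out * out_rate) / 1_000_000
```

The compounding is why a cheap model for prompt 1 in parallel batches, and a stronger model for the single-thread sequence, is a sensible split.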

    Context window limits

    Claude 3.5 Sonnet’s 200k-token context window handles most MSAs without issue. GPT-4o’s 128k window is adequate for contracts up to roughly 60–70 pages of plain text. Problems start when you add exhibits. A master services agreement with three SOWs, a data processing addendum, and an acceptable use policy can push 100k tokens of contract text alone, leaving little room for the model to hold its own outputs across five prompts. If the contract exceeds 40 pages, split it: run the sequence on the core agreement, then run prompts 1 and 3 separately on each exhibit, and combine the outputs manually before running prompt 5.

    Where this workflow breaks

Heavily amended drafts are the main failure mode. If you paste a contract that has already been through two rounds of negotiation — tracked changes accepted, bracketed alternatives still in, comments embedded — the model will mis-read the document. It will sometimes treat bracketed alternatives as agreed text, and it will occasionally flag a provision as present when it was in fact struck in a prior round. Clean the document before you paste it: accept all tracked changes you want the model to see, delete all comments, remove all bracketed alternatives except the current working version. This is a fifteen-minute task that prevents a half-hour of bad output.

    The other break point is contract types the models haven’t seen much of. Highly specialized agreements — certain energy contracts, bespoke financing structures, niche IP licenses with industry-specific custom terms — produce weaker results on prompts 3 and 4 because the model’s sense of “market standard” is thinner. The obligation extraction in prompts 1 and 2 still works on unusual contract types; the risk calibration in prompt 4 gets shakier. Treat the profile comparison output with more skepticism on unfamiliar paper.

    Finally: this sequence reviews a contract. It does not replace judgment about whether specific terms are acceptable for a specific client in a specific transaction. The memo from prompt 5 is a drafting aid, not a deliverable. Read every flagged item against the actual contract text before you act on it.

    Run the sequence once on a low-stakes matter you already know well. Compare the model’s output against your own notes. That calibration exercise — run once — will tell you exactly how much to trust each prompt’s output on your practice area’s typical paper.

    Related reading

  • 10 ChatGPT Prompts Every Solo Lawyer Should Save (Tested on Real Matters)

    These ten prompts took me from blank page to usable first draft on actual client matters — intake calls, demand letters, deposition prep, and everything in between. Save them now; tweak the variables later.

    Every solo lawyer I talk to has the same complaint: too many tasks, not enough time, and AI tools that sound impressive until you actually try them on a real matter. The prompts below were built for ChatGPT (GPT-4o) and tested across family law, employment, and small-business transactional matters. They are not magic. They produce first drafts, not final work product. But a solid first draft that takes three minutes instead of forty-five minutes is the whole point.

    A few ground rules before you start. Never paste full client names, Social Security numbers, or identifying case details into a public AI tool. Use placeholders like [CLIENT], [OPPOSING PARTY], and [MATTER TYPE]. If your firm uses Microsoft Copilot or a privacy-partitioned ChatGPT Enterprise account, you have more flexibility — but check your bar’s current guidance on client data and AI tools before you do anything. These prompts work best as templates you adapt, not scripts you run verbatim.
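A tiny helper for the placeholder rule, applied locally before anything touches a public tool. The mapping is yours to build per matter; proofread the result, since simple string replacement misses misspellings and nicknames:

```python
def apply_placeholders(text, mapping):
    """Swap real names for placeholders before pasting into a
    public AI tool. `mapping` is your own matter-specific dict,
    e.g. {"Jane Smith": "[CLIENT]"}. Longest names are replaced
    first so a longer name isn't partially masked by a shorter
    one it contains. A convenience, not a guarantee: proofread."""
    for real in sorted(mapping, key=len, reverse=True):
        text = text.replace(real, mapping[real])
    return text
```

Keep the mapping in your matter file so the same placeholders are used consistently across every prompt in the matter.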

    1. Intake Call Summary into a Structured Brief

    When to use it: Right after an intake call. You have rough notes or a transcript from a call-recording tool like Otter.ai or Fireflies. You need a clean, structured brief to open a new matter file.

What to expect: A structured output with labeled sections — parties, key facts, potential claims, documents needed, open questions, and recommended next steps. The model is good at pulling signal from messy notes. It will occasionally hallucinate a “fact” that wasn’t in your notes, so read it against your source before filing it anywhere.

    You are a legal assistant helping a solo attorney organize intake notes.
    
    Below are rough notes from a new client intake call. Convert them into a structured brief with these sections:
    1. Parties (client name placeholder, opposing party placeholder, any other relevant persons)
    2. Core Facts (bullet list, chronological where possible)
    3. Potential Claims or Issues (list only — do not evaluate likelihood)
    4. Documents Mentioned or Needed
    5. Open Questions for Follow-Up
    6. Suggested Next Steps
    
    Do not add facts not present in the notes. Flag anything unclear with [UNCLEAR].
    
    Intake notes:
    [PASTE YOUR NOTES HERE]

Tweaks: Add a seventh section called “Conflicts Check Names” and ask the model to pull every person and entity name mentioned — that feeds directly into prompt #2. If you handle a specific practice area, add “Practice area: [AREA]” so the model can weight its issue-spotting accordingly.

    2. First-Pass Conflict Check from a Party List

    When to use it: You’ve got a new matter and a list of parties. You want a quick cross-reference against your existing client list before your conflicts-check software runs its full scan — or if you don’t have dedicated conflicts software.

    What to expect: The model will flag name matches, near-matches, and related entities. This is a first pass, not a complete conflicts check. Your malpractice carrier and bar rules require a real process — this prompt helps you surface obvious problems faster.

    You are a legal assistant running a first-pass conflicts check for a solo attorney.
    
    New matter parties:
    [LIST ALL PARTIES, ENTITIES, AND KEY PERSONS FROM THE NEW MATTER]
    
    Existing client and adverse party list:
    [PASTE YOUR CURRENT CLIENT/ADVERSE PARTY LIST — USE PLACEHOLDERS IF NEEDED]
    
    Tasks:
    1. Flag any exact name matches between the two lists.
    2. Flag any likely near-matches (similar names, abbreviations, DBAs).
    3. Flag any entities that share a name root with a listed party.
    4. List any names from the new matter that do NOT appear on the existing list (for your records).
    
    Format the output as a table with columns: New Matter Party | Match Found | Match Type | Notes.

Tweaks: This prompt only works as well as the list you feed it. Keep a running CSV of client and adverse party names in a note or document you can paste quickly. If your existing list is long, break it into chunks — GPT-4o’s 128k-token context window is large on paper, but matching accuracy on long pasted lists degrades well before the ceiling.
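For the pure name-matching part, you don't strictly need a model at all: Python's standard library can do a fuzzy first pass locally, so no client names ever leave your machine. A sketch; it will not catch DBAs or corporate affiliates, which is what the prompt (or real conflicts software) is for:

```python
import difflib

def first_pass_conflicts(new_parties, existing_parties, cutoff=0.6):
    """Local fuzzy match between new-matter parties and your
    existing client/adverse list. Returns (new, existing, kind)
    tuples. Runs entirely offline. Catches exact and near name
    matches only -- not DBAs, affiliates, or name changes."""
    hits = []
    lowered = {p.lower(): p for p in existing_parties}
    for party in new_parties:
        matches = difflib.get_close_matches(
            party.lower(), lowered.keys(), n=3, cutoff=cutoff)
        for m in matches:
            kind = "exact" if m == party.lower() else "near"
            hits.append((party, lowered[m], kind))
    return hits
```

Lowering the cutoff surfaces more near-matches at the price of more noise; 0.6 is a starting point, not a calibrated threshold.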

    3. Demand Letter Draft from a Fact Pattern

    When to use it: You have a settled fact pattern and a clear demand amount. You need a professional demand letter drafted before you spend thirty minutes staring at a blank template.

    What to expect: A complete letter with opening statement of representation, fact recitation, legal basis section (labeled as general — you’ll fill in controlling authority), demand, and deadline. The model writes competent prose. It will not cite your jurisdiction’s specific statutes correctly without prompting, so always check cites before sending.

    You are a legal assistant drafting a demand letter for a solo attorney.
    
    Facts:
    [SUMMARIZE THE CORE FACTS — WHO DID WHAT, WHEN, AND WHAT HARM RESULTED]
    
    Jurisdiction: [STATE]
    Practice area: [E.G., EMPLOYMENT / PERSONAL INJURY / CONTRACT]
    Demand amount: $[AMOUNT] or [DESCRIBE RELIEF SOUGHT]
    Response deadline: [NUMBER] days
    
    Draft a professional demand letter. Use formal tone. Include:
    - Opening paragraph identifying the attorney and client (use [ATTORNEY NAME] and [CLIENT] as placeholders)
    - Factual background section
    - Legal basis section — flag where jurisdiction-specific statutes or case law should be inserted with [INSERT AUTHORITY]
    - Clear statement of demand
    - Response deadline and consequence of non-response
    
    Do not invent legal citations. Use [INSERT AUTHORITY] wherever a cite is needed.

    Tweaks: Add “Tone: [firm but professional / aggressive / conciliatory]” to the prompt to shift the letter’s posture. For employment matters, add the employer’s size if known — it affects which statutes apply and the model will note that in the [INSERT AUTHORITY] placeholders.

    4. Deposition Outline from Case Documents

    When to use it: You have a deponent, a set of documents, and not enough time to build a line-by-line outline from scratch. Paste in the relevant excerpts — discovery responses, prior statements, key emails — and let the model draft your question framework.

    What to expect: A topical outline with suggested question areas, document tie-ins, and impeachment flags. The model is strong on organizing themes and weak on jurisdiction-specific deposition procedure. Expect to add foundation questions and objection-anticipation notes yourself.

    You are a legal assistant helping a solo attorney prepare for a deposition.
    
    Deponent: [ROLE — E.G., "Defendant employer's HR director" — no real names]
    Matter type: [E.G., wrongful termination / breach of contract]
    Key issues in dispute: [LIST 3-5 CORE DISPUTED FACTS OR LEGAL ELEMENTS]
    
    Documents provided (paste excerpts below):
    [PASTE RELEVANT EXCERPTS — REDACT IDENTIFYING INFO AS NEEDED]
    
    Create a deposition outline organized by topic. For each topic:
    1. State the goal of that topic section (what you are trying to establish or undermine)
    2. List 5-8 suggested open-ended questions
    3. Note any document the attorney should introduce during that section
    4. Flag any prior statements in the documents that could be used for impeachment
    
    Do not suggest legal strategy. Flag factual inconsistencies in the documents with [INCONSISTENCY NOTE].

    Tweaks: If you have a prior deposition transcript from the same witness in another matter, paste selected excerpts and add “Flag any statements inconsistent with the documents above.” The model handles cross-document comparison reasonably well within a single context window.

    5. Engagement Letter Customization

    When to use it: You have a master engagement letter template and need to adapt it for a specific matter type, fee arrangement, or client situation without rewriting the whole thing manually.

    What to expect: The model will insert the right variables, flag clauses that may not fit the matter type, and suggest additions you might have missed. It will not flag jurisdiction-specific requirements you haven’t told it about — you still need to know what your state bar requires in an engagement letter.

    You are a legal assistant helping a solo attorney customize an engagement letter.
    
    Base template:
    [PASTE YOUR ENGAGEMENT LETTER TEMPLATE]
    
    Matter details:
    - Matter type: [E.G., estate planning / civil litigation / business formation]
    - Fee arrangement: [E.G., flat fee $X / hourly at $X / contingency at X%]
    - Scope of representation: [DESCRIBE WHAT IS AND IS NOT INCLUDED]
    - Any special terms: [LIST ANY CLIENT-SPECIFIC ARRANGEMENTS]
    
    Tasks:
    1. Insert the matter-specific details into the appropriate places in the template.
    2. Flag any clauses in the template that may not fit this matter type with [REVIEW THIS CLAUSE].
    3. Suggest any standard clauses that appear to be missing for this matter type, labeled [SUGGESTED ADDITION].
    4. Do not change any clause language without flagging the change clearly.
    
    Output: The revised letter with all changes marked in [BRACKETS].

Tweaks: Run this with Claude 3.5 Sonnet if you want more conservative, flag-heavy output — Claude tends to over-flag, which is actually useful for compliance review. GPT-4o tends to write more fluently but flag less aggressively.

    6. Chronology Builder from Emails and Notes

    When to use it: You have a pile of emails, text summaries, and scattered notes and need a clean timeline. Works for breach-of-contract disputes, employment matters, domestic cases — anywhere a clear sequence of events matters.

    What to expect: A date-ordered table or list with source attribution. The model is good at pulling dates and sequencing events. It will occasionally misread ambiguous date formats (MM/DD vs. DD/MM) — flag that in the prompt if your documents mix formats.
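If your source documents mix date formats, it is worth flagging the ambiguous ones yourself before trusting the model's sequencing. A small local check for numeric dates that read differently under MM/DD versus DD/MM:

```python
import re

def flag_ambiguous_dates(text):
    """Find numeric dates where the first two fields are both 12
    or less and differ -- the ones that mean different things
    under MM/DD versus DD/MM. Flag these for manual review before
    trusting any model-built timeline."""
    ambiguous = []
    for m in re.finditer(r"\b(\d{1,2})[/-](\d{1,2})[/-](\d{2,4})\b", text):
        first, second = int(m.group(1)), int(m.group(2))
        if first <= 12 and second <= 12 and first != second:
            ambiguous.append(m.group(0))
    return ambiguous
```

Anything this flags should be resolved against the source document, not left for the model to guess.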

    You are a legal assistant building a factual chronology for a solo attorney.
    
    Below are excerpts from emails, notes, and documents related to a single matter. Extract every datable event and build a chronology.
    
    Output format: A table with columns — Date | Event Description | Source | Significance Flag
    
    Rules:
    - Use the exact date from the source if available. If only a month/year is given, note that.
    - If a date is ambiguous or inferred, mark it [INFERRED DATE].
    - Significance Flag: mark events as [KEY] if they appear directly relevant to the core dispute; mark [BACKGROUND] for context events.
    - Do not add events not supported by the source material.
    - If two events appear to conflict in the record, flag both with [CONFLICT].
    
    Source material:
    [PASTE EMAILS, NOTES, AND EXCERPTS HERE — REDACT IDENTIFYING INFO]

    Tweaks: For long document sets, run this in batches by time period and then ask the model to merge and de-duplicate the resulting tables. Ask it to “merge the following two chronology tables, removing duplicate entries and resolving conflicts where the same event appears twice with different dates.”
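The merge step can also be done locally if you keep the batch outputs as simple (date, event, source) rows. A sketch that drops exact duplicates and surfaces date conflicts for human review instead of resolving them silently:

```python
def merge_chronologies(*tables):
    """Merge batch-built chronology tables (lists of
    (date, event, source) tuples). Drops exact duplicates and
    collects entries where the same event text carries different
    dates -- those need human review, not silent resolution."""
    seen = {}
    conflicts = []
    for table in tables:
        for date, event, source in table:
            key = event.strip().lower()
            if key in seen and seen[key][0] != date:
                conflicts.append((seen[key], (date, event, source)))
            elif key not in seen:
                seen[key] = (date, event, source)
    # ISO-format dates sort correctly as strings
    merged = sorted(seen.values(), key=lambda row: row[0])
    return merged, conflicts
```

This assumes ISO-format dates (YYYY-MM-DD) so string sort is chronological; normalize dates first if your batches use other formats.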

    7. Settlement Agreement Plain-Language Summary for the Client

    When to use it: You’ve negotiated a settlement and need to explain it to a client who is not a lawyer. You want a summary that covers what they’re agreeing to, what they’re giving up, and what happens next — without the legalese.

    What to expect: A clean, readable summary organized by what the client receives, what the client must do, what the client cannot do after signing, and key dates. The model handles plain-language conversion well. Do not send this summary to the client in place of the actual agreement — it’s a companion document you review with them.

    You are a legal assistant helping a solo attorney explain a settlement agreement to a client in plain language.
    
    Settlement agreement text:
    [PASTE THE SETTLEMENT AGREEMENT — REDACT NAMES IF NEEDED]
    
    Write a plain-language summary for the client. Use simple sentences. No legal jargon without a plain-English explanation in parentheses.
    
    Organize the summary into these sections:
    1. What You Are Getting (payments, actions, other relief)
    2. What You Must Do (release of claims, confidentiality obligations, other duties)
    3. What You Cannot Do After Signing (restrictions, non-disparagement, non-compete if applicable)
    4. Important Dates and Deadlines
    5. What Happens If Either Side Doesn't Follow Through
    
    End with a short paragraph reminding the client to ask their attorney any questions before signing.
    
    Do not interpret ambiguous clauses — flag them with [ASK YOUR ATTORNEY ABOUT THIS].

    Tweaks: Adjust reading level with “Write at a 7th-grade reading level” or “Write for a sophisticated business client.” The model handles both well. If the agreement is long, paste it in sections and ask for section-by-section summaries first, then ask for a consolidated summary.

    8. Interrogatory Response First Draft

    When to use it: Opposing counsel has served interrogatories. You have your client’s answers in rough form — notes from a call, a client-filled questionnaire, bullet points. You need a properly formatted first draft before you do the real lawyering.

    What to expect: Formally formatted responses with proper headers, general objections section, and individual responses. The model will draft objections only if you give it grounds — it won’t invent them. You will need to review every objection for jurisdictional validity and every substantive response for accuracy. This prompt saves formatting time, not judgment time.

    You are a legal assistant helping a solo attorney draft interrogatory responses.
    
    Jurisdiction: [STATE / FEDERAL — DISTRICT IF FEDERAL]
    Case type: [E.G., employment discrimination / breach of contract]
    
    Interrogatories served:
    [PASTE THE INTERROGATORIES]
    
    Client's rough answers (as provided — do not treat these as verified):
    [PASTE THE CLIENT'S NOTES OR QUESTIONNAIRE ANSWERS]
    
    Draft formal interrogatory responses. Follow this structure:
    - Standard caption and introduction (use [CASE CAPTION] placeholder)
    - General Objections section — include only objections supported by these grounds: [LIST ANY GROUNDS YOU WANT INCLUDED, E.G., "overbroad," "unduly burdensome," "attorney-client privilege"]
    - Individual responses keyed to each interrogatory number
    - Where the client's answer is incomplete, draft the response to reflect what was provided and add [ATTORNEY: CONFIRM/SUPPLEMENT]
    - Where no client answer was provided, write [NO RESPONSE PROVIDED — ATTORNEY ACTION REQUIRED]
    
    Do not add substantive information the client did not provide.

    Tweaks: If you want the model to draft privilege-specific objections, add the privilege basis and a brief description of what you’re protecting. Never let the model guess at privilege — it will get it wrong.

    9. Objection-Letter Style Review of Opposing Counsel Correspondence

    When to use it: Opposing counsel sent a letter with factual characterizations, legal positions, or demands. You want a structured breakdown before you respond — what they claimed, what’s disputable, what’s accurate, and what they may be setting up.

    What to expect: A point-by-point analysis of the letter’s claims, flagging factual assertions, legal conclusions, and rhetorical moves separately. This is a thinking tool, not a draft response. It’s genuinely useful for clearing your head before you pick up the phone or start typing.

    You are a legal assistant helping a solo attorney analyze a letter from opposing counsel.
    
    Letter from opposing counsel:
    [PASTE THE LETTER]
    
    Your client's matter context (brief summary only):
    [2-3 SENTENCES ON THE MATTER — NO PRIVILEGED DETAIL]
    
    Analyze the letter with the following breakdown:
    1. Factual Claims — List each factual assertion made in the letter. For each, note whether it appears accurate, disputable, or unverifiable based on the context provided.
    2. Legal Positions — Identify any legal conclusions or theories asserted. Flag these as [LEGAL POSITION — ATTORNEY REVIEW NEEDED].
    3. Implicit Threats or Posturing — Note any implied threats, deadlines, or strategic positioning.
    4. Demands — List all explicit demands, including response deadlines.
    5. Suggested Response Points — For each factual claim marked disputable, note what a response might address. Do not draft the response itself.
    
    Do not evaluate the legal merit of positions — flag them for attorney review.

    Tweaks: This prompt works well as a second pass after you’ve already read the letter yourself. Run it after forming your own initial reaction and compare the model’s breakdown to your instincts — the gaps are usually informative.

    10. End-of-Week Matter Status Email to a Client

    When to use it: Friday afternoon. You have five active matters and five clients who haven’t heard from you since Tuesday. You have notes on what happened this week. You need five short emails in twenty minutes.

    What to expect: A professional, warm, appropriately brief client update email. The model writes competent client-facing prose without the wooden formality of a form letter. You’ll still need to fact-check every line against your actual matter status — the model only knows what you tell it.

    You are a legal assistant helping a solo attorney write a client status update email.
    
    Matter context:
    - Matter type: [E.G., pending litigation / contract negotiation / estate plan]
    - Current stage: [E.G., discovery / drafting / awaiting opposing party response]
    - What happened this week: [BRIEF BULLET POINTS]
    - What is happening next: [NEXT 1-2 STEPS]
    - Any action needed from client: [YES/NO — IF YES, DESCRIBE]
    - Tone: [PROFESSIONAL AND WARM / FORMAL / CASUAL — CLIENT'S PREFERENCE]
    
    Write a brief client update email (150-250 words). 
    - Address the client as [CLIENT FIRST NAME].
    - Sign as [ATTORNEY NAME].
    - Do not include specific dollar amounts, legal conclusions, or strategic assessments.
    - End with a clear statement of what the client should do next, if anything.
    - Do not use legal jargon without a plain-English explanation.

    Tweaks: Build a simple text file with your five active matters’ bullet-point status each Friday afternoon and run this prompt five times in a row. Takes about fifteen minutes total once you have the habit. Some attorneys batch this in a single prompt asking for all five emails at once — results are slightly lower quality but still usable.
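    If you keep those Friday notes in a structured file, the repetitive part — filling the prompt template once per matter — is easy to script. A minimal, hypothetical sketch (the field names and sample matter are illustrative; it only builds the filled prompts for you to paste into your chat tool, and calls no model):

```python
from string import Template

# Hypothetical sketch: fill the status-email prompt (prompt 10 above)
# once per matter, then paste each filled prompt into your chat tool.
PROMPT = Template("""You are a legal assistant helping a solo attorney write a client status update email.

Matter context:
- Matter type: $matter_type
- Current stage: $stage
- What happened this week: $this_week
- What is happening next: $next_steps
- Any action needed from client: $client_action
- Tone: $tone

Write a brief client update email (150-250 words).
- Address the client as $client_name.
- Sign as $attorney_name.""")

def build_prompts(matters):
    """Return one filled prompt string per matter dict."""
    return [PROMPT.substitute(m) for m in matters]

# Illustrative sample data — not a real matter.
matters = [
    {"matter_type": "contract negotiation", "stage": "drafting",
     "this_week": "sent revised draft to opposing counsel",
     "next_steps": "await redline; follow up Wednesday",
     "client_action": "No", "tone": "professional and warm",
     "client_name": "Dana", "attorney_name": "J. Rivera"},
]

for p in build_prompts(matters):
    print(p)
```

    Running the prompts one at a time (rather than asking for all five emails in one shot) keeps each output focused on a single matter, which matches the quality observation above.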

    Notes on Using These Prompts

    Model Choice: GPT-4o vs. Claude 3.5 Sonnet

    I ran all ten prompts on both GPT-4o (via ChatGPT Plus) and Claude 3.5 Sonnet (via Claude.ai Pro). Short verdict: GPT-4o produces more fluent, polished prose — better for the demand letter, the client email, and the plain-language settlement summary. Claude 3.5 Sonnet is more conservative and flags more aggressively — better for the engagement letter review and the interrogatory draft, where over-flagging is a feature, not a bug. For the conflict check and chronology, they perform comparably. Neither is accurate enough on jurisdiction-specific legal cites to skip your own review.

    Customization Variables to Build In

    Every prompt above has bracket variables. The ones worth standardizing across your practice:

    • [JURISDICTION] — Add this to every prompt. It doesn’t guarantee accurate statutory cites, but it steers the model’s general framing correctly.
    • [PRACTICE AREA] — Narrows the model’s issue-spotting. Without it, you get generic output.
    • [TONE] — Matters more than you’d expect on client-facing documents. Define your client communication style once and paste it in.
    • [ATTORNEY REVIEW NEEDED] — Keep this flag language consistent across all prompts so you know at a glance what the model flagged when you’re editing.

    Where These Prompts Break

    • The conflict check breaks when your existing client list is inconsistently formatted — the model can’t catch what it can’t parse.
    • The deposition outline breaks on highly technical expert matters where the model lacks domain context.
    • The demand letter breaks when the legal theory is novel or jurisdiction-specific enough that [INSERT AUTHORITY] placeholders dominate the whole legal basis section — at that point, you’re writing from scratch anyway.
    • The interrogatory draft breaks when client answers are vague or contradictory, because the model fills gaps with plausible-sounding content it doesn’t actually know.
    • Every prompt breaks on long documents that exceed the context window — split them.
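    Splitting a long document is worth doing on paragraph boundaries so clauses aren’t cut mid-sentence. A minimal sketch — the word count is only a rough proxy for tokens, and the 3,000-word default is an assumption, not any model’s actual limit:

```python
def split_document(text, max_words=3000):
    """Split a long document into chunks of at most ~max_words words,
    breaking on blank-line paragraph boundaries so clauses stay intact.
    A single paragraph longer than max_words is kept whole rather than cut.
    """
    paragraphs = text.split("\n\n")
    chunks, current, count = [], [], 0
    for para in paragraphs:
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

    Feed the chunks to the model in order within one conversation, labeling each ("Part 2 of 4"), and run the analysis prompt only after the final chunk.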

    One Hard Rule

    These prompts produce first drafts. You edit, verify, and sign. If a line in the output doesn’t match your actual knowledge of the matter, cut it. The model doesn’t know your client. You do.

    Save these to a note, a doc, or a snippet manager like TextExpander or Raycast Snippets. The ten minutes you spend organizing them now will pay back within the first week you use them.

    Related reading