Five prompts, run in order, will get you a structured first-pass review of a third-party contract before you touch a redline — here’s the exact sequence, verbatim, with notes on where it holds and where it falls apart.
Third-party paper is the friction point most solo and small-firm lawyers handle the same way they always have: read the whole thing, flag as you go, hope you caught everything. That works. It also takes two to four hours on a mid-length MSA. This sequence hands the first pass to Claude or GPT-4o, extracts structured outputs at each step, and leaves you doing the one thing AI still can’t do — judging what the risk actually means for your specific client. The prompts were built for contracts in the 10–40 page range. Above 40 pages, read the context window notes at the end.
How these prompts were chosen
Each prompt in the sequence produces a discrete output that feeds the next one. They’re ordered to mirror what an experienced contracts lawyer actually does: orient (what does this contract require?), diagnose (what’s missing or sloppy?), triage risk (what can hurt the client?), calibrate (how bad is it given the client’s position?), then document (what do I tell the client and opposing counsel?). Skipping steps or running them out of order produces muddier results. Run them in a single long conversation thread so each prompt inherits the prior context — don’t start a new chat between steps.
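If you run the sequence through the API rather than a chat interface, the same single-thread rule applies: append each prompt and each model reply to one growing messages list instead of starting a fresh request per step. A minimal sketch of that chaining, with `call_model` stubbed out as a placeholder for whatever API client you actually use:

```python
# Sketch: one conversation thread across the five prompts.
# `call_model` is a placeholder for a real API call (e.g., Anthropic's
# or OpenAI's chat endpoint); it is stubbed here so the chaining logic
# itself is visible.

def call_model(messages):
    # Placeholder: a real implementation would send `messages` to the
    # model API and return the assistant's reply text.
    n = sum(1 for m in messages if m["role"] == "user")
    return f"[model reply to prompt {n}]"

def run_sequence(contract_text, prompts):
    """Run each prompt in order, carrying the full history forward."""
    messages = []
    outputs = []
    for i, prompt in enumerate(prompts):
        # Prompt 1 gets the contract text appended in the same message.
        content = prompt + "\n\n" + contract_text if i == 0 else prompt
        messages.append({"role": "user", "content": content})
        reply = call_model(messages)
        # Appending the reply is what lets a later prompt say
        # "based on your analysis above" and have it resolve.
        messages.append({"role": "assistant", "content": reply})
        outputs.append(reply)
    return outputs

results = run_sequence("FULL CONTRACT TEXT", ["Prompt 1", "Prompt 2", "Prompt 3"])
```

The point of the stub is the structure: every user turn and every assistant turn stays in `messages`, so context accumulates exactly as it does in a single chat thread.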
1. Extract obligations and deadlines
Run this first. Before you can assess risk, you need a clean inventory of what the contract actually obligates each party to do and when. Paste the full contract text immediately after this prompt in the same message.
You are a contract analysis assistant. I am going to paste a contract below. Read the entire contract carefully.
Your task: Extract every obligation, right, and deadline from this contract. Organize your output as follows:
1. OBLIGATIONS — MY CLIENT: A bulleted list of every affirmative obligation placed on [PARTY A / insert your client's role, e.g., "the Vendor" or "the Licensee"]. For each obligation, note the section number and any triggering condition.
2. OBLIGATIONS — COUNTERPARTY: Same format for the other party.
3. DEADLINES AND NOTICE PERIODS: A separate table with three columns — Event, Deadline or Notice Period, Section Reference. Include payment terms, renewal windows, termination notice periods, cure periods, and any other time-sensitive triggers.
4. UNCLEAR OR AMBIGUOUS OBLIGATIONS: List any obligation where the responsible party, timing, or scope is not clearly defined. Quote the relevant language.
Do not summarize the contract overall. Do not give legal advice. Output only the structured lists above.
[PASTE CONTRACT TEXT HERE]

Expect 400–800 words of output on a typical 20-page SaaS or services agreement. If the model starts summarizing instead of listing, add “Do not write prose summaries. Use only bullet points and tables” to the top of the prompt. On Claude 3.5 Sonnet, the table formatting holds well. On GPT-4o, you may get a looser structure — add “Use markdown tables” if you’re in a canvas or interface that renders them.
2. Identify boilerplate gaps and missing definitions
Standard third-party paper often omits definitions that matter to your client’s specific situation, or uses defined terms inconsistently. This prompt catches that before you get to substantive risk review.
Now review the same contract for structural and definitional issues.
Your task:
1. MISSING DEFINITIONS: List every capitalized term that is used in the contract body but is not defined in the Definitions section (or anywhere in the contract). For each, quote one sentence where the term appears.
2. INCONSISTENT USAGE: Identify any term that appears to be used with different meanings or scope in different sections. Quote both instances.
3. MISSING STANDARD PROVISIONS: Flag any of the following that are absent from the contract: governing law clause, dispute resolution clause, entire agreement / integration clause, amendment procedure, assignment restriction, force majeure, notice provision with contact details, counterparts / electronic signature clause. State clearly which are missing and which are present.
4. INTERNALLY INCONSISTENT TERMS: Flag any place where two clauses appear to conflict with each other. Quote both clauses and identify the section numbers.
Output only the structured lists above. Do not summarize the contract.

This prompt runs in the same thread — the model already has the contract text from prompt 1. You don’t need to re-paste. If you’re hitting context limits on a long MSA, paste only the definitions section and the first 10 pages before running this prompt, then run it again on the back half. You’ll lose cross-document comparison on the second run, which matters most for item 4 above.

3. Flag risk-shifting clauses
This is the prompt that does the heaviest lifting. It pulls every clause that shifts financial or legal exposure between the parties, with no editorial spin — just extraction and quotation. You apply the judgment in the next step.
Now analyze the contract for risk-shifting provisions. Cover each of the following categories. For each clause you identify, quote the exact language, cite the section number, and write one neutral sentence describing what the clause does. Do not characterize the clause as good or bad. Do not advise.
1. INDEMNIFICATION: All indemnification obligations — who indemnifies whom, for what triggers, and whether there are carve-outs for the indemnifying party's own negligence or willful misconduct.
2. LIMITATION OF LIABILITY: Any cap on damages (include the cap amount or formula if stated), any exclusion of consequential or indirect damages, and any carve-outs to those limitations (e.g., IP infringement, fraud, gross negligence).
3. IP OWNERSHIP AND ASSIGNMENT: Any clause addressing ownership of work product, deliverables, inventions, or improvements. Note whether the clause is a present assignment ("hereby assigns") or an agreement to assign ("agrees to assign"). Note any carve-outs for pre-existing IP or background IP.
4. REPRESENTATIONS AND WARRANTIES: List all representations and warranties made by each party. Flag any that are qualified by "knowledge" or "materiality" and any that are conspicuously absent (e.g., no warranty of fitness for purpose, no warranty of non-infringement).
5. TERMINATION RIGHTS: All termination triggers for each party — termination for cause, termination for convenience, termination for insolvency. Note any asymmetry (e.g., one party can terminate for convenience, the other cannot).
6. AUTO-RENEWAL AND PRICE ESCALATION: Any automatic renewal provision, any price escalation formula, and the notice window required to prevent auto-renewal.
Output structured lists only. Quote the contract text directly for each item.

This is the prompt where Claude 3.5 Sonnet earns the extra cost over GPT-4o-mini. On a heavily-negotiated enterprise software agreement with layered carve-outs and cross-references, Claude reads the inter-clause relationships more accurately. GPT-4o-mini will catch the obvious caps and indemnity triggers, but it misses carve-outs buried in definitions or in exhibit terms. For a straightforward services agreement, the cheaper model is fine. For a software license with a complex SLA exhibit, use Claude.
4. Compare against a stated risk profile
This is where you tell the model which side your client is on and what level of exposure they can absorb. The three profiles below are starting points — edit them to match what you actually know about the client and matter.
Based on your analysis above, evaluate the contract from the perspective of [INSERT PROFILE — choose one or write your own]:
PROFILE A — FAVORED CLIENT: My client has significant leverage and expects the contract to be tilted in their favor. Flag any provision that is less favorable than market standard for the stronger party. Flag any missing protections a stronger party would normally demand.
PROFILE B — BALANCED: My client is seeking a market-standard, balanced agreement. Flag any provision that materially departs from balanced allocation of risk — in either direction. Note whether the departure favors or disfavors my client.
PROFILE C — VENDOR-FAVORABLE PAPER: My client is the vendor and this is the vendor's own form. My client wants to understand what concessions may be necessary to close deals. Flag any provision that a sophisticated counterparty's counsel is likely to push back on, and note the likely pushback.
For each flagged item:
- Identify the section number and quote the relevant language.
- State specifically how it departs from the chosen profile.
- Rate the priority: HIGH (likely to affect deal economics or create significant exposure), MEDIUM (worth negotiating if time permits), LOW (standard ask that may not move the needle).
Do not give legal advice. Do not recommend specific contract language. Output a prioritized list only.

The priority ratings are the most useful output here — they let you scope your redline conversation with the client before you spend time drafting. In practice, HIGH items on a balanced-profile review track closely with what experienced contracts lawyers flag first. MEDIUM and LOW ratings are noisier; treat them as a checklist to eyeball, not a final word. Swap in your own profile language if the client’s situation is more specific — for example, a regulated entity with insurance constraints, or a startup with no revenue that can’t backstop an uncapped indemnity.
5. Generate a redline rationale memo
The final prompt turns the structured outputs from prompts 3 and 4 into a working document you can hand to a client or use as your own drafting notes before opening the redline. This is not a memo you send without reading — it’s a first draft that saves you the blank-page problem.
Using the risk analysis and profile comparison above, draft a contract review memo structured as follows. Address the memo to [CLIENT NAME / "the client"] from [YOUR NAME / "reviewing counsel"]. Date it [DATE].
SECTION 1 — OVERVIEW: Two to three sentences summarizing the nature of the agreement, the parties, and the general posture of the draft (e.g., "This is a vendor-favorable SaaS agreement. The draft contains several provisions that would require revision before execution under a balanced risk profile."). Do not be conclusory about legal enforceability.
SECTION 2 — KEY ISSUES FOR CLIENT DECISION: A numbered list of the HIGH-priority items from the risk profile comparison. For each item: state what the current contract says (in plain language, not legal jargon), state why it is flagged as high priority for this client's profile, and state what outcome the client should decide on before redlining begins. Do not draft contract language. Do not advise the client what to decide.
SECTION 3 — NEGOTIATION TARGETS: A numbered list of MEDIUM-priority items formatted the same way as Section 2.
SECTION 4 — ITEMS TO NOTE BUT NOT PRIORITIZE: A brief list of LOW-priority items. One sentence each.
SECTION 5 — QUESTIONS FOR CLIENT BEFORE REDLINE: List any factual questions that need answers before the redline can be completed (e.g., "Does the client have existing IP that should be carved out of the assignment clause?", "What is the client's insurance coverage for the indemnification obligation in Section 9.2?").
Write in plain, direct language. No legal conclusions. No recommendations about what the client should sign or not sign. Format for a professional memo — not bullet-heavy, use short paragraphs within each numbered item.

Section 5 — the questions list — is often the most valuable output of the entire sequence. It surfaces the gaps between what the contract says and what you don’t yet know about the client’s situation. On one test run against a 28-page IT services agreement, the model generated nine factual questions, seven of which were legitimate blockers to completing the redline. Two were redundant. That’s a usable signal-to-noise ratio for a first draft.
Notes on using these prompts
Model choice
Claude 3.5 Sonnet (claude-3-5-sonnet-20241022 in the API, or Claude.ai Pro at $20/month) handles the nuance in prompts 3 and 4 better than any other model I’ve run this sequence against. It tracks cross-references between clauses and definitions more reliably, and it’s less likely to collapse carve-outs into the main clause when summarizing. GPT-4o is a close second for the same price tier. GPT-4o-mini at roughly one-fifteenth the API cost is useful for running prompt 1 against a stack of contracts in parallel — obligation extraction is mechanical enough that the cheaper model performs acceptably. Don’t use GPT-4o-mini for prompts 3 and 4 on anything above moderate complexity.
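The parallel-extraction use case can be scripted. This sketch builds one self-contained request payload per contract for prompt 1 using the cheaper model — the model name is illustrative, `EXTRACTION_PROMPT` stands in for the full prompt 1 text, and actually sending the payloads is left to whatever API client you use:

```python
# Sketch: queuing prompt 1 against a stack of contracts with a cheaper
# model. Builds request payloads only; sending them is left to your API
# client. Model name and prompt text are illustrative placeholders.

EXTRACTION_PROMPT = "You are a contract analysis assistant. ..."  # full prompt 1 text

def build_extraction_request(contract_text, model="gpt-4o-mini"):
    """One independent request per contract — no shared thread needed,
    since obligation extraction doesn't depend on prior outputs."""
    return {
        "model": model,
        "messages": [
            {"role": "user",
             "content": EXTRACTION_PROMPT + "\n\n" + contract_text},
        ],
    }

# filename -> contract text (placeholder contents)
contracts = {"acme_msa.txt": "MSA text", "globex_sow.txt": "SOW text"}
requests = {name: build_extraction_request(text)
            for name, text in contracts.items()}
```

Because prompt 1 is the only step that is independent of prior outputs, it is the only one that parallelizes cleanly this way; prompts 2 through 5 still need the single-thread treatment.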
Context window limits
Claude 3.5 Sonnet’s 200k-token context window handles most MSAs without issue. GPT-4o’s 128k window is adequate for contracts up to roughly 60–70 pages of plain text. Problems start when you add exhibits. A master services agreement with three SOWs, a data processing addendum, and an acceptable use policy can push 100k tokens of contract text alone, leaving little room for the model to hold its own outputs across five prompts. If the contract exceeds 40 pages, split it: run the sequence on the core agreement, then run prompts 1 and 3 separately on each exhibit, and combine the outputs manually before running prompt 5.
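A quick pre-flight check before you paste: English legal prose runs very roughly four characters per token, so a character count gives a workable estimate of whether a document will fit with room to spare. A sketch under that assumption (the 4-characters-per-token ratio is a heuristic, not a real tokenizer — use the provider's tokenizer for an exact count):

```python
# Rough context-window pre-flight check. The ~4 chars/token ratio is a
# heuristic for English prose, not an exact tokenizer.

def estimate_tokens(text):
    return len(text) // 4

def fits_in_window(contract_text, window=200_000, output_budget=40_000):
    """Reserve headroom for the model's own outputs across five prompts."""
    return estimate_tokens(contract_text) + output_budget <= window

# A 60-page MSA at roughly 3,000 characters per page:
sample = "x" * (60 * 3000)
print(estimate_tokens(sample))   # 45000 estimated tokens
print(fits_in_window(sample))    # True against a 200k window
```

The `output_budget` matters: it is the model's own structured outputs across the five prompts that eat the remaining headroom, which is why a contract that technically fits can still degrade by prompt 4 or 5.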
Where this workflow breaks
Heavily-amended drafts are the main failure mode. If you paste a contract that has already been through two rounds of negotiation — tracked changes accepted, bracketed alternatives still in, comments embedded — the model will mis-read the document. It will sometimes treat bracketed alternatives as agreed text, and it will occasionally flag a provision as present when it was in fact struck in a prior round. Clean the document before you paste it: accept all tracked changes you want the model to see, delete all comments, remove all bracketed alternatives except the current working version. This is a fifteen-minute task that prevents a half-hour of bad output.
The other break point is contract types the models haven’t seen much of. Highly specialized agreements — certain energy contracts, bespoke financing structures, niche IP licenses with industry-specific custom terms — produce weaker results on prompts 3 and 4 because the model’s sense of “market standard” is thinner. The obligation extraction in prompts 1 and 2 still works on unusual contract types; the risk calibration in prompt 4 gets shakier. Treat the profile comparison output with more skepticism on unfamiliar paper.
Finally: this sequence reviews a contract. It does not replace judgment about whether specific terms are acceptable for a specific client in a specific transaction. The memo from prompt 5 is a drafting aid, not a deliverable. Read every flagged item against the actual contract text before you act on it.
Run the sequence once on a low-stakes matter you already know well. Compare the model’s output against your own notes. That calibration exercise — run once — will tell you exactly how much to trust each prompt’s output on your practice area’s typical paper.
Related reading
- Spellbook for Solo Lawyers: A Two-Week Test of the AI Contract Review Tool
- 10 ChatGPT Prompts Every Solo Lawyer Should Save (Tested on Real Matters)
- Document Automation with Claude and Microsoft Word: A Walkthrough for Small Firms
- 7 Demand Letter Drafting Prompts That Actually Save Time
- 10 Claude Prompts for Faster Discovery Document Review

