Tag: legal-tech

  • What the 2026 ABA TechReport Says About Small-Firm AI Adoption (And What to Actually Do About It)

    The 2026 ABA TechReport shows AI adoption climbing fast — but the headline numbers are mostly BigLaw’s story. Here’s what the data actually says for a solo or small firm, and what to do about it this week.

    Every year the ABA TechReport lands and every year the same thing happens: law firm marketing teams quote the top-line adoption number, vendors repitch their most expensive SKUs, and the solo lawyer in a two-person family-law firm closes the tab. The 2026 report is more of the same — except the gap between the BigLaw AI budget and the small-firm reality is now wide enough to be worth talking about directly. AI spending at Am Law 100 firms is up sharply. Meaningful tooling for a five-attorney plaintiff’s practice? Still thin. This piece is about that gap, what the data underneath the headline actually shows, and the cheapest credible path forward for a firm of 1–10 attorneys.

    What the 2026 Report Actually Found — and Who It Found It For

    The report’s headline: AI tool adoption among attorneys crossed 60% for the first time. That number is real. It is also skewed hard by firm size. When you filter to solo practitioners, the adoption rate drops to roughly the mid-30s in percentage terms. Firms of 2–9 attorneys sit in the low 40s. The firms pushing the aggregate number past 60% are firms with 100-plus attorneys, dedicated IT staff, and vendor contracts that cost more per seat per year than a solo’s entire software budget.

    The report also tracks what attorneys are using. At large firms, the dominant tools are Harvey, CoCounsel Enterprise, and Microsoft 365 Copilot deployed org-wide. At small firms, the most commonly cited tools are ChatGPT (usually the free tier or Plus), Google Gemini, and whatever AI feature their existing practice management software quietly shipped in the last 12 months. Those are not the same category of tool. Comparing adoption rates across those two groups as if they represent the same phenomenon is misleading.

    One number that doesn’t skew by firm size: attorney anxiety about AI competence obligations. Across all firm sizes, concern about keeping up with the technology — and with bar guidance on its use — is roughly uniform. Solos worry about it as much as partners at midsize firms. That’s the one place the report’s aggregate number actually means something for a small-firm reader.

    The Price-Point Problem the Report Doesn’t Name Directly

    Harvey starts at pricing that isn’t published but is widely reported in the $500–$1,000+ per-seat-per-month range for firm contracts. CoCounsel’s small-firm tier has come down, but you’re still looking at $100/month per seat at minimum, often more depending on the plan. Spellbook sits around $150–$200/month for a solo seat. Those prices are defensible if the tool reliably saves you two or three billable hours a month. They are not defensible if you haven’t yet proven to yourself that AI-assisted drafting actually saves you time in your specific practice.

    The report notes that ROI measurement at small firms is almost nonexistent. Fewer than 15% of solo and small-firm respondents said they tracked time saved against AI tool cost in any systematic way. That’s not a moral failure — it’s a bandwidth problem. But it means most small-firm AI spending is faith-based. Anecdote drives the purchase; no one counts the hours afterward.

    The vendors aren’t incentivized to fix this. A tool that’s hard to evaluate is a tool that’s hard to cancel. The practical consequence for a small-firm reader: you need to do the measurement yourself before you commit to a premium tier, because no one else is going to do it for you.

    What the Report Gets Right About Small-Firm Risk

    Two findings cut through the noise. First: the attorneys who report the highest satisfaction with AI tools are the ones who use them for a narrow, repeatable task — not as a general-purpose assistant across all work. The report’s phrasing is different, but the underlying data is clear. Trying to use an AI tool for everything produces mediocre results everywhere. Picking one document type, one workflow, one prompt you refine over time — that’s where the satisfaction numbers climb.

    Second: hallucination concern remains the top barrier to adoption at small firms, and it’s not irrational. A solo running a 200-matter caseload doesn’t have a team of associates to catch a fabricated citation. The report confirms that attorneys who build a verification step into their AI workflow — meaning they treat AI output as a first draft that requires checking, not a finished product — report significantly fewer quality problems. That’s a workflow design point, not a technology point. The tool doesn’t prevent hallucinations. Your process has to.

    Neither of these findings requires you to spend money. They’re workflow principles that apply whether you’re using a $20/month tool or a $500/month one.

    What I’d Actually Do About This

    Start at the lowest credible price point and measure. Here’s the specific sequence that makes sense for a solo or firm under 10 attorneys.

    Step 1: Run a $20/month tool for 30 days on one task

    Claude Pro ($20/month) and ChatGPT Plus ($20/month) are genuinely capable for legal drafting assistance, first-pass research summaries, and correspondence drafts. Pick one. Pick one task — demand letters, lease review summaries, deposition prep outlines, whatever you do repeatedly. Run every instance of that task through the tool for 30 days. Before each one, note your baseline time. After, note actual time with the tool. Thirty days, one task, one number at the end: minutes saved per matter.

    If the number is zero or negative, stop. You’ve spent $20 to learn something useful. If the number is positive, you now have a defensible basis for either continuing at $20/month or evaluating whether a more expensive tool would produce a bigger delta on that same task.
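
    If a spreadsheet feels like overhead, the whole 30-day ledger fits in a few lines of Python. This is a minimal sketch, not a prescribed tool: the matter times, tool cost, and hourly rate below are invented placeholders you would replace with your own numbers.

    # Step 1 ledger: one (baseline, with-tool) pair per matter, one number at the end.
    # All constants below are placeholders -- substitute your own measurements.

    TOOL_COST_PER_MONTH = 20.00      # ChatGPT Plus / Claude Pro tier
    EFFECTIVE_HOURLY_RATE = 200.00   # your effective billable rate

    # (baseline_minutes, actual_minutes_with_tool) for each matter in the window
    matters = [(45, 30), (50, 28), (40, 42), (55, 31)]

    total_saved_min = sum(base - actual for base, actual in matters)
    avg_saved_min = total_saved_min / len(matters)
    value_saved = (total_saved_min / 60) * EFFECTIVE_HOURLY_RATE

    print(f"Average minutes saved per matter: {avg_saved_min:.1f}")
    print(f"Net this month: ${value_saved - TOOL_COST_PER_MONTH:.2f}")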

    Step 2: Check what your practice management software already includes

    Clio, MyCase, and PracticePanther have all shipped AI features in the last 18 months. Most are included in existing subscriptions at the mid-tier and above. Clio Duo handles matter summaries and draft correspondence. MyCase’s AI assistant touches document drafting and client communication. If you’re already paying for these platforms, you may have AI features you haven’t turned on. Check your subscription tier before spending anything new. The capabilities are narrower than a standalone tool, but the marginal cost is zero.

    Step 3: Only upgrade to Spellbook or CoCounsel if the delta is clear

    Spellbook is purpose-built for contract review and drafting inside Microsoft Word. If you do transactional work — business contracts, commercial leases, employment agreements — and you’re already in Word all day, Spellbook earns its price point faster than a general model will. CoCounsel (from Thomson Reuters, built on GPT-4 class models) is stronger on legal research summarization and has deeper integration with Westlaw if you’re a Westlaw subscriber. Both are worth trialing — both offer trial periods — but only after you’ve established in Step 1 that AI drafting assistance saves you meaningful time. Paying $150–$200/month to discover you don’t actually use AI tools consistently enough to matter is an expensive way to learn something you could have learned for $20.

    Step 4: Avoid Harvey-tier spending at this firm size

    Harvey is built for large-firm deployment: large document sets, high-volume due diligence, org-wide rollout with IT support. At a solo or small firm, you’re paying for infrastructure you can’t use. The per-seat cost is structured around large-firm contract negotiations. There is no meaningful scenario where a solo practitioner or a firm of five attorneys needs Harvey over a well-configured Spellbook or CoCounsel setup — and even those are only justified once you’ve done the measurement in Steps 1 and 2.

    The 2026 TechReport’s implicit message, if you read past the headline adoption numbers, is that the legal AI market is bifurcating. BigLaw is buying enterprise tools and absorbing the cost into hourly rates. Small firms are adopting more cautiously and measuring less. The cautious adoption is rational. The lack of measurement is the part worth fixing. Pick one tool, pick one task, track the hours. That’s the entire strategy.

    The Bottom Line

    The 2026 ABA TechReport confirms that AI adoption is up and that BigLaw is driving most of the interesting numbers. For a solo or small firm, the actionable takeaway is simple: start at $20/month, measure one task for 30 days, and don’t spend $150–$500/month until you can prove the cheaper tier isn’t doing the job. The technology is real. The ROI is not guaranteed. Every vendor in this space wants you to believe the premium tool is the responsible choice — but responsible means measuring first. The bar guidance on AI competence is real too, and it cuts toward knowing what your tools actually do, not toward spending more on them.

  • The 5-Prompt Sequence for First-Pass Contract Review with Claude or GPT

    Five prompts, run in order, will get you a structured first-pass review of a third-party contract before you touch a redline — here’s the exact sequence, verbatim, with notes on where it holds and where it falls apart.

    Third-party paper is the friction point most solo and small-firm lawyers handle the same way they always have: read the whole thing, flag as you go, hope you caught everything. That works. It also takes two to four hours on a mid-length MSA. This sequence hands the first pass to Claude or GPT-4o, extracts structured outputs at each step, and leaves you doing the one thing AI still can’t do — judging what the risk actually means for your specific client. The prompts were built for contracts in the 10–40 page range. Above 40 pages, read the context window notes at the end.

    How these prompts were chosen

    Each prompt in the sequence produces a discrete output that feeds the next one. They’re ordered to mirror what an experienced contracts lawyer actually does: orient (what does this contract require?), diagnose (what’s missing or sloppy?), triage risk (what can hurt the client?), calibrate (how bad is it given the client’s position?), then document (what do I tell the client and opposing counsel?). Skipping steps or running them out of order produces muddier results. Run them in a single long conversation thread so each prompt inherits the prior context — don’t start a new chat between steps.
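
    If you run the sequence through the API rather than the chat interface, the single-thread requirement translates to one growing message list. The sketch below assumes the anthropic Python package and an API key in ANTHROPIC_API_KEY; the PROMPT_1 and PROMPT_2 variables and the file name are placeholders standing in for the verbatim prompts and your cleaned contract text.

    import anthropic

    client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
    history = []                     # one growing list = one conversation thread

    def run_step(prompt_text: str) -> str:
        history.append({"role": "user", "content": prompt_text})
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=4096,
            messages=history,
        )
        reply = response.content[0].text
        history.append({"role": "assistant", "content": reply})  # later steps inherit this
        return reply

    PROMPT_1 = "...verbatim text of prompt 1 below..."   # placeholders: paste the real prompts
    PROMPT_2 = "...verbatim text of prompt 2 below..."
    contract_text = open("msa_clean.txt").read()          # cleaned copy, tracked changes accepted

    step1 = run_step(PROMPT_1 + "\n\n" + contract_text)
    step2 = run_step(PROMPT_2)   # no re-paste: the thread already holds the contract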

    1. Extract obligations and deadlines

    Run this first. Before you can assess risk, you need a clean inventory of what the contract actually obligates each party to do and when. Paste the full contract text immediately after this prompt in the same message.

    You are a contract analysis assistant. I am going to paste a contract below. Read the entire contract carefully.
    
    Your task: Extract every obligation, right, and deadline from this contract. Organize your output as follows:
    
    1. OBLIGATIONS — MY CLIENT: A bulleted list of every affirmative obligation placed on [PARTY A / insert your client's role, e.g., "the Vendor" or "the Licensee"]. For each obligation, note the section number and any triggering condition.
    
    2. OBLIGATIONS — COUNTERPARTY: Same format for the other party.
    
    3. DEADLINES AND NOTICE PERIODS: A separate table with three columns — Event, Deadline or Notice Period, Section Reference. Include payment terms, renewal windows, termination notice periods, cure periods, and any other time-sensitive triggers.
    
    4. UNCLEAR OR AMBIGUOUS OBLIGATIONS: List any obligation where the responsible party, timing, or scope is not clearly defined. Quote the relevant language.
    
    Do not summarize the contract overall. Do not give legal advice. Output only the structured lists above.
    
    [PASTE CONTRACT TEXT HERE]

    Expect 400–800 words of output on a typical 20-page SaaS or services agreement. If the model starts summarizing instead of listing, add “Do not write prose summaries. Use only bullet points and tables” to the top of the prompt. On Claude 3.5 Sonnet, the table formatting holds well. On GPT-4o, you may get a looser structure — add “Use markdown tables” if you’re in a canvas or interface that renders them.

    2. Identify boilerplate gaps and missing definitions

    Standard third-party paper often omits definitions that matter to your client’s specific situation, or uses defined terms inconsistently. This prompt catches that before you get to substantive risk review.

    Now review the same contract for structural and definitional issues.
    
    Your task:
    
    1. MISSING DEFINITIONS: List every capitalized term that is used in the contract body but is not defined in the Definitions section (or anywhere in the contract). For each, quote one sentence where the term appears.
    
    2. INCONSISTENT USAGE: Identify any term that appears to be used with different meanings or scope in different sections. Quote both instances.
    
    3. MISSING STANDARD PROVISIONS: Flag any of the following that are absent from the contract: governing law clause, dispute resolution clause, entire agreement / integration clause, amendment procedure, assignment restriction, force majeure, notice provision with contact details, counterparts / electronic signature clause. State clearly which are missing and which are present.
    
    4. INTERNALLY INCONSISTENT TERMS: Flag any place where two clauses appear to conflict with each other. Quote both clauses and identify the section numbers.
    
    Output only the structured lists above. Do not summarize the contract.

    This prompt runs in the same thread — the model already has the contract text from prompt 1. You don’t need to re-paste. If you’re hitting context limits on a long MSA, paste only the definitions section and the first 10 pages before running this prompt, then run it again on the back half. You’ll lose cross-document comparison on the second run, which matters most for item 4 above.

    3. Flag risk-shifting clauses

    This is the prompt that does the heaviest lifting. It pulls every clause that shifts financial or legal exposure between the parties, with no editorial spin — just extraction and quotation. You apply the judgment in the next step.

    Now analyze the contract for risk-shifting provisions. Cover each of the following categories. For each clause you identify, quote the exact language, cite the section number, and write one neutral sentence describing what the clause does. Do not characterize the clause as good or bad. Do not advise.
    
    1. INDEMNIFICATION: All indemnification obligations — who indemnifies whom, for what triggers, and whether there are carve-outs for the indemnifying party's own negligence or willful misconduct.
    
    2. LIMITATION OF LIABILITY: Any cap on damages (include the cap amount or formula if stated), any exclusion of consequential or indirect damages, and any carve-outs to those limitations (e.g., IP infringement, fraud, gross negligence).
    
    3. IP OWNERSHIP AND ASSIGNMENT: Any clause addressing ownership of work product, deliverables, inventions, or improvements. Note whether the clause is a present assignment ("hereby assigns") or an agreement to assign ("agrees to assign"). Note any carve-outs for pre-existing IP or background IP.
    
    4. REPRESENTATIONS AND WARRANTIES: List all representations and warranties made by each party. Flag any that are qualified by "knowledge" or "materiality" and any that are conspicuously absent (e.g., no warranty of fitness for purpose, no warranty of non-infringement).
    
    5. TERMINATION RIGHTS: All termination triggers for each party — termination for cause, termination for convenience, termination for insolvency. Note any asymmetry (e.g., one party can terminate for convenience, the other cannot).
    
    6. AUTO-RENEWAL AND PRICE ESCALATION: Any automatic renewal provision, any price escalation formula, and the notice window required to prevent auto-renewal.
    
    Output structured lists only. Quote the contract text directly for each item.

    This is the prompt where Claude 3.5 Sonnet earns the extra cost over GPT-4o-mini. On a heavily negotiated enterprise software agreement with layered carve-outs and cross-references, Claude reads the inter-clause relationships more accurately. GPT-4o-mini will catch the obvious caps and indemnity triggers, but it misses carve-outs buried in definitions or in exhibit terms. For a straightforward services agreement, the cheaper model is fine. For a software license with a complex SLA exhibit, use Claude.

    4. Compare against a stated risk profile

    This is where you tell the model which side your client is on and what level of exposure they can absorb. The three profiles below are starting points — edit them to match what you actually know about the client and matter.

    Based on your analysis above, evaluate the contract from the perspective of [INSERT PROFILE — choose one or write your own]:
    
    PROFILE A — FAVORED CLIENT: My client has significant leverage and expects the contract to be tilted in their favor. Flag any provision that is less favorable than market standard for the stronger party. Flag any missing protections a stronger party would normally demand.
    
    PROFILE B — BALANCED: My client is seeking a market-standard, balanced agreement. Flag any provision that materially departs from balanced allocation of risk — in either direction. Note whether the departure favors or disfavors my client.
    
    PROFILE C — VENDOR-FAVORABLE PAPER: My client is the vendor and this is the vendor's own form. My client wants to understand what concessions may be necessary to close deals. Flag any provision that a sophisticated counterparty's counsel is likely to push back on, and note the likely pushback.
    
    For each flagged item:
    - Identify the section number and quote the relevant language.
    - State specifically how it departs from the chosen profile.
    - Rate the priority: HIGH (likely to affect deal economics or create significant exposure), MEDIUM (worth negotiating if time permits), LOW (standard ask that may not move the needle).
    
    Do not give legal advice. Do not recommend specific contract language. Output a prioritized list only.

    The priority ratings are the most useful output here — they let you scope your redline conversation with the client before you spend time drafting. In practice, HIGH items on a balanced-profile review track closely with what experienced contracts lawyers flag first. MEDIUM and LOW ratings are noisier; treat them as a checklist to eyeball, not a final word. Swap in your own profile language if the client’s situation is more specific — for example, a regulated entity with insurance constraints, or a startup with no revenue that can’t backstop an uncapped indemnity.

    5. Generate a redline rationale memo

    The final prompt turns the structured outputs from prompts 3 and 4 into a working document you can hand to a client or use as your own drafting notes before opening the redline. This is not a memo you send without reading — it’s a first draft that saves you the blank-page problem.

    Using the risk analysis and profile comparison above, draft a contract review memo structured as follows. Address the memo to [CLIENT NAME / "the client"] from [YOUR NAME / "reviewing counsel"]. Date it [DATE].
    
    SECTION 1 — OVERVIEW: Two to three sentences summarizing the nature of the agreement, the parties, and the general posture of the draft (e.g., "This is a vendor-favorable SaaS agreement. The draft contains several provisions that would require revision before execution under a balanced risk profile."). Do not be conclusory about legal enforceability.
    
    SECTION 2 — KEY ISSUES FOR CLIENT DECISION: A numbered list of the HIGH-priority items from the risk profile comparison. For each item: state what the current contract says (in plain language, not legal jargon), state why it is flagged as high priority for this client's profile, and state what outcome the client should decide on before redlining begins. Do not draft contract language. Do not advise the client what to decide.
    
    SECTION 3 — NEGOTIATION TARGETS: A numbered list of MEDIUM-priority items formatted the same way as Section 2.
    
    SECTION 4 — ITEMS TO NOTE BUT NOT PRIORITIZE: A brief list of LOW-priority items. One sentence each.
    
    SECTION 5 — QUESTIONS FOR CLIENT BEFORE REDLINE: List any factual questions that need answers before the redline can be completed (e.g., "Does the client have existing IP that should be carved out of the assignment clause?", "What is the client's insurance coverage for the indemnification obligation in Section 9.2?").
    
    Write in plain, direct language. No legal conclusions. No recommendations about what the client should sign or not sign. Format for a professional memo — not bullet-heavy, use short paragraphs within each numbered item.

    Section 5 — the questions list — is often the most valuable output of the entire sequence. It surfaces the gaps between what the contract says and what you don’t yet know about the client’s situation. On one test run against a 28-page IT services agreement, the model generated nine factual questions, seven of which were legitimate blockers to completing the redline. Two were redundant. That’s a usable signal-to-noise ratio for a first draft.

    Notes on using these prompts

    Model choice

    Claude 3.5 Sonnet (claude-3-5-sonnet-20241022 in the API, or Claude.ai Pro at $20/month) handles the nuance in prompts 3 and 4 better than any other model I’ve run this sequence against. It tracks cross-references between clauses and definitions more reliably, and it’s less likely to collapse carve-outs into the main clause when summarizing. GPT-4o is a close second for the same price tier. GPT-4o-mini at roughly one-fifteenth the API cost is useful for running prompt 1 against a stack of contracts in parallel — obligation extraction is mechanical enough that the cheaper model performs acceptably. Don’t use GPT-4o-mini for prompts 3 and 4 on anything above moderate complexity.

    Context window limits

    Claude 3.5 Sonnet’s 200k-token context window handles most MSAs without issue. GPT-4o’s 128k window is adequate for contracts up to roughly 60–70 pages of plain text. Problems start when you add exhibits. A master services agreement with three SOWs, a data processing addendum, and an acceptable use policy can push 100k tokens of contract text alone, leaving little room for the model to hold its own outputs across five prompts. If the contract exceeds 40 pages, split it: run the sequence on the core agreement, then run prompts 1 and 3 separately on each exhibit, and combine the outputs manually before running prompt 5.
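
    A rough token estimate before you paste saves a failed run. The sketch below uses the common heuristic of about 1.3 tokens per word; both constants are assumptions, and real tokenizers vary by model, so treat the outputs as ballpark figures only.

    # Back-of-envelope token budgeting for a contract plus exhibits.
    WORDS_PER_PAGE = 450      # dense single-spaced contract page (assumption)
    TOKENS_PER_WORD = 1.3     # rough heuristic; actual tokenizers vary

    def estimate_tokens(pages: float) -> int:
        return int(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

    core = estimate_tokens(35)                                # core agreement
    exhibits = sum(estimate_tokens(p) for p in (20, 15, 12))  # SOWs, DPA, AUP
    print(f"core ~{core:,}, exhibits ~{exhibits:,}, total ~{core + exhibits:,} tokens")
    # Leave generous headroom: the thread must also hold the model's own
    # outputs across all five prompts.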

    Where this workflow breaks

    Heavily amended drafts are the main failure mode. If you paste a contract that has already been through two rounds of negotiation — tracked changes accepted, bracketed alternatives still in, comments embedded — the model will misread the document. It will sometimes treat bracketed alternatives as agreed text, and it will occasionally flag a provision as present when it was in fact struck in a prior round. Clean the document before you paste it: accept all tracked changes you want the model to see, delete all comments, remove all bracketed alternatives except the current working version. This is a fifteen-minute task that prevents a half-hour of bad output.

    The other break point is contract types the models haven’t seen much of. Highly specialized agreements — certain energy contracts, bespoke financing structures, niche IP licenses with industry-specific custom terms — produce weaker results on prompts 3 and 4 because the model’s sense of “market standard” is thinner. The obligation extraction in prompts 1 and 2 still works on unusual contract types; the risk calibration in prompt 4 gets shakier. Treat the profile comparison output with more skepticism on unfamiliar paper.

    Finally: this sequence reviews a contract. It does not replace judgment about whether specific terms are acceptable for a specific client in a specific transaction. The memo from prompt 5 is a drafting aid, not a deliverable. Read every flagged item against the actual contract text before you act on it.

    Run the sequence once on a low-stakes matter you already know well. Compare the model’s output against your own notes. That calibration exercise — run once — will tell you exactly how much to trust each prompt’s output on your practice area’s typical paper.

  • The AI-Powered Client Intake Workflow Every Solo Lawyer Should Steal

    A 30-minute intake call produces a structured matter file in under five minutes of editing — if you wire up the right three tools before the call starts.

    This workflow is built for solo lawyers and firms of two to five attorneys who are personally running their own intake. You take the call, you open the matter, you chase the conflict check. Every step is manual and each one costs time you don’t have. The workflow below connects a structured intake form, a call transcription tool, and a Claude prompt to collapse that 30-minute process into about five minutes of cleanup. I’ve written it so you can implement it in an afternoon. The total recurring cost runs between $20 and $40 per month depending on which tools you already pay for.

    What You’ll Need

    • Intake form tool: Typeform (free tier works; paid plans start at $25/month) or Jotform (free tier available). Either gives you a shareable link you send before the call.
    • Transcription tool: Otter.ai (Pro plan, $16.99/month) or Fireflies.ai (Pro plan, $18/month). Both join video calls automatically and produce a searchable transcript within minutes of the call ending.
    • Claude: Claude.ai Pro ($20/month) or API access via Anthropic. Claude 3.5 Sonnet handles long transcripts without truncating the way shorter-context models do.
    • Practice management software: Clio Manage or MyCase. You’ll paste the output of the Claude prompt into a new matter note. No native integration required — this is copy-paste, not automation.

    Step 1: Build Your Pre-Call Intake Form

    Send a form link 24 hours before the scheduled call. The form does two things: it primes the prospective client to think clearly before you talk, and it gives you structured data that the Claude prompt will pull from directly.

    Fields to include

    • Full legal name
    • Date of birth
    • Phone and email
    • Adverse party name(s) — this is your conflict-check input
    • Matter type (dropdown: family, estate planning, business formation, real estate, employment, other)
    • Brief description of the situation (open text, 500-character limit)
    • Relevant dates (incident date, deadlines, filing dates they’re aware of)
    • Prior attorneys on this matter (yes/no + name field if yes)
    • How did you hear about us

    Keep the form under ten fields. Longer forms get abandoned. The goal is names, adverse parties, and a rough description — everything else comes out in the call.

    In Typeform, turn on email notifications so the completed response lands in your inbox before the call starts. In Jotform, the same setting lives under Settings → Emails. Export the response as a PDF and have it open during the call.

    Step 2: Record and Transcribe the Call

    If you’re on Zoom or Google Meet, Otter.ai and Fireflies.ai both join as a bot participant and record automatically once you connect your calendar. For phone calls, Otter’s mobile app records locally and transcribes after the fact. Fireflies handles phone recording through its dial-in number, which adds slightly more friction.

    Tell the prospective client at the start of the call that you’re recording for your notes. One sentence is enough: “I record intake calls so I can focus on listening — the recording is just for my internal file.” Most clients don’t object. Check your state bar’s rules on recording consent before you run this call the first time; a few states require two-party consent on recorded phone calls.

    After the call ends, Otter delivers a transcript and summary to your inbox within five to ten minutes. Fireflies is slightly faster. Either one produces a searchable text file — that transcript is what you feed to Claude.

    One thing to check: both tools include speaker labels, but they’re imperfect. Otter labels speakers as “Speaker 1” and “Speaker 2” unless you manually assign names. Fireflies does the same. The Claude prompt handles unlabeled speakers fine — just note in the prompt which speaker is the attorney.
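
    If the generic labels bother you, a few lines of Python will relabel the exported transcript before you paste it. This is an optional convenience sketch; the file names are placeholders, and you should confirm which speaker is which before running it.

    # Relabel Otter/Fireflies speaker tags in an exported transcript.
    labels = {"Speaker 1": "Attorney", "Speaker 2": "Client"}  # verify the mapping first

    text = open("call_transcript.txt").read()
    for old, new in labels.items():
        text = text.replace(old + ":", new + ":")

    open("call_transcript_labeled.txt", "w").write(text)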

    Step 3: Run the Claude Prompt

    Open Claude.ai Pro (or your API interface) and paste the following prompt, then paste the full transcript below it. Do not summarize the transcript yourself first — give Claude the raw text. The prompt is designed to pull structure out of unstructured conversation.

    You are a legal intake assistant helping a solo attorney organize information from a new client intake call. You do not provide legal advice or legal analysis. Your job is to extract and organize factual information from the transcript below into a structured intake brief.
    
    Using only the information in the transcript, produce the following sections:
    
    1. CLIENT INFORMATION
       - Full name
       - Contact information (phone, email) if mentioned
       - Date of birth if mentioned
    
    2. ADVERSE PARTIES
       - List every person, company, or entity the client mentioned as an opposing or adverse party
       - Include any names the attorney should check for conflicts
    
    3. MATTER TYPE AND DESCRIPTION
       - Practice area (as stated or clearly implied)
       - Neutral factual summary of the client's situation in 3-5 sentences. Do not characterize fault, liability, or legal merit. Report what the client described.
    
    4. KEY DATES AND DEADLINES
       - Any specific dates mentioned (incident dates, contract dates, filing dates, court dates)
       - Any deadlines the client is aware of
    
    5. DOCUMENTS MENTIONED
       - Any documents the client referenced (contracts, court filings, notices, deeds, etc.)
    
    6. PRIOR REPRESENTATION
       - Any prior attorneys the client mentioned in connection with this matter
    
    7. OPEN QUESTIONS
       - Information that appears missing or unclear from this intake that the attorney will likely need before opening the matter (do not suggest legal strategy — list informational gaps only)
    
    8. CONFLICT CHECK NAMES
       - A clean list of every proper name and entity name pulled from sections 1 and 2, formatted one per line, ready to copy into a conflict-check search
    
    Format each section with a clear header. Use bullet points within sections. If a section has no information from the transcript, write "Not mentioned in call."
    
    Do not add information not found in the transcript. Do not offer legal opinions. Do not speculate about outcomes.
    
    TRANSCRIPT:
    [paste full transcript here]

    The prompt takes about 90 seconds to run on Claude 3.5 Sonnet with a standard 30-minute transcript. The output is typically 400 to 600 words of clean, structured text.
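
    If you would rather script this step than paste into the web interface, the same prompt runs through the API in a few lines. A sketch, assuming the anthropic Python package, an API key in ANTHROPIC_API_KEY, and that the two file names point at your saved prompt and exported transcript:

    import anthropic

    prompt = open("intake_prompt.txt").read()        # the verbatim prompt above
    transcript = open("call_transcript.txt").read()  # Otter/Fireflies export

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,   # output runs 400-600 words, well under this cap
        messages=[{
            "role": "user",
            "content": prompt.replace("[paste full transcript here]", transcript),
        }],
    )
    print(response.content[0].text)   # paste this into the matter note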

    Tuning the prompt for your practice area

    If you run a family law practice, add a line to Section 3: “Note any minor children mentioned, their ages, and current custody arrangements as described by the client.” If you do transactional work, add a section for “Entities and Ownership” to capture business names, EINs, or ownership structures the client mentions. The base prompt above is practice-area neutral by design — specialize it once and save the modified version as a text file you reuse.

    Step 4: Merge Form Data With the AI Summary

    Claude’s output covers what was said on the call. Your Typeform or Jotform response covers what the client submitted before the call. These two documents sometimes disagree — the client wrote one adverse party name on the form and mentioned two others on the call. That gap is worth catching before you open the matter.

    Spend two to three minutes reading both documents side by side. Look specifically at: adverse party names (conflict-check section), dates (do the form dates match what was discussed), and matter type. Where they conflict, note it in the Open Questions section of the Claude output before you file it.

    Then copy the combined, lightly edited intake brief into your practice management software. In Clio, open a new Matter, go to the Notes tab, and paste it as a pinned note titled “Initial Intake Brief — [Date].” In MyCase, the equivalent is a new Case Note marked Internal. Either way, the structured brief is now searchable and attached to the matter from day one.

    Run your conflict check using the “Conflict Check Names” list from the Claude output. In Clio, that’s a global search across contacts. In MyCase, use the Conflicts search under the Contacts menu. Because the prompt formats each name on its own line, you can move through the list quickly without reformatting anything.
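
    If you want the merge itself to be mechanical, a short script can deduplicate the two name lists and flag form names that never came up on the call. A sketch with placeholder file names and invented sample form entries:

    # Merge conflict-check names from the Claude output with the form's
    # adverse-party field, deduplicated case-insensitively.
    claude_names = open("claude_conflict_names.txt").read().splitlines()
    form_names = ["Acme Holdings LLC", "R. Delgado"]   # from the Typeform/Jotform response

    def normalize(name: str) -> str:
        return " ".join(name.split()).lower()

    seen, merged = set(), []
    for name in claude_names + form_names:
        key = normalize(name)
        if key and key not in seen:
            seen.add(key)
            merged.append(name.strip())

    call_keys = {normalize(n) for n in claude_names}
    form_only = [n for n in form_names if normalize(n) not in call_keys]

    print("\n".join(merged))                            # run each through the conflict search
    print("On the form but not the call:", form_only)   # worth a second look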

    Where This Breaks

    The prompt fails predictably in one category: emotionally complex matters where the most important facts are what the client didn’t say clearly. A caller describing a contentious divorce who is guarded, interrupted, or inconsistent will produce a transcript full of fragmentary sentences and topic shifts. Claude will dutifully summarize the fragments — and the summary will read as coherent when the underlying situation is not. You’ll get a clean-looking brief that papers over real ambiguity.

    The fix is partial, not complete. Add this to the Section 7 (Open Questions) prompt instruction: “Note any topics where the client gave contradictory or incomplete information, even if you cannot resolve the contradiction.” That surfaces the gaps, but it doesn’t replace your own read of the transcript for anything emotionally charged — grief, trauma, estrangement, or financial desperation. Read the raw transcript for those matters. The brief is a starting point, not a substitute.

    A second failure mode is proper noun recognition. Otter and Fireflies both mis-transcribe uncommon names — a client named “Dzhokhar” becomes “Joker” in the transcript, which flows through to the conflict-check list. Scan the names list before you run the conflict search. One missed name in a conflict check is a genuine problem; catching it takes 60 seconds.

    Third: this workflow assumes the client completed the pre-call form. When they don’t — which happens with roughly one in four prospective clients in my observation — the merge step in Step 4 collapses to just the Claude output, which is still useful, but the conflict-check list is thinner. You can prompt the client to complete the form during the call or ask for the adverse party names directly. Either way, note in the file that the pre-call form was not received.

    What This Saves You

    The honest estimate: 20 to 25 minutes per new matter. The manual version of this process — handwritten notes, typed summary, conflict-check name assembly — runs 25 to 35 minutes after a 30-minute call for most solo practitioners. The automated version runs five to seven minutes (three minutes reading and editing the Claude output, two minutes on the conflict-check list, two minutes pasting into Clio or MyCase).

    If you take 10 new matters per month, that’s three to four hours returned to billable work or to leaving the office earlier. It also reduces the most common intake error: forgetting to run a conflict check on every name the client mentioned, not just the obvious adverse party. The structured output makes that step harder to skip.

    The pre-call form adds a side benefit that doesn’t show up in time estimates: clients who complete it arrive at the call more organized. The call itself often runs shorter.

    This workflow costs roughly $37 per month in new tool spend if you don’t already pay for any of the components (Typeform’s free tier, Otter Pro at $16.99, and Claude Pro at $20), rising toward $65 if you add a paid form tier. If you already have a transcription tool through your video conferencing plan, or you’re already on Claude, the incremental cost is lower. At 10 new matters a month, the math on three reclaimed hours isn’t complicated.

    Build it once on a slow afternoon. Run it on the next intake call. Adjust the prompt after the first three uses when you see what it misses for your specific practice area. The structure is there from day one; the tuning takes a week.

  • Lexis+ AI vs Westlaw Precision vs CoCounsel: The 2026 Legal Research AI Showdown for Small Firms

    If you’re already paying for Westlaw or Lexis, the AI add-on is almost certainly the right move. If you’re not, CoCounsel standalone is the most affordable way in — and it holds up better than you’d expect.

    Three tools now dominate the legal research AI conversation for small firms: Lexis+ AI (LexisNexis’s AI layer on top of its research platform), Westlaw Precision with CoCounsel (Thomson Reuters’s integrated pairing), and CoCounsel Core as a standalone subscription for attorneys who don’t have a Westlaw seat. I spent several weeks running research queries, citation checks, brief analysis tasks, and deposition prep workflows through all three. The verdict is not “one wins.” It depends almost entirely on what you’re already paying for and what your practice looks like.

    How We Compared Them

    Five criteria: research depth (how far the AI digs before surfacing an answer), citation accuracy (whether the cases it cites actually say what it claims), hallucination rate (cases that don’t exist or quotes that are wrong), drafting and brief analysis features, and deposition prep. Pricing is based on current published rates and direct vendor conversations as of early 2026 — not list-price brochures, which are almost universally useless for solo and small-firm buyers.

    Lexis+ AI

    What It Is and How It’s Positioned

    Lexis+ AI is not a separate product. It’s a module layered on top of a Lexis+ subscription, which means you’re paying for the underlying research platform first, then unlocking the AI features on top. LexisNexis pitches this as seamless — you’re already in Lexis, the AI is just there in the sidebar. In practice, that’s mostly accurate. The integration is the tightest of the three, and if your research workflow already lives in Lexis, the learning curve is close to zero.

    Research Depth and Citation Accuracy

    Lexis+ AI pulls from the full Lexis corpus — cases, statutes, secondary sources, law review articles — and the answers cite directly into the platform so you can click through immediately. That link-through is genuinely useful. I ran a batch of state-specific contract law queries and federal circuit-split questions. The AI surfaced relevant cases at a rate I’d estimate at 80–85% relevance on the first pass. Citation accuracy was high on well-indexed federal cases. It degraded on older state appellate decisions, where I caught two instances of a case being cited for a slightly different proposition than the opinion actually supported. Not fabricated cases — real cases, wrong summary. That’s a subtler problem than hallucination and arguably harder to catch if you’re moving fast.

    Hallucination Rate

    Lower than I expected. In approximately 40 research queries across contract, employment, and family law matters, I found zero fully fabricated citations. That’s not a guarantee — it’s a sample — but the grounding in the actual Lexis database appears to do real work here. The bigger risk is the “close but wrong” citation problem described above, not outright invention.

    Drafting, Brief Analysis, and Deposition Prep

    Lexis+ AI includes a document drafting tool and a brief analyzer. The brief analyzer reads an uploaded brief and identifies weaknesses, missing authority, and counterarguments. I ran three briefs through it. It caught a missing controlling authority in one case that I’d actually overlooked — that alone would have justified the session. On deposition prep, Lexis+ AI can generate question outlines from uploaded documents, which is functional but not deeply sophisticated. It works better as a checklist scaffold than as a strategic tool.

    Pricing

    Here’s the honest picture as of early 2026: Lexis+ AI is not available à la carte. You need a Lexis+ subscription, which for a solo attorney runs roughly $250–$400/month depending on practice area package, and Lexis+ AI adds approximately $100–$150/month on top of that. Small firms of 2–5 attorneys are typically looking at per-seat pricing in that same AI add-on range. LexisNexis does negotiate — solo and small-firm rates are almost always lower than list if you ask, and annual contracts bring the monthly cost down. If you’re already on Lexis+, ask your rep specifically about the AI module add-on cost; it’s often bundled at a discount during renewal.

    Westlaw Precision with CoCounsel

    What It Is and How It’s Positioned

    Thomson Reuters acquired CoCounsel (formerly Casetext) in 2023 and has since integrated it directly into Westlaw Precision, its premium research tier. The result is the most tightly integrated AI-plus-research product currently available. Westlaw Precision is the research platform; CoCounsel is the AI layer that sits inside it. You don’t context-switch. You run a research query, get AI-assisted answers grounded in Westlaw’s database, and can drop directly into KeyCite to check citation validity. For attorneys who already think in Westlaw, this is the closest thing to a natural extension of existing workflow.

    Research Depth and Citation Accuracy

    Westlaw’s database coverage is widely regarded as the most comprehensive in the market — more secondary sources, better historical depth on state court decisions, and KeyCite remains the gold standard for citation validation. When CoCounsel is running on top of that corpus, the research output reflects it. I ran the same contract law and circuit-split queries I used for Lexis+ AI. Westlaw Precision with CoCounsel returned slightly more relevant secondary sources and was notably better on older state court material. Citation accuracy was the highest of the three tools I tested: across the same 40-query set, I found one instance of a case being cited for a proposition it only partially supported, against Lexis+ AI’s two instances — a small-sample edge rather than a definitive gap, but consistent with Westlaw’s stronger coverage of older state material.

    Hallucination Rate

    Also effectively zero fabricated citations in my testing. The grounding-in-database approach that both TR and LexisNexis use is clearly doing its job. The residual risk — again — is nuanced misrepresentation of what a real case holds, not invention of fake ones. Westlaw’s KeyCite integration makes it faster to spot-check, which is a genuine workflow advantage.

    Drafting, Brief Analysis, and Deposition Prep

    CoCounsel inside Westlaw Precision is the strongest performer on drafting and deposition prep of the three tools. The deposition prep feature — where you upload documents and the AI generates question outlines organized by topic and witness — is noticeably more structured than Lexis+ AI’s equivalent. I ran a deposition prep session on a commercial dispute matter and got a 47-question outline organized by theme, with document citations for each question cluster. The brief analysis feature identifies missing authority, flags unsupported propositions, and — usefully — suggests counterarguments opposing counsel might raise. On drafting, CoCounsel’s contract and motion drafting handles context better than the other two when given a prior brief as a style reference.

    Pricing

    Westlaw Precision is Thomson Reuters’s premium tier, and it’s priced accordingly. Solo attorneys are typically looking at $350–$500/month for Westlaw Precision — the CoCounsel integration is included in that tier, not an additional line item. That’s meaningful: you’re not paying extra for the AI once you’re on Precision. Firms of 2–10 attorneys see per-seat pricing in the same range. The catch is that Westlaw Precision costs more than standard Westlaw, and the jump from standard Westlaw to Precision is itself an add-on cost. If you’re on standard Westlaw today, the upgrade to Precision (with CoCounsel) is worth pricing out from your rep.

    CoCounsel Core (Standalone)

    What It Is and How It’s Positioned

    CoCounsel Core is the standalone version of CoCounsel — available without a Westlaw subscription. Thomson Reuters maintains it as a separate product for attorneys who want the AI drafting, research assistance, and document analysis features but aren’t paying for Westlaw. It does not include full Westlaw database access. For research, it uses a more limited corpus. For drafting and document-focused tasks — contract review, deposition prep from uploaded documents, brief analysis — it draws on the uploaded file rather than a live legal database, which changes what it can and can’t do.

    Research Depth and Citation Accuracy

    This is where CoCounsel Core’s standalone positioning shows its limits. Without full Westlaw database access, research queries return shallower results. The AI can still surface cases and statutes, but the coverage is narrower than either of the database-backed versions. For attorneys who use CoCounsel Core primarily for document-based tasks and handle research through a separate (often less expensive) research subscription or free tools like Google Scholar, this limitation is manageable. For attorneys expecting full research depth, it’s a real gap.

    Hallucination Rate

    Higher than the database-integrated versions — but not dramatically so. Without a live legal database to ground every citation, there’s more room for the model to generate plausible-sounding but incorrect case citations. I found two instances of citations that required careful verification across a 30-query set, compared to zero outright fabrications in either database-backed product. The practical implication: with CoCounsel Core, treating every case citation as unverified until you’ve checked it in a separate tool is the right workflow, not optional due diligence.

    Drafting, Brief Analysis, and Deposition Prep

    This is where CoCounsel Core earns its place. For document-based tasks that don’t require live database access, it performs at essentially the same level as the integrated version. Deposition prep from uploaded transcripts and documents is strong. Brief analysis on uploaded briefs — identifying gaps, unsupported assertions, potential weaknesses — is solid. Contract review and drafting assistance work well. A solo attorney running a transactional or litigation practice who handles their research separately can get genuine value from CoCounsel Core without paying for a full Westlaw seat.

    Pricing

    CoCounsel Core is the most accessible price point of the three. Thomson Reuters has positioned it at approximately $100/month for solo attorneys as of early 2026. Small-firm pricing scales per seat but remains below the combined cost of either database-plus-AI pairing. For a solo attorney not on Westlaw or Lexis, this is the lowest-cost entry into professional-grade legal AI — and for document-heavy practices, it’s not a compromised version of the product. It’s a different product with a different scope.

    Side-by-Side

    • Research depth: Westlaw Precision + CoCounsel > Lexis+ AI > CoCounsel Core (standalone)
    • Citation accuracy: Westlaw Precision + CoCounsel (best) ≈ Lexis+ AI > CoCounsel Core
    • Hallucination rate: Westlaw Precision + CoCounsel and Lexis+ AI both near-zero fabrications; CoCounsel Core slightly higher — verify everything
    • Drafting quality: CoCounsel (both versions) > Lexis+ AI — CoCounsel handles context and style reference better
    • Brief analysis: All three functional; Westlaw Precision + CoCounsel most comprehensive on counterargument identification
    • Deposition prep: CoCounsel (both versions) clearly ahead — more structured output, better document-to-question logic
    • Solo monthly cost (approximate, early 2026): CoCounsel Core ~$100 | Lexis+ AI ~$350–$550 total (platform + AI) | Westlaw Precision + CoCounsel ~$350–$500
    • AI cost as a separate line item: Lexis+ AI = yes, additional ~$100–$150/month on top of Lexis+; Westlaw Precision = no, CoCounsel included in Precision tier; CoCounsel Core = the full product price
    • Works without existing platform subscription: CoCounsel Core only

    Picking the Right One

    You’re already on Lexis+ and like it. Add Lexis+ AI. The integration is tight, the citation accuracy is solid, and you’re not building a new workflow from scratch. Ask your rep specifically about renewal bundle pricing for the AI module — the number is negotiable. Do not accept the list rate without asking.

    You’re already on Westlaw (any tier). Price the upgrade to Westlaw Precision. If the per-month delta between your current Westlaw cost and Precision is under $150, it’s almost certainly worth it. CoCounsel’s deposition prep and brief analysis features alone have saved time on matters where I would have otherwise spent two to three hours building question outlines by hand. The integrated KeyCite-plus-AI workflow is the best research experience of the three.

    You’re not on Westlaw or Lexis and you’re price-sensitive. Start with CoCounsel Core at ~$100/month. Pair it with Google Scholar or a lower-cost research subscription for citation verification. You’ll need to verify citations more carefully than with the database-integrated versions, but for document-heavy work — deposition prep, contract review, brief analysis — the standalone product is genuinely capable. Revisit Westlaw Precision in 12 months when you have a clearer picture of how much AI-assisted research you actually need.

    You’re a firm of 5–15 attorneys with a mixed research diet. The per-seat math starts to favor Westlaw Precision + CoCounsel at this scale, especially if your associates are doing substantial research volume. The research depth advantage is real, and KeyCite integration removes a verification step that matters when you’re supervising work product from multiple timekeepers. Lexis+ AI is a legitimate alternative if your firm has a long-standing Lexis relationship and your IT setup is already built around it.

    One thing none of these tools replaces: a lawyer reading the cases. Citation accuracy being high is not the same as legal reasoning being sound. Every one of these products will surface real cases with real citations and still occasionally miss the controlling authority in your jurisdiction, misread the procedural posture of a decision, or present a four-factor test as settled when it’s actually a circuit minority position. Use the output as a research accelerator, not a research substitute.

  • Spellbook for Solo Lawyers: A Two-Week Test of the AI Contract Review Tool

    Spellbook handles routine NDA and MSA review faster than doing it by hand — but throw a heavily redlined draft or an exhibit-heavy agreement at it and the wheels come off.

    Spellbook is a Microsoft Word add-in that reads your contract, flags clause gaps, suggests redlines, and explains what it’s flagging in plain language. It’s built on GPT-4-class models and priced for law firms, not enterprise procurement teams. I ran it for two weeks on a mix of NDAs, MSAs, and SOWs — the bread-and-butter of a transactional solo — to find out whether it earns the monthly fee or just performs well in demos. The short answer: it earns it if you review contracts regularly. It doesn’t if you don’t.

    What It Does

    Spellbook lives in a sidebar inside Microsoft Word. You open a contract, open the sidebar, and Spellbook reads the document. From there it does three things: it flags clauses that are unusual or missing, it offers suggested language to replace or strengthen those clauses, and it answers questions about the document in a chat interface. All of this happens without leaving Word.

    The clause-flagging is the core feature and it’s genuinely good on clean drafts. On a standard mutual NDA, Spellbook caught a missing residuals clause, flagged an unusually broad definition of “Confidential Information” that lacked a standard carve-out for publicly available information, and noted that the term “Affiliate” was used twice but never defined. That’s exactly the kind of boilerplate gap that’s easy to miss on a Friday afternoon, and catching it took about forty seconds.

    The redline suggestion feature works the same way: click a flagged clause, and Spellbook offers replacement language. The suggestions are templated but adjustable — you can tell it “make this more favorable to my client, who is the vendor” and it rewrites accordingly. The quality is good enough to use as a first draft, not good enough to accept without reading.

    The chat interface lets you ask document-specific questions: “Does this agreement include an auto-renewal clause?” or “What’s the limitation of liability cap?” It pulls answers from the actual document text, not from general knowledge. On clean contracts, this was accurate. On contracts longer than about 30 pages, it started missing things — more on that below.

    Spellbook also runs what it calls a “playbook” review: you can load a standard set of preferred positions and it checks the contract against those positions automatically. Setting up a playbook takes some initial investment, but once it’s configured, it runs on every new document without extra prompting.

    Where It Actually Fits

    The sweet spot is a solo transactional attorney — or a small firm where one or two attorneys handle a steady flow of commercial contracts — who reviews NDAs, MSAs, SOWs, or vendor agreements multiple times a week. If you’re looking at five or more contracts a week, Spellbook pays for itself in time saved on first-pass review. The clause-flagging catches enough real issues fast enough that it shortens the first read meaningfully.

    For NDAs specifically, Spellbook is close to ideal. NDAs are structurally consistent enough that the model’s training shows: it knows what should be there, flags what isn’t, and the suggested language is close to usable. I ran eight NDAs through it over two weeks and it found something worth flagging in seven of them. Most of those were things I’d have caught anyway — but Spellbook caught them in the first sixty seconds, before I’d done my own read.

    MSAs with clean structure — a base agreement and one or two order forms, no exhibits attached — also work well. The model handles defined-term tracking better than I expected. It flagged two instances in one MSA where “Services” was used in a section that defined the scope, but the exhibit was supposed to govern scope instead, creating a potential conflict. Useful catch.

    The playbook feature fits well for solos who represent the same side of a transaction repeatedly — always the vendor, always the SaaS company, always the contractor. Load your preferred positions once and Spellbook runs those checks automatically. That saves real time compared to building a mental checklist every time.

    Practice areas beyond transactional commercial work get thinner. Employment agreements, commercial leases, and IP assignments work reasonably well because the structures are common enough that the model recognizes them. Anything more specialized — complex finance documents, healthcare agreements with regulatory-specific clauses — showed less confident suggestions and more generic flags.

    Where It Breaks

    Heavily redlined drafts broke it for me consistently. When a contract has three or four rounds of tracked changes from multiple parties still embedded — all visible in Word — Spellbook gets confused about which version of the text to analyze. I ran one MSA that had been through two rounds of opposing counsel redlines and Spellbook flagged a clause as missing that was actually present in an accepted redline two paragraphs up. It was reading the document as if the redline layer didn’t exist. This is a real workflow problem because most contracts that need careful review are exactly the ones with heavy markup.

    The workaround is to accept all changes, save a clean copy, and run Spellbook on that. That works, but it adds a manual step and means you’re not reviewing the document in the state your client actually sent or received it.

    Exhibit-heavy MSAs were the other consistent failure mode. When an MSA had three or four attached exhibits — a Statement of Work template, a Data Processing Addendum, a Security Exhibit — Spellbook would analyze the base agreement without meaningfully integrating the exhibit content. It flagged “no data processing terms found” in one agreement where the DPA was a separate exhibit on the next page. The tool is analyzing the document section it can see, not the agreement as a whole when exhibits are substantively separate files or appendices.

    Long documents slow the suggestions down noticeably. Anything over 25–30 pages and the chat answers started lagging by five to ten seconds. Not a dealbreaker, but noticeable when you’re moving fast.

    The suggested redline language is templated enough that it occasionally reads as generic. On one SOW, the suggested scope-limitation language was so standard it didn’t account for the specific services described in the document. I used it as a starting point and rewrote it in about two minutes, but “starting point” is the accurate description — not “finished clause.”

    Spellbook also requires Microsoft Word. If your firm runs on Google Docs or if opposing counsel sends PDFs that you work in natively, you’ll need to convert first. That friction is minor but real. There is no Google Docs version as of this writing.

    What It Costs and What You Get

    Spellbook’s pricing is seat-based and billed annually. As of mid-2025, a solo seat runs $149 per month, billed annually ($1,788 per year). That’s the standard tier, which includes unlimited document reviews, the clause-flagging and suggestion features, and the chat interface.

    The playbook feature — loading your own preferred positions and running them automatically — is included in the standard tier, not gated behind a higher plan. That’s worth noting because playbooks are what make the tool genuinely faster for a solo who handles repeat transaction types.

    There is a higher-tier plan (pricing available on request) that adds team collaboration features, admin controls, and usage analytics. For a true solo, the standard tier is the right tier. The team features add overhead you don’t need when you’re the only reviewer.

    Spellbook offers a free trial — 14 days as of this writing — and the trial is full-featured, not limited to toy documents. Running the trial on real matters from your current workload is the right way to evaluate it. Running it on sample contracts tells you almost nothing about whether it fits your practice.

    At $149 per month for a solo, the math is straightforward: at an effective hourly rate of $200, the subscription costs less than one billable hour per month. If Spellbook saves you even one hour of first-pass review per week, it pays for itself several times over. If you review fewer than two or three contracts a week, the calculus gets harder.
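
    The same breakeven arithmetic, as a sketch you can rerun with your own measured numbers once the trial ends. Every constant below is a placeholder.

    SEAT_COST_PER_MONTH = 149.00
    EFFECTIVE_HOURLY_RATE = 200.00
    HOURS_SAVED_PER_WEEK = 1.0   # from your own trial log, not the vendor's claim

    breakeven_hours = SEAT_COST_PER_MONTH / EFFECTIVE_HOURLY_RATE   # ~0.75 hr/month
    monthly_value = HOURS_SAVED_PER_WEEK * 4.33 * EFFECTIVE_HOURLY_RATE  # ~4.33 weeks/month

    print(f"Breakeven: {breakeven_hours:.2f} reclaimed hours per month")
    print(f"At {HOURS_SAVED_PER_WEEK:.1f} hr/week saved: ${monthly_value:.0f}/month vs ${SEAT_COST_PER_MONTH:.0f} cost")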

    Verdict

    Use it if you’re a transactional solo or a small firm handling commercial contracts regularly — NDAs, MSAs, vendor agreements, SOWs — and you want a faster first-pass review without hiring a second set of eyes. The clause-flagging is accurate enough on clean drafts to save real time, and the playbook feature compounds that value once you’ve set it up for your standard transaction types.

    Skip it if you’re primarily a litigator, if your transactional work is occasional rather than routine, or if your practice runs on Google Docs. The Word dependency is a real constraint and the monthly cost doesn’t make sense below roughly two to three contract reviews per week.

    Wait six months if your typical workflow involves heavily redlined multi-party drafts or exhibit-heavy agreements that run past 30 pages. Spellbook is aware of these limitations — the tracked-changes issue in particular is something the product team has acknowledged — but as of this writing those gaps are real enough to affect daily use on complex matters.
