Tag: ai-tools

  • What the 2026 ABA TechReport Says About Small-Firm AI Adoption (And What to Actually Do About It)

    The 2026 ABA TechReport shows AI adoption climbing fast — but the headline numbers are mostly BigLaw’s story. Here’s what the data actually says for a solo or small firm, and what to do about it this week.

    Every year the ABA TechReport lands and every year the same thing happens: law firm marketing teams quote the top-line adoption number, vendors repitch their most expensive SKUs, and the solo lawyer in a two-person family-law firm closes the tab. The 2026 report is more of the same — except the gap between the BigLaw AI budget and the small-firm reality is now wide enough to be worth talking about directly. AI spending at Am Law 100 firms is up sharply. Meaningful tooling for a five-attorney plaintiff’s practice? Still thin. This piece is about that gap, what the data underneath the headline actually shows, and the cheapest credible path forward for a firm of 1–10 attorneys.

    What the 2026 Report Actually Found — and Who It Found It For

The report’s headline: AI tool adoption among attorneys crossed 60% for the first time. That number is real. It is also skewed hard by firm size. When you filter to solo practitioners, the adoption rate drops to the mid-30-percent range. Firms of 2–9 attorneys sit in the low 40s. The firms pushing the aggregate number past 60% are firms with 100-plus attorneys, dedicated IT staff, and vendor contracts that cost more per seat per year than a solo’s entire software budget.

    The report also tracks what attorneys are using. At large firms, the dominant tools are Harvey, CoCounsel Enterprise, and Microsoft 365 Copilot deployed org-wide. At small firms, the most commonly cited tools are ChatGPT (usually the free tier or Plus), Google Gemini, and whatever AI feature their existing practice management software quietly shipped in the last 12 months. Those are not the same category of tool. Comparing adoption rates across those two groups as if they represent the same phenomenon is misleading.

    One number that doesn’t skew by firm size: attorney anxiety about AI competence obligations. Across all firm sizes, concern about keeping up with the technology — and with bar guidance on its use — is roughly uniform. Solos worry about it as much as partners at midsize firms. That’s the one place the report’s aggregate number actually means something for a small-firm reader.

    The Price-Point Problem the Report Doesn’t Name Directly

    Harvey starts at pricing that isn’t published but is widely reported in the $500–$1,000+ per-seat-per-month range for firm contracts. CoCounsel’s small-firm tier has come down, but you’re still looking at $100/month per seat at minimum, often more depending on the plan. Spellbook sits around $150–$200/month for a solo seat. Those prices are defensible if the tool reliably saves you two or three billable hours a month. They are not defensible if you haven’t yet proven to yourself that AI-assisted drafting actually saves you time in your specific practice.
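The “two or three billable hours a month” threshold is easy to sanity-check with arithmetic. A minimal sketch, with placeholder figures — substitute your own billing rate and the vendor’s actual quoted price, and note that the raw break-even understates the real bar, since it ignores the verification time AI output requires:

```python
# Break-even check: how many billable hours per month must a tool save
# to pay for itself? All figures are placeholders, not quoted prices.

def breakeven_hours(monthly_cost: float, billable_rate: float) -> float:
    """Hours of billable time a tool must save each month to cover its cost."""
    return monthly_cost / billable_rate

# Example: a $150/month seat at a $250/hour billing rate.
print(round(breakeven_hours(150, 250), 2))  # 0.6 hours/month to break even
```

The raw number will usually look small; the honest comparison is against the $20/month alternative, not against zero.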

    The report notes that ROI measurement at small firms is almost nonexistent. Fewer than 15% of solo and small-firm respondents said they tracked time saved against AI tool cost in any systematic way. That’s not a moral failure — it’s a bandwidth problem. But it means most small-firm AI spending is faith-based. Anecdote drives the purchase; no one counts the hours afterward.

    The vendors aren’t incentivized to fix this. A tool that’s hard to evaluate is a tool that’s hard to cancel. The practical consequence for a small-firm reader: you need to do the measurement yourself before you commit to a premium tier, because no one else is going to do it for you.

    What the Report Gets Right About Small-Firm Risk

    Two findings cut through the noise. First: the attorneys who report the highest satisfaction with AI tools are the ones who use them for a narrow, repeatable task — not as a general-purpose assistant across all work. The report’s phrasing is different, but the underlying data is clear. Trying to use an AI tool for everything produces mediocre results everywhere. Picking one document type, one workflow, one prompt you refine over time — that’s where the satisfaction numbers climb.

    Second: hallucination concern remains the top barrier to adoption at small firms, and it’s not irrational. A solo running a 200-matter caseload doesn’t have a team of associates to catch a fabricated citation. The report confirms that attorneys who build a verification step into their AI workflow — meaning they treat AI output as a first draft that requires checking, not a finished product — report significantly fewer quality problems. That’s a workflow design point, not a technology point. The tool doesn’t prevent hallucinations. Your process has to.

    Neither of these findings requires you to spend money. They’re workflow principles that apply whether you’re using a $20/month tool or a $500/month one.

    What I’d Actually Do About This

    Start at the lowest credible price point and measure. Here’s the specific sequence that makes sense for a solo or firm under 10 attorneys.

    Step 1: Run a $20/month tool for 30 days on one task

    Claude Pro ($20/month) and ChatGPT Plus ($20/month) are genuinely capable for legal drafting assistance, first-pass research summaries, and correspondence drafts. Pick one. Pick one task — demand letters, lease review summaries, deposition prep outlines, whatever you do repeatedly. Run every instance of that task through the tool for 30 days. Before each one, note your baseline time. After, note actual time with the tool. Thirty days, one task, one number at the end: minutes saved per matter.

    If the number is zero or negative, stop. You’ve spent $20 to learn something useful. If the number is positive, you now have a defensible basis for either continuing at $20/month or evaluating whether a more expensive tool would produce a bigger delta on that same task.

    Step 2: Check what your practice management software already includes

    Clio, MyCase, and PracticePanther have all shipped AI features in the last 18 months. Most are included in existing subscriptions at the mid-tier and above. Clio Duo handles matter summaries and draft correspondence. MyCase’s AI assistant touches document drafting and client communication. If you’re already paying for these platforms, you may have AI features you haven’t turned on. Check your subscription tier before spending anything new. The capabilities are narrower than a standalone tool, but the marginal cost is zero.

    Step 3: Only upgrade to Spellbook or CoCounsel if the delta is clear

    Spellbook is purpose-built for contract review and drafting inside Microsoft Word. If you do transactional work — business contracts, commercial leases, employment agreements — and you’re already in Word all day, Spellbook earns its price point faster than a general model will. CoCounsel (from Thomson Reuters, built on GPT-4 class models) is stronger on legal research summarization and has deeper integration with Westlaw if you’re a Westlaw subscriber. Both are worth trialing — both offer trial periods — but only after you’ve established in Step 1 that AI drafting assistance saves you meaningful time. Paying $150–$200/month to discover you don’t actually use AI tools consistently enough to matter is an expensive way to learn something you could have learned for $20.

    Step 4: Avoid Harvey-tier spending at this firm size

    Harvey is built for large-firm deployment: large document sets, high-volume due diligence, org-wide rollout with IT support. At a solo or small firm, you’re paying for infrastructure you can’t use. The per-seat cost is structured around large-firm contract negotiations. There is no meaningful scenario where a solo practitioner or a firm of five attorneys needs Harvey over a well-configured Spellbook or CoCounsel setup — and even those are only justified once you’ve done the measurement in Steps 1 and 2.

    The 2026 TechReport’s implicit message, if you read past the headline adoption numbers, is that the legal AI market is bifurcating. BigLaw is buying enterprise tools and absorbing the cost into hourly rates. Small firms are adopting more cautiously and measuring less. The cautious adoption is rational. The lack of measurement is the part worth fixing. Pick one tool, pick one task, track the hours. That’s the entire strategy.

    The Bottom Line

    The 2026 ABA TechReport confirms that AI adoption is up and that BigLaw is driving most of the interesting numbers. For a solo or small firm, the actionable takeaway is simple: start at $20/month, measure one task for 30 days, and don’t spend $150–$500/month until you can prove the cheaper tier isn’t doing the job. The technology is real. The ROI is not guaranteed. Every vendor in this space wants you to believe the premium tool is the responsible choice — but responsible means measuring first. The bar guidance on AI competence is real too, and it cuts toward knowing what your tools actually do, not toward spending more on them.

    Related reading

  • 10 ChatGPT Prompts Every Solo Lawyer Should Save (Tested on Real Matters)

    These ten prompts took me from blank page to usable first draft on actual client matters — intake calls, demand letters, deposition prep, and everything in between. Save them now; tweak the variables later.

    Every solo lawyer I talk to has the same complaint: too many tasks, not enough time, and AI tools that sound impressive until you actually try them on a real matter. The prompts below were built for ChatGPT (GPT-4o) and tested across family law, employment, and small-business transactional matters. They are not magic. They produce first drafts, not final work product. But a solid first draft that takes three minutes instead of forty-five minutes is the whole point.

    A few ground rules before you start. Never paste full client names, Social Security numbers, or identifying case details into a public AI tool. Use placeholders like [CLIENT], [OPPOSING PARTY], and [MATTER TYPE]. If your firm uses Microsoft Copilot or a privacy-partitioned ChatGPT Enterprise account, you have more flexibility — but check your bar’s current guidance on client data and AI tools before you do anything. These prompts work best as templates you adapt, not scripts you run verbatim.

    1. Intake Call Summary into a Structured Brief

    When to use it: Right after an intake call. You have rough notes or a transcript from a call-recording tool like Otter.ai or Fireflies. You need a clean, structured brief to open a new matter file.

    What to expect: A structured output with labeled sections — parties, key facts, potential claims, open questions, and recommended next steps. The model is good at pulling signal from messy notes. It will occasionally hallucinate a “fact” that wasn’t in your notes, so read it against your source before filing it anywhere.

    You are a legal assistant helping a solo attorney organize intake notes.
    
    Below are rough notes from a new client intake call. Convert them into a structured brief with these sections:
    1. Parties (client name placeholder, opposing party placeholder, any other relevant persons)
    2. Core Facts (bullet list, chronological where possible)
    3. Potential Claims or Issues (list only — do not evaluate likelihood)
    4. Documents Mentioned or Needed
    5. Open Questions for Follow-Up
    6. Suggested Next Steps
    
    Do not add facts not present in the notes. Flag anything unclear with [UNCLEAR].
    
    Intake notes:
    [PASTE YOUR NOTES HERE]

Tweaks: Add a seventh section called “Conflicts Check Names” and ask the model to pull every person and entity name mentioned — that feeds directly into prompt #2. If you handle a specific practice area, add “Practice area: [AREA]” so the model can weight its issue-spotting accordingly.

    2. First-Pass Conflict Check from a Party List

    When to use it: You’ve got a new matter and a list of parties. You want a quick cross-reference against your existing client list before your conflicts-check software runs its full scan — or if you don’t have dedicated conflicts software.

    What to expect: The model will flag name matches, near-matches, and related entities. This is a first pass, not a complete conflicts check. Your malpractice carrier and bar rules require a real process — this prompt helps you surface obvious problems faster.

    You are a legal assistant running a first-pass conflicts check for a solo attorney.
    
    New matter parties:
    [LIST ALL PARTIES, ENTITIES, AND KEY PERSONS FROM THE NEW MATTER]
    
    Existing client and adverse party list:
    [PASTE YOUR CURRENT CLIENT/ADVERSE PARTY LIST — USE PLACEHOLDERS IF NEEDED]
    
    Tasks:
    1. Flag any exact name matches between the two lists.
    2. Flag any likely near-matches (similar names, abbreviations, DBAs).
    3. Flag any entities that share a name root with a listed party.
    4. List any names from the new matter that do NOT appear on the existing list (for your records).
    
    Format the output as a table with columns: New Matter Party | Match Found | Match Type | Notes.

    Tweaks: This prompt only works as well as the list you feed it. Keep a running CSV of client and adverse party names in a note or document you can paste quickly. If your existing list is long, break it into chunks — GPT-4o handles roughly 25,000 words of context, but accuracy degrades near the ceiling.
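The chunking advice above is mechanical enough to script. A hypothetical helper, assuming a plain list of party names and a word budget you tune to your tool’s practical limit (the 5,000-word default here is an assumption, chosen to stay well under the ceiling):

```python
# Hypothetical helper: split a long client/adverse-party list into
# batches that stay under a word budget, so each conflicts-check prompt
# gets a chunk the model can handle without degrading near its limit.

def chunk_by_words(names: list[str], max_words: int = 5000) -> list[list[str]]:
    """Split a list of party names into chunks under a word budget."""
    chunks, current, count = [], [], 0
    for name in names:
        words = len(name.split())
        if current and count + words > max_words:
            chunks.append(current)
            current, count = [], 0
        current.append(name)
        count += words
    if current:
        chunks.append(current)
    return chunks

# Usage: run the conflicts prompt once per chunk, then combine results.
parties = ["Smith Holdings LLC", "Jane Doe", "Acme Widget Co."] * 4000
batches = chunk_by_words(parties)
print(f"{len(parties)} names -> {len(batches)} prompt-sized batches")
```

The same approach works for any of the prompts that choke on long pasted material.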

    3. Demand Letter Draft from a Fact Pattern

    When to use it: You have a settled fact pattern and a clear demand amount. You need a professional demand letter drafted before you spend thirty minutes staring at a blank template.

    What to expect: A complete letter with opening statement of representation, fact recitation, legal basis section (labeled as general — you’ll fill in controlling authority), demand, and deadline. The model writes competent prose. It will not cite your jurisdiction’s specific statutes correctly without prompting, so always check cites before sending.

    You are a legal assistant drafting a demand letter for a solo attorney.
    
    Facts:
    [SUMMARIZE THE CORE FACTS — WHO DID WHAT, WHEN, AND WHAT HARM RESULTED]
    
    Jurisdiction: [STATE]
    Practice area: [E.G., EMPLOYMENT / PERSONAL INJURY / CONTRACT]
    Demand amount: $[AMOUNT] or [DESCRIBE RELIEF SOUGHT]
    Response deadline: [NUMBER] days
    
    Draft a professional demand letter. Use formal tone. Include:
    - Opening paragraph identifying the attorney and client (use [ATTORNEY NAME] and [CLIENT] as placeholders)
    - Factual background section
    - Legal basis section — flag where jurisdiction-specific statutes or case law should be inserted with [INSERT AUTHORITY]
    - Clear statement of demand
    - Response deadline and consequence of non-response
    
    Do not invent legal citations. Use [INSERT AUTHORITY] wherever a cite is needed.

    Tweaks: Add “Tone: [firm but professional / aggressive / conciliatory]” to the prompt to shift the letter’s posture. For employment matters, add the employer’s size if known — it affects which statutes apply and the model will note that in the [INSERT AUTHORITY] placeholders.

    4. Deposition Outline from Case Documents

    When to use it: You have a deponent, a set of documents, and not enough time to build a line-by-line outline from scratch. Paste in the relevant excerpts — discovery responses, prior statements, key emails — and let the model draft your question framework.

    What to expect: A topical outline with suggested question areas, document tie-ins, and impeachment flags. The model is strong on organizing themes and weak on jurisdiction-specific deposition procedure. Expect to add foundation questions and objection-anticipation notes yourself.

    You are a legal assistant helping a solo attorney prepare for a deposition.
    
    Deponent: [ROLE — E.G., "Defendant employer's HR director" — no real names]
    Matter type: [E.G., wrongful termination / breach of contract]
    Key issues in dispute: [LIST 3-5 CORE DISPUTED FACTS OR LEGAL ELEMENTS]
    
    Documents provided (paste excerpts below):
    [PASTE RELEVANT EXCERPTS — REDACT IDENTIFYING INFO AS NEEDED]
    
    Create a deposition outline organized by topic. For each topic:
    1. State the goal of that topic section (what you are trying to establish or undermine)
    2. List 5-8 suggested open-ended questions
    3. Note any document the attorney should introduce during that section
    4. Flag any prior statements in the documents that could be used for impeachment
    
    Do not suggest legal strategy. Flag factual inconsistencies in the documents with [INCONSISTENCY NOTE].

    Tweaks: If you have a prior deposition transcript from the same witness in another matter, paste selected excerpts and add “Flag any statements inconsistent with the documents above.” The model handles cross-document comparison reasonably well within a single context window.

    5. Engagement Letter Customization

    When to use it: You have a master engagement letter template and need to adapt it for a specific matter type, fee arrangement, or client situation without rewriting the whole thing manually.

    What to expect: The model will insert the right variables, flag clauses that may not fit the matter type, and suggest additions you might have missed. It will not flag jurisdiction-specific requirements you haven’t told it about — you still need to know what your state bar requires in an engagement letter.

    You are a legal assistant helping a solo attorney customize an engagement letter.
    
    Base template:
    [PASTE YOUR ENGAGEMENT LETTER TEMPLATE]
    
    Matter details:
    - Matter type: [E.G., estate planning / civil litigation / business formation]
    - Fee arrangement: [E.G., flat fee $X / hourly at $X / contingency at X%]
    - Scope of representation: [DESCRIBE WHAT IS AND IS NOT INCLUDED]
    - Any special terms: [LIST ANY CLIENT-SPECIFIC ARRANGEMENTS]
    
    Tasks:
    1. Insert the matter-specific details into the appropriate places in the template.
    2. Flag any clauses in the template that may not fit this matter type with [REVIEW THIS CLAUSE].
    3. Suggest any standard clauses that appear to be missing for this matter type, labeled [SUGGESTED ADDITION].
    4. Do not change any clause language without flagging the change clearly.
    
    Output: The revised letter with all changes marked in [BRACKETS].

Tweaks: Run this with Claude 3.5 Sonnet if you want more conservative, flag-heavy output — Claude tends to over-flag, which is actually useful for compliance review. GPT-4o tends to write more fluently but flag less aggressively.

    6. Chronology Builder from Emails and Notes

    When to use it: You have a pile of emails, text summaries, and scattered notes and need a clean timeline. Works for breach-of-contract disputes, employment matters, domestic cases — anywhere a clear sequence of events matters.

    What to expect: A date-ordered table or list with source attribution. The model is good at pulling dates and sequencing events. It will occasionally misread ambiguous date formats (MM/DD vs. DD/MM) — flag that in the prompt if your documents mix formats.

    You are a legal assistant building a factual chronology for a solo attorney.
    
    Below are excerpts from emails, notes, and documents related to a single matter. Extract every datable event and build a chronology.
    
    Output format: A table with columns — Date | Event Description | Source | Significance Flag
    
    Rules:
    - Use the exact date from the source if available. If only a month/year is given, note that.
    - If a date is ambiguous or inferred, mark it [INFERRED DATE].
    - Significance Flag: mark events as [KEY] if they appear directly relevant to the core dispute; mark [BACKGROUND] for context events.
    - Do not add events not supported by the source material.
    - If two events appear to conflict in the record, flag both with [CONFLICT].
    
    Source material:
    [PASTE EMAILS, NOTES, AND EXCERPTS HERE — REDACT IDENTIFYING INFO]

    Tweaks: For long document sets, run this in batches by time period and then ask the model to merge and de-duplicate the resulting tables. Ask it to “merge the following two chronology tables, removing duplicate entries and resolving conflicts where the same event appears twice with different dates.”

    7. Settlement Agreement Plain-Language Summary for the Client

    When to use it: You’ve negotiated a settlement and need to explain it to a client who is not a lawyer. You want a summary that covers what they’re agreeing to, what they’re giving up, and what happens next — without the legalese.

    What to expect: A clean, readable summary organized by what the client receives, what the client must do, what the client cannot do after signing, and key dates. The model handles plain-language conversion well. Do not send this summary to the client in place of the actual agreement — it’s a companion document you review with them.

    You are a legal assistant helping a solo attorney explain a settlement agreement to a client in plain language.
    
    Settlement agreement text:
    [PASTE THE SETTLEMENT AGREEMENT — REDACT NAMES IF NEEDED]
    
    Write a plain-language summary for the client. Use simple sentences. No legal jargon without a plain-English explanation in parentheses.
    
    Organize the summary into these sections:
    1. What You Are Getting (payments, actions, other relief)
    2. What You Must Do (release of claims, confidentiality obligations, other duties)
    3. What You Cannot Do After Signing (restrictions, non-disparagement, non-compete if applicable)
    4. Important Dates and Deadlines
    5. What Happens If Either Side Doesn't Follow Through
    
    End with a short paragraph reminding the client to ask their attorney any questions before signing.
    
    Do not interpret ambiguous clauses — flag them with [ASK YOUR ATTORNEY ABOUT THIS].

    Tweaks: Adjust reading level with “Write at a 7th-grade reading level” or “Write for a sophisticated business client.” The model handles both well. If the agreement is long, paste it in sections and ask for section-by-section summaries first, then ask for a consolidated summary.

    8. Interrogatory Response First Draft

    When to use it: Opposing counsel has served interrogatories. You have your client’s answers in rough form — notes from a call, a client-filled questionnaire, bullet points. You need a properly formatted first draft before you do the real lawyering.

    What to expect: Formally formatted responses with proper headers, general objections section, and individual responses. The model will draft objections only if you give it grounds — it won’t invent them. You will need to review every objection for jurisdictional validity and every substantive response for accuracy. This prompt saves formatting time, not judgment time.

    You are a legal assistant helping a solo attorney draft interrogatory responses.
    
    Jurisdiction: [STATE / FEDERAL — DISTRICT IF FEDERAL]
    Case type: [E.G., employment discrimination / breach of contract]
    
    Interrogatories served:
    [PASTE THE INTERROGATORIES]
    
    Client's rough answers (as provided — do not treat these as verified):
    [PASTE THE CLIENT'S NOTES OR QUESTIONNAIRE ANSWERS]
    
    Draft formal interrogatory responses. Follow this structure:
    - Standard caption and introduction (use [CASE CAPTION] placeholder)
    - General Objections section — include only objections supported by these grounds: [LIST ANY GROUNDS YOU WANT INCLUDED, E.G., "overbroad," "unduly burdensome," "attorney-client privilege"]
    - Individual responses keyed to each interrogatory number
    - Where the client's answer is incomplete, draft the response to reflect what was provided and add [ATTORNEY: CONFIRM/SUPPLEMENT]
    - Where no client answer was provided, write [NO RESPONSE PROVIDED — ATTORNEY ACTION REQUIRED]
    
    Do not add substantive information the client did not provide.

    Tweaks: If you want the model to draft privilege-specific objections, add the privilege basis and a brief description of what you’re protecting. Never let the model guess at privilege — it will get it wrong.

    9. Objection-Letter Style Review of Opposing Counsel Correspondence

    When to use it: Opposing counsel sent a letter with factual characterizations, legal positions, or demands. You want a structured breakdown before you respond — what they claimed, what’s disputable, what’s accurate, and what they may be setting up.

    What to expect: A point-by-point analysis of the letter’s claims, flagging factual assertions, legal conclusions, and rhetorical moves separately. This is a thinking tool, not a draft response. It’s genuinely useful for clearing your head before you pick up the phone or start typing.

    You are a legal assistant helping a solo attorney analyze a letter from opposing counsel.
    
    Letter from opposing counsel:
    [PASTE THE LETTER]
    
    Your client's matter context (brief summary only):
    [2-3 SENTENCES ON THE MATTER — NO PRIVILEGED DETAIL]
    
    Analyze the letter with the following breakdown:
    1. Factual Claims — List each factual assertion made in the letter. For each, note whether it appears accurate, disputable, or unverifiable based on the context provided.
    2. Legal Positions — Identify any legal conclusions or theories asserted. Flag these as [LEGAL POSITION — ATTORNEY REVIEW NEEDED].
    3. Implicit Threats or Posturing — Note any implied threats, deadlines, or strategic positioning.
    4. Demands — List all explicit demands, including response deadlines.
    5. Suggested Response Points — For each factual claim marked disputable, note what a response might address. Do not draft the response itself.
    
    Do not evaluate the legal merit of positions — flag them for attorney review.

    Tweaks: This prompt works well as a second pass after you’ve already read the letter yourself. Run it after forming your own initial reaction and compare the model’s breakdown to your instincts — the gaps are usually informative.

    10. End-of-Week Matter Status Email to a Client

    When to use it: Friday afternoon. You have five active matters and five clients who haven’t heard from you since Tuesday. You have notes on what happened this week. You need five short emails in twenty minutes.

    What to expect: A professional, warm, appropriately brief client update email. The model writes competent client-facing prose without the wooden formality of a form letter. You’ll still need to fact-check every line against your actual matter status — the model only knows what you tell it.

    You are a legal assistant helping a solo attorney write a client status update email.
    
    Matter context:
    - Matter type: [E.G., pending litigation / contract negotiation / estate plan]
    - Current stage: [E.G., discovery / drafting / awaiting opposing party response]
    - What happened this week: [BRIEF BULLET POINTS]
    - What is happening next: [NEXT 1-2 STEPS]
    - Any action needed from client: [YES/NO — IF YES, DESCRIBE]
    - Tone: [PROFESSIONAL AND WARM / FORMAL / CASUAL — CLIENT'S PREFERENCE]
    
    Write a brief client update email (150-250 words). 
    - Address the client as [CLIENT FIRST NAME].
    - Sign as [ATTORNEY NAME].
    - Do not include specific dollar amounts, legal conclusions, or strategic assessments.
    - End with a clear statement of what the client should do next, if anything.
    - Do not use legal jargon without a plain-English explanation.

    Tweaks: Build a simple text file with your five active matters’ bullet-point status each Friday afternoon and run this prompt five times in a row. Takes about fifteen minutes total once you have the habit. Some attorneys batch this in a single prompt asking for all five emails at once — results are slightly lower quality but still usable.

    Notes on Using These Prompts

Model Choice: GPT-4o vs. Claude 3.5 Sonnet

I ran all ten prompts on both GPT-4o (via ChatGPT Plus) and Claude 3.5 Sonnet (via claude.ai Pro). Short verdict: GPT-4o produces more fluent, polished prose — better for the demand letter, the client email, and the plain-language settlement summary. Claude 3.5 Sonnet is more conservative and flags more aggressively — better for the engagement letter review and the interrogatory draft, where over-flagging is a feature, not a bug. For the conflict check and chronology, they perform comparably. Neither is accurate enough on jurisdiction-specific legal cites to skip your own review.

    Customization Variables to Build In

    Every prompt above has bracket variables. The ones worth standardizing across your practice:

    • [JURISDICTION] — Add this to every prompt. It doesn’t guarantee accurate statutory cites, but it steers the model’s general framing correctly.
    • [PRACTICE AREA] — Narrows the model’s issue-spotting. Without it, you get generic output.
    • [TONE] — Matters more than you’d expect on client-facing documents. Define your client communication style once and paste it in.
    • [ATTORNEY REVIEW NEEDED] — Keep this flag language consistent across all prompts so you know at a glance what the model flagged when you’re editing.
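If you do keep the flag language consistent, you can scan a draft for everything the model flagged before you start editing. An illustrative helper (not from the prompts above — the flag list and sample draft are invented):

```python
# Illustrative sketch: surface every line of model output that carries
# one of your standardized bracket flags, so nothing flagged for
# attorney review slips past during editing. Flag list is an example.

FLAGS = ["[ATTORNEY REVIEW NEEDED]", "[UNCLEAR]", "[INSERT AUTHORITY]",
         "[INFERRED DATE]", "[CONFLICT]"]

def flagged_lines(draft: str) -> list[tuple[int, str]]:
    """Return (line_number, line) for every line containing a known flag."""
    hits = []
    for i, line in enumerate(draft.splitlines(), start=1):
        if any(flag in line for flag in FLAGS):
            hits.append((i, line.strip()))
    return hits

draft = """Dear Counsel,
Our client disputes that characterization. [INSERT AUTHORITY]
The termination occurred on or about [INFERRED DATE] March 2024."""
for num, line in flagged_lines(draft):
    print(f"line {num}: {line}")
```

A plain Ctrl+F for each flag does the same job; the script just makes it one pass instead of five.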

    Where These Prompts Break

    The conflict check breaks when your existing client list is inconsistently formatted — the model can’t catch what it can’t parse. The deposition outline breaks on highly technical expert matters where the model lacks domain context. The demand letter breaks when the legal theory is novel or jurisdiction-specific enough that [INSERT AUTHORITY] placeholders dominate the whole legal basis section — at that point, you’re writing from scratch anyway. The interrogatory draft breaks when client answers are vague or contradictory, because the model fills gaps with plausible-sounding content it doesn’t actually know. Every prompt breaks on long documents that exceed the context window — split them.

    One Hard Rule

    These prompts produce first drafts. You edit, verify, and sign. If a line in the output doesn’t match your actual knowledge of the matter, cut it. The model doesn’t know your client. You do.

    Save these to a note, a doc, or a snippet manager like TextExpander or Raycast Snippets. The ten minutes you spend organizing them now will pay back within the first week you use them.

    Related reading

  • How to Cut Billable-Hour Friction with AI Time Tracking (No New Software Required)

    You are already doing the work. This workflow makes sure you get paid for it.

    Most solo lawyers aren’t losing billable time because they’re lazy about tracking — they’re losing it because logging happens hours after the work, memory compresses a 40-minute call into a six-word entry, and back-to-back matters blur into a single undifferentiated afternoon. Studies on attorney time capture consistently land in the same neighborhood: 15–30% of billable work never makes it onto an invoice. This workflow fixes that without adding a dedicated time-tracking app to your monthly overhead. You need a transcription tool you may already have (Otter.ai or Fireflies.ai), your existing calendar, and a single Claude prompt you run once at end of day. The output drops into whatever practice management software you already use — Clio, MyCase, PracticePanther, or a spreadsheet if that’s where you are.

    What You’ll Need

    • Otter.ai (Pro plan, $16.99/month) or Fireflies.ai (Pro plan, $18/month) — either works; Fireflies has slightly better Zoom/Teams auto-join, Otter is easier for in-person dictation via phone
    • Claude (claude.ai, Pro plan at $20/month, or API access if you want to automate later) — the prompt below was written and tested on Claude 3.5 Sonnet
    • Your existing calendar (Google Calendar or Outlook) — you’ll export or copy today’s event list
    • Your existing practice management software’s time entry screen — open it to receive the output
    • A matter-code list: a simple text list of your active matters and their billing codes, which you’ll paste into the prompt

    Step 1: Get Transcription Running in the Background

    The entire workflow depends on raw transcript text. Nothing fancy happens here — you are just making sure something is capturing words while you work.

    For calls and video meetings

    Connect Fireflies to your Google Meet, Zoom, or Teams calendar so it auto-joins every meeting. The first time it appears as “Notetaker,” alert participants that the meeting is being transcribed — check your state’s consent rules before you do this at all. One-party consent states give you more latitude on internal calls; two-party states mean you need explicit verbal acknowledgment before the bot stays in the room. Fireflies lets you configure a custom bot name (I use “LFB Notetaker”) so it looks less like a surveillance tool and more like a deliberate choice.

    For in-person work, research, and drafting time

    Open the Otter mobile app and hit record at the start of a drafting session or in-person client meeting. You don’t need to narrate every keystroke. Talking through what you’re doing — “starting review of the indemnification clause in the Smith MSA, flagging the liability cap” — gives Claude enough context to write a real time entry later. Even a 30-second verbal summary at the end of a task (“done with that, probably 45 minutes”) is enough. Otter’s transcripts are available in the app and exportable as plain text.

    Collect transcripts at end of day

    From Fireflies: go to Meetings, select each transcript from today, copy the full text or use the “Export as TXT” option. From Otter: open each conversation, hit the three-dot menu, and export as text. Paste all of today’s transcripts into a single plain-text document. Label each block with a rough time — “10:15 AM — Zoom call” — if the export doesn’t include timestamps. This takes under five minutes once it’s habitual.
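If the nightly copy-paste gets tedious, the labeling step is easy to script. A hypothetical helper, assuming you already have each export as plain text alongside a rough time label:

```python
def combine_transcripts(blocks):
    """Join (time_label, text) transcript blocks into one labeled document.

    blocks: list of tuples like ("10:15 AM - Zoom call", "raw transcript...").
    Returns a single string ready to paste into the end-of-day prompt.
    """
    return "\n\n".join(
        f"=== {label} ===\n{text.strip()}" for label, text in blocks
    )
```

Run it over today's exports and paste the single combined string into the prompt in Step 3.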

    Step 2: Pull Your Calendar for the Day

    In Google Calendar, click the day view and copy the text of your appointments. In Outlook, use the “Today” view and do the same. You want event names, times, and any notes you added. This is the skeleton the AI uses to attribute time chunks to matters when transcripts are thin or missing. A calendar entry that reads “Garcia deposition prep — 2:00 PM – 4:00 PM” gives Claude a two-hour anchor even if you didn’t record anything during that block.

    Do not skip this step on days when you recorded everything. Calendar entries catch the gaps: the 20-minute call you took off-app, the courthouse run you forgot to narrate, the email sprint that never got a recording started.

    Step 3: Run the End-of-Day Claude Prompt

    Open Claude and paste the following prompt. Fill in the bracketed sections before sending. The prompt is long on purpose — Claude performs substantially better on time-entry tasks when it has explicit formatting rules and examples to follow rather than open-ended instructions.

    You are a legal billing assistant helping a solo attorney draft time entries for the day. You do NOT give legal advice. Your job is to convert raw transcript text and calendar entries into properly formatted billable time entries.
    
    ACTIVE MATTERS (billing codes and short names):
    [PASTE YOUR MATTER LIST HERE — e.g.:
      - 2024-047 / Garcia v. Hendricks (litigation)
      - 2024-061 / Patel Business Formation (transactional)
      - 2024-058 / Nguyen Estate Plan (estate)
      - ADMIN / Non-billable internal tasks]
    
    TODAY'S CALENDAR:
    [PASTE YOUR CALENDAR ENTRIES HERE — include event name, start time, end time, and any notes]
    
    TODAY'S TRANSCRIPTS:
    [PASTE ALL TRANSCRIPT TEXT HERE — label each block with approximate time if possible]
    
    ---
    
    INSTRUCTIONS:
    1. Review the calendar entries and transcripts together.
    2. For each identifiable block of work, draft one time entry in this exact format:
       - Matter: [billing code / matter name]
       - Date: [today's date]
       - Time (hours): [round to nearest 0.1]
       - Description: [one sentence, active voice, specific — what was done, not just "worked on matter"]
    3. If a transcript block clearly belongs to a specific matter, assign it. If you are not certain, flag it as [ATTRIBUTION UNCERTAIN] and explain briefly why.
    4. Do not invent work that is not supported by the calendar or transcripts.
    5. Do not combine entries from different matters into one entry.
    6. After the entries, add a section called "Gaps and Flags" that lists: (a) any calendar blocks with no transcript support, (b) any transcript content you could not attribute to a matter, and (c) any entries where the time estimate feels imprecise.
    7. Keep descriptions under 20 words. Write them in the style used in legal billing — e.g., "Reviewed indemnification clause; drafted revision and sent to client for approval."
    
    OUTPUT FORMAT:
    Return a numbered list of draft time entries followed by the Gaps and Flags section. Do not add commentary between entries.

    Claude will return a numbered list of draft entries and a flags section. The flags section is the part most lawyers skip — don’t. It surfaces the gaps where time walked out the door.
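The tools list above notes that API access makes automation possible later. If you go that route, the fill-in step collapses into a script that assembles the prompt from its three inputs. A minimal sketch; the template below is abbreviated, and the function name is mine, not part of any Claude API:

```python
# Abbreviated template -- paste the full prompt text from Step 3 here,
# keeping {matters}, {calendar}, and {transcripts} as the fill-in slots.
PROMPT_TEMPLATE = """You are a legal billing assistant helping a solo attorney \
draft time entries for the day.

ACTIVE MATTERS (billing codes and short names):
{matters}

TODAY'S CALENDAR:
{calendar}

TODAY'S TRANSCRIPTS:
{transcripts}

[... instructions 1-7 and OUTPUT FORMAT from the full prompt above ...]"""

def build_billing_prompt(matters, calendar, transcripts):
    """Fill the three variable sections; send via the API or paste manually."""
    return PROMPT_TEMPLATE.format(
        matters="\n".join(f"- {m}" for m in matters),
        calendar=calendar.strip(),
        transcripts=transcripts.strip(),
    )
```

Even if you never automate the API call, keeping the template in one file means you never retype the instructions.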


    Step 4: Review, Edit, and Enter

    Claude’s draft entries are a starting point, not a finished product. Plan for a five-to-ten minute review pass. What you’re checking: matter attribution accuracy, time estimates that feel off, and descriptions vague enough to draw a billing dispute.

    Fix attribution errors first

    On multi-matter days, Claude occasionally assigns work to the wrong matter when two clients share an industry or a topic — “reviewed contract clause” can land on the wrong billing code if both your open matters involve contract review. The [ATTRIBUTION UNCERTAIN] flag catches the obvious ones, but scan all entries. You know your matters; Claude doesn’t.

    Adjust time estimates

    Claude derives time from transcript timestamps and calendar blocks. If a calendar block says 60 minutes but you wrapped in 35, change it. If a transcript from a “30-minute check-in” runs 47 minutes of actual content, adjust upward. The AI is giving you a scaffolding, not an invoice.
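The 0.1-hour rounding from the prompt is also easy to sanity-check yourself. A small helper, with the caveat that whether you round up or to nearest is a billing-policy choice (check your engagement terms), not a technical one:

```python
import math

def to_billable_hours(minutes, increment=0.1, round_up=True):
    """Convert raw minutes into decimal hours at a billing increment.

    round_up=True rounds up to the next increment (a common convention);
    round_up=False rounds to nearest, matching the prompt's rule.
    """
    units = (minutes / 60) / increment
    units = math.ceil(units) if round_up else round(units)
    return round(units * increment, 2)
```

So the 47-minute "30-minute check-in" from the example above bills as 0.8 hours either way, and the 35-minute block that the calendar said was an hour bills as 0.6.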

    Enter into your practice management software

    Copy each approved entry into Clio, MyCase, PracticePanther, or wherever you track time. Most practice management platforms have a quick-add time entry screen that takes under 30 seconds per entry when the description is already written. You are not re-drafting from memory — you are pasting and confirming. That is the entire efficiency gain.

    Step 5: Build the Habit Loop

    This workflow produces diminishing results if transcripts are inconsistent. The reliable version runs every single day, not just on busy ones. Set a recurring calendar event at 5:00 PM — “Time entry review, 10 min.” That block keeps the habit alive. After two weeks it compresses to six minutes. After a month the transcript collection is reflexive and the prompt run takes under four minutes of active attention.

    If you want to reduce the manual copy-paste, Fireflies has a Zapier integration that can push transcripts to a Google Doc automatically. You can then keep a running daily doc and paste the whole thing into Claude at end of day rather than exporting individual transcripts. That setup takes about 20 minutes to configure once and saves two to three minutes daily.

    Where This Breaks

    Phone calls without recording consent. This is the most common gap. If you practice in a two-party consent state and forget to get acknowledgment before a call, you get no transcript for that call. The calendar entry will show up in the Gaps and Flags section, but Claude can only estimate — it has no content to work from. The fix is a verbal habit: “Just so you know, I may be recording this call for my notes — is that okay?” said in the first 15 seconds. If a client declines, take a 30-second voice memo immediately after you hang up describing what was covered.

    Multi-matter days with thin context. When you have five active matters and a day full of short, topic-overlapping calls, Claude’s attribution guesses degrade. “Discussed indemnification” does not uniquely identify a matter when you have three open transactional files. Narrating the client name or matter reference number into your voice notes at the start of each session eliminates most of this. “Starting Garcia call” at the top of a transcript is enough context for reliable attribution.

    Transcription errors on legal terms. Both Otter and Fireflies occasionally garble case names, statute citations, and proper nouns. “Promissory estoppel” becomes “promissory a stopple.” This matters less than you might think for time entries — the AI is extracting intent and duration, not quoting the transcript verbatim — but it can confuse attribution when a client name gets mangled. Scan the flags section; that’s where garbled attributions surface.

    Privacy and confidentiality obligations. Transcripts contain client information. Otter and Fireflies store data on their servers. Before you run this workflow, check your state bar’s guidance on cloud storage of client data and review each vendor’s data processing terms. Claude processes data through Anthropic’s API; the Pro plan’s privacy settings default to not training on your inputs, but confirm that in your account settings before pasting client-identifying information. Some attorneys use matter codes rather than client names in transcripts specifically to reduce exposure — a reasonable precaution.

    What This Saves You

    The honest estimate: for a solo billing 25–30 hours per week, recapturing time equal to 15–20% of those hours means three to five additional billable hours per week. At $250/hour, that is $750–$1,250 per week that was already earned but never invoiced. The workflow costs under $40/month in tools (if you don’t already have Otter or Fireflies) and roughly 10 minutes per workday once the habit is set. The math is not subtle.
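Made explicit, using the article's figures plus one labeled assumption (48 working weeks) for the annualized view:

```python
rate = 250                       # example hourly rate from the text
extra_low, extra_high = 3, 5     # recovered billable hours per week
weekly_low = rate * extra_low    # $750
weekly_high = rate * extra_high  # $1,250
annual_low = weekly_low * 48     # assumption: 48 working weeks -> $36,000
annual_high = weekly_high * 48   # -> $60,000
```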

    Beyond revenue, the descriptions Claude drafts are longer and more specific than what most attorneys write under time pressure. Better descriptions mean fewer billing disputes, faster client approval, and a cleaner paper trail if a fee is ever challenged. That is a secondary benefit, but it compounds over a full year of billing files.

    This workflow will not work for every attorney on every day. Phone-heavy practices in two-party consent states will see smaller gains without the voice-memo habit. Attorneys with highly irregular schedules who forget to start recordings will get patchy transcripts and patchy output. But for a solo who runs a relatively consistent day of calls, drafting, and client meetings, this is the most direct path from “I think I billed about six hours today” to “I have eight verified entries in my practice management software and I know exactly what they cover.”

    Related reading

  • Document Automation with Claude and Microsoft Word: A Walkthrough for Small Firms

    Document Automation with Claude and Microsoft Word: A Walkthrough for Small Firms

    This workflow turns a single Word clause library and a Claude prompt template into a repeatable drafting machine — cutting 10 to 20 minutes off every engagement letter, NDA, demand letter, and fee agreement you produce.

    The workflow is built for solo attorneys and firms of two to ten lawyers who are drafting the same five to eight document types repeatedly and doing it mostly by hand. You don’t need a document automation platform, a monthly SaaS subscription, or a developer. You need a Claude account (the Pro tier works fine), Microsoft Word, and about two hours to set it up the first time. After that, each document runs in under five minutes of AI time, plus your review pass.

    What you’ll need

    • Claude Pro ($20/month at claude.ai) — the claude.ai web interface is sufficient; API access is optional but speeds things up if you want to go further later.
    • Microsoft Word — desktop version, Microsoft 365 or a perpetual license. The workflow works with the web version but the macro/style features described below require the desktop app.
    • One master clause library document — a single .docx file you’ll build in Step 1.
    • A plain-text intake form — a short list of matter variables (client name, jurisdiction, date, deal type, etc.) you fill out before each run. A Word table or a Notepad file both work.

    Step 1: Build your clause library in one Word document

    The clause library is the foundation. Without it, Claude drafts from general training data — which produces serviceable but generic language you’ll spend time rewriting anyway. With it, Claude assembles from clauses you’ve already vetted.

    Create a new Word document called CLAUSE_LIBRARY_MASTER.docx. Organize it with Word Heading 1 styles for document type and Heading 2 for each clause. A minimal starting library looks like this:

    • Engagement Letters: scope of representation, fee structure (hourly / flat / contingency variants), billing cycle and invoice terms, communication expectations, termination by client, termination by firm, conflict waiver carve-out, file retention notice.
    • NDAs: definition of confidential information, exclusions, permitted disclosures, term and termination, return/destruction of materials, remedies clause, governing law placeholder.
    • Demand Letters: opening statement of representation, factual background placeholder, legal basis paragraph (tort / contract / statutory variants), demand and deadline paragraph, reservation of rights, closing.
    • Fee Agreements: scope reference, rate schedule, retainer mechanics, billing increment, late payment, lien notice, dispute resolution over fees.

    Each clause entry should be the actual text you’d use — not a description of it. Pull from your best current templates. Flag jurisdiction-specific language with a bracketed tag like [JURISDICTION: CA only] or [JURISDICTION: TX only]. That tag will matter in Step 3.

    Keep every clause under 150 words. Long multi-part clauses should be split. When you paste library content into a Claude prompt later, shorter chunks give the model cleaner assembly instructions.
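If you want a quick way to audit the library for oversized clauses, export it as plain text and parse the headings. A sketch assuming Heading 1 exports as `# Title` and Heading 2 as `## Clause name`; adjust the markers to match whatever your export actually produces:

```python
def parse_clause_library(text):
    """Parse a plain-text export into {doc_type: {clause_name: body}}."""
    library, doc_type, clause = {}, None, None
    for line in text.splitlines():
        if line.startswith("## ") and doc_type:
            clause = line[3:].strip()
            library[doc_type][clause] = ""
        elif line.startswith("# "):
            doc_type = line[2:].strip()
            library[doc_type] = {}
            clause = None
        elif clause:
            library[doc_type][clause] += line + "\n"
    return library

def oversized_clauses(library, limit=150):
    """Flag clauses over the word limit recommended above."""
    return [
        (doc_type, name)
        for doc_type, clauses in library.items()
        for name, body in clauses.items()
        if len(body.split()) > limit
    ]
```

Run `oversized_clauses` whenever you add to the library; anything it flags is a candidate for splitting.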

    Step 2: Create your intake variable sheet

    Before you run any prompt, fill out a matter intake sheet. This is just a list of variables. Keep it in a Word table at the top of each new matter folder, or in a pinned Notepad file you overwrite each time. The fields below cover all four document types — you’ll only use the relevant subset per document run.

    • CLIENT_NAME
    • CLIENT_ENTITY_TYPE (individual / LLC / corporation / etc.)
    • MATTER_TYPE (engagement letter / NDA / demand letter / fee agreement)
    • JURISDICTION (state)
    • GOVERNING_LAW_STATE (if different from jurisdiction)
    • ATTORNEY_NAME
    • FIRM_NAME
    • DATE
    • FEE_STRUCTURE (hourly at $X / flat fee of $X / contingency at X%)
    • BILLING_INCREMENT (e.g., 0.1 hour)
    • RETAINER_AMOUNT
    • OPPOSING_PARTY (demand letters only)
    • CLAIM_SUMMARY (demand letters only — two to four sentences, plain language)
    • DEMAND_AMOUNT (demand letters only)
    • RESPONSE_DEADLINE (demand letters only)
    • NDA_PARTIES (both party names and entity types)
    • NDA_PURPOSE (one sentence)
    • NDA_TERM (months/years)
    • SPECIAL_INSTRUCTIONS (anything that overrides standard clauses)

    Filling this out takes two to three minutes. That time investment is what makes the Claude output usable on the first pass rather than the third.
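The intake sheet can double as a pre-flight check before you run the prompt. A sketch; the required-field groupings below are my reading of the list above, not a canonical mapping:

```python
# Assumed minimum fields per document type -- extend to taste.
REQUIRED = {
    "engagement letter": ["CLIENT_NAME", "JURISDICTION", "FEE_STRUCTURE", "RETAINER_AMOUNT"],
    "nda": ["NDA_PARTIES", "NDA_PURPOSE", "NDA_TERM", "GOVERNING_LAW_STATE"],
    "demand letter": ["OPPOSING_PARTY", "CLAIM_SUMMARY", "DEMAND_AMOUNT", "RESPONSE_DEADLINE"],
    "fee agreement": ["CLIENT_NAME", "FEE_STRUCTURE", "BILLING_INCREMENT"],
}

def missing_fields(intake):
    """Return intake fields required for this MATTER_TYPE that are absent or blank."""
    doc_type = intake.get("MATTER_TYPE", "").lower()
    return [f for f in REQUIRED.get(doc_type, []) if not intake.get(f)]
```

Anything the check returns would otherwise surface as an unfilled variable in Claude's DRAFTING NOTES; catching it two minutes earlier saves a re-run.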


    Step 3: The Claude prompt template

    This is the core of the workflow. The prompt does three things: it tells Claude what document to produce, it feeds in your vetted clause library text, and it tells Claude exactly how to handle anything the library doesn’t cover. Run this at claude.ai with Claude 3.5 Sonnet (the default model as of mid-2025). Paste the filled-in intake variables where indicated.

    You are a document drafting assistant for a law firm. Your job is to assemble a [MATTER_TYPE] using the clause library I provide below. Follow these rules exactly:
    
    1. Use ONLY the clauses I provide in the CLAUSE LIBRARY section. Do not invent new legal language.
    2. Where a clause contains a [JURISDICTION: XX only] tag, include that clause ONLY if the jurisdiction in the matter variables matches. Otherwise, omit it and note the omission in brackets at the end of the document.
    3. Wherever you see a variable in ALL_CAPS in the clause text (e.g., CLIENT_NAME, FEE_STRUCTURE), replace it with the corresponding value from the MATTER VARIABLES section below.
    4. If the clause library does not contain a clause needed for this document type, insert a bracketed placeholder: [CLAUSE NEEDED: describe what's missing] — do not write the clause yourself.
    5. Output the assembled document in clean paragraph form, ready to paste into Word. Use clear section headings. Do not include commentary, footnotes, or explanations in the body — those go in a separate DRAFTING NOTES section at the end.
    6. After the document body, include a DRAFTING NOTES section listing: (a) any jurisdiction-specific clauses that were omitted because the library didn't have a match, (b) any [CLAUSE NEEDED] placeholders you inserted, (c) any variable fields you could not fill because the intake form was incomplete.
    
    ---
    
    MATTER VARIABLES:
    CLIENT_NAME: [paste value]
    CLIENT_ENTITY_TYPE: [paste value]
    MATTER_TYPE: [paste value]
    JURISDICTION: [paste value]
    GOVERNING_LAW_STATE: [paste value]
    ATTORNEY_NAME: [paste value]
    FIRM_NAME: [paste value]
    DATE: [paste value]
    FEE_STRUCTURE: [paste value]
    BILLING_INCREMENT: [paste value]
    RETAINER_AMOUNT: [paste value]
    [add or remove fields as relevant to this document type]
    SPECIAL_INSTRUCTIONS: [paste value or "none"]
    
    ---
    
    CLAUSE LIBRARY:
    [Paste the relevant section(s) of your CLAUSE_LIBRARY_MASTER.docx here. For an engagement letter, paste the Engagement Letters section. For an NDA, paste the NDA section. Keep the Heading 2 labels so Claude can reference them.]
    
    ---
    
    Produce the [MATTER_TYPE] now.

    A few notes on this prompt. The “do not invent new legal language” instruction is the most important line. Without it, Claude will helpfully fill gaps with plausible-sounding clauses that you haven’t reviewed. The bracketed placeholder approach — [CLAUSE NEEDED: describe what's missing] — surfaces those gaps cleanly so your review pass catches them immediately.

    The DRAFTING NOTES section at the end is genuinely useful. It acts as a checklist. If Claude flags “omitted retainer lien notice — no California version in library,” that’s your signal to add a California version to the clause library before the next matter.
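Rules 2 and 3 of the prompt (jurisdiction tags and ALL_CAPS substitution) are mechanical enough to verify locally, either before a run or as a check on the output. A sketch using the article's tag format; note the variable regex will also catch unrelated acronyms like LLC, which simply surface in the unresolved list:

```python
import re

TAG = re.compile(r"\[JURISDICTION:\s*([A-Z]{2})\s*only\]")
VAR = re.compile(r"\b([A-Z][A-Z_]{2,})\b")   # ALL_CAPS tokens, 3+ chars

def preflight(clause_text, jurisdiction, variables):
    """Return (include, filled_text, unresolved) for one library clause."""
    tag = TAG.search(clause_text)
    if tag and tag.group(1) != jurisdiction:
        return False, "", []           # rule 2: omit non-matching clause
    text = TAG.sub("", clause_text)    # drop the tag from matching clauses
    unresolved = []

    def fill(match):                   # rule 3: substitute known variables
        name = match.group(1)
        if name in variables:
            return str(variables[name])
        unresolved.append(name)        # unknown token: leave it, report it
        return match.group(0)

    return True, VAR.sub(fill, text), unresolved
```

Anything in the unresolved list should also appear in Claude's DRAFTING NOTES; if the two disagree, trust the local check.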

    Step 4: Move the output into Word and run your review pass

    Claude outputs clean plain text. Copy the document body (everything above the DRAFTING NOTES section) and paste it into a new Word document. Use Paste Special → Keep Text Only, then apply your firm’s styles. This takes about 90 seconds.

    If your firm uses a branded Word template (.dotx file), open that template first, paste into it, and your headers, fonts, and margins apply automatically. For firms that haven’t built a .dotx template yet: build one. It takes an hour once and saves formatting time on every document you produce.

    Your review pass has three specific jobs. First, read every clause that carries a jurisdiction tag and confirm it’s correct for this matter — Claude can mis-match these if your intake variables are ambiguous. Second, read every line that replaced a variable and confirm the substitution makes grammatical and substantive sense. “The firm shall bill CLIENT_NAME at an hourly rate” assembles correctly; “The client, a LLC, agrees” does not and needs a quick fix. Third, work through Claude’s DRAFTING NOTES checklist and resolve every item before the document leaves your desk.

    Do not skip the review pass. This workflow does not produce final documents. It produces reviewed-ready drafts — which is still a meaningful time compression compared to starting from scratch or hunting through old files for the right template version.

    Step 5: Version control without extra software

    Version control for this workflow is low-tech by design. In your matter folder, save each Claude-generated draft with a filename convention: CLIENTNAME_DOCTYPE_v1_YYYYMMDD.docx. When you complete your review pass and make edits, save as v1-reviewed. When the document goes out, save as v1-final. If you revise after client feedback: v2.

    Also save the prompt you ran — paste it into a _PROMPT_LOG.txt file in the same folder. This takes ten seconds and gives you a complete record of what input generated what output. If you ever need to explain why a clause appeared in a document, you have the paper trail.

    For the clause library itself, treat it like source code. Save a dated backup whenever you add or change a clause: CLAUSE_LIBRARY_MASTER_20250610.docx. Keep the last three versions. You’ll want to know what the library looked like when you drafted a document six months ago.
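The naming convention is easy to get subtly wrong by hand; a ten-line helper keeps it uniform. The function name and the status-after-version placement are my choices, under the convention described above:

```python
from datetime import date

def draft_filename(client, doc_type, version, status="", on=None):
    """Build a CLIENTNAME_DOCTYPE_vN_YYYYMMDD.docx filename.

    status (e.g. "reviewed" or "final") attaches to the version: v1-reviewed.
    """
    on = on or date.today()
    ver = f"v{version}" + (f"-{status}" if status else "")
    client_part = client.upper().replace(" ", "")
    return f"{client_part}_{doc_type.upper()}_{ver}_{on:%Y%m%d}.docx"
```

Because the date is zero-padded year-first, the files also sort chronologically in any file browser, which is the quiet point of the convention.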

    Where this breaks

    The single biggest failure mode is jurisdiction-specific clause gaps. Claude will follow the instruction to omit unmatched clauses and flag them — but only if your library tagged them correctly in the first place. If you built the library from California templates and you’re running a Texas matter, Claude may not know that what it’s assembling is California-only language unless you tagged it. Starting with a 50-state scope is unrealistic. Start with your primary jurisdiction and explicitly mark everything else as out-of-scope until you’ve added it.

    Variable substitution breaks on edge cases. A client who is a single-member LLC doing business under a DBA will confuse a simple variable replacement in ways you’ll catch only on review. Entities with long names break sentence flow. Joint representation (two clients, one engagement letter) requires a clause the basic library probably doesn’t handle. Build variants for those scenarios rather than expecting the prompt to figure them out.

    The prompt length has a ceiling. Claude 3.5 Sonnet handles a context window large enough for this workflow comfortably, but if you paste an entire 40-clause library plus a long intake form, output quality starts to slip — Claude may skip clauses or truncate the DRAFTING NOTES. Keep each library section to the clauses actually relevant to the document type you’re drafting. Don’t paste the whole library for an NDA run.

    Copy-paste friction is real. This workflow is faster than starting from scratch, but it’s not as fast as a purpose-built automation tool like HotDocs or Documate. If you’re producing more than 30 templated documents per month, the manual copy-paste steps add up and you should evaluate a proper document assembly platform instead. This workflow is the right fit for firms producing five to twenty templated documents per month who don’t want to pay platform fees or manage a separate system.

    Finally: Claude hallucinates. It does so less when constrained to a provided clause library, but it can still produce grammatically smooth sentences that say something slightly different from what your clause says. The review pass is not optional.

    What this saves you

    For a straightforward engagement letter — one client, one matter type, your primary jurisdiction — expect to spend two to three minutes filling out the intake form, one minute running the prompt, and five to eight minutes on the review pass. That’s under twelve minutes total, compared to a realistic twenty-five to thirty-five minutes pulling an old template, editing it, catching holdover text from the prior client, and formatting it correctly. The savings are in the middle: no hunting for the right old file, no manually replacing every instance of the prior client’s name, no re-applying styles.

    Demand letters save the most time because the factual background section — which you write yourself in the intake form as a plain-language summary — feeds directly into a structured output, replacing the blank-page problem. NDA runs are the fastest because the clause count is low and variable substitution is clean. Fee agreements run close to engagement letters in time.

    Over a week of ten to fifteen templated documents, the workflow realistically returns ninety minutes to two hours. That’s not dramatic. It’s also not nothing — it’s a full billing hour or more, every week, recovered from administrative drafting.

    The clause library also has a side benefit that doesn’t show up in time savings: it forces you to standardize. Firms that run this workflow for two months typically discover they had four slightly different versions of their termination clause floating across old templates. Consolidating them into the library is the kind of housekeeping that improves every document going forward, not just the ones produced by this workflow.

    Start with one document type, build out the library section for it, and run ten matters through it before you add the next type. The setup investment is real; spread it out and you’ll actually finish it.

    Related reading

  • Harvey vs CoCounsel for Solo Practitioners: Is Either Worth the Subscription?

    Harvey vs CoCounsel for Solo Practitioners: Is Either Worth the Subscription?

    CoCounsel is the realistic choice for solo and small-firm lawyers. Harvey is worth knowing about — and worth skipping until you’re billing at BigLaw volume.

    Both tools claim to do the same things: draft memos, review contracts, prep for depositions, answer research questions. The gap between them isn’t features — it’s who they were actually built for and what that means when a solo practitioner sits down and tries to get work done. Harvey assumes you have a knowledge management team, a dedicated IT contact, and a firm-negotiated enterprise contract. CoCounsel assumes you have a Westlaw login and thirty minutes to learn something new. For a firm of one to ten attorneys, that difference is the whole story.

    How We Compared Them

    The criteria: pricing transparency and accessibility for a solo, the workflow assumptions baked into each product, which practice areas and matter types actually benefit, how each tool handles legal research integration, and where each one breaks in day-to-day small-firm use. Harvey’s public documentation, reported pricing from legal tech press, and practitioner accounts informed the Harvey side. CoCounsel was evaluated based on its publicly available feature set, Thomson Reuters’ published pricing tiers, and reported user experience from solo and small-firm attorneys.

    Harvey

    Harvey is an AI tool built on top of large language models — including GPT-4 and Claude variants at different points — and positioned explicitly at large law firms and in-house legal departments. The firm’s early clients were Am Law 100 shops. That’s not incidental; it’s the design philosophy made visible.

    What Harvey does well, by most accounts: long-document analysis at scale, memo drafting from complex fact patterns, due diligence review across large document sets, and regulatory research in specialized practice areas. It has purpose-built modules for M&A, tax, and employment matters. Firms like Allen & Overy and PwC Legal negotiated early enterprise deployments. The product is genuinely sophisticated for those environments.

    The problem for a solo is pricing and access. Harvey does not publish pricing publicly. It does not offer a self-serve sign-up. Getting access requires contacting their sales team, going through a demo process, and negotiating a contract — one that, per legal tech reporting, typically runs into five figures annually for even small deployments. As of mid-2025, there is no solo tier, no monthly subscription option, and no free trial in the conventional sense. If you email Harvey today asking for a solo license, you will likely get a demo call and then a quote you won’t take.

    The workflow assumption Harvey makes is that you have infrastructure: a document management system to feed it, an IT contact to configure it, and volume — enough matters and enough hours to justify the contract. A firm billing 1,800 hours a year across two attorneys doesn’t have that volume. Harvey’s ROI math doesn’t pencil for a solo unless you’re in a practice area with genuinely massive document loads and high hourly rates.

    Pricing: Enterprise contract only. No published rate. Reported floor is approximately $50,000–$75,000 per year for small deployments, based on legal tech press coverage. No solo or small-firm tier as of this writing.

    CoCounsel (Thomson Reuters)

    CoCounsel is Thomson Reuters’ AI assistant, originally built by Casetext before TR acquired the company in 2023. That history matters: Casetext was specifically targeting solo and small-firm practitioners before the acquisition, and CoCounsel has retained more of that DNA than most acquired legal AI tools do.

    The feature set covers six main task types: contract review, legal research (with Westlaw integration), deposition preparation, document summarization, timeline creation from documents, and draft memo generation. Each runs as a distinct workflow inside the tool — you pick the task, upload documents or enter a question, and the tool walks through a structured output. That structure is a real advantage for a solo. You don’t need to write a sophisticated prompt from scratch; the tool frames the task for you and asks for what it needs.

    The Westlaw integration is the single clearest advantage CoCounsel has over everything else in this price range. Research outputs are grounded in Westlaw’s database with citations that link back to source material. That’s not a marketing claim — it’s a structural difference from tools that run a general-purpose LLM against the open web and hope the citations hold up. When CoCounsel cites a case, you can click through and verify it inside Westlaw. When a general-purpose tool cites a case, you need to go verify it yourself before you trust it.

    Contract review is the other task that consistently draws positive reports from small-firm practitioners. Feed it an NDA, a services agreement, or a lease, and CoCounsel returns a structured analysis flagging non-standard clauses, missing provisions, and risk areas — with explanations in plain language. The analysis isn’t perfect; it misses nuance in heavily negotiated custom agreements and occasionally over-flags standard boilerplate. But for a first-pass review on a matter where you’d otherwise spend forty-five minutes doing it manually, it earns its place.

    Deposition prep is a genuinely useful feature for litigation-heavy solos. Upload a deposition transcript or a set of prior statements, and CoCounsel identifies inconsistencies, generates potential follow-up question areas, and summarizes key testimony by topic. It won’t write your depo outline for you — the strategic judgment is still yours — but it compresses the document review phase significantly on complex multi-transcript matters.

    Pricing: CoCounsel is available as a standalone subscription or bundled with Westlaw. As of 2025, the standalone CoCounsel subscription runs approximately $100/month for individual users, with firm pricing available at higher tiers. Thomson Reuters has also offered CoCounsel as an add-on to existing Westlaw Edge subscriptions. Pricing has shifted since the acquisition — confirm current rates directly with TR before budgeting.


    Where Harvey Actually Fits (And It’s Not Here)

    Harvey fits a firm that has already decided to treat AI as infrastructure — budgeted like software and staffed accordingly. That means an Am Law 200 firm, a large regional firm, or a corporate legal department with enough matters to justify the contract and someone internally managing the deployment. Harvey’s contract review and memo drafting capabilities are legitimately strong in those environments, and the customization options — including firm-specific knowledge bases and matter type configurations — are features a BigLaw associate would actually use.

    For a solo doing estate planning, family law, small business transactional work, or even mid-volume litigation, Harvey doesn’t fit. Not because it couldn’t theoretically do the work, but because you can’t get a license at a price that makes sense, and even if you could, the onboarding and configuration overhead isn’t worth it at your matter volume. The product was not designed with your workflow in mind.

    Where CoCounsel Actually Fits

    CoCounsel fits best in three practice area profiles for solos and small firms:

    • Transactional practices with regular contract review. Business attorneys, real estate lawyers, and employment practitioners reviewing a steady volume of agreements get the clearest return. The contract review workflow handles NDAs, service agreements, and standard commercial leases well. It struggles with highly customized multi-party agreements — but those take more human judgment anyway.
    • Litigation practices with deposition-heavy matters. If you routinely prep for depositions involving multiple transcripts or large records, the depo prep feature compresses a real chunk of document review time. Solo litigators handling complex commercial disputes or employment cases with a significant written record have reported meaningful time savings here.
    • Research-intensive practice areas already using Westlaw. If you’re already paying for Westlaw Edge, the CoCounsel add-on pricing is relatively modest for what you get. The grounded-research capability — citations that link to real, verifiable Westlaw sources — makes it materially more trustworthy than a general-purpose AI research tool for any matter where you’ll actually rely on the output.

    It fits less well for criminal defense solos (the research coverage skews heavily toward civil matters), immigration practitioners who need highly current regulatory updates (Westlaw latency on agency guidance can be an issue), and any practice where the primary document type is something outside CoCounsel’s trained task set — medical records review, for instance, or highly technical patent claims.

    Where Each One Breaks

    Harvey’s failure modes (for small firms)

    The primary failure mode is access. You simply cannot get Harvey on a per-matter or month-to-month basis. The entire product is gated behind an enterprise sales process that isn’t designed to close a solo practitioner. If a solo somehow secured a license — through a law school affiliation, a pilot program, or a future product tier that doesn’t yet exist — the second failure mode is configuration overhead. Harvey’s firm-specific knowledge base features require setup time and technical input that a solo without IT support will find difficult to use well. The product’s sophistication is a liability when you have no one to configure it.

    CoCounsel’s failure modes

    CoCounsel hallucinates less than general-purpose tools in research mode — but it does still make errors, particularly on very recent case law or on niche jurisdictional questions with limited Westlaw coverage. The citations are verifiable, which means you can catch errors, but you still have to check. Treating CoCounsel’s research output as final without verification is a mistake regardless of the Westlaw backing.

    Contract review outputs can be inconsistent on length and depth. A ten-page NDA might get a thorough analysis; a fifteen-page distribution agreement with unusual indemnification structures might get a surface-level summary that misses the clauses you’d most want flagged. The tool doesn’t always signal when it’s uncertain, which means you need baseline contract knowledge to evaluate the output rather than relying on it blindly.

    The pricing structure has changed multiple times since the Casetext acquisition. Attorneys who subscribed at one price point have been migrated to Thomson Reuters’ billing infrastructure at different rates. Before you budget for it, verify the current price directly with TR — what you read in a two-year-old review may no longer be accurate.

    Integration with non-Westlaw research tools is limited. If you’re a Lexis shop, CoCounsel’s core research advantage disappears — the Westlaw citation backbone is what makes the research feature trustworthy, and without it you’re using a general-purpose LLM layer without the grounding. There is no Lexis-backed equivalent that plugs into CoCounsel’s task structure, though Lexis has its own AI product worth a separate look.

    Side-by-Side

    • Accessible to solos: CoCounsel ✓ — Harvey ✗
    • Published pricing: CoCounsel ✓ — Harvey ✗
    • Self-serve signup: CoCounsel ✓ — Harvey ✗
    • Westlaw integration: CoCounsel ✓ (deep, citation-linked) — Harvey ✗ (no native integration)
    • Lexis integration: CoCounsel ✗ — Harvey ✗ (neither)
    • Contract review: CoCounsel ✓ (solid first pass) — Harvey ✓ (strong, enterprise-grade)
    • Memo drafting: CoCounsel ✓ (competent) — Harvey ✓ (strong)
    • Deposition prep: CoCounsel ✓ — Harvey ✓
    • Structured task workflows: CoCounsel ✓ — Harvey ✓ (requires more configuration)
    • Firm knowledge base customization: CoCounsel limited — Harvey ✓ (enterprise feature)
    • Practical for 1–5 attorney firm: CoCounsel ✓ — Harvey ✗

    Picking the Right One

    If you’re a solo or running a firm of two to ten attorneys: CoCounsel is the practical answer. It’s accessible, priced in a range that works for small-firm economics, integrates with the research tool most attorneys already pay for, and covers the highest-value AI tasks — contract review, research, deposition prep — with enough reliability to earn a place in your workflow. It’s not flawless. You still verify research. You still apply judgment to contract analysis. But it does what it advertises at a price you can actually pay.

    If you’re a solo primarily on Lexis rather than Westlaw, CoCounsel’s research advantage is largely neutralized. In that case, the decision gets more complicated — Lexis+ AI, Spellbook, or a well-configured general-purpose tool like Claude or GPT-4o with your own prompt framework may serve you better for the price. That’s a separate comparison worth doing.

    Harvey is worth knowing about because it represents where legal AI is heading at the enterprise level — and understanding the product helps you calibrate what tools at your price point are actually delivering versus what they’re approximating. But subscribe to Harvey at your firm size right now? Skip it. There’s no path to a license that makes financial sense, and the product doesn’t need your business. Check back when they launch a small-firm tier — which, given competitive pressure from CoCounsel and others, seems increasingly likely within the next two to three years.

    Use CoCounsel if you’re on Westlaw and doing regular contract review or litigation prep. Skip Harvey if you’re under 25 attorneys and not already in an enterprise sales conversation. Wait six months before making any new legal AI commitment if Thomson Reuters announces a pricing restructure — they’ve done it before, and it’s worth knowing what you’re buying into before you sign.


    Lexis+ AI vs Westlaw Precision vs CoCounsel: The 2026 Legal Research AI Showdown for Small Firms

    If you’re already paying for Westlaw or Lexis, the AI add-on is almost certainly the right move. If you’re not, CoCounsel standalone is the most affordable way in — and it holds up better than you’d expect.

    Three tools now dominate the legal research AI conversation for small firms: Lexis+ AI (LexisNexis’s AI layer on top of its research platform), Westlaw Precision with CoCounsel (Thomson Reuters’s integrated pairing), and CoCounsel Core as a standalone subscription for attorneys who don’t have a Westlaw seat. I spent several weeks running research queries, citation checks, brief analysis tasks, and deposition prep workflows through all three. The verdict is not “one wins.” It depends almost entirely on what you’re already paying for and what your practice looks like.

    How We Compared Them

    Five criteria: research depth (how far the AI digs before surfacing an answer), citation accuracy (whether the cases it cites actually say what it claims), hallucination rate (cases that don’t exist or quotes that are wrong), drafting and brief analysis features, and deposition prep. Pricing is based on current published rates and direct vendor conversations as of early 2026 — not list-price brochures, which are almost universally useless for solo and small-firm buyers.

    Lexis+ AI

    What It Is and How It’s Positioned

    Lexis+ AI is not a separate product. It’s a module layered on top of a Lexis+ subscription, which means you’re paying for the underlying research platform first, then unlocking the AI features on top. LexisNexis pitches this as seamless — you’re already in Lexis, the AI is just there in the sidebar. In practice, that’s mostly accurate. The integration is the tightest of the three, and if your research workflow already lives in Lexis, the learning curve is close to zero.

    Research Depth and Citation Accuracy

    Lexis+ AI pulls from the full Lexis corpus — cases, statutes, secondary sources, law review articles — and the answers cite directly into the platform so you can click through immediately. That link-through is genuinely useful. I ran a batch of state-specific contract law queries and federal circuit-split questions. The AI surfaced relevant cases at a rate I’d estimate at 80–85% relevance on the first pass. Citation accuracy was high on well-indexed federal cases. It degraded on older state appellate decisions, where I caught two instances of a case being cited for a slightly different proposition than the opinion actually supported. Not fabricated cases — real cases, wrong summary. That’s a subtler problem than hallucination and arguably harder to catch if you’re moving fast.

    Hallucination Rate

    Lower than I expected. In approximately 40 research queries across contract, employment, and family law matters, I found zero fully fabricated citations. That’s not a guarantee — it’s a sample — but the grounding in the actual Lexis database appears to do real work here. The bigger risk is the “close but wrong” citation problem described above, not outright invention.

    Drafting, Brief Analysis, and Deposition Prep

    Lexis+ AI includes a document drafting tool and a brief analyzer. The brief analyzer reads an uploaded brief and identifies weaknesses, missing authority, and counterarguments. I ran three briefs through it. It caught a missing controlling authority in one case that I’d actually overlooked — that alone would have justified the session. On deposition prep, Lexis+ AI can generate question outlines from uploaded documents, which is functional but not deeply sophisticated. It works better as a checklist scaffold than as a strategic tool.

    Pricing

    Here’s the honest picture as of early 2026: Lexis+ AI is not available à la carte. You need a Lexis+ subscription, which for a solo attorney runs roughly $250–$400/month depending on practice area package, and Lexis+ AI adds approximately $100–$150/month on top of that. Small firms of 2–5 attorneys are typically looking at per-seat pricing in that same AI add-on range. LexisNexis does negotiate — solo and small-firm rates are almost always lower than list if you ask, and annual contracts bring the monthly cost down. If you’re already on Lexis+, ask your rep specifically about the AI module add-on cost; it’s often bundled at a discount during renewal.


    Westlaw Precision with CoCounsel

    What It Is and How It’s Positioned

    Thomson Reuters acquired CoCounsel (formerly Casetext) in 2023 and has since integrated it directly into Westlaw Precision, its premium research tier. The result is the most tightly integrated AI-plus-research product currently available. Westlaw Precision is the research platform; CoCounsel is the AI layer that sits inside it. You don’t context-switch. You run a research query, get AI-assisted answers grounded in Westlaw’s database, and can drop directly into KeyCite to check citation validity. For attorneys who already think in Westlaw, this is the closest thing to a natural extension of existing workflow.

    Research Depth and Citation Accuracy

    Westlaw’s database coverage is widely regarded as the most comprehensive in the market — more secondary sources, better historical depth on state court decisions, and KeyCite remains the gold standard for citation validation. When CoCounsel is running on top of that corpus, the research output reflects it. I ran the same contract law and circuit-split queries I used for Lexis+ AI. Westlaw Precision with CoCounsel returned slightly more relevant secondary sources and was notably better on older state court material. Citation accuracy was the highest of the three tools I tested. I found one instance of a case being cited for a proposition it only partially supported — one instance against Lexis+ AI’s two across the same 40-query set, a small but consistent edge.

    Hallucination Rate

    Also effectively zero fabricated citations in my testing. The grounding-in-database approach that both TR and LexisNexis use is clearly doing its job. The residual risk — again — is nuanced misrepresentation of what a real case holds, not invention of fake ones. Westlaw’s KeyCite integration makes it faster to spot-check, which is a genuine workflow advantage.

    Drafting, Brief Analysis, and Deposition Prep

    CoCounsel inside Westlaw Precision is the strongest performer on drafting and deposition prep of the three tools. The deposition prep feature — where you upload documents and the AI generates question outlines organized by topic and witness — is noticeably more structured than Lexis+ AI’s equivalent. I ran a deposition prep session on a commercial dispute matter and got a 47-question outline organized by theme, with document citations for each question cluster. The brief analysis feature identifies missing authority, flags unsupported propositions, and — usefully — suggests counterarguments opposing counsel might raise. On drafting, CoCounsel’s contract and motion drafting handles context better than the other two when given a prior brief as a style reference.

    Pricing

    Westlaw Precision is Thomson Reuters’s premium tier, and it’s priced accordingly. Solo attorneys are typically looking at $350–$500/month for Westlaw Precision — the CoCounsel integration is included in that tier, not an additional line item. That’s meaningful: you’re not paying extra for the AI once you’re on Precision. Firms of 2–10 attorneys see per-seat pricing in the same range. The catch is that Westlaw Precision costs more than standard Westlaw, and the jump from standard Westlaw to Precision is itself an add-on cost. If you’re on standard Westlaw today, the upgrade to Precision (with CoCounsel) is worth pricing out from your rep.

    CoCounsel Core (Standalone)

    What It Is and How It’s Positioned

    CoCounsel Core is the standalone version of CoCounsel — available without a Westlaw subscription. Thomson Reuters maintains it as a separate product for attorneys who want the AI drafting, research assistance, and document analysis features but aren’t paying for Westlaw. It does not include full Westlaw database access. For research, it uses a more limited corpus. For drafting and document-focused tasks — contract review, deposition prep from uploaded documents, brief analysis — it draws on the uploaded file rather than a live legal database, which changes what it can and can’t do.

    Research Depth and Citation Accuracy

    This is where CoCounsel Core’s standalone positioning shows its limits. Without full Westlaw database access, research queries return shallower results. The AI can still surface cases and statutes, but the coverage is narrower than either of the database-backed versions. For attorneys who use CoCounsel Core primarily for document-based tasks and handle research through a separate (often less expensive) research subscription or free tools like Google Scholar, this limitation is manageable. For attorneys expecting full research depth, it’s a real gap.

    Hallucination Rate

    Higher than the database-integrated versions — but not dramatically so. Without a live legal database to ground every citation, there’s more room for the model to generate plausible-sounding but incorrect case citations. Across a 30-query set I found two citations that did not hold up under verification, compared to zero outright fabrications in either database-backed product. The practical implication: with CoCounsel Core, treating every case citation as unverified until you’ve checked it in a separate tool is the right workflow, not optional due diligence.

    Drafting, Brief Analysis, and Deposition Prep

    This is where CoCounsel Core earns its place. For document-based tasks that don’t require live database access, it performs at essentially the same level as the integrated version. Deposition prep from uploaded transcripts and documents is strong. Brief analysis on uploaded briefs — identifying gaps, unsupported assertions, potential weaknesses — is solid. Contract review and drafting assistance work well. A solo attorney running a transactional or litigation practice who handles their research separately can get genuine value from CoCounsel Core without paying for a full Westlaw seat.

    Pricing

    CoCounsel Core is the most accessible price point of the three. Thomson Reuters has positioned it at approximately $100/month for solo attorneys as of early 2026. Small-firm pricing scales per seat but remains below the combined cost of either database-plus-AI pairing. For a solo attorney not on Westlaw or Lexis, this is the lowest-cost entry into professional-grade legal AI — and for document-heavy practices, it’s not a compromised version of the product. It’s a different product with a different scope.

    Side-by-Side

    • Research depth: Westlaw Precision + CoCounsel > Lexis+ AI > CoCounsel Core (standalone)
    • Citation accuracy: Westlaw Precision + CoCounsel (best) ≈ Lexis+ AI > CoCounsel Core
    • Hallucination rate: Westlaw Precision + CoCounsel and Lexis+ AI both near-zero fabrications; CoCounsel Core slightly higher — verify everything
    • Drafting quality: CoCounsel (both versions) > Lexis+ AI — CoCounsel handles context and style reference better
    • Brief analysis: All three functional; Westlaw Precision + CoCounsel most comprehensive on counterargument identification
    • Deposition prep: CoCounsel (both versions) clearly ahead — more structured output, better document-to-question logic
    • Solo monthly cost (approximate, early 2026): CoCounsel Core ~$100 | Lexis+ AI ~$350–$550 total (platform + AI) | Westlaw Precision + CoCounsel ~$350–$500
    • AI cost as a separate line item: Lexis+ AI = yes, additional ~$100–$150/month on top of Lexis+; Westlaw Precision = no, CoCounsel included in Precision tier; CoCounsel Core = the full product price
    • Works without existing platform subscription: CoCounsel Core only
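    For budgeting, the approximate monthly figures above are easier to weigh as annual numbers. A minimal sketch using the estimated early-2026 ranges cited in this comparison — illustrative estimates only, not vendor quotes:

```python
# Annualize the approximate early-2026 monthly costs cited above.
# These figures are estimates for illustration, not vendor quotes.

options = {
    "CoCounsel Core (standalone)": (100, 100),       # ~$100/mo flat
    "Westlaw Precision + CoCounsel": (350, 500),     # AI included in the tier
    "Lexis+ AI (platform + AI module)": (350, 550),  # two line items combined
}

for name, (low, high) in options.items():
    if low == high:
        print(f"{name}: ~${low * 12:,}/year")
    else:
        print(f"{name}: ~${low * 12:,}-${high * 12:,}/year")
```

    Even at the low end, the database-backed options run roughly three and a half times the standalone product per year — which is the real trade being made: research grounding versus price.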

    Picking the Right One

    You’re already on Lexis+ and like it. Add Lexis+ AI. The integration is tight, the citation accuracy is solid, and you’re not building a new workflow from scratch. Ask your rep specifically about renewal bundle pricing for the AI module — the number is negotiable. Do not accept the list rate without asking.

    You’re already on Westlaw (any tier). Price the upgrade to Westlaw Precision. If the per-month delta between your current Westlaw cost and Precision is under $150, it’s almost certainly worth it. CoCounsel’s deposition prep and brief analysis features alone have saved time on matters where I would have otherwise spent two to three hours building question outlines by hand. The integrated KeyCite-plus-AI workflow is the best research experience of the three.

    You’re not on Westlaw or Lexis and you’re price-sensitive. Start with CoCounsel Core at ~$100/month. Pair it with Google Scholar or a lower-cost research subscription for citation verification. You’ll need to verify citations more carefully than with the database-integrated versions, but for document-heavy work — deposition prep, contract review, brief analysis — the standalone product is genuinely capable. Revisit Westlaw Precision in 12 months when you have a clearer picture of how much AI-assisted research you actually need.

    You’re a firm of 5–15 attorneys with a mixed research diet. The per-seat math starts to favor Westlaw Precision + CoCounsel at this scale, especially if your associates are doing substantial research volume. The research depth advantage is real, and KeyCite integration removes a verification step that matters when you’re supervising work product from multiple timekeepers. Lexis+ AI is a legitimate alternative if your firm has a long-standing Lexis relationship and your IT setup is already built around it.

    One thing none of these tools replaces: a lawyer reading the cases. Citation accuracy being high is not the same as legal reasoning being sound. Every one of these products will surface real cases with real citations and still occasionally miss the controlling authority in your jurisdiction, misread the procedural posture of a decision, or present a four-factor test as settled when it’s actually a circuit minority position. Use the output as a research accelerator, not a research substitute.


    Spellbook for Solo Lawyers: A Two-Week Test of the AI Contract Review Tool

    Spellbook handles routine NDA and MSA review faster than doing it by hand — but throw a heavily redlined draft or an exhibit-heavy agreement at it and the wheels come off.

    Spellbook is a Microsoft Word add-in that reads your contract, flags clause gaps, suggests redlines, and explains what it’s flagging in plain language. It’s built on GPT-4-class models and priced for law firms, not enterprise procurement teams. I ran it for two weeks on a mix of NDAs, MSAs, and SOWs — the bread-and-butter of a transactional solo — to find out whether it earns the monthly fee or just performs well in demos. The short answer: it earns it if you review contracts regularly. It doesn’t if you don’t.

    What It Does

    Spellbook lives in a sidebar inside Microsoft Word. You open a contract, open the sidebar, and Spellbook reads the document. From there it does three things: it flags clauses that are unusual or missing, it offers suggested language to replace or strengthen those clauses, and it answers questions about the document in a chat interface. All of this happens without leaving Word.

    The clause-flagging is the core feature and it’s genuinely good on clean drafts. On a standard mutual NDA, Spellbook caught a missing residuals clause, flagged an unusually broad definition of “Confidential Information” that lacked a standard carve-out for publicly available information, and noted that the term “Affiliate” was used twice but never defined. That’s exactly the kind of boilerplate gap that’s easy to miss on a Friday afternoon, and catching it took about forty seconds.

    The redline suggestion feature works the same way: click a flagged clause, and Spellbook offers replacement language. The suggestions are templated but adjustable — you can tell it “make this more favorable to my client, who is the vendor” and it rewrites accordingly. The quality is good enough to use as a first draft, not good enough to accept without reading.

    The chat interface lets you ask document-specific questions: “Does this agreement include an auto-renewal clause?” or “What’s the limitation of liability cap?” It pulls answers from the actual document text, not from general knowledge. On clean contracts, this was accurate. On contracts longer than about 30 pages, it started missing things — more on that below.

    Spellbook also runs what it calls a “playbook” review: you can load a standard set of preferred positions and it checks the contract against those positions automatically. Setting up a playbook takes some initial investment, but once it’s configured, it runs on every new document without extra prompting.

    Where It Actually Fits

    The sweet spot is a solo transactional attorney — or a small firm where one or two attorneys handle a steady flow of commercial contracts — who reviews NDAs, MSAs, SOWs, or vendor agreements multiple times a week. If you’re looking at five or more contracts a week, Spellbook pays for itself in time saved on first-pass review. The clause-flagging catches enough real issues fast enough that it shortens the first read meaningfully.

    For NDAs specifically, Spellbook is close to ideal. NDAs are structurally consistent enough that the model’s training shows: it knows what should be there, flags what isn’t, and the suggested language is close to usable. I ran eight NDAs through it over two weeks and it found something worth flagging in seven of them. Most of those were things I’d have caught anyway — but Spellbook caught them in the first sixty seconds, before I’d done my own read.

    MSAs with clean structure — a base agreement and one or two order forms, no exhibits attached — also work well. The model handles defined-term tracking better than I expected. In one MSA it flagged two places where the body of the agreement defined the scope of “Services” even though an exhibit was supposed to govern scope, creating a potential conflict. Useful catch.

    The playbook feature fits well for solos who represent the same side of a transaction repeatedly — always the vendor, always the SaaS company, always the contractor. Load your preferred positions once and Spellbook runs those checks automatically. That saves real time compared to building a mental checklist every time.

    Practice areas beyond transactional commercial work get thinner. Employment agreements, commercial leases, and IP assignments work reasonably well because the structures are common enough that the model recognizes them. Anything more specialized — complex finance documents, healthcare agreements with regulatory-specific clauses — showed less confident suggestions and more generic flags.


    Where It Breaks

    Heavily redlined drafts broke it for me consistently. When a contract has three or four rounds of tracked changes from multiple parties still embedded — all visible in Word — Spellbook gets confused about which version of the text to analyze. I ran one MSA that had been through two rounds of opposing counsel redlines and Spellbook flagged a clause as missing that was actually present in an accepted redline two paragraphs up. It was reading the document as if the redline layer didn’t exist. This is a real workflow problem because most contracts that need careful review are exactly the ones with heavy markup.

    The workaround is to accept all changes, save a clean copy, and run Spellbook on that. That works, but it adds a manual step and means you’re not reviewing the document in the state your client actually sent or received it.

    Exhibit-heavy MSAs were the other consistent failure mode. When an MSA had three or four attached exhibits — a Statement of Work template, a Data Processing Addendum, a Security Exhibit — Spellbook would analyze the base agreement without meaningfully integrating the exhibit content. It flagged “no data processing terms found” in one agreement where the DPA was a separate exhibit on the next page. The tool is analyzing the document section it can see, not the agreement as a whole when exhibits are substantively separate files or appendices.

    Long documents slow the suggestions down noticeably. On anything over 25–30 pages, chat answers lagged by five to ten seconds. Not a dealbreaker, but noticeable when you’re moving fast.

    The suggested redline language is templated enough that it occasionally reads as generic. On one SOW, the suggested scope-limitation language was so standard it didn’t account for the specific services described in the document. I used it as a starting point and rewrote it in about two minutes, but “starting point” is the accurate description — not “finished clause.”

    Spellbook also requires Microsoft Word. If your firm runs on Google Docs or if opposing counsel sends PDFs that you work in natively, you’ll need to convert first. That friction is minor but real. There is no Google Docs version as of this writing.

    What It Costs and What You Get

    Spellbook’s pricing is seat-based and billed annually. As of mid-2025, a solo seat runs approximately $149 per month (billed annually at roughly $1,788 per year). That’s the standard tier, which includes unlimited document reviews, the clause-flagging and suggestion features, and the chat interface.

    The playbook feature — loading your own preferred positions and running them automatically — is included in the standard tier, not gated behind a higher plan. That’s worth noting because playbooks are what make the tool genuinely faster for a solo who handles repeat transaction types.

    There is a higher-tier plan (pricing available on request) that adds team collaboration features, admin controls, and usage analytics. For a true solo, the standard tier is the right tier. The team features add overhead you don’t need when you’re the only reviewer.

    Spellbook offers a free trial — 14 days as of this writing — and the trial is full-featured, not limited to toy documents. Running the trial on real matters from your current workload is the right way to evaluate it. Running it on sample contracts tells you almost nothing about whether it fits your practice.

    At $149 per month for a solo, the math is straightforward. If Spellbook saves you one hour of first-pass review per week, that’s roughly four hours a month; at an effective hourly rate of $200, the subscription is covered by less than one of those recovered hours. If you review fewer than two or three contracts a week, the calculus gets harder.
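    That break-even arithmetic can be made explicit. A quick sketch with hypothetical inputs — the hourly rate and time savings are assumptions to replace with your own numbers:

```python
# Break-even sketch for a seat-based contract review tool.
# All inputs are hypothetical -- substitute your own figures.
monthly_cost = 149          # dollars per month (Spellbook solo seat, mid-2025)
hourly_rate = 200           # effective billable rate in dollars
hours_saved_per_week = 1.0  # first-pass review time recovered

weeks_per_month = 52 / 12   # ~4.33 weeks in an average month
monthly_value = hours_saved_per_week * weeks_per_month * hourly_rate
breakeven_hours = monthly_cost / hourly_rate

print(f"Value of recovered time: ~${monthly_value:,.0f}/month")
print(f"Billable hours needed to cover the subscription: {breakeven_hours:.2f}")
```

    Under these assumptions the subscription is covered by well under one recovered billable hour a month; the real question is whether your contract volume actually produces the time savings.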

    Verdict

    Use it if you’re a transactional solo or a small firm handling commercial contracts regularly — NDAs, MSAs, vendor agreements, SOWs — and you want a faster first-pass review without hiring a second set of eyes. The clause-flagging is accurate enough on clean drafts to save real time, and the playbook feature compounds that value once you’ve set it up for your standard transaction types.

    Skip it if you’re primarily a litigator, if your transactional work is occasional rather than routine, or if your practice runs on Google Docs. The Word dependency is a real constraint and the monthly cost doesn’t make sense below roughly two to three contract reviews per week.

    Wait six months if your typical workflow involves heavily redlined multi-party drafts or exhibit-heavy agreements that run past 30 pages. Spellbook is aware of these limitations — the tracked-changes issue in particular is something the product team has acknowledged — but as of this writing those gaps are real enough to affect daily use on complex matters.

    Related reading

  • Clio vs MyCase vs Smokeball: Practice Management for Solo and Small Firms in 2026

    Clio vs MyCase vs Smokeball: Practice Management for Solo and Small Firms in 2026

    Three platforms dominate practice management for small firms in 2026 — and picking the wrong one costs you more than the subscription fee.

    Clio, MyCase, and Smokeball each made meaningful moves in 2025 and early 2026: new AI features, pricing restructures, and integrations that change the calculus for solo and small-firm buyers. This comparison is aimed at attorneys running solo practices or firms of 2–10 attorneys who are either choosing a platform for the first time or wondering whether to switch. The short version: Clio wins on integrations and flexibility, MyCase wins on price and simplicity, and Smokeball wins on document automation depth — particularly for litigation-heavy practices. None of them wins everywhere.

    How we compared them

    The criteria: pricing tiers and what actually changes between them, time tracking and billing workflow, client portal usability, document automation capability, AI features added since mid-2025, and third-party integrations. Where behavior differs by plan, that’s noted. Vendor marketing claims are paraphrased in plain language; wherever a feature has a known limitation, it’s flagged.

    Clio

    Clio is the largest dedicated legal practice management platform in North America by user count. That scale matters because it drives their integration catalog — currently over 200 third-party connections — and funds the R&D that produced Clio Duo, their AI layer rolled out through 2025 and refined into early 2026.

    What it does well

    Time tracking is Clio’s strongest billing-side feature. The desktop and mobile timers are reliable, the automatic time capture (which pulls from emails and calendar events) works without constant babysitting once configured, and the bill-review workflow is clean. If your firm bills hourly and tracks time across multiple matters simultaneously, Clio handles that better than either competitor in this price range.

    The integration catalog is genuinely differentiated. Native connections to QuickBooks Online, LawPay, Dropbox, Google Workspace, Microsoft 365, Zoom, and a long tail of specialized tools (Docketbird for federal court filings, CompuLaw for calendaring rules) mean Clio can slot into almost any existing workflow without forcing you to abandon tools you already pay for. For a firm that has already built a working stack, that flexibility is real money.

    Clio Duo — their AI assistant embedded in Clio Manage — handles matter summarization, draft email generation from matter context, and task creation from conversation. In 2025, they added document Q&A: you can ask questions about documents stored in a matter and get sourced answers. It works well for straightforward factual retrieval (“what is the termination clause in this contract?”) and breaks down on multi-document synthesis across large matter files. Clio Duo is included in the higher tiers; more on that under pricing.

    What it misses

    Document automation is weak for the price. Clio’s template system allows variable substitution but doesn’t approach the conditional logic depth of Smokeball or even some cheaper standalone tools. A firm doing heavy transactional or litigation document production will hit the ceiling fast. The client portal (Clio for Clients) is functional — secure messaging, document sharing, bill payment — but the UI is dated compared to MyCase’s portal and regularly draws complaints from clients who aren’t tech-comfortable. Onboarding complexity is also higher than MyCase; expect a few weeks before the team is actually running in it, not a few days.

    Pricing

    As of early 2026, Clio Manage runs on four tiers: EasyStart at $49/user/month (billing and time tracking only, no document management), Essentials at $79/user/month, Advanced at $109/user/month, and Complete at $139/user/month. Clio Duo is available at Advanced and Complete. Annual billing discounts these by roughly 20%. A solo at Essentials pays $79/month; a 5-attorney firm at Advanced pays $545/month before any add-ons. Clio Grow (their CRM and intake product) is a separate subscription — $99/user/month at the lowest tier — which surprises buyers who assumed intake was included.

    MyCase

    MyCase has positioned itself as the accessible alternative to Clio since around 2019, and in 2025 they leaned harder into that positioning with a pricing restructure and a client portal redesign. For a solo or a very small firm that bills flat-fee or needs a single platform to handle intake through invoice without complexity, MyCase is worth serious attention.

    What it does well

    The client portal is the best of the three. It’s cleanly designed, mobile-friendly, and clients consistently find it intuitive enough to use without a tutorial. Secure messaging, document uploads, invoice viewing and payment, and electronic signature requests all live in one place and work without friction. For consumer-facing practices — family law, estate planning, immigration, personal injury — where client communication volume is high and clients aren’t always tech-savvy, this matters a lot.

    MyCase added AI-assisted intake forms and matter summary generation in 2025 under their MyCase IQ branding. The intake form builder uses AI to suggest fields based on practice area, which is a practical time-saver when setting up new matter types. Matter summaries pull from case notes, documents, and communications and produce a readable briefing — useful for quickly handing off matters to coverage counsel or reviewing a file before a call. The quality is consistent enough to use as a starting point rather than a rough draft.

    Flat-fee billing is handled more naturally in MyCase than in Clio. Milestone billing, payment plans, and the ability to tie an invoice to a matter stage without workarounds are all built in. If your matters are a mix of flat-fee and hourly, MyCase handles the blend without forcing you to adapt your workflow to the software.

    What it misses

    The integration catalog is smaller than Clio’s — meaningfully so. MyCase connects to QuickBooks, LawPay, Stripe, Google Workspace, and a handful of others, but if you rely on specialized tools for court calendaring, e-discovery, or filing, you will hit gaps. Document automation exists but is limited to basic templates; conditional logic and clause libraries are absent. Time tracking works but lacks the automatic-capture sophistication of Clio’s desktop app. For an hourly-billing practice with a high volume of time entries, the friction adds up.

    Pricing

    MyCase runs three tiers as of early 2026: Basic at $39/user/month, Pro at $69/user/month, and Advanced at $89/user/month. MyCase IQ (the AI features) is included in Pro and Advanced. Those rates assume annual billing; month-to-month costs more. A solo at Pro pays $69/month; a 5-attorney firm at Pro pays $345/month. eSign is included at Pro and above. This is the lowest all-in price of the three platforms for a firm of 2–5 attorneys that doesn’t need deep integrations or litigation document automation. There is no separate CRM product; intake and lead tracking are built into the platform at the Pro tier.


    Smokeball

    Smokeball targets litigation and transactional practices that live inside Microsoft 365. It is the most opinionated of the three — the software makes assumptions about how you work, and if those assumptions match your practice, the productivity gains are real. If they don’t, the rigidity will frustrate you within a month.

    What it does well

    Document automation is the headline feature and it earns the billing. Smokeball ships with thousands of pre-built forms organized by practice area and jurisdiction — state-specific court forms, transactional templates, demand letters — with conditional logic that actually branches based on matter data. For a litigation practice in a supported jurisdiction, the time from “new matter opened” to “first set of documents generated” is measurably shorter than on either competitor. The 2025 update added AI-assisted document drafting that pulls matter facts into template placeholders and flags missing data before you finalize — a practical quality-control step that reduces the embarrassing error rate on form-heavy matters.

    Smokeball’s automatic time capture is the most passive of the three. The Windows desktop app records time spent in Word documents, emails, and other applications associated with a matter without requiring the attorney to start a timer. For attorneys who consistently under-record time — a common and expensive habit — the difference in captured billable hours is the clearest financial argument for Smokeball’s higher price. The 2025 benchmarking data Smokeball publishes on this (claiming an average of 1.5–2 additional billable hours per attorney per day over self-reported time) is worth treating skeptically, but directionally, passive capture does recover time that manual logging misses.
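
    A quick sensitivity check shows why the vendor claim deserves skepticism but the feature can still pencil out. Even discounting the claimed 1.5–2 extra hours per day by 90%, the recovered revenue exceeds the per-seat cost. The $200 rate and 20 working days here are illustrative assumptions, not figures from the report:

    ```python
    # Sensitivity check on the passive-capture claim (1.5-2 extra billable
    # hours/day per Smokeball's published benchmarking). Rate and working
    # days are illustrative assumptions.

    def monthly_recovered(extra_hours_per_day: float, rate: float, days: int = 20) -> float:
        """Dollar value of recovered time over a month of working days."""
        return extra_hours_per_day * rate * days

    # Take only 10% of the low end of the claim (0.15 hr/day) at $200/hr:
    conservative = monthly_recovered(0.15, 200)
    print(f"Conservative recovered revenue: ${conservative:,.0f}/month")
    # Compare against the ~$149-179/seat/month reported pricing.
    ```

    Put differently: the feature only needs to recover about 45 minutes of unbilled time per attorney per month to cover its own cost difference, which is why the habit of under-recording matters more than the benchmark's exact number.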

    Smokeball AI, their 2025-launched assistant, handles document summarization, clause identification, and — most usefully — automatic population of matter fields from uploaded documents. Drop in a signed retainer and it pulls client name, address, matter type, and key dates into the matter record without manual entry. That specific feature saves real time on intake-heavy practices.

    What it misses

    Smokeball is Windows-first and Microsoft 365-dependent. Mac support exists but is thinner, and if your firm runs on Google Workspace, you will be fighting the software’s default assumptions on every file storage and email step. The client portal is functional but behind MyCase in usability by a clear margin. Pricing is the least transparent of the three — Smokeball does not publish per-user monthly pricing on its public site, which means you are going into a sales conversation before you can compare numbers, and the contract terms tend toward annual commitments with limited flexibility. The integration catalog is narrower than Clio’s; outside of Microsoft-adjacent tools and a core set of legal-specific integrations, the connections are limited.

    Pricing

    Smokeball does not list per-seat pricing publicly. Based on reported figures from 2025 buyer conversations, the Bill tier (entry-level, time tracking and billing) runs approximately $99/user/month, Grow (adds matter management and document automation) runs approximately $149/user/month, and Boost (full feature set including AI features) runs approximately $179/user/month — all on annual contracts. These numbers should be verified in your sales conversation because they shift. A 3-attorney firm at Grow is spending roughly $450–$540/month. That is more than MyCase and roughly comparable to Clio Advanced. Smokeball’s value case rests on the document automation and passive time capture offsetting the higher per-seat cost; whether that math works depends entirely on your matter volume and document density.

    Side-by-side

    • Entry price (solo, annual billing): MyCase Basic $39/mo → MyCase Pro $69/mo → Clio Essentials $79/mo → Smokeball Bill ~$99/mo → Clio Advanced $109/mo → Smokeball Grow ~$149/mo
    • Time tracking: Clio best for manual + automatic capture; Smokeball best for fully passive Windows capture; MyCase adequate for most, limited automatic capture
    • Billing types: All three handle hourly and flat-fee; MyCase handles milestone billing most naturally; Clio most flexible for complex trust accounting
    • Client portal: MyCase best UX; Clio functional; Smokeball least polished
    • Document automation: Smokeball clearly leads; Clio basic variable substitution; MyCase basic templates
    • AI features (2025–2026): All three have AI layers now — Clio Duo (document Q&A, matter summaries, task generation); MyCase IQ (intake assist, matter summaries); Smokeball AI (document population from uploads, clause ID, summarization)
    • Integrations: Clio 200+; MyCase ~30 core; Smokeball Microsoft-centric with ~20 legal-specific
    • Platform dependency: Clio cloud-agnostic; MyCase cloud-agnostic; Smokeball Windows + Microsoft 365 preferred
    • Pricing transparency: MyCase and Clio publish rates; Smokeball requires a sales call
    • Contract flexibility: Clio and MyCase offer monthly billing (at a premium); Smokeball pushes annual contracts
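
    The per-seat figures above multiply out as follows. This small sketch uses the published Clio and MyCase rates and the reported (unpublished, negotiable) Smokeball estimate; no add-ons like Clio Grow are included:

    ```python
    # Monthly totals by firm size for the tiers compared above. Clio and
    # MyCase rates are published as of early 2026; the Smokeball figure is
    # a reported estimate and should be verified in a sales conversation.

    PER_SEAT = {
        "MyCase Pro": 69,
        "Clio Essentials": 79,
        "Clio Advanced": 109,
        "Smokeball Grow (est.)": 149,
    }

    def firm_cost(per_seat: int, attorneys: int) -> int:
        """Base monthly cost: seats times rate, before add-ons or discounts."""
        return per_seat * attorneys

    for size in (1, 3, 5):
        row = ", ".join(f"{name}: ${firm_cost(p, size):,}" for name, p in PER_SEAT.items())
        print(f"{size} attorney(s) -> {row}")
    ```

    Running the table at your own headcount, plus any separate subscriptions (Clio Grow, payment processing), is the honest way to compare, since the gap between platforms widens linearly with every seat.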

    Picking the right one

    If you are a solo or a firm of 2–4 attorneys doing consumer-facing work — family law, immigration, estate planning, criminal defense — and you want a single platform that is fast to learn, handles flat-fee and hourly billing, and gives clients a portal they will actually use, start with MyCase Pro at $69/user/month. You get the AI features, the clean portal, and built-in intake without a separate CRM subscription. The integration gaps will not affect most practices at this size.

    If you are a firm of 4–10 attorneys with a mixed practice, an existing stack of specialized tools, or a strong need for QuickBooks integration, docketing software connections, or flexibility to add tools as you grow, Clio Advanced at $109/user/month is the defensible choice. The higher per-seat cost buys you integration headroom and a time tracking system that scales. If Clio Grow (intake CRM) is relevant to your practice, budget for it separately — the combined cost is higher but the workflow is tighter than cobbling together intake tools on MyCase.

    If you run a litigation-heavy or transactional practice on Windows, your firm lives in Microsoft 365, and you generate high document volume per matter — personal injury, real estate closings, family law in a form-heavy jurisdiction, civil litigation — Smokeball Grow earns its price if the per-seat cost lands under $160/month in your negotiation. The document automation and passive time capture are genuinely differentiated features, not marketing copy. Get the pricing in writing before your trial period ends, and clarify the cancellation terms on the annual contract before you sign.

    If you are on a tight budget and billing under $15,000/month across the firm, MyCase Basic at $39/user/month is worth a 30-day trial before spending more. It covers the fundamentals — matter management, billing, client portal — without requiring you to commit to a platform you haven’t lived in yet.

    Verdict

    There is no single winner here. The right answer is genuinely practice-dependent, which vendor comparison sites tend to obscure because many are paid to crown a winner.

    Use MyCase if you want the lowest total cost, the best client portal, and a platform your team will be running in within a week. Use Clio if your firm has an existing tool stack, bills heavily by the hour, or needs integration flexibility as you grow. Use Smokeball if you are Windows-and-Word-based, generate high document volume per matter, and will actually run the passive time capture — because that feature alone can justify the price difference if your attorneys are consistently under-recording time.

    All three platforms shipped meaningful AI updates in 2025. None of the AI layers replaces a dedicated AI drafting tool yet — they are best understood as workflow connectors that surface matter context at the right moment, not autonomous drafting engines. Treat them as useful additions to the platforms you already have reasons to choose, not as deciding factors on their own.

    Related reading