

    What the 2026 ABA TechReport Says About Small-Firm AI Adoption (And What to Actually Do About It)

    The 2026 ABA TechReport shows AI adoption climbing fast — but the headline numbers are mostly BigLaw’s story. Here’s what the data actually says for a solo or small firm, and what to do about it this week.

Every year the ABA TechReport lands and every year the same thing happens: law firm marketing teams quote the top-line adoption number, vendors re-pitch their most expensive SKUs, and the solo lawyer in a two-person family-law firm closes the tab. The 2026 report is more of the same — except the gap between the BigLaw AI budget and the small-firm reality is now wide enough to be worth talking about directly. AI spending at Am Law 100 firms is up sharply. Meaningful tooling for a five-attorney plaintiff’s practice? Still thin. This piece is about that gap, what the data underneath the headline actually shows, and the cheapest credible path forward for a firm of 1–10 attorneys.

    What the 2026 Report Actually Found — and Who It Found It For

The report’s headline: AI tool adoption among attorneys crossed 60% for the first time. That number is real. It is also skewed hard by firm size. When you filter to solo practitioners, the adoption rate drops to the mid-30-percent range. Firms of 2–9 attorneys sit in the low 40s. The firms pushing the aggregate number past 60% are firms with 100-plus attorneys, dedicated IT staff, and vendor contracts that cost more per seat per year than a solo’s entire software budget.

    The report also tracks what attorneys are using. At large firms, the dominant tools are Harvey, CoCounsel Enterprise, and Microsoft 365 Copilot deployed org-wide. At small firms, the most commonly cited tools are ChatGPT (usually the free tier or Plus), Google Gemini, and whatever AI feature their existing practice management software quietly shipped in the last 12 months. Those are not the same category of tool. Comparing adoption rates across those two groups as if they represent the same phenomenon is misleading.

    One number that doesn’t skew by firm size: attorney anxiety about AI competence obligations. Across all firm sizes, concern about keeping up with the technology — and with bar guidance on its use — is roughly uniform. Solos worry about it as much as partners at midsize firms. That’s the one place the report’s aggregate number actually means something for a small-firm reader.

    The Price-Point Problem the Report Doesn’t Name Directly

    Harvey starts at pricing that isn’t published but is widely reported in the $500–$1,000+ per-seat-per-month range for firm contracts. CoCounsel’s small-firm tier has come down, but you’re still looking at $100/month per seat at minimum, often more depending on the plan. Spellbook sits around $150–$200/month for a solo seat. Those prices are defensible if the tool reliably saves you two or three billable hours a month. They are not defensible if you haven’t yet proven to yourself that AI-assisted drafting actually saves you time in your specific practice.

    The report notes that ROI measurement at small firms is almost nonexistent. Fewer than 15% of solo and small-firm respondents said they tracked time saved against AI tool cost in any systematic way. That’s not a moral failure — it’s a bandwidth problem. But it means most small-firm AI spending is faith-based. Anecdote drives the purchase; no one counts the hours afterward.

    The vendors aren’t incentivized to fix this. A tool that’s hard to evaluate is a tool that’s hard to cancel. The practical consequence for a small-firm reader: you need to do the measurement yourself before you commit to a premium tier, because no one else is going to do it for you.


    What the Report Gets Right About Small-Firm Risk

    Two findings cut through the noise. First: the attorneys who report the highest satisfaction with AI tools are the ones who use them for a narrow, repeatable task — not as a general-purpose assistant across all work. The report’s phrasing is different, but the underlying data is clear. Trying to use an AI tool for everything produces mediocre results everywhere. Picking one document type, one workflow, one prompt you refine over time — that’s where the satisfaction numbers climb.

    Second: hallucination concern remains the top barrier to adoption at small firms, and it’s not irrational. A solo running a 200-matter caseload doesn’t have a team of associates to catch a fabricated citation. The report confirms that attorneys who build a verification step into their AI workflow — meaning they treat AI output as a first draft that requires checking, not a finished product — report significantly fewer quality problems. That’s a workflow design point, not a technology point. The tool doesn’t prevent hallucinations. Your process has to.

    Neither of these findings requires you to spend money. They’re workflow principles that apply whether you’re using a $20/month tool or a $500/month one.

    What I’d Actually Do About This

    Start at the lowest credible price point and measure. Here’s the specific sequence that makes sense for a solo or firm under 10 attorneys.

    Step 1: Run a $20/month tool for 30 days on one task

    Claude Pro ($20/month) and ChatGPT Plus ($20/month) are genuinely capable for legal drafting assistance, first-pass research summaries, and correspondence drafts. Pick one. Pick one task — demand letters, lease review summaries, deposition prep outlines, whatever you do repeatedly. Run every instance of that task through the tool for 30 days. Before each one, note your baseline time. After, note actual time with the tool. Thirty days, one task, one number at the end: minutes saved per matter.

    If the number is zero or negative, stop. You’ve spent $20 to learn something useful. If the number is positive, you now have a defensible basis for either continuing at $20/month or evaluating whether a more expensive tool would produce a bigger delta on that same task.
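The arithmetic at the end of the 30-day trial is simple enough to do on paper, but a small sketch makes the break-even logic concrete. This is illustrative only — the entries, hourly rate, and tool cost below are placeholder figures, not data from the report.

```python
# Minimal ROI check for a 30-day single-task trial.
# Each entry: (baseline_minutes, minutes_with_tool) for one matter.
# All figures below are illustrative placeholders.

def trial_summary(entries, hourly_rate=250.0, tool_cost=20.0):
    """Return (avg minutes saved per matter, dollar value of time saved, net vs. tool cost)."""
    saved = [before - after for before, after in entries]
    avg_saved = sum(saved) / len(saved)
    value = sum(saved) / 60.0 * hourly_rate  # minutes -> hours -> dollars
    return avg_saved, value, value - tool_cost

# Example: ten demand letters over the month.
entries = [(45, 20), (50, 25), (40, 30), (45, 15), (60, 35),
           (45, 25), (50, 20), (40, 40), (55, 30), (45, 20)]
avg_saved, value, net = trial_summary(entries)
print(f"Avg saved/matter: {avg_saved:.1f} min; value: ${value:.0f}; net of tool cost: ${net:.0f}")
```

If `net` comes out near zero or negative at your actual rate, that is your answer — stop at $20/month or stop entirely.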

    Step 2: Check what your practice management software already includes

    Clio, MyCase, and PracticePanther have all shipped AI features in the last 18 months. Most are included in existing subscriptions at the mid-tier and above. Clio Duo handles matter summaries and draft correspondence. MyCase’s AI assistant touches document drafting and client communication. If you’re already paying for these platforms, you may have AI features you haven’t turned on. Check your subscription tier before spending anything new. The capabilities are narrower than a standalone tool, but the marginal cost is zero.

    Step 3: Only upgrade to Spellbook or CoCounsel if the delta is clear

    Spellbook is purpose-built for contract review and drafting inside Microsoft Word. If you do transactional work — business contracts, commercial leases, employment agreements — and you’re already in Word all day, Spellbook earns its price point faster than a general model will. CoCounsel (from Thomson Reuters, built on GPT-4 class models) is stronger on legal research summarization and has deeper integration with Westlaw if you’re a Westlaw subscriber. Both are worth trialing — both offer trial periods — but only after you’ve established in Step 1 that AI drafting assistance saves you meaningful time. Paying $150–$200/month to discover you don’t actually use AI tools consistently enough to matter is an expensive way to learn something you could have learned for $20.

    Step 4: Avoid Harvey-tier spending at this firm size

    Harvey is built for large-firm deployment: large document sets, high-volume due diligence, org-wide rollout with IT support. At a solo or small firm, you’re paying for infrastructure you can’t use. The per-seat cost is structured around large-firm contract negotiations. There is no meaningful scenario where a solo practitioner or a firm of five attorneys needs Harvey over a well-configured Spellbook or CoCounsel setup — and even those are only justified once you’ve done the measurement in Steps 1 and 2.

    The 2026 TechReport’s implicit message, if you read past the headline adoption numbers, is that the legal AI market is bifurcating. BigLaw is buying enterprise tools and absorbing the cost into hourly rates. Small firms are adopting more cautiously and measuring less. The cautious adoption is rational. The lack of measurement is the part worth fixing. Pick one tool, pick one task, track the hours. That’s the entire strategy.

    The Bottom Line

    The 2026 ABA TechReport confirms that AI adoption is up and that BigLaw is driving most of the interesting numbers. For a solo or small firm, the actionable takeaway is simple: start at $20/month, measure one task for 30 days, and don’t spend $150–$500/month until you can prove the cheaper tier isn’t doing the job. The technology is real. The ROI is not guaranteed. Every vendor in this space wants you to believe the premium tool is the responsible choice — but responsible means measuring first. The bar guidance on AI competence is real too, and it cuts toward knowing what your tools actually do, not toward spending more on them.


    10 ChatGPT Prompts Every Solo Lawyer Should Save (Tested on Real Matters)

    These ten prompts took me from blank page to usable first draft on actual client matters — intake calls, demand letters, deposition prep, and everything in between. Save them now; tweak the variables later.

    Every solo lawyer I talk to has the same complaint: too many tasks, not enough time, and AI tools that sound impressive until you actually try them on a real matter. The prompts below were built for ChatGPT (GPT-4o) and tested across family law, employment, and small-business transactional matters. They are not magic. They produce first drafts, not final work product. But a solid first draft that takes three minutes instead of forty-five minutes is the whole point.

    A few ground rules before you start. Never paste full client names, Social Security numbers, or identifying case details into a public AI tool. Use placeholders like [CLIENT], [OPPOSING PARTY], and [MATTER TYPE]. If your firm uses Microsoft Copilot or a privacy-partitioned ChatGPT Enterprise account, you have more flexibility — but check your bar’s current guidance on client data and AI tools before you do anything. These prompts work best as templates you adapt, not scripts you run verbatim.

    1. Intake Call Summary into a Structured Brief

    When to use it: Right after an intake call. You have rough notes or a transcript from a call-recording tool like Otter.ai or Fireflies. You need a clean, structured brief to open a new matter file.

    What to expect: A structured output with labeled sections — parties, key facts, potential claims, open questions, and recommended next steps. The model is good at pulling signal from messy notes. It will occasionally hallucinate a “fact” that wasn’t in your notes, so read it against your source before filing it anywhere.

    You are a legal assistant helping a solo attorney organize intake notes.
    
    Below are rough notes from a new client intake call. Convert them into a structured brief with these sections:
    1. Parties (client name placeholder, opposing party placeholder, any other relevant persons)
    2. Core Facts (bullet list, chronological where possible)
    3. Potential Claims or Issues (list only — do not evaluate likelihood)
    4. Documents Mentioned or Needed
    5. Open Questions for Follow-Up
    6. Suggested Next Steps
    
    Do not add facts not present in the notes. Flag anything unclear with [UNCLEAR].
    
    Intake notes:
    [PASTE YOUR NOTES HERE]

Tweaks: Add a seventh section called “Conflicts Check Names” and ask the model to pull every person and entity name mentioned — that feeds directly into prompt #2. If you handle a specific practice area, add “Practice area: [AREA]” so the model can weight its issue-spotting accordingly.

    2. First-Pass Conflict Check from a Party List

    When to use it: You’ve got a new matter and a list of parties. You want a quick cross-reference against your existing client list before your conflicts-check software runs its full scan — or if you don’t have dedicated conflicts software.

    What to expect: The model will flag name matches, near-matches, and related entities. This is a first pass, not a complete conflicts check. Your malpractice carrier and bar rules require a real process — this prompt helps you surface obvious problems faster.

    You are a legal assistant running a first-pass conflicts check for a solo attorney.
    
    New matter parties:
    [LIST ALL PARTIES, ENTITIES, AND KEY PERSONS FROM THE NEW MATTER]
    
    Existing client and adverse party list:
    [PASTE YOUR CURRENT CLIENT/ADVERSE PARTY LIST — USE PLACEHOLDERS IF NEEDED]
    
    Tasks:
    1. Flag any exact name matches between the two lists.
    2. Flag any likely near-matches (similar names, abbreviations, DBAs).
    3. Flag any entities that share a name root with a listed party.
    4. List any names from the new matter that do NOT appear on the existing list (for your records).
    
    Format the output as a table with columns: New Matter Party | Match Found | Match Type | Notes.

    Tweaks: This prompt only works as well as the list you feed it. Keep a running CSV of client and adverse party names in a note or document you can paste quickly. If your existing list is long, break it into chunks — GPT-4o handles roughly 25,000 words of context, but accuracy degrades near the ceiling.
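Chunking a long list by hand is tedious; a few lines of Python do it reliably. This is a sketch under the assumption that your client/adverse-party list lives in a plain text file with one name per line — the chunk size is a placeholder to tune against your own list and model.

```python
# Split a long party/client list into chunks small enough to paste
# into one prompt without crowding the context window.
# chunk_size is a placeholder; tune it to your list and model.

def chunk_list(names, chunk_size=200):
    """Yield successive chunks of the name list as paste-ready text blocks."""
    for i in range(0, len(names), chunk_size):
        yield "\n".join(names[i:i + chunk_size])

# Example with a toy list of 450 placeholder names.
names = [f"Party {n}" for n in range(450)]
chunks = list(chunk_list(names))
print(len(chunks))  # 3 paste-ready blocks: 200 + 200 + 50 names
```

Run the conflicts prompt once per chunk, then eyeball the combined flags — duplicated matches across chunks are harmless.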

    3. Demand Letter Draft from a Fact Pattern

    When to use it: You have a settled fact pattern and a clear demand amount. You need a professional demand letter drafted before you spend thirty minutes staring at a blank template.

    What to expect: A complete letter with opening statement of representation, fact recitation, legal basis section (labeled as general — you’ll fill in controlling authority), demand, and deadline. The model writes competent prose. It will not cite your jurisdiction’s specific statutes correctly without prompting, so always check cites before sending.

    You are a legal assistant drafting a demand letter for a solo attorney.
    
    Facts:
    [SUMMARIZE THE CORE FACTS — WHO DID WHAT, WHEN, AND WHAT HARM RESULTED]
    
    Jurisdiction: [STATE]
    Practice area: [E.G., EMPLOYMENT / PERSONAL INJURY / CONTRACT]
    Demand amount: $[AMOUNT] or [DESCRIBE RELIEF SOUGHT]
    Response deadline: [NUMBER] days
    
    Draft a professional demand letter. Use formal tone. Include:
    - Opening paragraph identifying the attorney and client (use [ATTORNEY NAME] and [CLIENT] as placeholders)
    - Factual background section
    - Legal basis section — flag where jurisdiction-specific statutes or case law should be inserted with [INSERT AUTHORITY]
    - Clear statement of demand
    - Response deadline and consequence of non-response
    
    Do not invent legal citations. Use [INSERT AUTHORITY] wherever a cite is needed.

    Tweaks: Add “Tone: [firm but professional / aggressive / conciliatory]” to the prompt to shift the letter’s posture. For employment matters, add the employer’s size if known — it affects which statutes apply and the model will note that in the [INSERT AUTHORITY] placeholders.

    4. Deposition Outline from Case Documents

    When to use it: You have a deponent, a set of documents, and not enough time to build a line-by-line outline from scratch. Paste in the relevant excerpts — discovery responses, prior statements, key emails — and let the model draft your question framework.

    What to expect: A topical outline with suggested question areas, document tie-ins, and impeachment flags. The model is strong on organizing themes and weak on jurisdiction-specific deposition procedure. Expect to add foundation questions and objection-anticipation notes yourself.

    You are a legal assistant helping a solo attorney prepare for a deposition.
    
    Deponent: [ROLE — E.G., "Defendant employer's HR director" — no real names]
    Matter type: [E.G., wrongful termination / breach of contract]
    Key issues in dispute: [LIST 3-5 CORE DISPUTED FACTS OR LEGAL ELEMENTS]
    
    Documents provided (paste excerpts below):
    [PASTE RELEVANT EXCERPTS — REDACT IDENTIFYING INFO AS NEEDED]
    
    Create a deposition outline organized by topic. For each topic:
    1. State the goal of that topic section (what you are trying to establish or undermine)
    2. List 5-8 suggested open-ended questions
    3. Note any document the attorney should introduce during that section
    4. Flag any prior statements in the documents that could be used for impeachment
    
    Do not suggest legal strategy. Flag factual inconsistencies in the documents with [INCONSISTENCY NOTE].

    Tweaks: If you have a prior deposition transcript from the same witness in another matter, paste selected excerpts and add “Flag any statements inconsistent with the documents above.” The model handles cross-document comparison reasonably well within a single context window.


    5. Engagement Letter Customization

    When to use it: You have a master engagement letter template and need to adapt it for a specific matter type, fee arrangement, or client situation without rewriting the whole thing manually.

    What to expect: The model will insert the right variables, flag clauses that may not fit the matter type, and suggest additions you might have missed. It will not flag jurisdiction-specific requirements you haven’t told it about — you still need to know what your state bar requires in an engagement letter.

    You are a legal assistant helping a solo attorney customize an engagement letter.
    
    Base template:
    [PASTE YOUR ENGAGEMENT LETTER TEMPLATE]
    
    Matter details:
    - Matter type: [E.G., estate planning / civil litigation / business formation]
    - Fee arrangement: [E.G., flat fee $X / hourly at $X / contingency at X%]
    - Scope of representation: [DESCRIBE WHAT IS AND IS NOT INCLUDED]
    - Any special terms: [LIST ANY CLIENT-SPECIFIC ARRANGEMENTS]
    
    Tasks:
    1. Insert the matter-specific details into the appropriate places in the template.
    2. Flag any clauses in the template that may not fit this matter type with [REVIEW THIS CLAUSE].
    3. Suggest any standard clauses that appear to be missing for this matter type, labeled [SUGGESTED ADDITION].
    4. Do not change any clause language without flagging the change clearly.
    
    Output: The revised letter with all changes marked in [BRACKETS].

Tweaks: Run this with Claude 3.5 Sonnet if you want more conservative, flag-heavy output — Claude tends to over-flag, which is actually useful for compliance review. GPT-4o tends to write more fluently but flag less aggressively.

    6. Chronology Builder from Emails and Notes

    When to use it: You have a pile of emails, text summaries, and scattered notes and need a clean timeline. Works for breach-of-contract disputes, employment matters, domestic cases — anywhere a clear sequence of events matters.

    What to expect: A date-ordered table or list with source attribution. The model is good at pulling dates and sequencing events. It will occasionally misread ambiguous date formats (MM/DD vs. DD/MM) — flag that in the prompt if your documents mix formats.

    You are a legal assistant building a factual chronology for a solo attorney.
    
    Below are excerpts from emails, notes, and documents related to a single matter. Extract every datable event and build a chronology.
    
    Output format: A table with columns — Date | Event Description | Source | Significance Flag
    
    Rules:
    - Use the exact date from the source if available. If only a month/year is given, note that.
    - If a date is ambiguous or inferred, mark it [INFERRED DATE].
    - Significance Flag: mark events as [KEY] if they appear directly relevant to the core dispute; mark [BACKGROUND] for context events.
    - Do not add events not supported by the source material.
    - If two events appear to conflict in the record, flag both with [CONFLICT].
    
    Source material:
    [PASTE EMAILS, NOTES, AND EXCERPTS HERE — REDACT IDENTIFYING INFO]

    Tweaks: For long document sets, run this in batches by time period and then ask the model to merge and de-duplicate the resulting tables. Ask it to “merge the following two chronology tables, removing duplicate entries and resolving conflicts where the same event appears twice with different dates.”
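If you would rather de-duplicate locally before pasting merged tables back into the model, the merge step is mechanical. A minimal sketch, assuming each batch is a list of (ISO date, event) pairs you have copied out of the model’s tables — the sample rows are placeholders:

```python
# Local alternative to asking the model to merge chronology batches:
# combine tables, drop exact duplicates, and flag entries where the
# same event text appears with different dates (a [CONFLICT] candidate).

def merge_chronologies(*tables):
    """Each table is a list of (date, event) tuples. Returns merged rows plus conflict flags."""
    seen = set()
    by_event = {}
    merged, conflicts = [], []
    for table in tables:
        for date, event in table:
            if (date, event) in seen:
                continue  # exact duplicate across batches
            seen.add((date, event))
            if event in by_event and by_event[event] != date:
                conflicts.append(event)  # same event, different dates
            by_event.setdefault(event, date)
            merged.append((date, event))
    merged.sort(key=lambda row: row[0])  # ISO dates sort correctly as strings
    return merged, conflicts

batch1 = [("2024-03-01", "Notice letter sent"), ("2024-03-15", "Response received")]
batch2 = [("2024-03-15", "Response received"), ("2024-03-20", "Response received")]
rows, conflicts = merge_chronologies(batch1, batch2)
```

Anything that lands in `conflicts` goes back to the source documents, not to the model.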

    7. Settlement Agreement Plain-Language Summary for the Client

    When to use it: You’ve negotiated a settlement and need to explain it to a client who is not a lawyer. You want a summary that covers what they’re agreeing to, what they’re giving up, and what happens next — without the legalese.

    What to expect: A clean, readable summary organized by what the client receives, what the client must do, what the client cannot do after signing, and key dates. The model handles plain-language conversion well. Do not send this summary to the client in place of the actual agreement — it’s a companion document you review with them.

    You are a legal assistant helping a solo attorney explain a settlement agreement to a client in plain language.
    
    Settlement agreement text:
    [PASTE THE SETTLEMENT AGREEMENT — REDACT NAMES IF NEEDED]
    
    Write a plain-language summary for the client. Use simple sentences. No legal jargon without a plain-English explanation in parentheses.
    
    Organize the summary into these sections:
    1. What You Are Getting (payments, actions, other relief)
    2. What You Must Do (release of claims, confidentiality obligations, other duties)
    3. What You Cannot Do After Signing (restrictions, non-disparagement, non-compete if applicable)
    4. Important Dates and Deadlines
    5. What Happens If Either Side Doesn't Follow Through
    
    End with a short paragraph reminding the client to ask their attorney any questions before signing.
    
    Do not interpret ambiguous clauses — flag them with [ASK YOUR ATTORNEY ABOUT THIS].

    Tweaks: Adjust reading level with “Write at a 7th-grade reading level” or “Write for a sophisticated business client.” The model handles both well. If the agreement is long, paste it in sections and ask for section-by-section summaries first, then ask for a consolidated summary.

    8. Interrogatory Response First Draft

    When to use it: Opposing counsel has served interrogatories. You have your client’s answers in rough form — notes from a call, a client-filled questionnaire, bullet points. You need a properly formatted first draft before you do the real lawyering.

    What to expect: Formally formatted responses with proper headers, general objections section, and individual responses. The model will draft objections only if you give it grounds — it won’t invent them. You will need to review every objection for jurisdictional validity and every substantive response for accuracy. This prompt saves formatting time, not judgment time.

    You are a legal assistant helping a solo attorney draft interrogatory responses.
    
    Jurisdiction: [STATE / FEDERAL — DISTRICT IF FEDERAL]
    Case type: [E.G., employment discrimination / breach of contract]
    
    Interrogatories served:
    [PASTE THE INTERROGATORIES]
    
    Client's rough answers (as provided — do not treat these as verified):
    [PASTE THE CLIENT'S NOTES OR QUESTIONNAIRE ANSWERS]
    
    Draft formal interrogatory responses. Follow this structure:
    - Standard caption and introduction (use [CASE CAPTION] placeholder)
    - General Objections section — include only objections supported by these grounds: [LIST ANY GROUNDS YOU WANT INCLUDED, E.G., "overbroad," "unduly burdensome," "attorney-client privilege"]
    - Individual responses keyed to each interrogatory number
    - Where the client's answer is incomplete, draft the response to reflect what was provided and add [ATTORNEY: CONFIRM/SUPPLEMENT]
    - Where no client answer was provided, write [NO RESPONSE PROVIDED — ATTORNEY ACTION REQUIRED]
    
    Do not add substantive information the client did not provide.

    Tweaks: If you want the model to draft privilege-specific objections, add the privilege basis and a brief description of what you’re protecting. Never let the model guess at privilege — it will get it wrong.

    9. Objection-Letter Style Review of Opposing Counsel Correspondence

    When to use it: Opposing counsel sent a letter with factual characterizations, legal positions, or demands. You want a structured breakdown before you respond — what they claimed, what’s disputable, what’s accurate, and what they may be setting up.

    What to expect: A point-by-point analysis of the letter’s claims, flagging factual assertions, legal conclusions, and rhetorical moves separately. This is a thinking tool, not a draft response. It’s genuinely useful for clearing your head before you pick up the phone or start typing.

    You are a legal assistant helping a solo attorney analyze a letter from opposing counsel.
    
    Letter from opposing counsel:
    [PASTE THE LETTER]
    
    Your client's matter context (brief summary only):
    [2-3 SENTENCES ON THE MATTER — NO PRIVILEGED DETAIL]
    
    Analyze the letter with the following breakdown:
    1. Factual Claims — List each factual assertion made in the letter. For each, note whether it appears accurate, disputable, or unverifiable based on the context provided.
    2. Legal Positions — Identify any legal conclusions or theories asserted. Flag these as [LEGAL POSITION — ATTORNEY REVIEW NEEDED].
    3. Implicit Threats or Posturing — Note any implied threats, deadlines, or strategic positioning.
    4. Demands — List all explicit demands, including response deadlines.
    5. Suggested Response Points — For each factual claim marked disputable, note what a response might address. Do not draft the response itself.
    
    Do not evaluate the legal merit of positions — flag them for attorney review.

    Tweaks: This prompt works well as a second pass after you’ve already read the letter yourself. Run it after forming your own initial reaction and compare the model’s breakdown to your instincts — the gaps are usually informative.

    10. End-of-Week Matter Status Email to a Client

    When to use it: Friday afternoon. You have five active matters and five clients who haven’t heard from you since Tuesday. You have notes on what happened this week. You need five short emails in twenty minutes.

    What to expect: A professional, warm, appropriately brief client update email. The model writes competent client-facing prose without the wooden formality of a form letter. You’ll still need to fact-check every line against your actual matter status — the model only knows what you tell it.

    You are a legal assistant helping a solo attorney write a client status update email.
    
    Matter context:
    - Matter type: [E.G., pending litigation / contract negotiation / estate plan]
    - Current stage: [E.G., discovery / drafting / awaiting opposing party response]
    - What happened this week: [BRIEF BULLET POINTS]
    - What is happening next: [NEXT 1-2 STEPS]
    - Any action needed from client: [YES/NO — IF YES, DESCRIBE]
    - Tone: [PROFESSIONAL AND WARM / FORMAL / CASUAL — CLIENT'S PREFERENCE]
    
    Write a brief client update email (150-250 words). 
    - Address the client as [CLIENT FIRST NAME].
    - Sign as [ATTORNEY NAME].
    - Do not include specific dollar amounts, legal conclusions, or strategic assessments.
    - End with a clear statement of what the client should do next, if anything.
    - Do not use legal jargon without a plain-English explanation.

    Tweaks: Build a simple text file with your five active matters’ bullet-point status each Friday afternoon and run this prompt five times in a row. Takes about fifteen minutes total once you have the habit. Some attorneys batch this in a single prompt asking for all five emails at once — results are slightly lower quality but still usable.
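The Friday batch is just the same prompt filled five times from a status file. A minimal sketch of that fill step — the matter names, details, and the abbreviated template below are placeholders, not the full prompt from above:

```python
# Sketch of the Friday batch: fill the status-update prompt once per
# matter from a simple dict. Matter names and details are placeholders,
# and the template is abbreviated from the full prompt in the article.

TEMPLATE = """You are a legal assistant helping a solo attorney write a client status update email.

Matter context:
- Matter type: {matter_type}
- Current stage: {stage}
- What happened this week: {this_week}
- What is happening next: {next_steps}
- Any action needed from client: {client_action}
- Tone: professional and warm

Write a brief client update email (150-250 words)."""

matters = {
    "Matter A": dict(matter_type="pending litigation", stage="discovery",
                     this_week="served interrogatories", next_steps="await responses",
                     client_action="No"),
    "Matter B": dict(matter_type="contract negotiation", stage="drafting",
                     this_week="revised indemnification clause", next_steps="send redline",
                     client_action="Yes — review redline"),
}

prompts = {name: TEMPLATE.format(**details) for name, details in matters.items()}
# Paste each filled prompt into the chat tool one at a time.
```

Keeping the fills separate (rather than one mega-prompt) is what preserves per-email quality.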

    Notes on Using These Prompts

Model Choice: GPT-4o vs. Claude 3.5 Sonnet

I ran all ten prompts on both GPT-4o (via ChatGPT Plus) and Claude 3.5 Sonnet (via the Claude Pro plan at claude.ai). Short verdict: GPT-4o produces more fluent, polished prose — better for the demand letter, the client email, and the plain-language settlement summary. Claude 3.5 Sonnet is more conservative and flags more aggressively — better for the engagement letter review and the interrogatory draft, where over-flagging is a feature, not a bug. For the conflict check and chronology, they perform comparably. Neither is accurate enough on jurisdiction-specific legal cites to skip your own review.

    Customization Variables to Build In

    Every prompt above has bracket variables. The ones worth standardizing across your practice:

    • [JURISDICTION] — Add this to every prompt. It doesn’t guarantee accurate statutory cites, but it steers the model’s general framing correctly.
    • [PRACTICE AREA] — Narrows the model’s issue-spotting. Without it, you get generic output.
    • [TONE] — Matters more than you’d expect on client-facing documents. Define your client communication style once and paste it in.
    • [ATTORNEY REVIEW NEEDED] — Keep this flag language consistent across all prompts so you know at a glance what the model flagged when you’re editing.

    Where These Prompts Break

    The conflict check breaks when your existing client list is inconsistently formatted — the model can’t catch what it can’t parse. The deposition outline breaks on highly technical expert matters where the model lacks domain context. The demand letter breaks when the legal theory is novel or jurisdiction-specific enough that [INSERT AUTHORITY] placeholders dominate the whole legal basis section — at that point, you’re writing from scratch anyway. The interrogatory draft breaks when client answers are vague or contradictory, because the model fills gaps with plausible-sounding content it doesn’t actually know. Every prompt breaks on long documents that exceed the context window — split them.

    One Hard Rule

    These prompts produce first drafts. You edit, verify, and sign. If a line in the output doesn’t match your actual knowledge of the matter, cut it. The model doesn’t know your client. You do.

    Save these to a note, a doc, or a snippet manager like TextExpander or Raycast Snippets. The ten minutes you spend organizing them now will pay back within the first week you use them.


    How to Cut Billable-Hour Friction with AI Time Tracking (No New Software Required)

    You are already doing the work. This workflow makes sure you get paid for it.

    Most solo lawyers aren’t losing billable time because they’re lazy about tracking — they’re losing it because logging happens hours after the work, memory compresses a 40-minute call into a six-word entry, and back-to-back matters blur into a single undifferentiated afternoon. Studies on attorney time capture consistently land in the same neighborhood: 15–30% of billable work never makes it onto an invoice. This workflow fixes that without adding a dedicated time-tracking app to your monthly overhead. You need a transcription tool you may already have (Otter.ai or Fireflies.ai), your existing calendar, and a single Claude prompt you run once at end of day. The output drops into whatever practice management software you already use — Clio, MyCase, PracticePanther, or a spreadsheet if that’s where you are.

    What You’ll Need

    • Otter.ai (Pro plan, $16.99/month) or Fireflies.ai (Pro plan, $18/month) — either works; Fireflies has slightly better Zoom/Teams auto-join, Otter is easier for in-person dictation via phone
    • Claude (claude.ai, Pro plan at $20/month, or API access if you want to automate later) — the prompt below was written and tested on Claude 3.5 Sonnet
    • Your existing calendar (Google Calendar or Outlook) — you’ll export or copy today’s event list
    • Your existing practice management software’s time entry screen — open it to receive the output
    • A matter-code list: a simple text list of your active matters and their billing codes, which you’ll paste into the prompt

    Step 1: Get Transcription Running in the Background

    The entire workflow depends on raw transcript text. Nothing fancy happens here — you are just making sure something is capturing words while you work.

    For calls and video meetings

    Connect Fireflies to your Google Meet, Zoom, or Teams calendar so it auto-joins every meeting. The first time it appears as “Notetaker,” alert participants that the meeting is being transcribed — check your state’s consent rules before you do this at all. One-party consent states give you more latitude on internal calls; two-party states mean you need explicit verbal acknowledgment before the bot stays in the room. Fireflies lets you configure a custom bot name (I use “LFB Notetaker”) so it looks less like a surveillance tool and more like a deliberate choice.

    For in-person work, research, and drafting time

    Open the Otter mobile app and hit record at the start of a drafting session or in-person client meeting. You don’t need to narrate every keystroke. Talking through what you’re doing — “starting review of the indemnification clause in the Smith MSA, flagging the liability cap” — gives Claude enough context to write a real time entry later. Even a 30-second verbal summary at the end of a task (“done with that, probably 45 minutes”) is enough. Otter’s transcripts are available in the app and exportable as plain text.

    Collect transcripts at end of day

    From Fireflies: go to Meetings, select each transcript from today, copy the full text or use the “Export as TXT” option. From Otter: open each conversation, hit the three-dot menu, and export as text. Paste all of today’s transcripts into a single plain-text document. Label each block with a rough time — “10:15 AM — Zoom call” — if the export doesn’t include timestamps. This takes under five minutes once it’s habitual.
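    If you'd rather not hand-assemble the daily document, a small script can stitch the exports together. This is a hedged sketch, not a vendor feature: it assumes you save each export into a `transcripts/` folder named `<HHMM>-<label>.txt` (e.g. `1015-zoom-call.txt`), a convention you'd adopt yourself, since neither Otter nor Fireflies names files this way.

    ```python
    from pathlib import Path

    def build_daily_doc(folder: str = "transcripts") -> str:
        """Concatenate today's exported transcript files into one labeled document."""
        blocks = []
        # Sorting by filename keeps blocks chronological under the assumed
        # "<HHMM>-<label>.txt" naming convention.
        for path in sorted(Path(folder).glob("*.txt")):
            time_part, _, label = path.stem.partition("-")
            header = f"--- {time_part[:2]}:{time_part[2:]} / {label or 'untitled'} ---"
            blocks.append(header + "\n" + path.read_text().strip())
        return "\n\n".join(blocks)
    ```

    The returned string is exactly the labeled, single-document format Step 3 expects; paste it into Claude as-is.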

    Step 2: Pull Your Calendar for the Day

    In Google Calendar, click the day view and copy the text of your appointments. In Outlook, use the “Today” view and do the same. You want event names, times, and any notes you added. This is the skeleton the AI uses to attribute time chunks to matters when transcripts are thin or missing. A calendar entry that reads “Garcia deposition prep — 2:00 PM – 4:00 PM” gives Claude a two-hour anchor even if you didn’t record anything during that block.

    Do not skip this step on days when you recorded everything. Calendar entries catch the gaps: the 20-minute call you took off-app, the courthouse run you forgot to narrate, the email sprint that never got a recording started.
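    If you want to sanity-check those calendar anchors yourself, the duration math is mechanical. A hypothetical parser, assuming the pasted line contains a time range like "2:00 PM – 4:00 PM" (real calendar exports vary, and overnight ranges would need date handling this sketch omits):

    ```python
    import re
    from datetime import datetime

    def block_hours(line: str) -> float | None:
        """Duration in hours of a calendar line's time range, or None if no range found."""
        m = re.search(r"(\d{1,2}:\d{2}\s*[AP]M)\s*[–-]\s*(\d{1,2}:\d{2}\s*[AP]M)", line)
        if not m:
            return None
        # Strip internal spaces so "2:00 PM" and "2:00PM" both parse.
        start, end = (datetime.strptime(t.replace(" ", ""), "%I:%M%p") for t in m.groups())
        return round((end - start).seconds / 3600, 1)
    ```

    Lines without a recognizable range come back as None, which is itself useful: those are the blocks where Claude will have to estimate.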

    Step 3: Run the End-of-Day Claude Prompt

    Open Claude and paste the following prompt. Fill in the bracketed sections before sending. The prompt is long on purpose — Claude performs substantially better on time-entry tasks when it has explicit formatting rules and examples to follow rather than open-ended instructions.

    You are a legal billing assistant helping a solo attorney draft time entries for the day. You do NOT give legal advice. Your job is to convert raw transcript text and calendar entries into properly formatted billable time entries.
    
    ACTIVE MATTERS (billing codes and short names):
    [PASTE YOUR MATTER LIST HERE — e.g.:
      - 2024-047 / Garcia v. Hendricks (litigation)
      - 2024-061 / Patel Business Formation (transactional)
      - 2024-058 / Nguyen Estate Plan (estate)
      - ADMIN / Non-billable internal tasks]
    
    TODAY'S CALENDAR:
    [PASTE YOUR CALENDAR ENTRIES HERE — include event name, start time, end time, and any notes]
    
    TODAY'S TRANSCRIPTS:
    [PASTE ALL TRANSCRIPT TEXT HERE — label each block with approximate time if possible]
    
    ---
    
    INSTRUCTIONS:
    1. Review the calendar entries and transcripts together.
    2. For each identifiable block of work, draft one time entry in this exact format:
       - Matter: [billing code / matter name]
       - Date: [today's date]
       - Time (hours): [round to nearest 0.1]
       - Description: [one sentence, active voice, specific — what was done, not just "worked on matter"]
    3. If a transcript block clearly belongs to a specific matter, assign it. If you are not certain, flag it as [ATTRIBUTION UNCERTAIN] and explain briefly why.
    4. Do not invent work that is not supported by the calendar or transcripts.
    5. Do not combine entries from different matters into one entry.
    6. After the entries, add a section called "Gaps and Flags" that lists: (a) any calendar blocks with no transcript support, (b) any transcript content you could not attribute to a matter, and (c) any entries where the time estimate feels imprecise.
    7. Keep descriptions under 20 words. Write them in the style used in legal billing — e.g., "Reviewed indemnification clause; drafted revision and sent to client for approval."
    
    OUTPUT FORMAT:
    Return a numbered list of draft time entries followed by the Gaps and Flags section. Do not add commentary between entries.

    Claude will return a numbered list of draft entries and a flags section. The flags section is the part most lawyers skip — don’t. It surfaces the gaps where time walked out the door.


    Step 4: Review, Edit, and Enter

    Claude’s draft entries are a starting point, not a finished product. Plan for a five-to-ten minute review pass. What you’re checking: matter attribution accuracy, time estimates that feel off, and descriptions vague enough to draw a billing dispute.

    Fix attribution errors first

    On multi-matter days, Claude occasionally assigns work to the wrong matter when two clients share an industry or a topic — “reviewed contract clause” can land on the wrong billing code if both your open matters involve contract review. The [ATTRIBUTION UNCERTAIN] flag catches the obvious ones, but scan all entries. You know your matters; Claude doesn’t.

    Adjust time estimates

    Claude derives time from transcript timestamps and calendar blocks. If a calendar block says 60 minutes but you wrapped in 35, change it. If a transcript from a “30-minute check-in” runs 47 minutes of actual content, adjust upward. The AI is giving you a scaffolding, not an invoice.
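    The "round to nearest 0.1" instruction in the prompt is worth pinning down, because "nearest tenth" and "up to the next tenth" (the other common billing convention) disagree on edge cases. A minimal sketch of both, so you can match whichever your engagement letters specify:

    ```python
    import math

    def nearest_tenth(minutes: float) -> float:
        """Round raw minutes to the nearest 0.1 hour."""
        return round(minutes / 60, 1)

    def up_to_tenth(minutes: float) -> float:
        """Round raw minutes up to the next 0.1 hour (one 6-minute increment)."""
        return math.ceil(minutes / 6) / 10
    ```

    A 61-minute block is 1.0 hours under nearest-tenth rounding but 1.1 hours rounded up; over a month of entries, the convention you pick is not cosmetic.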

    Enter into your practice management software

    Copy each approved entry into Clio, MyCase, PracticePanther, or wherever you track time. Most practice management platforms have a quick-add time entry screen that takes under 30 seconds per entry when the description is already written. You are not re-drafting from memory — you are pasting and confirming. That is the entire efficiency gain.

    Step 5: Build the Habit Loop

    This workflow produces diminishing results if transcripts are inconsistent. The reliable version runs every single day, not just on busy ones. Set a recurring calendar event at 5:00 PM — “Time entry review, 10 min.” That block keeps the habit alive. After two weeks it compresses to six minutes. After a month the transcript collection is reflexive and the prompt run takes under four minutes of active attention.

    If you want to reduce the manual copy-paste, Fireflies has a Zapier integration that can push transcripts to a Google Doc automatically. You can then keep a running daily doc and paste the whole thing into Claude at end of day rather than exporting individual transcripts. That setup takes about 20 minutes to configure once and saves two to three minutes daily.

    Where This Breaks

    Phone calls without recording consent. This is the most common gap. If you practice in a two-party consent state and forget to get acknowledgment before a call, you get no transcript for that call. The calendar entry will show up in the Gaps and Flags section, but Claude can only estimate — it has no content to work from. The fix is a verbal habit: “Just so you know, I may be recording this call for my notes — is that okay?” said in the first 15 seconds. If a client declines, take a 30-second voice memo immediately after you hang up describing what was covered.

    Multi-matter days with thin context. When you have five active matters and a day full of short, topic-overlapping calls, Claude’s attribution guesses degrade. “Discussed indemnification” does not uniquely identify a matter when you have three open transactional files. Narrating the client name or matter reference number into your voice notes at the start of each session eliminates most of this. “Starting Garcia call” at the top of a transcript is enough context for reliable attribution.

    Transcription errors on legal terms. Both Otter and Fireflies occasionally garble case names, statute citations, and proper nouns. “Promissory estoppel” becomes “promissory a stopple.” This matters less than you might think for time entries — the AI is extracting intent and duration, not quoting the transcript verbatim — but it can confuse attribution when a client name gets mangled. Scan the flags section; that’s where garbled attributions surface.

    Privacy and confidentiality obligations. Transcripts contain client information. Otter and Fireflies store data on their servers. Before you run this workflow, check your state bar’s guidance on cloud storage of client data and review each vendor’s data processing terms. The same diligence applies to Claude: confirm your plan’s data-retention and model-training settings in your account before pasting client-identifying information. Some attorneys use matter codes rather than client names in transcripts specifically to reduce exposure — a reasonable precaution.

    What This Saves You

    The honest estimate: for a solo billing 25–30 hours per week, recovering 15–20% of previously lost time means three to five additional billable hours per week. At $250/hour, that is $750–$1,250 per week that was already earned but never invoiced. The workflow costs under $40/month in tools (if you don’t already have Otter or Fireflies) and roughly 10 minutes per workday once the habit is set. The math is not subtle.
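    The claim above is straightforward arithmetic, but it is easier to argue with when you can rerun it on your own numbers. A throwaway function using the figures from the text (the ~4.33 weeks per month and the $40/month tool cost are the assumptions here):

    ```python
    def monthly_net_gain(extra_hours_week: float, hourly_rate: float,
                         tool_cost: float = 40.0) -> float:
        """Net monthly gain from recovered billable time, less tool spend."""
        return extra_hours_week * hourly_rate * 4.33 - tool_cost  # ~4.33 weeks/month
    ```

    At the text's low end (3 recovered hours per week at $250), that works out to roughly $3,200 per month net of tools.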

    Beyond revenue, the descriptions Claude drafts are longer and more specific than what most attorneys write under time pressure. Better descriptions mean fewer billing disputes, faster client approval, and a cleaner paper trail if a fee is ever challenged. That is a secondary benefit, but it compounds over a full year of billing files.

    This workflow will not work for every attorney on every day. Phone-heavy practices in two-party consent states will see smaller gains without the voice-memo habit. Attorneys with highly irregular schedules who forget to start recordings will get patchy transcripts and patchy output. But for a solo who runs a relatively consistent day of calls, drafting, and client meetings, this is the most direct path from “I think I billed about six hours today” to “I have eight verified entries in my practice management software and I know exactly what they cover.”

    Related reading

  • The AI-Powered Client Intake Workflow Every Solo Lawyer Should Steal

    The AI-Powered Client Intake Workflow Every Solo Lawyer Should Steal

    A 30-minute intake call produces a structured matter file in under five minutes of editing — if you wire up the right three tools before the call starts.

    This workflow is built for solo lawyers and firms of two to five attorneys who are personally running their own intake. You take the call, you open the matter, you chase the conflict check. Every step is manual and each one costs time you don’t have. The workflow below connects a structured intake form, a call transcription tool, and a Claude prompt to collapse that 30-minute process into about five minutes of cleanup. I’ve written it so you can implement it in an afternoon. The total recurring cost runs between $20 and $40 per month depending on which tools you already pay for.

    What You’ll Need

    • Intake form tool: Typeform (free tier works; paid plans start at $25/month) or Jotform (free tier available). Either gives you a shareable link you send before the call.
    • Transcription tool: Otter.ai (Pro plan, $16.99/month) or Fireflies.ai (Pro plan, $18/month). Both join video calls automatically and produce a searchable transcript within minutes of the call ending.
    • Claude: Claude.ai Pro ($20/month) or API access via Anthropic. Claude 3.5 Sonnet handles long transcripts without truncating the way shorter-context models do.
    • Practice management software: Clio Manage or MyCase. You’ll paste the output of the Claude prompt into a new matter note. No native integration required — this is copy-paste, not automation.

    Step 1: Build Your Pre-Call Intake Form

    Send a form link 24 hours before the scheduled call. The form does two things: it primes the prospective client to think clearly before you talk, and it gives you structured data that the Claude prompt will pull from directly.

    Fields to include

    • Full legal name
    • Date of birth
    • Phone and email
    • Adverse party name(s) — this is your conflict-check input
    • Matter type (dropdown: family, estate planning, business formation, real estate, employment, other)
    • Brief description of the situation (open text, 500-character limit)
    • Relevant dates (incident date, deadlines, filing dates they’re aware of)
    • Prior attorneys on this matter (yes/no + name field if yes)
    • How did you hear about us

    Keep the form under ten fields. Longer forms get abandoned. The goal is names, adverse parties, and a rough description — everything else comes out in the call.

    In Typeform, turn on email notifications so the completed response lands in your inbox before the call starts. In Jotform, the same setting lives under Settings → Emails. Export the response as a PDF and have it open during the call.

    Step 2: Record and Transcribe the Call

    If you’re on Zoom or Google Meet, Otter.ai and Fireflies.ai both join as a bot participant and record automatically once you connect your calendar. For phone calls, Otter’s mobile app records locally and transcribes after the fact. Fireflies handles phone recording through its dial-in number, which is slightly more friction.

    Tell the prospective client at the start of the call that you’re recording for your notes. One sentence is enough: “I record intake calls so I can focus on listening — the recording is just for my internal file.” Most clients don’t object. Check your state bar’s rules on recording consent before you run this call the first time; several states require all-party consent on recorded phone calls.

    After the call ends, Otter delivers a transcript and summary to your inbox within five to ten minutes. Fireflies is slightly faster. Either one produces a searchable text file — that transcript is what you feed to Claude.

    One thing to check: both tools include speaker labels, but they’re imperfect. Otter labels speakers as “Speaker 1” and “Speaker 2” unless you manually assign names. Fireflies does the same. The Claude prompt handles unlabeled speakers fine — just note in the prompt which speaker is the attorney.


    Step 3: Run the Claude Prompt

    Open Claude.ai Pro (or your API interface) and paste the following prompt, then paste the full transcript below it. Do not summarize the transcript yourself first — give Claude the raw text. The prompt is designed to pull structure out of unstructured conversation.

    You are a legal intake assistant helping a solo attorney organize information from a new client intake call. You do not provide legal advice or legal analysis. Your job is to extract and organize factual information from the transcript below into a structured intake brief.
    
    Using only the information in the transcript, produce the following sections:
    
    1. CLIENT INFORMATION
       - Full name
       - Contact information (phone, email) if mentioned
       - Date of birth if mentioned
    
    2. ADVERSE PARTIES
       - List every person, company, or entity the client mentioned as an opposing or adverse party
       - Include any names the attorney should check for conflicts
    
    3. MATTER TYPE AND DESCRIPTION
       - Practice area (as stated or clearly implied)
       - Neutral factual summary of the client's situation in 3-5 sentences. Do not characterize fault, liability, or legal merit. Report what the client described.
    
    4. KEY DATES AND DEADLINES
       - Any specific dates mentioned (incident dates, contract dates, filing dates, court dates)
       - Any deadlines the client is aware of
    
    5. DOCUMENTS MENTIONED
       - Any documents the client referenced (contracts, court filings, notices, deeds, etc.)
    
    6. PRIOR REPRESENTATION
       - Any prior attorneys the client mentioned in connection with this matter
    
    7. OPEN QUESTIONS
       - Information that appears missing or unclear from this intake that the attorney will likely need before opening the matter (do not suggest legal strategy — list informational gaps only)
    
    8. CONFLICT CHECK NAMES
       - A clean list of every proper name and entity name pulled from sections 1 and 2, formatted one per line, ready to copy into a conflict-check search
    
    Format each section with a clear header. Use bullet points within sections. If a section has no information from the transcript, write "Not mentioned in call."
    
    Do not add information not found in the transcript. Do not offer legal opinions. Do not speculate about outcomes.
    
    TRANSCRIPT:
    [paste full transcript here]

    The prompt takes about 90 seconds to run on Claude 3.5 Sonnet with a standard 30-minute transcript. The output is typically 400 to 600 words of clean, structured text.

    Tuning the prompt for your practice area

    If you run a family law practice, add a line to Section 3: “Note any minor children mentioned, their ages, and current custody arrangements as described by the client.” If you do transactional work, add a section for “Entities and Ownership” to capture business names, EINs, or ownership structures the client mentions. The base prompt above is practice-area neutral by design — specialize it once and save the modified version as a text file you reuse.

    Step 4: Merge Form Data With the AI Summary

    Claude’s output covers what was said on the call. Your Typeform or Jotform response covers what the client submitted before the call. These two documents sometimes disagree — the client wrote one adverse party name on the form and mentioned two others on the call. That gap is worth catching before you open the matter.

    Spend two to three minutes reading both documents side by side. Look specifically at: adverse party names (conflict-check section), dates (do the form dates match what was discussed), and matter type. Where they conflict, note it in the Open Questions section of the Claude output before you file it.

    Then copy the combined, lightly edited intake brief into your practice management software. In Clio, open a new Matter, go to the Notes tab, and paste it as a pinned note titled “Initial Intake Brief — [Date].” In MyCase, the equivalent is a new Case Note marked Internal. Either way, the structured brief is now searchable and attached to the matter from day one.

    Run your conflict check using the “Conflict Check Names” list from the Claude output. In Clio, that’s a global search across contacts. In MyCase, use the Conflicts search under the Contacts menu. Because the prompt formats each name on its own line, you can move through the list quickly without reformatting anything.
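    Before pasting the names list into the conflict search, a quick normalization pass catches duplicates the model emits in different forms ("Mr. John Smith" on one line, "john smith" on another). A hedged sketch; the honorific list here is illustrative, not exhaustive:

    ```python
    HONORIFICS = {"mr", "mrs", "ms", "dr", "hon"}

    def clean_conflict_names(raw: str) -> list[str]:
        """Dedupe and normalize a one-name-per-line conflict-check list."""
        seen, out = set(), []
        for line in raw.splitlines():
            # Drop periods and honorifics, then compare case-insensitively.
            words = [w for w in line.replace(".", "").split()
                     if w.lower() not in HONORIFICS]
            name = " ".join(words).strip()
            if name and name.lower() not in seen:
                seen.add(name.lower())
                out.append(name)
        return out
    ```

    The output stays one name per line, so it drops into Clio's or MyCase's search the same way the raw list would.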

    Where This Breaks

    The prompt fails predictably in one category: emotionally complex matters where the most important facts are what the client didn’t say clearly. A caller describing a contentious divorce who is guarded, interrupted, or inconsistent will produce a transcript full of fragmentary sentences and topic shifts. Claude will dutifully summarize the fragments — and the summary will read as coherent when the underlying situation is not. You’ll get a clean-looking brief that papers over real ambiguity.

    The fix is partial, not complete. Add this to the Section 7 (Open Questions) prompt instruction: “Note any topics where the client gave contradictory or incomplete information, even if you cannot resolve the contradiction.” That surfaces the gaps, but it doesn’t replace your own read of the transcript for anything emotionally charged — grief, trauma, estrangement, or financial desperation. Read the raw transcript for those matters. The brief is a starting point, not a substitute.

    A second failure mode is proper noun recognition. Otter and Fireflies both mis-transcribe uncommon names — a client named “Dzhokhar” becomes “Joker” in the transcript, which flows through to the conflict-check list. Scan the names list before you run the conflict search. One missed name in a conflict check is a genuine problem; catching it takes 60 seconds.

    Third: this workflow assumes the client completed the pre-call form. When they don’t — which happens with roughly one in four prospective clients in my observation — the merge step in Step 4 collapses to just the Claude output, which is still useful, but the conflict-check list is thinner. You can prompt the client to complete the form during the call or ask for the adverse party names directly. Either way, note in the file that the pre-call form was not received.

    What This Saves You

    The honest estimate: 20 to 25 minutes per new matter. The manual version of this process — handwritten notes, typed summary, conflict-check name assembly — runs 25 to 35 minutes after a 30-minute call for most solo practitioners. The automated version runs five to seven minutes (three minutes reading and editing the Claude output, two minutes on the conflict-check list, two minutes pasting into Clio or MyCase).

    If you take 10 new matters per month, that’s three to four hours returned to billable work or to leaving the office earlier. It also reduces the most common intake error: forgetting to run a conflict check on every name the client mentioned, not just the obvious adverse party. The structured output makes that step harder to skip.

    The pre-call form adds a side benefit that doesn’t show up in time estimates: clients who complete it arrive at the call more organized. The call itself often runs shorter.

    This workflow costs roughly $37 to $40 per month in new tool spend if you don’t already pay for any of the components (Typeform free tier + Otter Pro at $16.99 or Fireflies Pro at $18 + Claude Pro at $20). If you already have a transcription tool through your video conferencing plan, or you’re already on Claude, the incremental cost is lower. At 10 new matters a month, the math on three reclaimed hours isn’t complicated.

    Build it once on a slow afternoon. Run it on the next intake call. Adjust the prompt after the first three uses when you see what it misses for your specific practice area. The structure is there from day one; the tuning takes a week.

    Related reading

  • Clio vs MyCase vs Smokeball: Practice Management for Solo and Small Firms in 2026

    Clio vs MyCase vs Smokeball: Practice Management for Solo and Small Firms in 2026

    Three platforms dominate practice management for small firms in 2026 — and picking the wrong one costs you more than the subscription fee.

    Clio, MyCase, and Smokeball each made meaningful moves in 2025 and early 2026: new AI features, pricing restructures, and integrations that change the calculus for solo and small-firm buyers. This comparison is aimed at attorneys running solo practices or firms of 2–10 attorneys who are either choosing a platform for the first time or wondering whether to switch. The short version: Clio wins on integrations and flexibility, MyCase wins on price and simplicity, and Smokeball wins on document automation depth — particularly for litigation-heavy practices. None of them wins everywhere.

    How we compared them

    The criteria: pricing tiers and what actually changes between them, time tracking and billing workflow, client portal usability, document automation capability, AI features added since mid-2025, and third-party integrations. Where behavior differs by plan, that’s noted. Vendor marketing claims are paraphrased in plain language; wherever a feature has a known limitation, it’s flagged.

    Clio

    Clio is the largest dedicated legal practice management platform in North America by user count. That scale matters because it drives their integration catalog — currently over 200 third-party connections — and funds the R&D that produced Clio Duo, their AI layer rolled out through 2025 and refined into early 2026.

    What it does well

    Time tracking is Clio’s strongest billing-side feature. The desktop and mobile timers are reliable, the automatic time capture (which pulls from emails and calendar events) works without constant babysitting once configured, and the bill-review workflow is clean. If your firm bills hourly and tracks time across multiple matters simultaneously, Clio handles that better than either competitor at this price range.

    The integration catalog is genuinely differentiated. Native connections to QuickBooks Online, LawPay, Dropbox, Google Workspace, Microsoft 365, Zoom, and a long tail of specialized tools (Docketbird for federal court filings, CompuLaw for calendaring rules) mean Clio can slot into almost any existing workflow without forcing you to abandon tools you already pay for. For a firm that has already built a working stack, that flexibility is real money.

    Clio Duo — their AI assistant embedded in Clio Manage — handles matter summarization, draft email generation from matter context, and task creation from conversation. In 2025, they added document Q&A: you can ask questions about documents stored in a matter and get sourced answers. It works well for straightforward factual retrieval (“what is the termination clause in this contract?”) and breaks down on multi-document synthesis across large matter files. Clio Duo is included in the higher tiers; more on that under pricing.

    What it misses

    Document automation is weak for the price. Clio’s template system allows variable substitution but doesn’t approach the conditional logic depth of Smokeball or even some cheaper standalone tools. A firm doing heavy transactional or litigation document production will hit the ceiling fast. The client portal (Clio for Clients) is functional — secure messaging, document sharing, bill payment — but the UI is dated compared to MyCase’s portal and regularly draws complaints from clients who aren’t tech-comfortable. Onboarding complexity is also higher than MyCase; expect a few weeks before the team is actually running in it, not a few days.

    Pricing

    As of early 2026, Clio Manage runs on four tiers: EasyStart at $49/user/month (billing and time tracking only, no document management), Essentials at $79/user/month, Advanced at $109/user/month, and Complete at $139/user/month. Clio Duo is available at Advanced and Complete. Annual billing discounts these by roughly 20%. A solo at Essentials pays $79/month; a 5-attorney firm at Advanced pays $545/month before any add-ons. Clio Grow (their CRM and intake product) is a separate subscription — $99/user/month at the lowest tier — which surprises buyers who assumed intake was included.

    MyCase

    MyCase has positioned itself as the accessible alternative to Clio since around 2019, and in 2025 they leaned harder into that positioning with a pricing restructure and a client portal redesign. For a solo or a very small firm that bills flat-fee or needs a single platform to handle intake through invoice without complexity, MyCase is worth serious attention.

    What it does well

    The client portal is the best of the three. It’s cleanly designed, mobile-friendly, and clients consistently find it intuitive enough to use without a tutorial. Secure messaging, document uploads, invoice viewing and payment, and electronic signature requests all live in one place and work without friction. For consumer-facing practices — family law, estate planning, immigration, personal injury — where client communication volume is high and clients aren’t always tech-savvy, this matters a lot.

    MyCase added AI-assisted intake forms and matter summary generation in 2025 under their MyCase IQ branding. The intake form builder uses AI to suggest fields based on practice area, which is a practical time-saver when setting up new matter types. Matter summaries pull from case notes, documents, and communications and produce a readable briefing — useful for quickly handing off matters to coverage counsel or reviewing a file before a call. The quality is consistent enough to use as a starting point rather than a rough draft.

    Flat-fee billing is handled more naturally in MyCase than in Clio. Milestone billing, payment plans, and the ability to tie an invoice to a matter stage without workarounds are all built in. If a third of your matters are flat-fee and a third are hourly, MyCase handles the mix without forcing you to adapt your workflow to the software.

    What it misses

    The integration catalog is smaller than Clio’s — meaningfully so. MyCase connects to QuickBooks, LawPay, Stripe, Google Workspace, and a handful of others, but if you rely on specialized tools for court calendaring, e-discovery, or filing, you will hit gaps. Document automation exists but is template-basic; conditional logic and clause libraries are not present. Time tracking works but lacks the automatic-capture sophistication of Clio’s desktop app. For an hourly-billing practice with high time entry volume, the friction adds up.

    Pricing

    MyCase runs three tiers as of early 2026: Basic at $39/user/month, Pro at $69/user/month, and Advanced at $89/user/month. MyCase IQ (the AI features) is included in Pro and Advanced. Annual billing applies. A solo at Pro pays $69/month; a 5-attorney firm at Pro pays $345/month. eSign is included at Pro and above. This is the lowest all-in price of the three platforms for a firm of 2–5 attorneys that doesn’t need deep integrations or litigation document automation. There is no separate CRM product; intake and lead tracking are built into the platform at the Pro tier.


    Smokeball

    Smokeball targets litigation and transactional practices that live inside Microsoft 365. It is the most opinionated of the three — the software makes assumptions about how you work, and if those assumptions match your practice, the productivity gains are real. If they don’t, the rigidity will frustrate you within a month.

    What it does well

    Document automation is the headline feature and it earns the billing. Smokeball ships with thousands of pre-built forms organized by practice area and jurisdiction — state-specific court forms, transactional templates, demand letters — with conditional logic that actually branches based on matter data. For a litigation practice in a supported jurisdiction, the time from “new matter opened” to “first set of documents generated” is measurably shorter than on either competitor. The 2025 update added AI-assisted document drafting that pulls matter facts into template placeholders and flags missing data before you finalize — a practical quality-control step that reduces the embarrassing error rate on form-heavy matters.

    Smokeball’s automatic time capture is the most passive of the three. The Windows desktop app records time spent in Word documents, emails, and other applications associated with a matter without requiring the attorney to start a timer. For attorneys who consistently under-record time — a common and expensive habit — the difference in captured billable hours is the clearest financial argument for Smokeball’s higher price. The 2025 benchmarking data Smokeball publishes on this (claiming an average of 1.5–2 additional billable hours per attorney per day over self-reported time) is worth treating skeptically, but the direction holds: passive capture recovers time that manual logging misses.
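    If you want to pressure-test that argument for your own firm, the break-even math is short enough to do in a few lines of Python (or a spreadsheet cell). Every input below is an illustrative assumption for the sketch — the premium, the billing rate, and the recovered time are yours to fill in, not figures from the TechReport or the vendors.

```python
# Rough break-even on passive time capture: how many recovered billable
# hours per month offset a per-seat price premium. All inputs here are
# illustrative assumptions, not vendor or TechReport figures.
def breakeven_hours_per_month(premium_per_seat: float, billing_rate: float) -> float:
    """Monthly recovered billable hours needed to pay for the price gap."""
    return premium_per_seat / billing_rate

# Example: a ~$149/seat plan vs a $69/seat plan is an $80/seat premium.
# At a $250/hr billing rate, break-even is 0.32 recovered hours/month.
print(breakeven_hours_per_month(149 - 69, 250))
```

    The point of the exercise: even if you discount the vendor’s 1.5–2 hours/day claim by an order of magnitude, the recovered time clears the per-seat premium easily — which is why the skepticism should be about whether your attorneys will actually run the capture, not about the arithmetic.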

    Smokeball AI, its assistant launched in 2025, handles document summarization, clause identification, and — most usefully — automatic population of matter fields from uploaded documents. Drop in a signed retainer and it pulls the client name, address, matter type, and key dates into the matter record without manual entry. That specific feature saves real time in intake-heavy practices.

    What it misses

    Smokeball is Windows-first and Microsoft 365-dependent. Mac support exists but is thinner, and if your firm runs on Google Workspace, you will be fighting the software’s default assumptions on every file storage and email step. The client portal is functional but behind MyCase in usability by a clear margin. Pricing is the least transparent of the three — Smokeball does not publish per-user monthly pricing on its public site, which means you are going into a sales conversation before you can compare numbers, and the contract terms tend toward annual commitments with limited flexibility. The integration catalog is narrower than Clio’s; outside of Microsoft-adjacent tools and a core set of legal-specific integrations, the connections are limited.

    Pricing

    Smokeball does not list per-seat pricing publicly. Based on reported figures from 2025 buyer conversations, the Bill tier (entry-level, time tracking and billing) runs approximately $99/user/month, Grow (adds matter management and document automation) runs approximately $149/user/month, and Boost (full feature set including AI features) runs approximately $179/user/month — all on annual contracts. These numbers should be verified in your sales conversation because they shift. A 3-attorney firm is spending roughly $450/month at Grow and closer to $540 at Boost — more than MyCase Pro ($207 for three seats) and above Clio Advanced ($327). Smokeball’s value case rests on the document automation and passive time capture offsetting the higher per-seat cost; whether that math works depends entirely on your matter volume and document density.

    Side-by-side

    • Entry price (solo, annual billing): MyCase Basic $39/mo → MyCase Pro $69/mo → Clio Essentials $79/mo → Clio Advanced $109/mo → Smokeball Bill ~$99/mo → Smokeball Grow ~$149/mo
    • Time tracking: Clio best for manual + automatic capture; Smokeball best for fully passive Windows capture; MyCase adequate for most, limited automatic capture
    • Billing types: All three handle hourly and flat-fee; MyCase handles milestone billing most naturally; Clio most flexible for complex trust accounting
    • Client portal: MyCase best UX; Clio functional; Smokeball least polished
    • Document automation: Smokeball clearly leads; Clio basic variable substitution; MyCase basic templates
    • AI features (2025–2026): All three have AI layers now — Clio Duo (document Q&A, matter summaries, task generation); MyCase IQ (intake assist, matter summaries); Smokeball AI (document population from uploads, clause ID, summarization)
    • Integrations: Clio 200+; MyCase ~30 core; Smokeball Microsoft-centric with ~20 legal-specific
    • Platform dependency: Clio cloud-agnostic; MyCase cloud-agnostic; Smokeball Windows + Microsoft 365 preferred
    • Pricing transparency: MyCase and Clio publish rates; Smokeball requires a sales call
    • Contract flexibility: Clio and MyCase offer monthly billing (at a premium); Smokeball pushes annual contracts
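    The per-seat rates above make total-cost comparisons easy to sanity-check yourself, whether in a spreadsheet or a few lines of Python. A minimal sketch, using the rates cited in this review — MyCase and Clio figures are published, the Smokeball figures are reported estimates, and all of them should be re-verified with the vendors before you decide:

```python
# Spreadsheet-style sanity check of total monthly spend per platform.
# MyCase and Clio rates are published; Smokeball figures are reported
# estimates (verify with the vendor). All rates assume annual billing.
RATES = {
    "MyCase Basic": 39,
    "MyCase Pro": 69,
    "Clio Essentials": 79,
    "Clio Advanced": 109,
    "Smokeball Bill (est.)": 99,
    "Smokeball Grow (est.)": 149,
}

def monthly_cost(plan: str, attorneys: int) -> int:
    """Total monthly spend, assuming every seat is an attorney seat."""
    return RATES[plan] * attorneys

for plan in RATES:
    print(f"{plan}: 3 attorneys = ${monthly_cost(plan, 3)}/mo")
```

    The exercise is crude on purpose: it ignores add-ons (Clio Grow, payment processing fees) and negotiated discounts, but it puts the headline gap between platforms in firm-level dollars rather than per-seat abstractions.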

    Picking the right one

    If you are a solo or a firm of 2–4 attorneys doing consumer-facing work — family law, immigration, estate planning, criminal defense — and you want a single platform that is fast to learn, handles flat-fee and hourly billing, and gives clients a portal they will actually use, start with MyCase Pro at $69/user/month. You get the AI features, the clean portal, and built-in intake without a separate CRM subscription. The integration gaps will not affect most practices at this size.

    If you are a firm of 4–10 attorneys with a mixed practice, an existing stack of specialized tools, or a strong need for QuickBooks integration, docketing software connections, or flexibility to add tools as you grow, Clio Advanced at $109/user/month is the defensible choice. The higher per-seat cost buys you integration headroom and a time tracking system that scales. If Clio Grow (intake CRM) is relevant to your practice, budget for it separately — the combined cost is higher but the workflow is tighter than cobbling together intake tools on MyCase.

    If you run a litigation-heavy or transactional practice on Windows, your firm lives in Microsoft 365, and you generate high document volume per matter — personal injury, real estate closings, family law in a form-heavy jurisdiction, civil litigation — Smokeball Grow earns its price if the per-seat cost lands under $160/month in your negotiation. The document automation and passive time capture are genuinely differentiated features, not marketing copy. Get the pricing in writing before your trial period ends, and clarify the cancellation terms on the annual contract before you sign.

    If you are on a tight budget and billing under $15,000/month across the firm, MyCase Basic at $39/user/month is worth a 30-day trial before spending more. It covers the fundamentals — matter management, billing, client portal — without requiring you to commit to a platform you haven’t lived in yet.

    Verdict

    There is no single winner here — the right answer is genuinely practice-dependent, which vendor comparison sites tend to downplay because many are paid to name a winner.

    Use MyCase if you want the lowest total cost, the best client portal, and a platform your team will be up and running on within a week. Use Clio if your firm has an existing tool stack, bills heavily by the hour, or needs integration flexibility as you grow. Use Smokeball if you are Windows-and-Word-based, generate high document volume per matter, and will actually run the passive time capture — because that feature alone can justify the price difference if your attorneys are consistently under-recording time.

    All three platforms shipped meaningful AI updates in 2025. None of the AI layers replaces a dedicated AI drafting tool yet — they are best understood as workflow connectors that surface matter context at the right moment, not autonomous drafting engines. Treat them as useful additions to the platforms you already have reasons to choose, not as deciding factors on their own.

    Related reading