Tag: contract-review

  • Harvey vs CoCounsel for Solo Practitioners: Is Either Worth the Subscription?

    CoCounsel is the realistic choice for solo and small-firm lawyers. Harvey is worth knowing about — and worth skipping until you’re billing at BigLaw volume.

    Both tools claim to do the same things: draft memos, review contracts, prep for depositions, answer research questions. The gap between them isn’t features — it’s who they were actually built for and what that means when a solo practitioner sits down and tries to get work done. Harvey assumes you have a knowledge management team, a dedicated IT contact, and a firm-negotiated enterprise contract. CoCounsel assumes you have a Westlaw login and thirty minutes to learn something new. For a firm of one to ten attorneys, that difference is the whole story.

    How We Compared Them

    The criteria: pricing transparency and accessibility for a solo, the workflow assumptions baked into each product, which practice areas and matter types actually benefit, how each tool handles legal research integration, and where each one breaks in day-to-day small-firm use. Harvey’s public documentation, reported pricing from legal tech press, and practitioner accounts informed the Harvey side. CoCounsel was evaluated based on its publicly available feature set, Thomson Reuters’ published pricing tiers, and reported user experience from solo and small-firm attorneys.

    Harvey

    Harvey is an AI tool built on top of large language models — including GPT-4 and Claude variants at different points — and aimed explicitly at large law firms and in-house legal departments. Harvey’s early clients were Am Law 100 shops. That’s not incidental; it’s the design philosophy made visible.

    What Harvey does well, by most accounts: long-document analysis at scale, memo drafting from complex fact patterns, due diligence review across large document sets, and regulatory research in specialized practice areas. It has purpose-built modules for M&A, tax, and employment matters. Firms like Allen & Overy and PwC Legal negotiated early enterprise deployments. The product is genuinely sophisticated for those environments.

    The problem for a solo is pricing and access. Harvey does not publish pricing publicly. It does not offer a self-serve sign-up. Getting access requires contacting their sales team, going through a demo process, and negotiating a contract — one that, per legal tech reporting, typically runs into five figures annually for even small deployments. As of mid-2025, there is no solo tier, no monthly subscription option, and no free trial in the conventional sense. If you email Harvey today asking for a solo license, you will likely get a demo call and then a quote you won’t take.

    The workflow assumption Harvey makes is that you have infrastructure: a document management system to feed it, an IT contact to configure it, and volume — enough matters and enough hours to justify the contract. A firm billing 1,800 hours a year across two attorneys doesn’t have that volume. Harvey’s ROI math doesn’t pencil for a solo unless you’re in a practice area with genuinely massive document loads and high hourly rates.

    Pricing: Enterprise contract only. No published rate. Reported floor is approximately $50,000–$75,000 per year for small deployments, based on legal tech press coverage. No solo or small-firm tier as of this writing.
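    To make the mismatch concrete, here’s a rough back-of-envelope sketch. The $50,000 figure is the reported floor above; the practice numbers (1,800 billable hours at an effective $250/hour) are illustrative assumptions, not vendor or survey data:

```python
# Back-of-envelope: Harvey's reported enterprise floor vs. solo-practice revenue.
# All practice figures below are illustrative assumptions, not vendor numbers.
harvey_annual_floor = 50_000   # reported low end, per legal tech press
billable_hours = 1_800         # assumed annual billables for a small practice
hourly_rate = 250              # assumed effective hourly rate

gross_revenue = billable_hours * hourly_rate           # 450,000
share_of_revenue = harvey_annual_floor / gross_revenue

# Billable hours needed each year just to cover the contract:
breakeven_hours = harvey_annual_floor / hourly_rate    # 200 hours

print(f"Harvey floor as share of gross revenue: {share_of_revenue:.1%}")
print(f"Billable hours to cover the contract: {breakeven_hours:.0f}")
```

    Under these assumptions the contract consumes roughly a ninth of gross revenue before it saves a single hour — the "doesn’t pencil" point in numbers.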

    CoCounsel (Thomson Reuters)

    CoCounsel is Thomson Reuters’ AI assistant, originally built by Casetext before TR acquired the company in 2023. That history matters: Casetext was specifically targeting solo and small-firm practitioners before the acquisition, and CoCounsel has retained more of that DNA than most acquired legal AI tools do.

    The feature set covers six main task types: contract review, legal research (with Westlaw integration), deposition preparation, document summarization, timeline creation from documents, and draft memo generation. Each runs as a distinct workflow inside the tool — you pick the task, upload documents or enter a question, and the tool returns a structured output. That structure is a real advantage for a solo. You don’t need to write a sophisticated prompt from scratch; the tool frames the task for you and asks for what it needs.

    The Westlaw integration is the single clearest advantage CoCounsel has over everything else in this price range. Research outputs are grounded in Westlaw’s database with citations that link back to source material. That’s not a marketing claim — it’s a structural difference from tools that run a general-purpose LLM against the open web and hope the citations hold up. When CoCounsel cites a case, you can click through and verify it inside Westlaw. When a general-purpose tool cites a case, you need to go verify it yourself before you trust it.

    Contract review is the other task that consistently draws positive reports from small-firm practitioners. Feed it an NDA, a services agreement, or a lease, and CoCounsel returns a structured analysis flagging non-standard clauses, missing provisions, and risk areas — with explanations in plain language. The analysis isn’t perfect; it misses nuance in heavily negotiated custom agreements and occasionally over-flags standard boilerplate. But for a first-pass review on a matter where you’d otherwise spend forty-five minutes doing it manually, it earns its place.

    Deposition prep is a genuinely useful feature for litigation-heavy solos. Upload a deposition transcript or a set of prior statements, and CoCounsel identifies inconsistencies, generates potential follow-up question areas, and summarizes key testimony by topic. It won’t write your depo outline for you — the strategic judgment is still yours — but it compresses the document review phase significantly on complex multi-transcript matters.

    Pricing: CoCounsel is available as a standalone subscription or bundled with Westlaw. As of 2025, the standalone CoCounsel subscription runs approximately $100/month for individual users, with firm pricing available at higher tiers. Thomson Reuters has also offered CoCounsel as an add-on to existing Westlaw Edge subscriptions. Pricing has shifted since acquisition — confirm current rates directly with TR before budgeting.

    Where Harvey Actually Fits (And It’s Not Here)

    Harvey fits a firm that has already decided to treat AI as infrastructure — budgeted like software and staffed accordingly. That means an Am Law 200 firm, a large regional firm, or a corporate legal department with enough matters to justify the contract and someone internally managing the deployment. Harvey’s contract review and memo drafting capabilities are legitimately strong in those environments, and the customization options — including firm-specific knowledge bases and matter type configurations — are features a BigLaw associate would actually use.

    For a solo doing estate planning, family law, small business transactional work, or even mid-volume litigation, Harvey doesn’t fit. Not because it couldn’t theoretically do the work, but because you can’t get a license at a price that makes sense, and even if you could, the onboarding and configuration overhead isn’t worth it at your matter volume. The product was not designed with your workflow in mind.

    Where CoCounsel Actually Fits

    CoCounsel fits best in three practice area profiles for solos and small firms:

    • Transactional practices with regular contract review. Business attorneys, real estate lawyers, and employment practitioners reviewing a steady volume of agreements get the clearest return. The contract review workflow handles NDAs, service agreements, and standard commercial leases well. It struggles with highly customized multi-party agreements — but those take more human judgment anyway.
    • Litigation practices with deposition-heavy matters. If you routinely prep for depositions involving multiple transcripts or large records, the depo prep feature compresses a real chunk of document review time. Solo litigators handling complex commercial disputes or employment cases with significant written record have reported meaningful time savings here.
    • Research-intensive practice areas already using Westlaw. If you’re already paying for Westlaw Edge, the CoCounsel add-on pricing is relatively modest for what you get. The grounded-research capability — citations that link to real, verifiable Westlaw sources — makes it materially more trustworthy than a general-purpose AI research tool for any matter where you’ll actually rely on the output.

    It fits less well for criminal defense solos (the research strength skews toward civil rather than criminal matters), immigration practitioners who need highly current regulatory updates (Westlaw latency on agency guidance can be an issue), and any practice where the primary document type falls outside CoCounsel’s trained task set — medical records review, for instance, or highly technical patent claims.

    Where Each One Breaks

    Harvey’s failure modes (for small firms)

    The primary failure mode is access. You simply cannot get Harvey on a per-matter or month-to-month basis. The entire product is gated behind an enterprise sales process that isn’t designed to close a solo practitioner. If a solo somehow navigated to a license — through a law school affiliation, a pilot program, or a future product tier that doesn’t yet exist — the second failure mode is configuration overhead. Harvey’s firm-specific knowledge base features require setup time and technical input that a solo without IT support will find difficult to use well. The product’s sophistication is a liability when you have no one to configure it.

    CoCounsel’s failure modes

    CoCounsel hallucinates less than general-purpose tools in research mode — but it does still make errors, particularly on very recent case law or on niche jurisdictional questions with limited Westlaw coverage. The citations are verifiable, which means you can catch errors, but you still have to check. Treating CoCounsel’s research output as final without verification is a mistake regardless of the Westlaw backing.

    Contract review outputs can be inconsistent on length and depth. A ten-page NDA might get a thorough analysis; a fifteen-page distribution agreement with unusual indemnification structures might get a surface-level summary that misses the clauses you’d most want flagged. The tool doesn’t always signal when it’s uncertain, which means you need baseline contract knowledge to evaluate the output rather than relying on it blindly.

    The pricing structure has changed multiple times since the Casetext acquisition. Attorneys who subscribed at one price point have been migrated to Thomson Reuters’ billing infrastructure at different rates. Before you budget for it, verify the current price directly with TR — what you read in a two-year-old review may no longer be accurate.

    Integration with non-Westlaw research tools is limited. If you’re a Lexis shop, CoCounsel’s core research advantage disappears — the Westlaw citation backbone is what makes the research feature trustworthy, and without it you’re using a general-purpose LLM layer without the grounding. Nothing equivalent plugs Lexis into CoCounsel’s task structure, though Lexis+ AI is a separate product worth its own look.

    Side-by-Side

    • Accessible to solos: CoCounsel ✓ — Harvey ✗
    • Published pricing: CoCounsel ✓ — Harvey ✗
    • Self-serve signup: CoCounsel ✓ — Harvey ✗
    • Westlaw integration: CoCounsel ✓ (deep, citation-linked) — Harvey ✗ (no native integration)
    • Lexis integration: CoCounsel ✗ — Harvey ✗ (neither)
    • Contract review: CoCounsel ✓ (solid first pass) — Harvey ✓ (strong, enterprise-grade)
    • Memo drafting: CoCounsel ✓ (competent) — Harvey ✓ (strong)
    • Deposition prep: CoCounsel ✓ — Harvey ✓
    • Structured task workflows: CoCounsel ✓ — Harvey ✓ (requires more configuration)
    • Firm knowledge base customization: CoCounsel limited — Harvey ✓ (enterprise feature)
    • Practical for 1–5 attorney firm: CoCounsel ✓ — Harvey ✗

    Picking the Right One

    If you’re a solo or running a firm of two to ten attorneys: CoCounsel is the practical answer. It’s accessible, priced in a range that works for small-firm economics, integrates with the research tool most attorneys already pay for, and covers the highest-value AI tasks — contract review, research, deposition prep — with enough reliability to earn a place in your workflow. It’s not flawless. You still verify research. You still apply judgment to contract analysis. But it does what it advertises at a price you can actually pay.

    If you’re a solo primarily on Lexis rather than Westlaw, CoCounsel’s research advantage is largely neutralized. In that case, the decision gets more complicated — Lexis+ AI, Spellbook, or a well-configured general-purpose tool like Claude or GPT-4o with your own prompt framework may serve you better for the price. That’s a separate comparison worth doing.

    Harvey is worth knowing about because it represents where legal AI is heading at the enterprise level — and understanding the product helps you calibrate what tools at your price point are actually delivering versus what they’re approximating. But subscribe to Harvey at your firm size right now? Skip it. There’s no path to a license that makes financial sense, and the product doesn’t need your business. Check back when they launch a small-firm tier — which, given competitive pressure from CoCounsel and others, seems increasingly likely within the next two to three years.

    Use CoCounsel if you’re on Westlaw and doing regular contract review or litigation prep. Skip Harvey if you’re under 25 attorneys and not already in an enterprise sales conversation. Wait six months before making any new legal AI commitment if Thomson Reuters announces a pricing restructure — they’ve done it before, and it’s worth knowing what you’re buying into before you sign.

    Related reading

  • Spellbook for Solo Lawyers: A Two-Week Test of the AI Contract Review Tool

    Spellbook handles routine NDA and MSA review faster than doing it by hand — but throw a heavily-redlined draft or an exhibit-heavy agreement at it and the wheels come off.

    Spellbook is a Microsoft Word add-in that reads your contract, flags clause gaps, suggests redlines, and explains what it’s flagging in plain language. It’s built on GPT-4-class models and priced for law firms, not enterprise procurement teams. I ran it for two weeks on a mix of NDAs, MSAs, and SOWs — the bread-and-butter of a transactional solo — to find out whether it earns the monthly fee or just performs well in demos. The short answer: it earns it if you review contracts regularly. It doesn’t if you don’t.

    What It Does

    Spellbook lives in a sidebar inside Microsoft Word. You open a contract, open the sidebar, and Spellbook reads the document. From there it does three things: it flags clauses that are unusual or missing, it offers suggested language to replace or strengthen those clauses, and it answers questions about the document in a chat interface. All of this happens without leaving Word.

    The clause-flagging is the core feature and it’s genuinely good on clean drafts. On a standard mutual NDA, Spellbook caught a missing residuals clause, flagged an unusually broad definition of “Confidential Information” that lacked a standard carve-out for publicly available information, and noted that the term “Affiliate” was used twice but never defined. That’s exactly the kind of boilerplate gap that’s easy to miss on a Friday afternoon, and catching it took about forty seconds.

    The redline suggestion feature works the same way: click a flagged clause, and Spellbook offers replacement language. The suggestions are templated but adjustable — you can tell it “make this more favorable to my client, who is the vendor” and it rewrites accordingly. The quality is good enough to use as a first draft, not good enough to accept without reading.

    The chat interface lets you ask document-specific questions: “Does this agreement include an auto-renewal clause?” or “What’s the limitation of liability cap?” It pulls answers from the actual document text, not from general knowledge. On clean contracts, this was accurate. On contracts longer than about 30 pages, it started missing things — more on that below.

    Spellbook also runs what it calls a “playbook” review: you can load a standard set of preferred positions and it checks the contract against those positions automatically. Setting up a playbook takes some initial investment, but once it’s configured, it runs on every new document without extra prompting.

    Where It Actually Fits

    The sweet spot is a solo transactional attorney — or a small firm where one or two attorneys handle a steady flow of commercial contracts — who reviews NDAs, MSAs, SOWs, or vendor agreements multiple times a week. If you’re looking at five or more contracts a week, Spellbook pays for itself in time saved on first-pass review. The clause-flagging catches enough real issues fast enough that it shortens the first read meaningfully.

    For NDAs specifically, Spellbook is close to ideal. NDAs are structurally consistent enough that the model’s training shows: it knows what should be there, flags what isn’t, and the suggested language is close to usable. I ran eight NDAs through it over two weeks and it found something worth flagging in seven of them. Most of those were things I’d have caught anyway — but Spellbook caught them in the first sixty seconds, before I’d done my own read.

    MSAs with clean structure — a base agreement and one or two order forms, no exhibits attached — also work well. The model handles defined-term tracking better than I expected. In one MSA it flagged two places where the body purported to define the scope of “Services” even though an exhibit was supposed to govern scope, creating a potential conflict. Useful catch.

    The playbook feature fits well for solos who represent the same side of a transaction repeatedly — always the vendor, always the SaaS company, always the contractor. Load your preferred positions once and Spellbook runs those checks automatically. That saves real time compared to building a mental checklist every time.

    Practice areas beyond transactional commercial work get thinner. Employment agreements, commercial leases, and IP assignments work reasonably well because the structures are common enough that the model recognizes them. Anything more specialized — complex finance documents, healthcare agreements with regulatory-specific clauses — showed less confident suggestions and more generic flags.

    Where It Breaks

    Heavily-redlined drafts broke it for me consistently. When a contract has three or four rounds of tracked changes from multiple parties still embedded — all visible in Word — Spellbook gets confused about which version of the text to analyze. I ran one MSA that had been through two rounds of opposing counsel redlines and Spellbook flagged a clause as missing that was actually present in an accepted redline two paragraphs up. It was reading the document as if the redline layer didn’t exist. This is a real workflow problem because most contracts that need careful review are exactly the ones with heavy markup.

    The workaround is to accept all changes, save a clean copy, and run Spellbook on that. That works, but it adds a manual step and means you’re not reviewing the document in the state your client actually sent or received it.

    Exhibit-heavy MSAs were the other consistent failure mode. When an MSA had three or four attached exhibits — a Statement of Work template, a Data Processing Addendum, a Security Exhibit — Spellbook would analyze the base agreement without meaningfully integrating the exhibit content. It flagged “no data processing terms found” in one agreement where the DPA was a separate exhibit on the next page. When exhibits sit as substantively separate files or appendices, the tool analyzes the section it can see, not the agreement as a whole.

    Long documents slow the suggestions down noticeably. Anything over 25–30 pages and the chat answers lag by five to ten seconds. Not a dealbreaker, but noticeable when you’re moving fast.

    The suggested redline language is templated enough that it occasionally reads as generic. On one SOW, the suggested scope-limitation language was so standard it didn’t account for the specific services described in the document. I used it as a starting point and rewrote it in about two minutes, but “starting point” is the accurate description — not “finished clause.”

    Spellbook also requires Microsoft Word. If your firm runs on Google Docs or if opposing counsel sends PDFs that you work in natively, you’ll need to convert first. That friction is minor but real. There is no Google Docs version as of this writing.

    What It Costs and What You Get

    Spellbook’s pricing is seat-based and billed annually. As of mid-2025, a solo seat runs approximately $149 per month (billed annually at roughly $1,788 per year). That’s the standard tier, which includes unlimited document reviews, the clause-flagging and suggestion features, and the chat interface.

    The playbook feature — loading your own preferred positions and running them automatically — is included in the standard tier, not gated behind a higher plan. That’s worth noting because playbooks are what make the tool genuinely faster for a solo who handles repeat transaction types.

    There is a higher-tier plan (pricing available on request) that adds team collaboration features, admin controls, and usage analytics. For a true solo, the standard tier is the right tier. The team features add overhead you don’t need when you’re the only reviewer.

    Spellbook offers a free trial — 14 days as of this writing — and the trial is full-featured, not limited to toy documents. Running the trial on real matters from your current workload is the right way to evaluate it. Running it on sample contracts tells you almost nothing about whether it fits your practice.

    At $149 per month for a solo, the math is straightforward. At an effective hourly rate of $200, less than one billable hour per month covers the subscription. If Spellbook saves you even an hour of first-pass review per week, that’s roughly $860 of recovered time per month against the $149 cost. If you review fewer than two or three contracts a week, the calculus gets harder.
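    The same arithmetic as a quick sketch — the $149 seat price is the published figure above; the $200/hour rate and one saved hour per week are illustrative assumptions, not anything Spellbook claims:

```python
# Spellbook break-even sketch. Rate and time-saved are assumed for illustration.
monthly_cost = 149            # solo seat, per published pricing
hourly_rate = 200             # assumed effective billable rate
hours_saved_per_month = 4.3   # ~1 hour/week of first-pass review, assumed

breakeven_hours = monthly_cost / hourly_rate         # under one hour/month
monthly_value = hours_saved_per_month * hourly_rate  # ~$860 of recovered time

print(f"Hours/month to break even: {breakeven_hours:.2f}")
print(f"Estimated monthly value vs cost: ${monthly_value:.0f} vs ${monthly_cost}")
```

    Swap in your own rate and time-saved estimate; the break-even moves linearly with both.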

    Verdict

    Use it if you’re a transactional solo or a small firm handling commercial contracts regularly — NDAs, MSAs, vendor agreements, SOWs — and you want a faster first-pass review without hiring a second set of eyes. The clause-flagging is accurate enough on clean drafts to save real time, and the playbook feature compounds that value once you’ve set it up for your standard transaction types.

    Skip it if you’re primarily a litigator, if your transactional work is occasional rather than routine, or if your practice runs on Google Docs. The Word dependency is a real constraint and the monthly cost doesn’t make sense below roughly two to three contract reviews per week.

    Wait six months if your typical workflow involves heavily-redlined multi-party drafts or exhibit-heavy agreements that run past 30 pages. Spellbook is aware of these limitations — the tracked-changes issue in particular is something the product team has acknowledged — but as of this writing those gaps are real enough to affect daily use on complex matters.
