The Tech Savvy Lawyer

By: Michael D.J. Eisenberg

About this title

The Tech Savvy Lawyer interviews judges, lawyers, and other professionals about how they use technology in the practice of law. It may spark an idea and help you in your own pursuit of the business we call "practicing law." Please join us for interesting conversations, enjoyable at any tech skill level!
© ℗ 2020 Michael D.J. Eisenberg
Politics & Government
  • 🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️
    Feb 17 2026
    My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.
    Join Justin and me as we discuss the following three questions and more!
    1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?
    2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk?
    3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice?
    In our conversation, we cover the following:
    00:00 – Welcome and guest introduction: Justin joins the show and shares his current tech setup at his desk.
    00:00–01:00 – Justin's current tech stack: Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks. Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.
    01:00–02:00 – Android vs. iPhone for AI use: Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.
    02:00–05:30 – Q1: Top three ways litigators should be using AI right now: Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities. Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment. Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.
    05:30–07:30 – StrongSuit vs. basic tools like Word grammar check: How StrongSuit aims to "up-level" a lawyer's writing, not just catch typos. Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.
    06:00–08:00 – AI context limits and scaling doc review: Constraints of large models' context windows (around ~1M tokens ≈ ~750 pages). How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.
    08:00–09:00 – Handling tens of thousands of documents: How StrongSuit can handle between roughly 10,000–50,000 pages at a time, with the ability to scale further for enterprise matters.
    09:00–11:30 – Origin story of StrongSuit: Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI. StrongSuit's focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.
    11:30–13:30 – From intake to brief drafting in minutes: Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions. StrongSuit's long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.
    12:00–14:30 – How StrongSuit tackles hallucinations: Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more. Validating citations by checking whether the Bluebook citation actually exists in StrongSuit's case database before surfacing it to the user. Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.
    14:30–16:30 – Coverage and jurisdictions: Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases. Handling most regulations from administrative agencies, and limits around local ordinances. Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.
    15:00–17:00 – Security and confidentiality for litigators: SOC 2 compliance and industry-standard encryption at rest and in transit. No model training on user data. Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.
    16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation: Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents. Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract ...
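The context-window math in the notes above (roughly 1M tokens ≈ 750 pages, so large matters must be split across parallel agents) can be sketched as a simple batching routine. The numbers, names, and greedy strategy below are illustrative assumptions, not StrongSuit's actual heuristics:

```python
# Illustrative batching of a large document set under a model context budget.
# TOKENS_PER_PAGE is a rough average implied by the episode's ~1M tokens
# per ~750 pages figure; the greedy grouping is our own assumption.

TOKENS_PER_PAGE = 1_350      # ~1,000,000 tokens / ~750 pages
CONTEXT_BUDGET = 1_000_000   # approximate context window of a large model

def batch_pages(page_counts, context_budget=CONTEXT_BUDGET):
    """Greedily group documents (given as page counts) so each batch
    fits the token budget; each batch could go to one parallel agent."""
    batches, current, used = [], [], 0
    for pages in page_counts:
        cost = pages * TOKENS_PER_PAGE
        if current and used + cost > context_budget:
            batches.append(current)   # flush the full batch
            current, used = [], 0
        current.append(pages)
        used += cost
    if current:
        batches.append(current)
    return batches
```

On a 50,000-page matter this yields dozens of batches that could be reviewed in parallel, with a merge step (not shown) combining per-batch findings.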
    35 min.
  • TSL.P Labs 🧪: Courts Are Punishing Fake AI Evidence: How to Protect Your Cases, Clients, and License ⚖️🤖
    Feb 13 2026
    Everyday devices can capture extraordinary evidence, but the same tools can also manufacture convincing fakes. 🎥⚖️ In this episode, we unpack our February 9, 2026, editorial on how courts are punishing fake digital and AI-generated evidence, then translate the risk into practical guidance for lawyers and legal teams. You'll hear why judges are treating authenticity as a frontline issue, what ethical duties get triggered when AI touches evidence or briefing, and how a simple "authenticity playbook" can help you avoid career-ending mistakes. ✅
    In our conversation, we cover the following:
    00:00:00 – Preview: From digital discovery to digital deception, and the question of what happens when your "star witness" is actually a hallucination or deepfake 🚨
    00:00:20 – Introducing the editorial "Everyday Tech, Extraordinary Evidence Again: How Courts Are Punishing Fake Digital and AI Data." 📄
    00:00:40 – Welcome to the Tech-Savvy Lawyer.Page Labs Initiative and this AI Deep Dive Roundtable 🎙️
    00:01:00 – Framing the episode: flipping last month's optimism about smartphones, dash cams, and wearables as case-winning "silent witnesses" to their dark mirror—AI-fabricated evidence 🌗
    00:01:30 – How everyday devices and AI tools can both supercharge litigation strategy and become ethical landmines under the ABA Model Rules ⚖️
    00:02:00 – Panel discussion opens: revisiting last month's "Everyday Tech, Extraordinary Evidence" AI bonus and the optimism around smartphone, smartwatch, and dash cam data as unbiased proof 📱⌚🚗
    00:02:30 – Remembering cases like the Minnesota shooting and why these devices were framed as "ultimate witnesses" if the data is preserved quickly enough 🕒
    00:03:00 – The pivot: same tools, new threats—moving from digital discovery to digital deception as deepfakes and hallucinations enter the evidentiary record 🤖
    00:03:30 – Setting the "mission" for the episode: examining how courts are reacting to AI-generated "slop" and deepfakes, with an increasingly aggressive posture toward sanctions 💣
    00:04:00 – Why courts are on high alert: the "democratization of deception," falling costs of convincing video fakes, and the collapse of the old presumption that "pictures don't lie" 🎬
    00:04:30 – Everyday scrutiny: judges now start with "Where did this come from?" and demand details on who created the file, how it was handled, and what the metadata shows 🔍
    00:05:00 – Metadata explained as the "data about the data"—timestamps, software history, edit traces—and how it reveals possible AI manipulation 🧬
    00:06:00 – Entering the "sanction phase": why we are beyond warnings and into real penalties for mishandling or fabricating digital and AI evidence 🚫
    00:06:30 – Horror Story #1 (Mendon v. Cushman & Wakefield, Cal. Super. Ct. 2025): plaintiffs submit videos, photos, and screenshots later determined to be deepfakes created or altered with generative AI 🧨
    00:07:00 – Judge Victoria Kakowski's response: finding that the deepfakes undermined the integrity of judicial proceedings and imposing terminating sanctions—the "death penalty" for the lawsuit ⚖️
    00:07:30 – How a single deepfake "poisons the well," destroying the court's trust in all of a party's submissions and forfeiting their right to the court's time 💥
    00:08:00 – Horror Story #2 (S.D.N.Y. 2023): the New York "hallucinating lawyer" case, where six imaginary cases generated by ChatGPT were filed without verification 📚
    00:08:30 – Rule 11 sanctions and humiliation: Judge Castel's order, a monetary penalty, and the requirement to send apology letters to the real judges whose names were misused ✉️
    00:09:00 – California follow-on: appellate lawyer Amir Mustaf files an appeal brief with 21 fake citations, triggering a 10,000-dollar sanction and a finding that he did not read or verify his own filing 💸
    00:09:30 – Courts' reasoning: outsourcing your job to an AI tool is not just being wrong—it is wasting judicial resources and taxpayer money 🧾
    00:10:00 – Do we need new laws? Why Michael argues that existing ABA Model Rules already provide the safety rails; the task is to apply them to AI and digital evidence, not to reinvent them 🧩
    00:10:20 – Rule 1.1 (competence): why "I'm not a tech person" is no longer a viable excuse if you use AI to enhance video or draft briefs without understanding or verifying the output 🧠
    00:11:00 – Rule 1.6 (confidentiality): the ethical minefield of uploading client dash cam video or wearable medical data to consumer-grade AI tools and risking privilege leakage ☁️
    00:11:30 – Training risk: how client data can end up in model training sets and why "quick AI summaries" can inadvertently expose secrets 🔐
    00:12:00 – Rules 3.3 and 4.1 (candor and truthfulness): presenting AI-altered media as original or failing to verify AI output can now be treated as misrepresentation 🤥
    00:12:30 – Rules 5.1–5.3 (supervision): why partners and ...
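The "authenticity playbook" discussed above starts with provenance: recording a cryptographic fingerprint of each evidence file at intake so later copies can be checked byte-for-byte against the original. A minimal sketch, using only the standard library; the function name and record format are our own illustration, not a tool mentioned in the episode:

```python
import hashlib
import time

def fingerprint(path):
    """Hash an evidence file at intake so any later copy can be compared
    against the original. (Illustrative sketch, not a named tool.)"""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large dash cam or body cam files fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {"sha256": digest.hexdigest(), "recorded_at": time.time()}
```

If a file's hash later differs from the intake record, the copy has been altered; matching hashes support, but do not by themselves prove, authenticity.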
    19 min.
  • TSL.P Labs 🧪: Legal Tech Wars, Client Data, and Your Law License: An AI-Powered Ethics Deep Dive ⚖️🤖
    Feb 6 2026
    📌 Too Busy to Read This Week's Editorial? Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this Tech-Savvy Lawyer Page Labs Initiative episode, AI co-hosts walk through how high-profile "legal tech wars" between practice-management vendors and AI research startups can push your client data into the litigation spotlight and create real ethics exposure under ABA Model Rules 1.1, 1.6, and 5.3. We'll explore what happens when core platforms face federal lawsuits, why discovery and forensic audits can put confidential matters in front of third parties, and how API lockdowns, stalled product roadmaps, and forced sales can grind your practice operations to a halt. More importantly, you'll get a clear five-step action plan—inventorying your tech stack, confirming data-export rights, mapping backup providers, documenting diligence, and communicating with clients—that works even if you consider yourself "moderately tech-savvy" at best. Whether you're a solo, a small-firm practitioner, in-house, or simply AI-curious, this conversation will help you evaluate whether you are the supervisor of your legal tech—or its hostage. 🔐
    In our conversation, we cover the following:
    00:00:00 – Setting the stage: Legal tech wars, "Godzilla vs. Kong," and why vendor lawsuits are not just Silicon Valley drama for spectators.
    00:01:00 – Introducing the Tech-Savvy Lawyer Page Labs Initiative and the use of AI-generated discussions to stress-test legal tech ethics in real-world scenarios.
    00:02:00 – Who's fighting and why it matters: Clio as the "nervous system" of many firms versus Alexi as the "brainy intern" of AI legal research.
    00:03:00 – The client data crossfire: How disputes over data access and training AI tools turn your routine practice data into high-stakes litigation evidence.
    00:04:00 – Allegations in the Clio–Alexi dispute, from improper data access to claims of anti-competitive gatekeeping of legal industry data.
    00:05:00 – Visualizing risk: Client files as sandcastles on a shelled beach and why this reframes vendor fights as ethics issues, not IT gossip.
    00:06:00 – ABA Model Rule 1.1 (Competence): What "technology competence" really entails and why ignorance of vendor instability is no longer defensible.
    00:07:00 – Continuity planning as competence: Injunctions, frozen servers, vendor shutdowns, and how missed deadlines can become malpractice.
    00:08:00 – ABA Model Rule 1.6 (Confidentiality): The "danger zone" of treating the cloud like a bank vault and misunderstanding who really holds the key.
    00:09:00 – Discovery risk explained: Forensic audits, third-party access, protective orders that fail, and the cascading impact on client secrets.
    00:10:00 – Data-export rights as your "escape hatch": Why "usable formats" (CSV, PDF) matter more than bare contractual promises.
    00:11:00 – Practical homework: Testing whether you can actually export your case list today, not during a crisis.
    00:12:00 – ABA Model Rule 5.3 (Supervision): Treating software vendors like non-lawyer assistants you actively supervise rather than passive utilities.
    00:13:00 – Asking better questions: Uptime, security posture, and whether your vendor is using your data in its own defense.
    00:14:00 – Operational friction: Rising subscription costs, API lockdowns, broken integrations, and the return of manual copy-pasting.
    00:15:00 – Vaporware and stalled product roadmaps: How litigation diverts engineering resources away from features you are counting on.
    00:16:00 – Forced sales and 30-day shutdown notices: Data-migration nightmares under pressure and why waiting is the riskiest strategy.
    00:17:00 – The five-step moderate-tech action plan: Inventory dependencies, review contracts, map contingencies, document diligence, and communicate with nuance.
    00:18:00 – Turning risk management into a client-facing strength and part of your value story in pitches and ongoing relationships.
    00:19:00 – Reframing legal tech tools as members of your legal team rather than invisible utilities.
    00:20:00 – "Supervisor or hostage?": The closing challenge to check your contracts, your data-export rights, and your practical ability to "fire" a vendor.
    Resources mentioned in the episode:
    ABA Model Rule 1.1 – Competence (Technology Competence Comment) – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/
    ABA Model Rule 1.6 – Confidentiality of Information – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/
    ABA Model Rule 5.3 – Responsibilities Regarding Nonlawyer Assistance – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_3_responsibilities_regarding_nonlawyer_assistance/...
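The "practical homework" in these notes, testing whether your case-list export is actually usable, can be automated with a small check. The column names below are hypothetical; substitute whatever fields your own practice-management export should contain:

```python
import csv
import io

# Hypothetical column names; replace with the fields your own
# practice-management export should contain.
REQUIRED_COLUMNS = {"matter_id", "client_name", "status"}

def export_is_usable(csv_text, required=REQUIRED_COLUMNS):
    """Return True only if the exported case list contains the columns
    you would need to rebuild your practice elsewhere, plus at least
    one data row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    header = set(reader.fieldnames or [])
    rows = list(reader)
    return required <= header and len(rows) > 0
```

Running a check like this on a real export today, rather than during a vendor crisis, is the point of the homework: a contractual right to your data matters little if the file that comes out is empty or missing key fields.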
    21 min.
No reviews yet