
She Said Privacy/He Said Security

By: Jodi and Justin Daniels

About this title

This is the She Said Privacy / He Said Security podcast with Jodi and Justin Daniels. Like any good marriage, Jodi and Justin will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
  • The Accountability Problem Behind AI Adoption
    Apr 23 2026

    Kristin Calve is the Editor & Publisher of Corporate Counsel Business Journal and the Co-founder of Law Business Media. She leads editorial strategy focused on AI governance, legal operations, and board-level risk, and convenes forward-leaning legal leaders through interviews, events, and industry analysis.

    In this episode…

    Controlled AI deployment is one of the most pressing challenges legal and business leaders face right now. New AI tools are often adopted quickly without a full understanding of how they're being used, where data goes, and who's accountable for the outcomes. Some teams explore AI without direction or intention. Others prescribe it with guardrails, defining who can use it and how. The gap between those two approaches is where risk lives. So, how can organizations deploy and use AI without losing control?

    Legal operations teams are often accountable for how AI is used in practice. They understand the regulatory landscape and manage contracts and deadlines. They're often involved in operations across finance, HR, sales, and other business functions, so they know how those processes work and why they were built that way. That institutional knowledge matters as AI is introduced into those systems. At the same time, prompt documentation, AI notetakers, and recordings are introducing new risks. Teams may not know what is being captured, where it is going, or how it could become discoverable. Supply chain exposure adds another layer of risk. Vendors might embed AI into the tools organizations already rely on, potentially affecting an organization's overall privacy and security posture.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Kristin Calve, Editor & Publisher of Corporate Counsel Business Journal and Co-founder of Law Business Media, about how organizations are navigating AI deployment and risk. Kristin explains how companies are deploying AI inconsistently and the challenge of controlling its use. She shares how regulatory requirements shape accountability and why legal operations teams often bear responsibility for what's permissible. Kristin also explains the risks of prompts and recordings becoming discoverable and discusses how AI increases speed and capacity, but does not replace the need for judgment.

    22 min.
  • Advancing AI Fluency With Grit and Growth Mindset
    Apr 9 2026

    Gabrielle Kohlmeier is a lawyer, tech whisperer, and transformation executive in a lifelong love affair with growth mindset and sustainable innovation. Her experience ranges from building a Fortune 30 legal and policy approach to antitrust, to navigating retail risk, to leading global legal AI adoption with outperforming teams. She helps organizations rightsize risk and turn disruption into strategic value.

    In this episode…

    Many companies are rushing to adopt AI tools and publish AI policies, yet far fewer are investing in AI fluency across their workforce. Knowing how to use an AI tool is not the same as understanding what it is doing, what data it collects and uses, and the privacy, security, and compliance obligations that come with using it. Without that level of understanding, organizations risk using AI without fully grasping its impact. So, what does true AI fluency look like in practice?

    Organizations spend time creating AI governance policies, but sometimes those policies are not operationalized. Governance then becomes "precious" when it is documented and published but not embedded into how teams actually work. That gap becomes more pronounced when teams lack the AI fluency needed to apply governance to their day-to-day use of AI tools. To be effective, governance needs to be lived, with clear accountability, ongoing feedback loops, and policies and processes regularly revisited as AI use cases evolve. It also requires establishing privacy and security guardrails that allow teams to experiment with AI responsibly, while right-sizing risks.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Gabrielle Kohlmeier, Legal and Innovation Executive, about building AI fluency and operationalizing responsible AI use. Gabrielle explains why AI fluency goes beyond simply using AI tools and requires a deeper understanding of the ethical and legal obligations that come with them. She shares how AI governance often breaks down in practice and what it takes to truly operationalize it, while enabling responsible AI experimentation with clear guardrails. Gabrielle also highlights numerous curated resources to help companies stay grounded as AI evolves and offers a practical privacy tip that applies to everyday internet and AI use.

    31 min.
  • Why Every Company Needs a Trust Center
    Mar 26 2026

    Kelly Peterson is the Chief Privacy and Compliance Officer for Yobi AI, a company dedicated to building models based on consented data to democratize access to data in an ethical and privacy-respecting manner. As CPO, Kelly establishes the strategy for the company's compliance programs and advises on product development utilizing privacy by design and by default (PbDD). She collaborates cross-functionally with key internal stakeholders and external partners to explain Yobi's unique approach to AI development.

    In this episode…

    Building trust around how companies collect and use consumer personal information has become a defining challenge. Companies need to be upfront about the types of personal information they collect from consumers, why they collect it, and how it is used. Making that information easy to access can help people better understand a company's privacy and security practices. And one way to do that is through a trust center.

    Trust centers do more than build credibility. They can also serve as an efficient sales and marketing tool that quickly answers questions about an organization's privacy and security practices. Building one often starts with an internal advocate. That advocate can work with sales and marketing teams to demonstrate how having privacy and security information in one place enables more effective responses to requests from organizations evaluating potential business partnerships. When building AI tools or other new products and features, companies should treat trust as a design choice and be transparent about how behavioral data is used and the benefits consumers receive from it.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Kelly Peterson, Chief Privacy and Compliance Officer at Yobi AI, about building trust-centered approaches to privacy and security practices. Kelly explains the role trust centers play in demonstrating transparency to consumers and business partners. She shares how businesses benefit from building new products, features, and AI tools with trust in mind, and why demonstrating the benefits of using consumer behavioral data helps build trust. Kelly also discusses the challenges companies face when navigating overlapping privacy laws, AI regulations, and other privacy-adjacent regulations.

    27 min.
No reviews yet