M365.FM - Modern work, security, and productivity with Microsoft 365

By: Mirko Peters (Microsoft 365 consultant and trainer)

About this title

Welcome to M365.FM — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, M365.FM brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. M365.FM is part of the M365-Show Network.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.
Copyright Mirko Peters / m365.fm - Part of the m365.show Network - News, tips, and best practices for Microsoft 365 admins
Politics & Government
  • Stop Building Apps, Start Engineering Control Planes
    Feb 21 2026
    Most organizations think more apps means more productivity. They're wrong. More apps mean more governance surface area — more connectors, more owners, more permissions, more data pathways, and more tickets when something breaks. Governance-by-humans doesn't scale. Control planes scale trust. This episode breaks down a single operating model shift — from building apps to engineering control planes — that consistently reduces governance-related support tickets by ~40%. This channel does control, not crafts.

    1. The Foundational Misunderstanding: "An App Is the Solution"
    An app is not the solution. An app is a veneer over:
    • Identity decisions
    • Connector pathways
    • Environment boundaries
    • Lifecycle events
    • Authorization graphs
    What gets demoed isn't what gets audited. Governance doesn't live in the canvas. It lives in the control plane: identity policy, Conditional Access, connector permissions, DLP, environment strategy, inventory, and lifecycle enforcement. App-first models create probabilistic systems. Control planes create deterministic ones. If the original maker quits today and the system can't be safely maintained or retired, you didn't build a solution — you built a hostage situation.

    2. App Sprawl Autopsy
    App sprawl isn't aesthetic. It's measurable. Symptoms:
    • 3,000+ apps no one can explain
    • Orphaned ownership
    • Default environment gravity
    • Connector creep
    • Governance tickets as leading indicators
    The root cause: governance that depends on human review. Approval boards don't enforce policy. They manufacture precedent. Exceptions accumulate. Drift becomes normal. Audits require heroics. Governance becomes theater.

    3. The Hidden Bill
    App-first estates create recurring operational debt:
    • 📩 Support friction
    • 📑 Audit evidence scavenger hunts
    • 🚨 Incident archaeology
    • 💸 License and capacity waste
    The executive translation: You can invest once in a control plane. Or you can pay ambiguity tax forever.

    4. What a Control Plane Actually Is
    A control plane decides:
    • What can exist
    • Who can create it
    • What must be true at creation time
    • What happens when rules drift
    Outputs:
    • Identity outcomes
    • Policy outcomes
    • Lifecycle outcomes
    • Observability outcomes
    If enforcement requires memory instead of automation, it's not control.

    5. Microsoft Already Has the Control Plane Components
    You're just not using them intentionally.
    • Entra = distributed decision engine
    • Conditional Access = policy compiler
    • Microsoft Graph = lifecycle orchestration bus
    • Purview DLP = boundary enforcement layer
    • Power Platform admin features = scale controls
    The tools exist. Intent usually doesn't.

    Case Study 1: Power App Explosion
    Problem: 3,000+ undefined apps.
    Solution: Governance through Graph + lifecycle at birth.
    Changes:
    • Enforced ownership continuity
    • Zoned environments (green/yellow/red)
    • Connector governance gates
    • Automated retirement
    • Continuous inventory
    Results:
    • 41% reduction in governance-related tickets
    • 60% faster audit evidence production
    • 28% reduction in unused assets
    System behavior changed.

    Case Study 2: Azure Policy Chaos
    Problem: RBAC drift, orphaned service principals, inconsistent tagging.
    Solution: Identity-first guardrails + blueprinted provisioning.
    Changes:
    • Workload identity standards
    • Expiring privileged roles
    • Subscription creation templates
    • Drift as telemetry
    • Enforced tagging at birth
    Results:
    • 35% drop in misconfigurations
    • 22% reduced cloud spend
    • Zero major audit findings
    Govern the principals. Not the resources. (A minimal ownership-audit sketch appears at the end of these notes.)

    Case Study 3: Copilot & Shadow AI
    Blocking AI creates shadow AI.
    So they built an agent control plane:
    • Prompt-level DLP
    • Label-aware exclusions
    • Agent identity governance
    • Tool-scoped permissions
    • Lifecycle + quarantine
    • Monitoring for drift & defects
    Results:
    • Full rollout in 90 days
    • Zero confirmed sensitive data leakage events
    • 2.3× forecasted adoption
    Not "safe AI." Governable AI.

    Executive Objection: "Governance Slows Innovation"
    Manual review slows innovation. Control planes accelerate it. App-first scaling looks fast early. Then ambiguity compounds. Tickets rise. Trust erodes. Innovation slows anyway. Control planes remove human bottlenecks from the hot path.

    The Operating Model
    Self-service with enforced guardrails:
    • Zoning (green/yellow/red)
    • Hub-and-spoke or federated on purpose
    • Engineered exception workflows
    • Standardized templates
    • Incentives for reuse and deprecation
    And one executive truth serum: 🎯 Governance-related support ticket volume. If that number drops ~40%, your control plane is real. If it doesn't, you're performing governance.

    Failure Modes
    Control planes rot when:
    • Automation is over-privileged
    • Policies pile without refactoring
    • Labels are fantasy
    • Orphaned identities persist
    • Telemetry doesn't exist
    Governance must be enforceable, observable, and lifecycle-driven. Otherwise it's theater.

    Conclusion
    Stop scaling apps. Scale a programmable control plane. If this episode helped reframe your tenant, leave a review so more operators find it. Connect with Mirko Peters on LinkedIn for deeper control plane patterns.
    Become a supporter of this podcast: ...
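    Appendix sketch: both case studies above hinge on ownership continuity and continuous inventory, which are straightforward to script. Below is a minimal sketch of an ownership audit against Microsoft Graph, using Entra app registrations as the example resource. The /applications and /owners calls are standard Graph v1.0 endpoints; the token handling, the Application.Read.All permission, and the retirement workflow the results would feed are assumptions for illustration, not something prescribed in the episode.

    ```python
    """Minimal sketch: flag Entra app registrations with no owners via Microsoft Graph.
    Assumes an access token with Application.Read.All has already been acquired
    (e.g. via MSAL client credentials); token acquisition is not shown."""
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "<access-token>"  # placeholder; acquire via MSAL in real use
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def list_applications():
        """Yield every app registration, following Graph paging links."""
        url = f"{GRAPH}/applications?$select=id,appId,displayName"
        while url:
            resp = requests.get(url, headers=HEADERS, timeout=30)
            resp.raise_for_status()
            body = resp.json()
            yield from body.get("value", [])
            url = body.get("@odata.nextLink")  # None once the last page is reached

    def owners(app_object_id):
        """Return the owner objects for a single app registration."""
        resp = requests.get(f"{GRAPH}/applications/{app_object_id}/owners",
                            headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json().get("value", [])

    # Ownership audit: anything with zero owners feeds the lifecycle workflow
    # (reassign an owner or schedule retirement).
    for app in list_applications():
        if not owners(app["id"]):
            print(f"No owner: {app['displayName']} ({app['appId']})")
    ```

    The same pattern, continuous enumeration plus an enforced owner, is what turns "who owns this?" from a ticket into telemetry.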
    1 hr and 32 min
  • The Context Advantage: Architecting the High-Performance Autonomous Enterprise
    Feb 20 2026
    Most organizations think their AI rollout failed because the model wasn't smart enough, or because users "don't know how to prompt." That's the comforting story. It's also wrong. In enterprises, AI fails because context is fragmented: identity doesn't line up with permissions, work artifacts don't line up with decisions, and nobody can explain what the system is allowed to treat as evidence. This episode maps context as architecture: memory, state, learning, and control. Once you see that substrate, Copilot stops looking random and starts behaving exactly like the environment you built for it.

    1) The Foundational Misunderstanding: Copilot isn't the system
    The foundational mistake is treating Microsoft 365 Copilot as the system. It isn't. Copilot is an interaction surface. The real system is your tenant: identity, permissions, document sprawl, metadata discipline, lifecycle policies, and unmanaged connectors. Copilot doesn't create order. It consumes whatever order you already have. If your tenant runs on entropy, Copilot operationalizes entropy at conversational speed.
    Leaders experience this as "randomness." The assistant sounds plausible — sometimes accurate, sometimes irrelevant, occasionally risky. Then the debate starts: is the model ready? Do we need better prompts? Meanwhile, the substrate stays untouched.
    Generative AI is probabilistic. It generates best-fit responses from whatever context it sees. If retrieval returns conflicting documents, stale procedures, or partial permissions, the model blends. It fills gaps. That's not a bug. That's how it works. So when executives say, "It feels like it makes things up," they're observing the collision between deterministic intent and probabilistic generation. Copilot cannot be more reliable than the context boundary it operates inside.
    Which means the real strategy question is not: "How do we prompt better?" It's: "What substrate have we built for it to reason over?"
    • What counts as memory?
    • What counts as state?
    • What counts as evidence?
    • What happens when those are missing?
    Because when Copilot becomes the default interface for work — documents, meetings, analytics — the tenant becomes a context compiler. And if you don't design that compiler, you still get one. You just get it by accident.

    2) "Context" Defined Like an Architect Would
    Context is not "all the data." It's the minimal set of signals required to make a decision correctly, under the organization's rules, at a specific moment in time. That forces discipline.
    Context is engineered from:
    • Identity (who is asking, under what conditions)
    • Permissions (what they can legitimately see)
    • Relationships (who worked on what, and how recently)
    • State (what is happening now)
    • Evidence (authoritative sources, with lineage)
    • Freshness (what is still true today)
    Data is raw material. Context is governed material. If you feed raw, permission-chaotic data into AI and call it context, you'll get polished outputs that fail audit.
    Two boundaries matter:
    • Context window: what the model technically sees
    • Relevance window: what the organization authorizes as decision-grade evidence
    Bigger context ≠ better context. Bigger context often means diluted signal and increased hallucination risk.
    Measure context quality like infrastructure:
    • Authority
    • Specificity
    • Timeliness
    • Permission correctness
    • Consistency
    If two sources disagree and you haven't defined precedence, the model will average them into something that never existed. That's not intelligence. That's compromise rendered fluently.
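    To make precedence concrete, here is a minimal sketch of a decision-grade ranking over retrieved sources. It scores authority, specificity, and timeliness, filters on permission correctness, and leaves cross-source consistency checks out of scope for brevity; the weights, field names, and threshold are illustrative assumptions, not values from the episode.

    ```python
    """Illustrative sketch: rank retrieved sources by the context-quality dimensions
    above and enforce precedence instead of letting conflicting sources blend."""
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Source:
        title: str
        authority: float         # 0..1: system of record, or a copy of a copy?
        specificity: float       # 0..1: how directly it answers the current question
        last_modified: datetime  # timezone-aware (UTC)
        permitted: bool          # can the asking identity legitimately see this?

    def timeliness(src: Source, half_life_days: float = 180.0) -> float:
        """Freshness decay: a source loses half its weight every `half_life_days`."""
        age_days = (datetime.now(timezone.utc) - src.last_modified).days
        return 0.5 ** (age_days / half_life_days)

    def context_score(src: Source) -> float:
        """Weighted blend of authority, specificity, and freshness (weights are assumptions)."""
        return 0.5 * src.authority + 0.3 * src.specificity + 0.2 * timeliness(src)

    def decision_grade(sources: list[Source], floor: float = 0.4) -> list[Source]:
        """Drop what the caller cannot see, rank the rest, and enforce a minimum bar."""
        eligible = [s for s in sources if s.permitted]
        ranked = sorted(eligible, key=context_score, reverse=True)
        return [s for s in ranked if context_score(s) >= floor]
    ```

    The specific numbers matter less than the fact that precedence becomes an explicit, testable rule rather than an accident of retrieval order.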
    3) Why Agents Fail First: Non-determinism meets enterprise entropy
    Agents fail before chat does. Why? Because chat can be wrong and ignored. Agents can be wrong and create consequences. Agents choose tools, update records, send emails, provision access. That means ambiguity becomes motion.
    Typical failure modes:
    • Wrong tool choice. The tenant never defined which system owns which outcome. The agent pattern-matches and moves.
    • Wrong scope. "Clean up stale vendors" without a definition of stale becomes overreach at scale.
    • Wrong escalation. No explicit ownership model? The agent escalates socially, not structurally.
    • Hallucinated authority. Blended documents masquerade as binding procedure.
    Agents don't break because they're immature. They break because enterprise context is underspecified. Autonomy requires evidence standards, scope boundaries, stopping conditions, and escalation rules. Without that, it's motion without intent. (A minimal sketch of such guardrails appears at the end of these notes.)

    4) Graph as Organizational Memory, Not Plumbing
    Microsoft Graph is not just APIs. It's organizational memory. Storage holds files. Memory holds meaning.
    Graph encodes relationships:
    • Who met
    • Who edited
    • Which artifacts clustered around decisions
    • Which people co-author repeatedly
    • Which documents drove escalation
    Copilot consumes relational intelligence. But Graph only reflects what the organization leaves behind. If containers are incoherent, memory retrieval becomes probabilistic. ...
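    As referenced above, the guardrails an agent needs before it may act (evidence standards, scope boundaries, stopping conditions, escalation rules) can be pictured as an explicit contract checked before every action. The sketch below is a generic Python illustration; the class names, limits, and escalation channel are invented for the example and do not correspond to Copilot, the episode, or any specific agent SDK.

    ```python
    """Generic illustration of agent guardrails checked before any action executes.
    All names and limits here are hypothetical; this is not a real agent SDK."""
    from dataclasses import dataclass, field

    @dataclass
    class Guardrails:
        allowed_tools: set[str]                    # scope boundary: systems the agent may touch
        max_records_per_run: int = 100             # stopping condition: bound the blast radius
        min_evidence_sources: int = 2              # evidence standard: no single blended document
        escalation_channel: str = "owner-review"   # escalation rule: where ambiguity goes

    @dataclass
    class ProposedAction:
        tool: str
        record_count: int
        evidence_ids: list[str] = field(default_factory=list)

    def authorize(action: ProposedAction, rails: Guardrails) -> str:
        """Return 'execute', or an escalation with a structural (not social) destination."""
        if action.tool not in rails.allowed_tools:
            return f"escalate:{rails.escalation_channel} (tool out of scope)"
        if action.record_count > rails.max_records_per_run:
            return f"escalate:{rails.escalation_channel} (exceeds stopping condition)"
        if len(action.evidence_ids) < rails.min_evidence_sources:
            return f"escalate:{rails.escalation_channel} (insufficient evidence)"
        return "execute"

    # Example: "clean up stale vendors" stays bounded instead of becoming overreach.
    rails = Guardrails(allowed_tools={"vendor-master"}, max_records_per_run=50)
    print(authorize(ProposedAction(tool="vendor-master", record_count=400), rails))
    ```

    The useful property is that escalation becomes structural: ambiguity routes to a defined channel instead of the agent guessing.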
    1 hr and 22 min
  • The Hybrid Mandate: Orchestrating Python Inside the Power Platform
    Feb 18 2026
    Most organizations misunderstand Power Platform. They treat it like a productivity toy. Drag boxes. Automate an email. Call it transformation. It works at ten runs per day. It collapses at ten thousand. Not because the platform failed. Because complexity was never priced.
    So here's the mandate:
    • Power Platform = Orchestration tier
    • Python = Execution tier
    • Azure = Governance tier
    Separate coordination from computation. Wrap it in identity, network containment, logging, and policy. If you don't enforce boundaries, entropy does. And entropy always scales faster than success.

    Body 1 — The Foundational Misunderstanding: Power Platform Is a Control Plane (~700 words)
    The first mistake is calling Power Platform "a tool." Excel is a tool. Word is a tool. Power Platform is not. It is a control plane. It coordinates identity, connectors, environments, approvals, and data movement across your tenant. It doesn't just automate work — it defines how work is allowed to happen. That distinction changes everything.
    When you treat a control plane like a toy, you stop designing it. And when you stop designing it, the system designs itself. And it designs itself around exceptions. "Just one connector." "Just one bypass." "Just store the credential for now." "Just add a condition." None of these feel large. All of them accumulate. Eventually you're not operating a deterministic architecture. You're operating a probabilistic one.
    The flow works — until:
    • The owner leaves
    • A token expires
    • A connector changes its payload
    • Licensing shifts
    • Throttling kicks in
    • A maker copies a flow and creates a parallel universe
    It still "runs." But it's no longer governable.
    Then Python enters the conversation. The naive question is: "Can Power Automate call Python?" Of course it can. The real question is: Where does compute belong? Because Python is not "just code." It's a runtime. Dependencies. Network paths. Secret handling. Patching. If you bolt that onto the control plane without boundaries, you don't get hybrid architecture. You get shadow runtime. That's how governance disappears — not through malice, but through convenience.
    So reframing is required: Power Platform orchestrates. Python executes. Azure governs.
    Treat Power Platform like a control plane, and you start asking architectural questions:
    • Which principal is calling what?
    • Where do secrets live?
    • What is the network path?
    • Where are logs correlated?
    • What happens at 10× scale?
    Most teams don't ask those because the first flow works. Then you have 500 flows. Then audit shows up. That's the governance ceiling.

    Body 2 — The Low-Code Ceiling: When Flows Become Pipelines (~700 words)
    The pattern always looks the same. Flow #1: notify someone. Flow #2: move data. Flow #3: transform a spreadsheet. Then trust increases. And trust becomes load.
    Suddenly your "workflow" is:
    • Parsing CSV
    • Normalizing columns
    • Deduplicating data
    • Joining sources
    • Handling bulk retries
    • Building error reports
    Inside a designer built to coordinate steps — not compute.
    Symptoms appear:
    • Nested loops inside nested loops
    • Scopes inside scopes
    • Try/catch glued together
    • Run histories with 600 actions
    • Retry storms
    • Throttling workarounds
    It works. But you've turned orchestration into accidental ETL.
    This is where people say: "Maybe we should call Python." The instinct is right. But the boundary matters.
    If Python is:
    • A file watcher
    • A laptop script
    • A shared service account
    • A public HTTP trigger
    • A hidden token in a variable
    You haven't added architecture. You've added entropy.
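    For contrast, the deterministic transformation work listed above (normalizing columns, deduplicating rows) is a few lines once it lives in the execution tier. A minimal pandas sketch; the file name, column names, and business key are invented for illustration:

    ```python
    """Minimal sketch of execution-tier work (normalize + deduplicate a CSV) that does
    not belong in nested flow actions. 'vendors.csv' and the columns are hypothetical."""
    import pandas as pd

    def normalize_vendors(path: str) -> pd.DataFrame:
        df = pd.read_csv(path)
        # Normalize column names and key fields deterministically.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        df["vendor_name"] = df["vendor_name"].str.strip().str.title()
        df["email"] = df["email"].str.strip().str.lower()
        # Deduplicate on a declared business key, keeping the most recent record.
        df = df.sort_values("last_updated").drop_duplicates(subset=["email"], keep="last")
        return df

    if __name__ == "__main__":
        clean = normalize_vendors("vendors.csv")
        clean.to_csv("vendors_clean.csv", index=False)
    ```

    The flow's job then shrinks back to deciding when this runs and what happens with the result, which is exactly the split described next.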
    The right split is simple:
    • Flow decides that work happens
    • Python performs deterministic computation
    • Azure enforces identity and containment
    When orchestration stays orchestration, flows become readable again. When execution moves out, retries become intentional. When governance is explicit, scale stops being luck. The low-code ceiling isn't about capability. It's about responsibility.

    Body 3 — Define the Three-Tier Model (~700 words)
    This isn't a diagram. It's an ownership contract.
    Tier 1 — Orchestration (Power Platform)
    Responsible for:
    • Triggers
    • Approvals
    • Routing
    • Status tracking
    • Notifications
    • Human interaction
    Not responsible for:
    • Heavy transforms
    • Bulk compute
    • Dependency management
    • Runtime patching
    Tier 2 — Execution (Python)
    Responsible for:
    • Deterministic compute
    • Validation
    • Deduplication
    • Inference
    • Bulk updates
    • Schema enforcement
    • Idempotency
    Behaves like a service, not a script. That means:
    • Versioned contracts
    • Structured responses
    • Bounded payloads
    • Explicit failures
    (A minimal handler sketch along these lines appears at the end of these notes.)
    Tier 3 — Governance (Azure)
    Responsible for:
    • Workload identity (Entra ID)
    • Managed identities
    • Secrets
    • Network containment
    • Private endpoints
    • API policies
    • Logging and correlation
    Without Tier 3, Tier 1 and 2 collapse under entropy.

    Body 4 — The Anti-Pattern: Python Sidecar in the Shadows (~700 words)
    Every tenant has this:
    • File watcher polling a folder
    • Python script on a jump box
    • Shared automation account
    • Public function "temporarily exposed"
    • Secret pasted in environment variable
    It works. Until it ...
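    Picking up the Tier 2 contract described above (versioned contracts, structured responses, bounded payloads, explicit failures, idempotency), here is a framework-neutral sketch of what such a handler could look like. In practice it might be hosted in an Azure Function or a container behind API Management with a managed identity, per the governance tier; the constants, field names, and in-memory idempotency cache are assumptions made for the example.

    ```python
    """Framework-neutral sketch of a Tier 2 execution handler: versioned contract,
    bounded payload, structured response, explicit failure, idempotency key.
    Names and limits are illustrative, not from the episode."""
    import hashlib
    import json

    CONTRACT_VERSION = "1.2"
    MAX_ROWS = 5_000                      # bounded payload: reject, don't silently truncate
    _processed: dict[str, dict] = {}      # idempotency cache; a real service would persist this

    def handle(request_body: str) -> dict:
        payload = json.loads(request_body)
        if payload.get("contract_version") != CONTRACT_VERSION:
            return {"status": "error", "code": "CONTRACT_MISMATCH", "expected": CONTRACT_VERSION}
        rows = payload.get("rows", [])
        if len(rows) > MAX_ROWS:
            return {"status": "error", "code": "PAYLOAD_TOO_LARGE", "limit": MAX_ROWS}

        # Idempotency: the same request replayed by a flow retry returns the same result.
        key = hashlib.sha256(request_body.encode()).hexdigest()
        if key in _processed:
            return _processed[key]

        # Deterministic compute: deduplicate rows on their full content.
        deduped = list({json.dumps(r, sort_keys=True): r for r in rows}.values())
        result = {"status": "ok", "contract_version": CONTRACT_VERSION,
                  "rows_in": len(rows), "rows_out": len(deduped), "rows": deduped}
        _processed[key] = result
        return result
    ```

    Because the response is structured and the idempotency key makes replays safe, the flow that calls it can retry on purpose instead of by accident.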
    1 hr and 20 min
No reviews yet