Episodes

  • Course 25 - API Python Hacking | Episode 4: Structures, Process Spawning, and Undocumented Calls
    Feb 25 2026
    In this lesson, you’ll learn about:
    • Defining Windows Internal Structures in Python
      • Representing structures like PROCESS_INFORMATION and STARTUPINFO using ctypes.Structure
      • Mapping Windows data types (HANDLE, DWORD, LPWSTR) with the _fields_ attribute
      • Instantiating structures for API calls to configure or retrieve process information
    • Spawning System Processes
      • Using CreateProcessW from kernel32.dll
      • Setting application paths (e.g., cmd.exe) and command-line arguments
      • Managing creation flags like CREATE_NEW_CONSOLE (0x10)
      • Passing structures by reference with ctypes.byref to receive process and thread IDs
    • Accessing Undocumented APIs and Memory Casting
      • Leveraging DnsGetCacheDataTable from dnsapi.dll for reconnaissance
      • Navigating linked lists via pNext pointers in structures like DNS_CACHE_ENTRY
      • Using ctypes.cast to transform raw memory addresses into Python-readable structures
      • Extracting DNS cache information, such as record names and types, through loops and error handling
    • Key Outcome
      • Ability to build custom security tools that interact directly with Windows internals
      • Mastery of low-level API calls, memory traversal, and structure manipulation for forensic or security applications
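
    The structure-definition and process-spawning steps above can be sketched with ctypes. This is a minimal illustration, not the episode's exact code: the layouts follow the Win32 STARTUPINFOW and PROCESS_INFORMATION definitions, and the cmd.exe path is an assumption. The live call is guarded so the definitions can be studied on any platform.

```python
import ctypes
import sys

# Map Windows data types onto ctypes primitives
DWORD, WORD = ctypes.c_uint32, ctypes.c_uint16
HANDLE, LPWSTR = ctypes.c_void_p, ctypes.c_wchar_p

class STARTUPINFOW(ctypes.Structure):
    # Field order and types must match the Win32 definition exactly
    _fields_ = [
        ("cb", DWORD), ("lpReserved", LPWSTR), ("lpDesktop", LPWSTR),
        ("lpTitle", LPWSTR), ("dwX", DWORD), ("dwY", DWORD),
        ("dwXSize", DWORD), ("dwYSize", DWORD), ("dwXCountChars", DWORD),
        ("dwYCountChars", DWORD), ("dwFillAttribute", DWORD),
        ("dwFlags", DWORD), ("wShowWindow", WORD), ("cbReserved2", WORD),
        ("lpReserved2", ctypes.POINTER(ctypes.c_ubyte)),
        ("hStdInput", HANDLE), ("hStdOutput", HANDLE), ("hStdError", HANDLE),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("hProcess", HANDLE), ("hThread", HANDLE),
        ("dwProcessId", DWORD), ("dwThreadId", DWORD),
    ]

CREATE_NEW_CONSOLE = 0x10

def spawn_cmd():
    si = STARTUPINFOW()
    si.cb = ctypes.sizeof(si)            # the API validates the structure size
    pi = PROCESS_INFORMATION()
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    ok = kernel32.CreateProcessW(
        r"C:\Windows\System32\cmd.exe",  # lpApplicationName (assumed path)
        None, None, None, False,
        CREATE_NEW_CONSOLE,
        None, None,
        ctypes.byref(si), ctypes.byref(pi),  # pi is filled in by the kernel
    )
    if not ok:
        raise ctypes.WinError(ctypes.get_last_error())
    return pi.dwProcessId, pi.dwThreadId

if sys.platform == "win32":
    print(spawn_cmd())  # (new process ID, primary thread ID)
```

    Passing `pi` with ctypes.byref is what lets the kernel hand the process and thread IDs back through the structure's fields.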


    You can listen and download our episodes for free on more than 10 different platforms:
    https://linktr.ee/cybercode_academy
    22 Min.
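
    The undocumented-API traversal covered in this episode can be mimicked portably. The DNS_CACHE_ENTRY layout below is the community-documented one and may vary between Windows versions; here a two-node list is built in Python memory so the ctypes.cast walk runs anywhere, with the real dnsapi.dll call sketched in a comment.

```python
import ctypes

class DNS_CACHE_ENTRY(ctypes.Structure):
    pass

# The self-referential pNext field forces the deferred _fields_ assignment.
# This structure is undocumented; the layout is an assumption.
DNS_CACHE_ENTRY._fields_ = [
    ("pNext", ctypes.POINTER(DNS_CACHE_ENTRY)),
    ("recName", ctypes.c_wchar_p),
    ("wType", ctypes.c_ushort),        # 1 = A record, 28 = AAAA
    ("wDataLength", ctypes.c_ushort),
    ("dwFlags", ctypes.c_ulong),
]

def walk(first_addr):
    """Cast a raw address to a typed pointer and follow pNext links."""
    entries = []
    ptr = ctypes.cast(first_addr, ctypes.POINTER(DNS_CACHE_ENTRY))
    while ptr:                         # a NULL pointer is falsy
        entry = ptr.contents
        entries.append((entry.recName, entry.wType))
        ptr = entry.pNext
    return entries

# On Windows the list head would come from the undocumented call:
#   dnsapi = ctypes.WinDLL("dnsapi")
#   dnsapi.DnsGetCacheDataTable(ctypes.byref(head_ptr))
# Here we link two entries ourselves so the traversal is demonstrable.
tail = DNS_CACHE_ENTRY(None, "example.org", 28, 0, 0)
head = DNS_CACHE_ENTRY(ctypes.pointer(tail), "example.com", 1, 0, 0)
print(walk(ctypes.addressof(head)))  # → [('example.com', 1), ('example.org', 28)]
```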
  • Course 25 - API Python Hacking | Episode 3: From ctypes Basics to Building a Process Killer
    Feb 24 2026
    In this lesson, you’ll learn about:
    • Interfacing Python with Windows API using ctypes
      • Loading core DLLs: user32.dll and kernel32.dll
      • Executing basic functions like MessageBoxW
      • Mapping C-style data types (e.g., LPCWSTR, DWORD) to Python equivalents
    • Error Handling and Privileges
      • Using GetLastError to debug API failures
      • Common errors such as "Access Denied" (error code 5)
      • Understanding how token privileges and administrative rights affect process interactions
    • ProcKiller Project Workflow
      1. Find Window Handle: FindWindowA
      2. Retrieve Process ID: GetWindowThreadProcessId with ctypes.byref
      3. Open Process with Privileges: OpenProcess using PROCESS_ALL_ACCESS
      4. Terminate Process: TerminateProcess
    • Professional Practices
      • Documenting code thoroughly
      • Uploading projects to GitHub to build a professional portfolio
    • Key Outcome
      • Mastery of Python-to-Windows API integration, robust error handling, and creating scripts that can manipulate processes programmatically.
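
    The four-step ProcKiller workflow can be sketched as below. This is a hedged outline, not the episode's exact script: the window title is a hypothetical example, and the call is guarded so the structure can be read on any platform. Note that FindWindowA (the ANSI variant) takes a bytes title.

```python
import ctypes
import sys

PROCESS_ALL_ACCESS = 0x001F0FFF  # standard rights | specific process rights

def kill_by_window_title(title: bytes) -> None:
    user32 = ctypes.WinDLL("user32", use_last_error=True)
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    # 1. Find the window handle
    hwnd = user32.FindWindowA(None, title)
    if not hwnd:
        raise ctypes.WinError(ctypes.get_last_error())

    # 2. Retrieve the owning process ID by reference
    pid = ctypes.c_ulong(0)
    user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))

    # 3. Open the process with full access; without sufficient privileges
    #    this fails with error code 5 ("Access Denied")
    hproc = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid.value)
    if not hproc:
        raise ctypes.WinError(ctypes.get_last_error())

    # 4. Terminate, then release the handle
    kernel32.TerminateProcess(hproc, 0)
    kernel32.CloseHandle(hproc)

if sys.platform == "win32":
    kill_by_window_title(b"Untitled - Notepad")  # hypothetical target
```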


    20 Min.
  • Course 25 - API Python Hacking | Episode 2: Foundations of Windows Internals and API Mechanisms
    Feb 23 2026
    In this lesson, you’ll learn about:
    • Fundamentals of Windows Processes and Threads
      • A process is a running program with its own virtual memory space
      • Threads are units of execution inside processes, allocated CPU time to perform tasks
      • Access tokens manage privileges and access rights; privileges can be enabled, disabled, or removed but cannot be added to an existing token
    • Key System Programming Terminology
      • Handles: Opaque references the operating system returns for objects such as processes, threads, and files
      • Structures: Memory formats used to store and pass data during API calls
    • Windows API Mechanics
      • How applications interact with the OS via user space → kernel space transitions
      • Anatomy of an API call, including parameters and naming conventions:
        • "A" → ANSI version
        • "W" → Unicode (wide-character) version
        • "Ex" → Extended or newer version
    • Core Dynamically Linked Libraries (DLLs)
      • kernel32.dll: Process and memory management
      • user32.dll: Graphical interface and user interaction
      • Researching functions using Windows documentation and tools like Dependency Walker to identify both documented and undocumented API calls
    • Key Outcome
      • Understanding of how Windows manages processes, threads, and privileges, along with the workflow for interacting with the operating system through APIs and DLLs.
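
    The naming convention and type mapping above can be seen in a minimal ctypes sketch. This is an illustration under stated assumptions: the "W" variant takes Python str (wide strings), the "A" variant would take bytes, and the call is guarded so the signature declaration is readable anywhere.

```python
import ctypes
import sys

MB_OK = 0x00000000

def message_box(text: str, title: str) -> int:
    """Call MessageBoxW, the wide-character ("W") variant."""
    user32 = ctypes.WinDLL("user32")
    # Declare the C signature: HWND, LPCWSTR, LPCWSTR, UINT -> int
    user32.MessageBoxW.argtypes = [
        ctypes.c_void_p,    # HWND    -> void pointer (a handle)
        ctypes.c_wchar_p,   # LPCWSTR -> wide (Unicode) string
        ctypes.c_wchar_p,   # LPCWSTR
        ctypes.c_uint,      # UINT    -> unsigned int
    ]
    return user32.MessageBoxW(None, text, title, MB_OK)

if sys.platform == "win32":
    message_box("Hello from ctypes", "Demo")
```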


    21 Min.
  • Course 25 - API Python Hacking | Episode 1: GitHub Portfolio Building and Environment Setup
    Feb 22 2026
    In this lesson, you’ll learn about:
    • Building a Professional Portfolio
      • Creating a GitHub account and configuring it for public repositories
      • Initializing repositories specifically for Python projects
      • Uploading and organizing files to showcase practical work for employers
    • Setting Up a Windows-Based Technical Workspace
      • Installing Python 3 and verifying it is correctly added to the system PATH
      • Installing Notepad++ for code editing and pinning it for quick access
      • Preparing essential analysis tools:
        • Process Explorer (system monitoring)
        • PsExec (remote execution and administrative tasks)
        • Dependency Walker (PE file structure and reverse engineering)
    • Integrating Online and Local Resources
      • Combining GitHub portfolio with local analysis tools for a fully functional workflow
      • Ensuring readiness for practical scripting and system analysis exercises
    • Key Outcome
      • A professional online presence plus a configured virtual workspace ready for the course’s technical exercises.
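
    The PATH-verification step above can be checked from Python itself. A small sketch; the command names probed are assumptions ("py"/"python" on Windows, usually "python3" elsewhere):

```python
import shutil
import sys

# The interpreter actually executing this script
print(sys.executable, sys.version.split()[0])

# What a shell would resolve from PATH; None means PATH is not set up
resolved = shutil.which("python") or shutil.which("python3") or shutil.which("py")
print("PATH lookup:", resolved)

assert sys.version_info.major == 3, "the course tooling assumes Python 3"
```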


    19 Min.
  • Course 24 - Machine Learning for Red Team Hackers | Episode 6: Security Vulnerabilities in Machine Learning
    Feb 21 2026
    In this lesson, you’ll learn about:
    • The major security threat categories in machine learning: model stealing, inversion, poisoning, and backdoors
    • How model stealing attacks replicate black-box models through API querying
    • Why attackers may clone models to reduce costs, bypass licensing, or craft offline adversarial examples
    • The concept of model inversion, where sensitive training data (e.g., faces or private attributes) can be partially reconstructed from learned weights
    • Why deterministic model parameters can unintentionally leak information
    • How data poisoning attacks manipulate training datasets to degrade accuracy or shift decision boundaries
    • The difference between availability attacks (general performance drop) and targeted poisoning (specific misclassification goals)
    • Why some architectures—such as CNN-based systems—can appear statistically robust yet remain strategically vulnerable
    • How backdoor (trojan) attacks embed hidden triggers during training or model updates
    • Why backdoors are difficult to detect due to normal performance under standard conditions
    Defensive & Mitigation Strategies
    This episode also highlights why ML systems must be secured across their lifecycle:
    • Restrict and monitor API query rates to reduce model extraction risk
    • Apply differential privacy and regularization to limit inversion leakage
    • Validate training datasets with integrity checks and anomaly detection
    • Use robust training techniques and adversarial testing to evaluate resilience
    • Perform model auditing and trigger scanning to detect backdoors
    • Secure the supply chain for datasets, pretrained models, and updates
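
    The first mitigation, restricting and monitoring API query rates, is commonly implemented as a token bucket. A minimal sketch (class name and parameters are illustrative, not from the episode):

```python
import time

class QueryRateLimiter:
    """Token bucket: callers get `burst` immediate queries, then refill
    at `rate` tokens/second. Throttling bulk querying raises the cost of
    model-stealing attacks that replicate a model through its API."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate, burst, clock
        self.tokens = float(burst)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock
t = [0.0]
limiter = QueryRateLimiter(rate=1.0, burst=3, clock=lambda: t[0])
print([limiter.allow() for _ in range(4)])  # → [True, True, True, False]
t[0] = 2.0                                  # two seconds later: two refills
print([limiter.allow() for _ in range(3)])  # → [True, True, False]
```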


    16 Min.
  • Course 24 - Machine Learning for Red Team Hackers | Episode 5: The Complete Guide to Deepfake Creation
    Feb 20 2026
    In this lesson, you’ll learn about:
    • What deepfakes are and how neural networks enable face, voice, and style transfer
    • The standard face swap pipeline: extraction → preprocessing → training → prediction
    • Why conducting a local dry run helps validate datasets before scaling to expensive GPU environments
    • The importance of face alignment, sorting, and dataset cleaning to reduce false positives
    • How lightweight models are used for parameter tuning before full-scale training
    • The role of GPU acceleration in deep learning workflows
    • Why cloud platforms like Google Cloud are used for large-scale model training
    • The importance of compatible drivers (e.g., NVIDIA drivers) in deep learning setups
    • How frameworks such as TensorFlow power neural network training
    • How frame rendering and encoding tools like FFmpeg compile processed frames into video
    • How training previews help visualize model convergence from noise to structured outputs
    Ethical & Professional Considerations
    • Always obtain explicit consent from anyone whose likeness is used
    • Understand laws regarding impersonation, fraud, and non-consensual synthetic media
    • Consider watermarking or disclosure when creating synthetic content
    • Be aware that deepfake techniques are actively studied in media forensics and detection research
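
    The frame-to-video compilation step with FFmpeg can be sketched as a subprocess call. The frame path, frame rate, and codec flags are assumptions for illustration; the invocation is guarded so the command structure can be inspected without FFmpeg or rendered frames present.

```python
import os
import shutil
import subprocess

# Hypothetical layout: the prediction stage wrote numbered frames here
frames_pattern = os.path.join("merged", "frame_%05d.png")

cmd = [
    "ffmpeg",
    "-framerate", "30",          # match the source footage's frame rate
    "-i", frames_pattern,        # consume frames in numeric order
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",       # widest player compatibility
    "output.mp4",
]

# Only invoke when ffmpeg is installed and the frames actually exist
if shutil.which("ffmpeg") and os.path.isdir("merged"):
    subprocess.run(cmd, check=True)
```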


    14 Min.
  • Course 24 - Machine Learning for Red Team Hackers | Episode 4: Mastering White-Box and Black-Box Attacks
    Feb 19 2026
    In this lesson, you’ll learn about:
    • The difference between white-box and black-box threat models in machine learning security
    • Why gradient-based models are vulnerable to carefully crafted input perturbations
    • The core intuition behind the Fast Gradient Sign Method (FGSM) as a sensitivity-analysis technique
    • How adversarial perturbations exploit a model’s local linearity and gradient structure
    • The purpose of adversarial ML frameworks like Foolbox in controlled research environments
    • How pretrained architectures such as ResNet are evaluated for robustness
    • Why datasets like MNIST are commonly used for benchmarking security experiments
    • The security risks of exposing prediction APIs in black-box services
    • Why production ML systems must assume adversarial interaction
    Defensive Takeaways for ML Engineers
    Rather than attacking models in the wild, security teams use adversarial research to:
    • Measure model robustness before deployment
    • Implement adversarial training to improve resilience
    • Apply input preprocessing defenses and anomaly detection
    • Limit prediction confidence exposure in public APIs
    • Monitor query patterns to detect probing behavior
    • Use ensemble methods and hybrid ML + rule-based detection systems
    Why This Matters: Adversarial machine learning highlights that high accuracy ≠ high security.
    Models that perform well on clean data may fail under minimal, human-imperceptible perturbations. Robustness must be treated as a first-class engineering requirement, especially in:
    • Autonomous systems
    • Biometric authentication
    • Malware detection
    • Financial fraud systems
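
    The FGSM intuition above can be shown end to end on a toy white-box model. This is a pedagogical sketch, not the episode's Foolbox/ResNet setup: for a logistic model the gradient of the cross-entropy loss with respect to the input x is (sigmoid(w·x + b) − y)·w, and FGSM steps a small epsilon in its sign direction.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if sigmoid(z) >= 0.5 else 0

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method against a logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad = [(sigmoid(z) - y) * wi for wi in w]          # dLoss/dx
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

# Toy white-box setting: the attacker knows the weights exactly
w, b = [2.0, -3.0, 1.0], 0.0
x, y = [0.5, 0.2, -0.1], 1
x_adv = fgsm(w, b, x, y, eps=0.3)
print(predict(w, b, x), predict(w, b, x_adv))  # → 1 0 (prediction flips)
```

    A per-coordinate perturbation of only 0.3 flips the decision because the step is aligned with the model's local gradient, exactly the sensitivity the episode describes.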


    16 Min.
  • Course 24 - Machine Learning for Red Team Hackers | Episode 3: Evading Machine Learning Malware Classifiers
    Feb 18 2026
    In this lesson, you’ll learn about:
    • What adversarial machine learning is and why ML-based malware classifiers are vulnerable to manipulation
    • The difference between feature-engineered models like Ember and end-to-end neural approaches like MalConv
    • Why handling real malware (e.g., Jigsaw ransomware) requires a properly isolated virtual machine lab
    • How libraries such as LIEF and pefile are used to safely parse and analyze Portable Executable (PE) structures
    • The concept of model decision boundaries and detection thresholds
    • Why “benign signal injection” works conceptually (model blind spots and over-reliance on superficial features)
    • The security risk of overlay data and section manipulation in static analysis pipelines
    • The difference between gradient boosting models and deep neural networks in robustness and feature sensitivity
    • How adversarial examples reveal weaknesses in ML-based security products
    • Defensive strategies for improving robustness against evasion attempts
    Defensive Takeaways for Security Teams
    Instead of bypassing detection, professionals use these insights to:
    • Strengthen feature engineering to reduce manipulation opportunities
    • Normalize or strip non-executable overlay data before classification
    • Incorporate adversarial training to improve model resilience
    • Combine static and dynamic analysis to detect functionality, not just file structure
    • Monitor for abnormal file padding and suspicious section anomalies
    • Implement ensemble detection strategies rather than relying on a single model
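
    The overlay-manipulation idea can be made concrete with a whole-file statistic that static classifiers commonly use: byte entropy. A hedged sketch with synthetic bytes standing in for a real PE (no malware involved): appending benign-looking, low-entropy overlay data drags the file-level entropy down, illustrating why overlays should be normalized or stripped before classification.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; packed or encrypted payloads sit near 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Stand-in for a packed payload: every byte value once (entropy exactly 8.0)
payload = bytes(range(256))

# "Benign signal injection": low-entropy overlay that whole-file feature
# extractors fold into their statistics
overlay = b"Copyright (c) Example Corp. All rights reserved. " * 40
modified = payload + overlay

print(round(shannon_entropy(payload), 2), round(shannon_entropy(modified), 2))
```

    The functionality of the payload bytes is untouched; only the classifier-visible statistics move, which is the blind spot the episode describes.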


    16 Min.