Inside Responsible Annotation: Neurodiversity, Quality, And Ethics In AI
About this title
Want AI that works the first time instead of the tenth? We sit down with Andreas Schachl, co-founder of Responsible Annotation Services, to unpack the quiet truth behind reliable models: ethical, high-quality training data produced by people who take clarity and precision seriously. Andreas shares how a single internship sparked a company built around neurodivergent talent, turning data labeling from a churn task into a strategic advantage.
We walk through why annotation isn’t going anywhere, even with foundation models and smarter tools. When you’re training on private, business-owned data across text, images, audio, video, and LiDAR, you need a human in the loop and documentation you can defend. Andreas explains how his team co-authors rigorous annotation handbooks with clients, translating fuzzy goals into exact rules, edge cases, and review procedures. The payoff is real: higher consistency, fewer iterations, and a clear compliance trail for regulators and auditors.
Bias mitigation becomes a practice, not a promise. A neurodivergent lens exposes hidden assumptions and pushes for instructions that are unambiguous and testable. We explore practical systems—daily stand-ups, structured chat, and even “coffee calls” with agendas—that help people do their best focused work. We also confront the ethics of the global annotation supply chain and outline a different path: EU contracts, fair wages, social worker support, and leadership that values diligence over hype. From 2D images to complex 3D point clouds, we show how modern tooling plus human judgment builds AI you can trust.
If you care about responsible AI, data quality, and making models perform sooner with less guesswork, this conversation is your blueprint. Subscribe, share with a colleague wrestling with training data, and leave a review with your biggest annotation challenge—we’ll tackle it in a future episode.
Follow axschat on social media.
Bluesky
Antonio https://bsky.app/profile/akwyz.com
Debra https://bsky.app/profile/debraruh.bsky.social
Neil https://bsky.app/profile/neilmilliken.bsky.social
axschat https://bsky.app/profile/axschat.bsky.social
LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/
https://www.linkedin.com/in/neilmilliken/
Vimeo
https://vimeo.com/akwyz
Twitter
https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh
