Monitoring and auditing machine learning

Episode #261, published Sat, Apr 25, 2020, recorded Fri, Apr 17, 2020

Traditionally, when we depended on software to make a decision with real-world implications, that software was deterministic. It had some inputs and a few if statements, and we could point to the exact line of code where the decision was made. The same inputs always led to the same decisions.

Nowadays, with the rise of machine learning and neural networks, this is much blurrier. How did the model decide? Have the model and its inputs drifted apart, so that its decisions fall outside what it was designed for?
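One common way to watch for that kind of input drift (not a technique from the episode, just an illustrative sketch) is to compare the distribution a feature had at training time against what the model sees in production. The Population Stability Index (PSI) is a simple metric for this; the 0.1/0.25 thresholds below are widespread rules of thumb, not hard limits:

```python
# Sketch: detect input drift by comparing a feature's training-time
# distribution against live production data, using the Population
# Stability Index (PSI). Pure stdlib; names here are illustrative.
import math
import random


def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] += 1e-9  # make the last bin include the maximum value

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # floor at a tiny value to avoid log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training distribution
live = [random.gauss(0.8, 1.0) for _ in range(5000)]   # shifted production data

score = psi(train, live)
print(f"PSI = {score:.3f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift
```

In practice you would run a check like this per feature on a schedule and alert when the score crosses your threshold; libraries such as great_expectations (linked below) let you formalize these expectations about incoming data.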

These are just some of the questions discussed with our guest, Andrew Clark, on this episode of Talk Python To Me.
Links from the show

Andrew on Twitter: @aclarkdata1
Andrew on LinkedIn: linkedin.com
Monitaur: monitaur.ai

scikit-learn: scikit-learn.org
networkx: networkx.github.io
Missing Number Package: github.com
alibi package: github.com
shap package: github.com
aequitas package: github.com
audit-ai package: github.com
great_expectations package: github.com
Episode transcripts: talkpython.fm

--- Stay in touch with us ---
Subscribe to us on YouTube: youtube.com
Follow Talk Python on Mastodon: talkpython
Follow Michael on Mastodon: mkennedy

Want to go deeper? Check out our courses
