Don’t be wrong because you might be fooled: Tips on how to secure your ML model

1. Mislabeled/Confusing Training Data

2. Is this a “cat” or a “dog”?

3. Is this a “bird” or an “airplane”?

👉 Good data means a good model: spend some time investigating your data and try to identify any systematic errors in your training set.
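
One way to put this into practice is to score every training example with out-of-sample predictions and manually review the ones where the model strongly disagrees with the recorded label. The sketch below is a minimal illustration using scikit-learn and a stand-in dataset; swap in your own features and labels.

```python
# A minimal sketch: flag potentially mislabeled examples by comparing
# each label against out-of-sample predictions from cross-validation.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)  # stand-in dataset for illustration

# Out-of-sample class probabilities: each example is scored by a model
# that never saw it during training, so a wrong label cannot simply
# be memorized away.
pred_probs = cross_val_predict(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, method="predict_proba",
)

# Flag examples where the model assigns very low probability to the
# given label -- prime candidates for manual review.
confidence_in_label = pred_probs[np.arange(len(y)), y]
suspects = np.argsort(confidence_in_label)[:20]
print("Indices worth a manual look:", suspects)
```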

👉 Use explanation methods as a debugger to understand why your model misses certain groups of instances more than others.
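
As a minimal illustration of this idea, the sketch below fits a small, interpretable “error model” that predicts where the main model fails: its decision rules describe the subgroups being missed. Per-instance attribution methods such as SHAP or LIME are common alternatives; the dataset and models here are placeholders.

```python
# A minimal sketch of "explanation as debugger": fit a shallow tree that
# predicts *where* the main model fails, then read its rules to see which
# regions of the input space are handled poorly.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()  # stand-in dataset for illustration
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
errors = (model.predict(X_te) != y_te).astype(int)  # 1 = misclassified

# A shallow tree over the same features, trained to predict the errors:
# its decision rules describe the subgroups the main model gets wrong.
error_model = DecisionTreeClassifier(max_depth=3, random_state=0)
error_model.fit(X_te, errors)
print(export_text(error_model, feature_names=list(data.feature_names)))
```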

👉 Adversarial attacks are a cost-effective way to test the adversarial robustness of your model.
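
For example, the Fast Gradient Sign Method (FGSM) is one of the cheapest attacks to run: perturb the input in the direction that increases the loss and check whether the prediction flips. The PyTorch sketch below uses a toy, untrained model purely for illustration; libraries such as Foolbox or the Adversarial Robustness Toolbox provide ready-made attack suites for real evaluations.

```python
# A minimal FGSM sketch in PyTorch: nudge an input in the direction that
# maximally increases the loss and see whether the prediction changes.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))  # toy classifier
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # stand-in input
y = torch.tensor([1])                      # stand-in true label

# Compute the loss gradient with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM step: move each input dimension by epsilon in the sign of its gradient.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

with torch.no_grad():
    before = model(x).argmax(dim=1).item()
    after = model(x_adv).argmax(dim=1).item()
print(f"prediction before attack: {before}, after attack: {after}")
```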


At code4thought, we are deeply committed to helping society address the challenges and injustices imposed by automated decision-making technology.
