AI | What CEOs, Boards, and Investors Must Keep Top of Mind
- Human rights defenders across the world are fighting facial recognition surveillance
- Researchers Find Racial Bias in Hospital Algorithm
- Viral Tweet About Apple Card Leads to Goldman Sachs Probe
- It’s time we faced up to AI’s race problem
- Tell HUD: Algorithms Shouldn’t Be an Excuse to Discriminate
And I am afraid that this list will keep growing, raising ever more concerns…
You see, those of us who believe technology can change the world for the better, and who work passionately to make algorithms explainable, transparent and accountable, feel challenged by the growing number of stories about rogue algorithms causing harm. It doesn’t have to be this way.
As I talk with our own customers and prospects, confer with other leaders in ethical AI, and listen to all the hot takes in the market, there are some simple AI truths that deserve more attention:
- AI is not a panacea.
- AI is essentially a combination of probabilistic software and data, and both pose serious risks to an organisation when not handled properly.
- Controlling AI means controlling the way it is constructed as well as the way it is managed.
- The intricacies of AI should be made explainable and understandable, especially to non-experts (even to you, dear CEO). A simple graph will do (see the sketch after this list).
- There is no objective or fair AI when it is trained only on historical data; the model inherits whatever bias that data already contains.
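To make the “simple graph” point concrete, here is a minimal sketch, in Python with scikit-learn and matplotlib, of the kind of chart a non-expert can read: a permutation-importance bar chart for a hypothetical credit-scoring model. The dataset, feature names and model choice are all illustrative assumptions, not a recommendation of any particular stack.

```python
# A minimal sketch: a permutation-importance chart for a hypothetical
# credit-scoring model. Data, feature names and model are illustrative.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data; in practice these would be your own historical records.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["income", "age", "tenure", "debt_ratio", "late_payments", "region"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# The "simple graph": one horizontal bar per feature, readable by non-experts.
order = result.importances_mean.argsort()
plt.barh([feature_names[i] for i in order], result.importances_mean[order])
plt.xlabel("Drop in accuracy when the feature is shuffled")
plt.title("What drives this model's decisions?")
plt.tight_layout()
plt.show()
```

One chart like this in a board deck answers the question a CEO should be asking: what is actually driving the model’s decisions?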
Not all rogue algorithms have the same impact, though. If an online shop fails to serve you an optimized list of suggested shoes, lemons or sofas, then no harm, no foul.
In other areas, however, unchecked algorithmic errors can be particularly dire:
- Autonomous Decision-Making with Social Impact (e.g. credit scoring, risk assessments for judicial purposes),
- Computer vision in autonomous driving and surveillance systems,
- Health,
- Cyber-Security & Threat Analysis,
- M&A Due Diligence.
Algorithm design and auditing, even in the hands of wicked smart coders, does more harm than good when those coders have little to no experience in (1) designing a bias-free system and (2) auditing it for gaps.
We need humans in the loop to ensure algorithms are as bias-free and transparent as possible: people with deep experience in auditing software systems using ML tooling (so ML for ML), guided by people deeply experienced in the auditing process itself.
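As an illustration of what “ML for ML” auditing can look like in its simplest form, here is a minimal sketch of one classic check, the disparate-impact ratio of selection rates across groups, with results flagged for a human auditor to review. The predictions, group labels and the 80% threshold are illustrative assumptions, not a description of any particular audit tool.

```python
# A minimal sketch of one automated fairness check that a human auditor reviews.
# Predictions, group labels and the 80% threshold are illustrative assumptions.
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Selection rate of each group divided by the highest group's rate."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outputs of a credit model (1 = approved) and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g, ratio in disparate_impact_ratio(y_pred, group).items():
    flag = "FLAG FOR HUMAN REVIEW" if ratio < 0.8 else "ok"
    print(f"group {g}: disparate impact ratio = {ratio:.2f} [{flag}]")
```

The tooling computes numbers like these at scale; the experienced auditor decides whether a flagged ratio reflects real harm, a data artefact, or a legitimate business factor.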
With basic Ethical AI tenets in place, and humans in the loop, you can at least ensure that your company doesn’t become one of those headlines, or worse.
At Code4Thought, we remain cautiously optimistic about the future of algorithms, and we are up for the challenge of making that future happen.