On Algorithm Accountability: Can we control what we can’t exactly measure?

Nov 3, 2020

Over the last few months, I have spent (quality) time with people of diverse backgrounds and roles, from executives in the banking sector and founders of health or tech startups to translators, to name a few, discussing the impact of technology and algorithmic decision making on their daily work. Not surprisingly, the weight of the derived decisions (or, in a broader sense, cognitive insights) as they perceive it is growing very fast.

Interestingly, most of the people I talked to had encountered slight or serious bias in a derived insight, which they could ultimately bypass using their own intuition and experience. It therefore makes perfect sense that they are all concerned about how algorithms work and how they can control them, in order to ensure the resulting decisions can be trusted.

And so they handed me a nice question to think about in my spare time (although such a thing hardly exists with a kid, a dog, a cat and another kid in the making). Simply put, the question is: “How can I be in control of this thing that instructs me what to do?”

Intuitively, I’d say this is not an easy task; and I firmly believe that, at least for now, Tom DeMarco’s famous quote “You can’t control what you can’t measure” does not apply in its entirety.

You see, an algorithm, which typically can be measured and controlled to a certain extent, does not make decisions in a vacuum; it operates within an organisational context that shapes its creation. That context, in turn, is not something that can be quantitatively assessed in a straightforward way.

Nevertheless, we should strive to control both the algorithms and the organisations that create them. My view is that only by approaching the problem from both perspectives will we reach a meaningful level of accountability when things do not work as expected.

To simplify things, we may say that an algorithm is essentially a piece of software that:

  1. Solves a business problem set by the organisation that creates it (the algorithm),
  2. Receives data as input that have been selected and most likely pre-processed, either by a human or by an automated process,
  3. Utilises a model (e.g. an SVM, a deep neural network, a random forest) which processes the data and ultimately makes a decision, or suggests an answer/solution to the question/problem set by the organisation.
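The three ingredients above can be sketched in a few lines of code. This is a deliberately toy, hypothetical example: the loan-approval scenario, the field names and the threshold are all invented for illustration, and the “model” is a single hand-set rule rather than a trained one.

```python
# A minimal sketch of the three parts of an algorithm described above:
# (1) a business problem framed by the organisation ("should this loan
#     application be approved?"),
# (2) input data selected and pre-processed by a human or a script,
# (3) a model that turns the data into a decision.
# All names and thresholds here are hypothetical.

def preprocess(applicant: dict) -> dict:
    """Part 2: select and clean the input features someone chose upstream."""
    return {
        "income": max(applicant.get("income", 0), 0),  # drop negative noise
        "debt": max(applicant.get("debt", 0), 0),
    }

def decide(features: dict) -> str:
    """Part 3: a stand-in 'model' (a single rule, not an SVM or a forest)."""
    ratio = features["debt"] / features["income"] if features["income"] else 1.0
    return "approve" if ratio < 0.4 else "review"  # threshold is arbitrary

print(decide(preprocess({"income": 50000, "debt": 10000})))  # ratio 0.2 -> approve
```

Even in this toy form, every point where bias can creep in is visible: who framed the problem, who chose and cleaned the features, and who set the threshold.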

What we need, then, is insight into each of these aspects.

For starters, the organisation creating the algorithm needs to design for accountability. In other words, it should define when and how an algorithm should be guided (or restrained) when there is a risk of critical or expensive errors, or of any form of bias (discrimination, unfair denials, or censorship). In defining such processes, it should be guided by principles like responsibility and human involvement, explainability (also known as interpretability, although the two differ), accuracy, auditability and fairness.

Regarding the input data, we primarily need to know about their quality, meaning their accuracy, completeness, uncertainty, timeliness and representativeness. It is also important to know how the data are handled: how they are defined, and how they are collected, vetted and edited (manually or automatically).
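Two of these properties, completeness and timeliness, lend themselves to simple automated checks. The sketch below is hypothetical: the field names, the 30-day freshness window and the record layout are invented, and real checks would follow the data definitions the organisation itself publishes.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: basic quality checks over a batch of input records.
# "Completeness" = share of records with all required fields present;
# "timeliness" = share of complete records fresher than max_age.

def quality_report(records, required=("id", "amount", "timestamp"),
                   max_age=timedelta(days=30), now=None):
    now = now or datetime.now()
    complete = [r for r in records
                if all(r.get(f) is not None for f in required)]
    timely = [r for r in complete if now - r["timestamp"] <= max_age]
    return {
        "total": len(records),
        "completeness": len(complete) / len(records) if records else 0.0,
        "timeliness": len(timely) / len(complete) if complete else 0.0,
    }

now = datetime(2020, 11, 3)
records = [
    {"id": 1, "amount": 10.0, "timestamp": now - timedelta(days=2)},
    {"id": 2, "amount": None, "timestamp": now - timedelta(days=1)},  # incomplete
    {"id": 3, "amount": 5.0, "timestamp": now - timedelta(days=90)},  # stale
]
print(quality_report(records, now=now))
```

Accuracy, uncertainty and representativeness are harder to automate, which is exactly why the manual vetting and editing processes also need to be documented.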

As for the model itself, we would like to know its parameters, the features or variables used, and whether and how they are weighted. We must also be in a position to evaluate its performance, select the appropriate metrics for this purpose, and ensure we operationalise and interpret them appropriately. Last but not least, we should be able to assess its inferences, that is, how accurate or error-prone the model is. An important element here is the model creator’s ability to benchmark its results against standard datasets and standard measures of accuracy.
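Assessing a model’s inferences typically starts from comparing its predictions with known outcomes. A minimal sketch of three standard metrics is below; the audit sample of eight labelled decisions is made up for illustration.

```python
# Hypothetical sketch: evaluating a model's inferences against known
# outcomes using standard metrics derived from the confusion matrix.

def evaluate(predicted, actual):
    tp = sum(p == a == 1 for p, a in zip(predicted, actual))  # true positives
    tn = sum(p == a == 0 for p, a in zip(predicted, actual))  # true negatives
    fp = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
    return {
        "accuracy": (tp + tn) / len(actual),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# e.g. an audit sample of eight decisions (labels invented)
print(evaluate(predicted=[1, 1, 0, 0, 1, 0, 1, 0],
               actual=[1, 0, 0, 0, 1, 1, 1, 0]))
# -> {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75}
```

Which metric matters depends on the cost of each error type: precision penalises unfair approvals, recall penalises unfair denials, which is why the choice of metric is itself an accountability decision.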

So, we may say that controlling an algorithm (to a certain extent) is not an impossible task, but it does require a certain level of maturity from the organisation that creates (or utilises) the algorithm.

However, someone has to create the compelling reason for an organisation to invest in accountability. And that someone is us: as citizens, clients, voters, news consumers, professionals, or any other role whose life is affected by the decisions algorithms make on our behalf.

Sources of inspiration for this blogpost include, among others, the following:

Beyond Automation, Thomas H. Davenport and Julia Kirby, Harvard Business Review, June 2015

Accountability in Algorithmic Decision Making, Nicholas Diakopoulos, Communications of the ACM, February 2016, Vol. 59, No. 2

The Black Box Society: The Secret Algorithms That Control Money and Information, Frank Pasquale, Harvard University Press, January 2015

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE, 2016. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

code4thought is a technology company with a unique purpose: to render technology transparent for large scale software and AI-based systems.