Ethos 1.0, or the need to build software with intrinsic human values
The Black Box Society was one of the sources of inspiration for this article.
For the last 14 years I have been conducting research and then practising consultancy on software quality. I have been trying to find answers to questions like: What defines good software? How can we measure it? How can we make its technical quality transparent? I wouldn’t be boasting if I said that, together with my colleagues at Software Improvement Group, I have done and still do some good work in trying to answer these questions.
But in the last few years I have sensed that new frontiers are emerging in how we need to develop and evaluate software. This needs to go beyond reaching functional goals and addressing technical problems; it needs to focus on the alignment of software with human moral values and ideals.
In other words, we need software with ethos: software that demonstrates wisdom, virtue and good will towards its users.
You see, software is not just eating the world; it is leveraging it. For years the perception was that software is good at following rules, and thus at automating mostly repetitive tasks, but lousy at pattern recognition, and thus unable to automate information-processing tasks that cannot be boiled down to rules or algorithms. But in the last few years software has started surprising us. Now we have apps that can judge whether a photograph is beautiful, diagnose diseases, and listen and speak to us; systems that trade on our behalf at lightning speed; robots that carry boxes in warehouses; and cars that drive with minimal or no guidance.
And unlike the financial leverage that led to the 2008 financial crisis, this one needs to deliver. This time the outputs ought to help humanity flourish and to improve human wellbeing. With wealth unevenly distributed, median incomes stagnating in most of the developed world, and unemployment rising, the stakes are high.
That is why we need new references and insights that will empower those responsible for bringing software into this world to prioritise ethical considerations in the design and development of software systems. These can also lead to new models, standards, tools and methodologies for developing and evaluating how ethical a software system is, especially if it is an AI or an autonomous system.
Creating all of these is not trivial. Models and standards need to be multidisciplinary, combining elements from the fields of Computer Science and Artificial Intelligence (e.g. IEEE’s initiative on Ethically Aligned Design), Law and Ethics, Philosophy and Science, as well as the government and corporate sectors.
Or, as I like to say (triggered by this article), all these models and standards will help us ask and answer the questions that aren’t Googleable yet are relevant for the future of our world.