Ethics in Data Science:
Case Study - Autonomous Vehicles

Overview

Ethics is a complicated subject: it is difficult to understand, let alone study, all of the different ethical principles and concepts. Ethics can also be viewed from the perspectives of different stakeholders – individuals, groups, societies, citizens, customers, employees and businesses.

As societies become more digital, large volumes of data are created, stored, synthesized and shared. Traditional frameworks for governance and risk mitigation have proven insufficient for regulating the digital age. Data science introduces entirely new classes of risk, including the unethical use of data and the amplification of bias.

In my research, one concept that struck a chord with me was the treatment of law and ethics as a competitive advantage rather than a constraint. This idea stems from the fact that the law always lags behind technology. While governments and societies have yet to consider all of the ethical and legal facets of data science, they will eventually catch up. If businesses and data scientists can anticipate how to manage data ethically, and develop a sound ethical framework from the get-go, they can mitigate enormous risks. One example that comes to mind is the Facebook-Cambridge Analytica data scandal. If executives and data workers at Facebook had foreseen the backlash and consequences of their “oversight”, they would have acted more responsibly and ethically.

As a global citizen, I can relate to the territorial nature of law. It adds a layer of complexity for organizations that operate in multiple countries, since they must comply with new and occasionally conflicting legal obligations. It becomes even more complex when operations are computer-driven and conducted over the internet. Data scientists (and aspiring ones like myself) therefore need to be aware of these legal complications.

A trending concept in the industry is explainable AI (XAI). XAI seeks to establish a right to explanation: when a machine learning algorithm makes a decision, those affected have the right to ask why that decision was made. The AI future we should hope for is one where algorithmic outcomes support human decision-making instead of replacing it. This distinguishes AI as a tool for making accurate and ethical decisions and policies from an AI that simply does all the thinking for us.
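One simple technique in the XAI toolbox is permutation importance: shuffle one feature at a time and measure how much a model's accuracy drops, which reveals how heavily the model relies on that feature. The sketch below is a minimal, self-contained illustration; the loan-approval model, feature names, weights and data are all hypothetical, invented purely to show the idea.

```python
import random

# Hypothetical toy model: approves a loan when a weighted score passes a threshold.
# Feature order: [income, credit_history, zip_code]
WEIGHTS = [0.6, 0.4, 0.0]  # zip_code deliberately carries no weight

def model(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 if score >= 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Drop in accuracy after shuffling one feature's column."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                for r, v in zip(rows, column)]
    return baseline - accuracy(shuffled, labels)

# Small synthetic dataset (invented for illustration); labels come from the model itself.
rows = [[0.9, 0.8, 0.1], [0.2, 0.1, 0.9], [0.7, 0.9, 0.5], [0.1, 0.3, 0.2]]
labels = [model(r) for r in rows]

for idx, name in enumerate(["income", "credit_history", "zip_code"]):
    print(f"{name}: importance = {permutation_importance(rows, labels, idx):.2f}")
```

Here, shuffling zip_code never changes the model's output (its weight is zero), so its importance is exactly zero; a bank could point to such a result when asked whether location influenced a decision. Real XAI libraries offer more robust versions of this idea for complex models.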

In a nutshell, there are many strategies and frameworks that data workers can adopt to manage data and stay compliant with the law. When responding to algorithmic conclusions, it is essential to approach them critically, which requires asking thoughtful questions about them. To maximize the value of algorithms, they need to be interpretable to a certain extent. Understanding an algorithm, however, is only the first step – and that is where the complexity begins.

Ethics Case Study - Autonomous Vehicles

(P.S. I designed the entire presentation deck, including all of the visualizations.)