It is Not Enough to Teach Ethics to Computer Scientists

Originally published on April 19, 2018


///


The recommendation on the computer screen reads “Release not recommended.” The judge briefly looks at the current charges and at the predicted scores for new criminal activity and failure to appear. All three support the recommendation, which she decides to follow, likely with scant knowledge, if any, of how the algorithm arrived at it.

This scene is playing out in courthouses around the country. Artificial intelligence (AI) and machine learning (ML) algorithms are widely used to inform decision making, and not just in courthouses. They are found in insurance companies, educational settings, and financial institutions, to name just a few. The digital availability of large amounts of data, combined with algorithms that can classify the data and predict outcomes, has spawned a new industry of risk assessment software to aid practitioners in their daily work. There is controversy, to be sure. Key concerns regarding risk assessment algorithms in sentencing, for instance, are “opacity, bias and unreliability, and diverging concepts of fairness.”[1] There is little guidance or training on how to appropriately incorporate the output of these algorithms into the decision-making process. Yet the promise of increased objectivity and efficiency is pushing the use of these tools. And there is money to be made.

The commercial value of products and services powered by AI and ML is enormous. PwC predicts that “global GDP will be up to 14% higher in 2030 as a result of the accelerating development and take-up of AI.”[2] The promise of huge economic gains, particularly for first-mover businesses, increases the risk that products and services will be unleashed on the market before we can properly understand how they work and assess their impact. And it is not just risk assessment software. Many new products built on these powerful algorithms, such as autonomous vehicles and drones, are now hitting the market. And many more products and services that we cannot yet imagine will become part of our lives in the not-so-distant future.

Every economic sector will see a major transformation. Current discussions already indicate that the technologies have outpaced societal understanding of their implications. Policies, regulations, and ethics standards are largely missing, and practitioners are woefully underprepared to ensure the proper use of these new technologies. The fear of unintended consequences, and even of malicious intent due to “the dual-use nature of AI and [machine learning],”[3] is real. So is the potential for overreliance on these new technologies.

The speed at which these new products and services come to market increases the urgency of engaging technologists, policymakers, industry leaders, and practitioners in robust discussions of the ethical risks of AI. Research should be encouraged (and funded) to develop an ethics and policy framework that promotes the responsible use of these technologies. Industry standards will need to be developed, and business and industry will need to agree to abide by them, to build trust with the public and prevent intentional or unintentional abuse or misuse. We cannot wait until catastrophic failures produce a patchwork of government regulations that could stifle innovation.

Brundage et al.[4] call for “[p]olicymakers [to] collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.” While the authors recognize the need to “combine technical and nontechnical considerations,” they also note “a lack of deep technical understanding on the part of policymakers, potentially leading to poorly-designed or ill-informed regulatory, legislative, or other policy responses.”

Academic institutions are starting to develop programs to educate a new cadre of researchers and policymakers who can navigate the complexities and unknowns of this rapidly evolving technology environment. Think tanks are adding their intellectual capacity to these discussions. Many of the authors of a recent report by Brundage et al.[5] come from such places.

Brundage et al. suggest that “[e]ducational efforts might be beneficial in highlighting the risks of malicious applications to AI researchers” and that “multi-stakeholder conversations [could] develop ethical standards for the development and deployment of AI systems.”[6] Computer science programs are starting to respond by developing ethics courses, as reported in The New York Times on February 12, 2018, “to train the next generation of technologists and policymakers to consider the ramifications of innovations–like autonomous weapons or self-driving cars–before those products go on sale.”[7]

Educating policymakers and AI researchers, however, is only the tip of the iceberg. Thousands of practitioners will rely on AI and ML to inform their decisions, and very few of them will have the preparation to work side by side with these new tools. Simply requiring practitioners to take a course in machine learning will fail, not only because of a lack of preparation, but also because knowing how to code and run a machine learning algorithm does not prepare one to apply policies or to interpret an algorithm’s output. Even the aspects of such courses that would be useful, for instance the performance measures of machine learning algorithms, are often too abstract to be of practical use: knowing that a risk score calculation has 70% accuracy leaves most practitioners at a loss for how to incorporate this information into the decision-making process.

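To see why a bare accuracy number is so hard to act on, consider a minimal sketch in Python. The confusion-matrix counts below are invented for illustration and do not come from any real risk assessment tool:

    # Hypothetical evaluation of a risk tool on 1,000 past cases.
    # All counts are invented for illustration only.
    tp = 150   # flagged high-risk and did reoffend
    fp = 200   # flagged high-risk but did not reoffend
    fn = 100   # flagged low-risk but did reoffend
    tn = 550   # flagged low-risk and did not reoffend

    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total      # 0.70: the headline number
    precision = tp / (tp + fp)        # ~0.43: under half of high-risk flags are correct
    recall = tp / (tp + fn)           # 0.60: 40% of reoffenders are missed

    print(f"accuracy:  {accuracy:.2f}")
    print(f"precision: {precision:.2f}")
    print(f"recall:    {recall:.2f}")

In this hypothetical, the same 70% accuracy conceals that more than half of the people flagged as high-risk would not in fact have reoffended; that is precisely the kind of context a practitioner would need in order to weigh a recommendation.
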
And it is not just the practitioners on the ground. Managers will be asked to realize the promise of AI and ML to increase efficiency: an algorithmic risk evaluation is almost instant, whereas a caseworker may spend hours reading through files. These time savings translate into cost savings if caseworkers can handle more cases. Managers must be given guidance on how to balance the gain in efficiency against the potentially negative consequences of an increased reliance on these new technologies.

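The arithmetic behind that pressure is simple; a hypothetical sketch (every figure below is invented) also exposes where the risk of over-reliance enters:

    # Hypothetical caseload arithmetic; all numbers are invented.
    minutes_per_case_manual = 120    # caseworker reads the full file
    minutes_per_case_assisted = 30   # caseworker reviews an algorithmic summary
    minutes_per_day = 8 * 60

    cases_manual = minutes_per_day // minutes_per_case_manual      # 4 cases per day
    cases_assisted = minutes_per_day // minutes_per_case_assisted  # 16 cases per day

    print(f"manual:   {cases_manual} cases/day")
    print(f"assisted: {cases_assisted} cases/day")

The fourfold caseload only materializes if caseworkers actually stop reading the full files, which is exactly the over-reliance the efficiency argument invites.
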
Colleges and universities are starting to incorporate some of this technical knowledge into programs that educate practitioners.[8] But this is too slow to change the knowledge base of the workforce, and it does not address the training needs of practitioners and managers who are already working. Continuing education requirements for professionals could fill the void. Many professionals are already required to take such courses to meet their licensing or credentialing requirements. Professional organizations should engage with technical experts to develop short courses that discuss the ethical issues raised by these new technologies and provide opportunities to practice using them through relevant case studies.

Ultimately, however, society as a whole must engage in conversations to come to grips with the negative aspects of these new technologies. This must be a shared responsibility; it cannot be placed solely on the shoulders of technologists. The conversations must weigh the pros and cons, be respectful of the different voices, and lead to actionable outcomes. Without this, we risk a future that is more dystopian than utopian.

________________________________________

[1] Kehl, D.L. and Kessler, S.A., 2017. “Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing.” (http://nrs.harvard.edu/urn-3:HUL.InstRepos:33746041; accessed on March 5, 2018).

[2] “Sizing the prize: What’s the real value of AI for your business and how can you capitalise?” PwC publication. (https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf; accessed on February 18, 2018).

[3] Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe et al. “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” arXiv preprint arXiv:1802.07228 (2018). P. 7.

[4] Ibid. P. 51.

[5] Ibid. P. 2.

[6] Ibid. Pp. 92–93.

[7] Singer, N. “Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It.” The New York Times, Business Day. (https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html; February 12, 2018).

[8] Parry, M. “Data Scientists in Demand.” Special Report. The Chronicle of Higher Education. (https://www.chronicle.com/article/Inside-the-Trends-Report/242676; March 4, 2018).