This article is published in the June 2024 issue.

Addressing Harms: Moving Beyond Intent


By Haley Griffin, Program Associate, CCC

The following was written by CCC’s Addressing the Unforeseen Deleterious Impacts of Technology (AUDIT) Task Force.

Computing technologies of all stripes have brought enormous benefits to people’s lives, but also significant individual and societal harms. As these technologies become increasingly ubiquitous and powerful, we should expect both the potential benefits and the potential harms to grow. These shifts raise crucial questions about how foreseeable the impacts of computing research and development actually are, since it is much easier to promote benefits and mitigate harms when they can be anticipated. We can ensure wide access (if a technology is beneficial), establish guardrails (if it is problematic), and much more, but only if we actually foresee how a computing technology will be designed, developed, and deployed in the real world [NASEM, 2022].

In some cases, it is easy to anticipate the impacts of a new technology. For example, the “first-order” impacts of a faster processor can usually be modeled and estimated. At the same time, more complicated impacts can be much harder to anticipate; for instance, we might encounter a Jevons paradox, in which increased efficiency leads to increased utilization, thereby undoing the benefits of the efficiency gains. As a practical example, autonomous vehicles are likely to be more efficient per vehicle mile traveled, but if they lead to an increase in total vehicle miles traveled, then emissions could actually rise when autonomous vehicles are introduced [Kalra & Groves, 2017; Geary & Danks, 2019]. These complexities in anticipating benefits and harms only grow as the capabilities and sophistication of our computing technologies increase. And of course, matters are even more complicated when we consider research on computing technologies, as the intellectual and temporal gaps between research and implementation can be vast [NASEM, 2020].
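To make the rebound effect concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are purely illustrative assumptions chosen for exposition (a 20% per-mile efficiency gain and a 40% increase in miles traveled); they are not figures from the studies cited above.

    # Hypothetical rebound-effect arithmetic; all numbers are illustrative only.
    baseline_miles = 100_000            # vehicle miles traveled before automation
    baseline_emissions_per_mile = 1.0   # arbitrary emission units per mile
    av_emissions_per_mile = 0.8         # assume AVs are 20% cleaner per mile
    av_miles = 140_000                  # assume induced demand adds 40% more miles

    baseline_total = baseline_miles * baseline_emissions_per_mile   # 100,000 units
    av_total = av_miles * av_emissions_per_mile                      # 112,000 units
    print(av_total > baseline_total)    # True: total emissions rise despite per-mile gains

The per-mile improvement is real, but under these assumptions the induced increase in total travel more than offsets it; the same arithmetic applies to any efficiency gain that lowers the effective cost of use.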

Despite these challenges, we have a societal and ethical obligation to anticipate and address the foreseeable impacts of our efforts to bring new technologies into the world. Companies and organizations typically explore ways to consistently maximize the benefits of the technologies that they produce, but they do not have the same record of anticipating the potentially harmful impacts of their new computing technologies. Consider a few examples. Mortgage approval systems were deployed with an understanding of how they could increase profit for lenders, but not of how they could increase inequality in access to financial resources. Many people failed to anticipate the ways that social media would change social interactions for the worse. Automated hiring systems have unintentionally codified sexist and racist practices. And there are many more cases of unforeseen harms and challenges.

We might hope that failures to anticipate harms occur only because of the complexity of the ways in which technologies can interact with and shape communities and societies. However, there are often incentives and institutional structures that create further reasons to avoid anticipating harms at all. That is, many problematic effects are arguably “willfully unforeseen,” rather than justifiably unforeseen. In such cases, we cannot simply point to our personal or organizational failures to anticipate harms in order to absolve ourselves of blame. We are responsible for the impacts that we should have foreseen, even if we did not actually foresee them in a particular situation. And so we need to recognize and address the barriers to actually understanding the impacts of the computing technologies that we create.

On the incentives side in industry, many companies and organizations reward people for “writing code” or other activities solely on the basis of “local” benefits, rather than on more holistic assessments of all impacts. That is, the incentives for an individual employee or team all point towards a focus on potential benefits, to the exclusion of other potential impacts. Meanwhile, in academia, tenure and promotion depend on publications and grants, where there is little incentive to emphasize potential harms or problems. The temptation to focus on benefits is also heightened by the typical distance between academic research and technology deployment. In all of these cases, it is little surprise that people do not spend much time thinking about what could go wrong. The harms are unforeseen, but not because they were unforeseeable.

On the institutional side, whether in industry or academia, computing technologies, in both research and development, are often created by people who are far removed from key stakeholders. Many harms from new computing technologies are easily seen by the impacted communities, but not necessarily by those tasked with creating or researching the technology [NASEM, 2022; Gebru et al., 2024]. However, direct engagement with impacted communities, whether through minimal interactions such as focus groups or richer interactions such as co-design, is not systematically part of every project to create a new computing technology. We need to be talking with those who will interact directly with the technology, but those connections can be rare to nonexistent in many situations.

One might despair at this point, as the challenge of anticipating the benefits and harms of computing technologies can appear too difficult, whether technically or institutionally. But although we face a difficult task, various methods and organizational designs are being developed and tested to help us all do a better job of understanding likely impacts [NASEM, 2022]. These approaches range from practices that identify possible harms (e.g., red-teaming), to changes in organizational cultures (e.g., naming Chief (Responsible) AI Officers to lead these efforts, or encouraging academics to engage with potentially impacted communities), to different policy or regulatory approaches (e.g., holding companies liable for certain harms). Of course, even the best-intentioned efforts might fall short, and so we should also consider ways to address harms regardless of whether they were foreseeable at all. Read more about that particular problem in the next entry of this series: Addressing Harms Through Design.


Citations

Geary, T., & Danks, D. (2019). Balancing the benefits of autonomous vehicles. In Proceedings of the 2019 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.

Gebru, T., Topcu, U., Venkatasubramanian, S., Griffin, H., Rosenbloom, L., & Sonboli, N. (2024). Community-driven approaches to research in technology & society: CCC workshop report. Computing Community Consortium.

Kalra, N., & Groves, D. G. (2017). The enemy of good: Estimating the cost of waiting for nearly perfect automated vehicles. RAND Corporation, Report RR-2150.

National Academies of Sciences, Engineering, and Medicine (NASEM). (2020). Information technology innovation: Resurgence, confluence, and continuing impact. Washington, DC: The National Academies Press. https://doi.org/10.17226/25961.

National Academies of Sciences, Engineering, and Medicine (NASEM). (2022). Fostering responsible computing research: Foundations and practices. Washington, DC: The National Academies Press. https://doi.org/10.17226/26507.