Artificial intelligence is experiencing exponential growth, which generates both excitement and fear, especially as it relates to the future of work. Generative AI encroaching on creative jobs is the biggest fear among content creators, journalists and writers, potentially exposing them to a new wave of hiring disruption.
The Brookings Institution’s Alex Engler calls this hiring trend “algorithmic creep,” which is the combination of increased algorithm use within different hiring stages and more firms using algorithms at each stage.
AI will lead companies to redesign workplace business models, reshape office culture and change how we hire. To build ethical AI solutions, the tech sector needs a wider range of perspectives and diversity of thought, particularly to recognize all the potential forces contributing to the (often unwarranted) success of the elite. We need practices and governance that keep these changes at the forefront, thereby avoiding data bias and unethical practices.
Most technology companies need to become more familiar with design justice, a practice that works to disrupt data injustice by demonstrating how universalist design principles and practices erase certain groups of people. Incorporating this practice into the development and execution of AI could prevent negative stereotypes and bring ethics into the conversation.
Ethical AI is not just a tech issue
The benefits of AI are exciting, not only for technology but for those in media and marketing. Yet, left unchecked, technology can produce unethical outcomes, bias and cultural appropriation without regard. Renée Cummings, Professor of Practice in Data Science at the University of Virginia and Nonresident Senior Fellow at The Brookings Institution, lectures on algorithms that are not accountable, transparent, explainable or auditable, and explains how these algorithms undermine “the extraordinary possibilities of ethical AI.” As marketers specializing in human-centered interaction design, we aim to create equitable user experiences.
We should consider AI that weaves together AI and HI (human intelligence) to help resolve ethical and cultural dilemmas in research. This process collates massive amounts of data and organizes it into logical pieces, allowing people to interpret it in specific contexts and enabling better decisions and outcomes.
However, accessing and logically organizing massive data via AI that incorporates human intelligence requires more than just typing in words. It involves training in the language and literacy of AI — something for the media to consider when using ChatGPT.
AI governance will require multidisciplinary teams — tech can’t be the only one at the table.
In 2021, the Federal Trade Commission published a blog post about the benefits of AI while examining the governance of bias. Drawing on three laws the agency enforces and its expertise in data analytics and algorithms, the post outlines seven approaches to ensure equity and inclusion. Experts should review these before launching AI into the marketplace, in addition to applying the White House Blueprint for an AI Bill of Rights: “The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats — and uses technologies in ways that reinforce our highest values.”
The issue of ethical AI will not go away on its own. Do we need to build a slow movement, following the efforts of Dr. Timnit Gebru, and bring a sense of responsibility and balance to creating technologies that work for everyone? Or do we continue to fast-track without looking back, making AI an experience for some, not all?
Viewing AI as a social problem, not just a technological issue, can build more ethical practices.