Each year, more organizations invest money in the artificial intelligence industry. According to the Mozilla report, AI (artificial intelligence) is predicted to contribute 15.7 trillion dollars to the global economy by 2030 (Internet Health Report 2022, 2022). Giant tech companies such as Google, Apple, Microsoft, and Amazon are spending billions to develop AI products and services. Elon Musk, the founder of Tesla and a titan of tech, donated $10 million to fund ongoing research at the non-profit research company OpenAI (The Future of AI: How Artificial Intelligence Will Change the World, 2022). From all this evidence, we can see that AI is gradually being deployed in transportation, manufacturing, healthcare, education, and beyond. However, what would happen if we did not care about ethics in this industry? Incidents like Google's image-labeling software classifying an African American person as a "gorilla", or Amazon disbanding an initiative that used AI to review job applicants because of gender bias, will keep happening. We cannot avoid all the harm caused by AI systems, but we should make sure that we think the situation through. This project focuses on ethics in AI. Through market research and in-depth interviews, it explores the current situation and the barriers in the tech industry. It then designs a potential solution for driving a conversation around the ethics of AI, focusing on user-based wider systems within AI teams (AI engineers, designers, and AI researchers).
Kate Crawford (Crawford & Joler, Anatomy of an AI System, 2018) decomposed complex AI systems into three main categories: material resources, human labor, and data. From the map, we can see a key difference between artificial intelligence systems and other forms of consumer technology: they rely on the ingestion, analysis, and optimization of vast amounts of human-generated images, texts, and videos. Once the machines have absorbed a certain amount of information, they start to label humans.
The value-added model is one example of such labeling, and it is commonly used for evaluation; its promoters tout its effectiveness through scoring systems. In Weapons of Math Destruction (O'Neil, C., 2017), we learn that some schools in Washington, DC, implemented the IMPACT evaluation to weed out low-performing teachers. Nevertheless, some teachers who received high reviews from students and parents were dismissed by the algorithm simply because they did not pass the "evaluation of effectiveness" in teaching math and language. Data is not the only way to define a "good teacher". When we allow algorithms to make decisions for us, how can we make sure they do not marginalize innocent users?
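The failure mode above can be made concrete with a toy sketch. This is not the real IMPACT formula (which was proprietary); the function names, thresholds, and numbers here are invented purely to illustrate how a single quantitative metric can silently override strong qualitative signals.

```python
# Toy illustration (NOT the real IMPACT formula): a single test-score
# metric drives the decision while qualitative reviews are ignored.

def value_added_score(pre_scores, post_scores):
    """Hypothetical 'effectiveness' metric: mean student test-score gain."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

def keep_teacher(pre_scores, post_scores, parent_rating, threshold=2.0):
    # The algorithm looks only at the score; the parent/student rating
    # (out of 5) is collected but never enters the decision.
    return value_added_score(pre_scores, post_scores) >= threshold

# A teacher loved by students and parents (rating 4.8/5) whose class
# happened to show small measured gains is flagged for dismissal.
print(keep_teacher([70, 65, 80], [71, 66, 80], parent_rating=4.8))  # False
```

The point of the sketch is that `parent_rating` is accepted but unused: the system gives the appearance of a holistic evaluation while the decision rests on one narrow number.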
Although AI is difficult to understand, it is still important to drill down to the roots of how these systems work in order to create more transparent products.
Trustworthy AI guidelines
To reduce the unintended consequences of AI systems and remain mindful of various situations, it is necessary to consider ethical values as part of a project's reflection. In the EU, companies are asked to prepare an ethics package following the EU's Trustworthy AI guidelines before publishing an AI project. My targets are based in the UK; therefore, I took the ethics principles from the EU (digital-strategy.ec.europa.eu, 2021) and the frameworks from isITethical? as references, and organized all the values into the four columns provided by the AI Ethics Lab (2021): autonomy, non-maleficence, beneficence, and justice. However, these terms are still vague, and it is hard to generate discussion around them. Consequently, as a next step, I looked at case studies showing how organizations break down the theory. Here are some examples:
IDEO has developed a set of cards to lead people to think about users and about how we can use AI to make things better for humans. They use the term "Augmented Intelligence" instead of "Artificial Intelligence" to emphasize a distinction: data science is a tool to help us build a smarter world, but humans should remain the architects.
IBM claimed that the "socio" dimension is important in AI systems because we rely on algorithms to make decisions; therefore, the people who feed data to the machine are essential. Since research shows that more than 80% of AI engineers are young white men (Picchi, A., 2019), reflecting on the team's culture and reducing the possibility of unintentional and intentional bias is necessary.
From the interviews with AI researchers and software engineers, I realized that the biggest problem in the AI industry is that we have not put ethics into practice enough. This does not mean companies fail to recognize the importance of ethics in AI; the question is how much they value it. A widespread problem is that team members have limited time for a project, so they assume ethics is not their domain and that someone else is supposed to take on the job. In addition, there is a lack of real cases showing that ethics can make an obvious positive impact on a project, and none of the companies have really created an ethical framework for evaluation. The last point is the most important but also the most difficult one: a common disciplinary language. An AI project team might include designers, engineers, AI researchers, project managers, and so on. They all use quite different disciplinary languages, and sometimes even different native languages. How can we create a space for them to have a conversation about ethics with each other without a language barrier?
From the primary research, I realized that:
Beena Ammanath (2022), head of the Deloitte AI Institute, claimed that tech innovation is growing faster than lawmaking, so self-regulation becomes necessary.
Ethics is a word that sounds intimidating, so some companies assume that talking about ethics might go against business interests. For example, in recent years Google fired top AI researchers who tended to examine the downsides of Google's search products (Vincent, J., 2021). The case shows that the company was not ready to face issues that do not align with business KPIs, and that it will be more cautious whenever ethical issues are mentioned.
There is no right answer in an ethical conversation; instead, it is a method to help us reflect on the team's culture and the product's values.
How can we provide tech workers with a unique way to "feel the AI machine"?
From the secondary research, I understood that:
Most of the time, ethical conversation in the private sector becomes a one-time practice outsourced to external organizations.
There is a lack of documentation and continuity for the conversation.
We need a mechanism for teams to make trade-offs and figure out the priority of the ethical values in their product.
"Not my domain" should not be an excuse for skipping the ethical conversation.
It is believed that AI is the big wave for the coming generation. As a result, it should not be a slogan or an advertisement used merely to catch customers' attention. Instead, it is necessary to think through the downsides AI could bring, in order to predict and reduce potential harm. For the next stage, I will define the "How might we" question and analyze where the service should be implemented.
How might we drive a conversation around the ethics of AI, focusing on user-based wider systems, for AI project teams?
Stakeholder Map & Persona
Facing the inequality in the AI industry, I created a stakeholder map for this project to rethink who should be involved in AI ethics. From there, I selected three main personas (AI engineers, AI researchers, and designers) as the direct targets of this service.
The service aims to provide tech workers with a different approach to "feeling the AI machine". Instead of focusing on technique, how can we empathize with users and predict problems by standing in their perspective?
After mapping out the relationships between AI product providers, ethics consulting organizations, and AI product users, I found that organizations like isITethical? can play a role in helping companies think through the situation around users before developing a project. Therefore, within this structure, I act as a service designer who creates a toolkit for such organizations to facilitate workshops with tech companies.
Value of the service
A successful service will create a common disciplinary language for designers and engineers in an AI team to think broadly and deeply about the experiences that users may encounter while using the product. After going through the service, the team will have a way to document and develop more ethical discussion around users during the project, in order to reduce and prevent harm from AI systems.
To understand what kinds of playful methods people have used to generate ethical conversation, I collected inspiration from existing toolkits and separated them into the categories of board games, digital games, and physical workshops. The following are the examples that inspired me most:
Judgment Call is a game designed by Microsoft. It follows ethical frameworks to help the company's developers reflect on a project before development begins.
isITethical? uses the Values at Play process to design a game that generates discussion of ethics in AI. The game was a way to make this process available to others and to invite them to participate in and complete the discussion.
The Tarot Cards of Tech are a series of cards designed for creators to consider the impact of technology more fully. They aim to provide an innovative approach, shifting from "move fast and break things" to "slow down and ask the right questions".
The Thing from the Future is an imagination game that challenges players to collaboratively and competitively describe objects from a range of alternative futures.
Insights from the game design:
Many games have implemented the idea of role play to let players immerse themselves in a situation and think outside of reality.
The methodology of design fiction has been widely used to predict potential issues that might happen in the future.
Imagination and creation are important in the process, triggering players to think creatively.
Inspirations from the game research:
I want to develop a compact toolkit for my prototype, because most of the existing games take longer to play and their content is a bit overwhelming.
Use varied materials to increase engagement: for example, painting, acting, and creating.
Ideate the prototypes around humans: "who creates the algorithm" and "who are the product's users". It is important to keep humans in the loop.
After the research, I understood that the following methodologies would be applied in the service toolkit:
Ethics through design
Research through design fiction
Value sensitive design
Through my design, I would also like users to engage with the following framework:
Values at Play
Throughout the ideation journey, I applied the concept of Agile design (Sofeast, 2020), which is usually used for developing software in industry and which I learned from Design Futures. Simply put, it is the idea of rapid prototyping, relying heavily on people's engagement: the final product is worked out by making multiple simple prototypes. In each iteration, I pick up the key elements from research, plan the design process, build a prototype, and review people's feedback. Based on the methodologies, theoretical frameworks, and research insights, there are some key points that I aim to follow in my brainstorming sessions:
Common disciplinary language
Keep humans in the loop
Break down the ethical values
SOCIO is a playful, thought-provoking tool that helps AI teams facilitate ethical conversation before product development begins. It allows players to immerse themselves in the different situations users would face while interacting with an AI product. The aim of the game is to collaborate and create a unique ethical framework for the AI project.
I tested the prototype online with the director and researchers at isITethical?, and with a PhD candidate and a professor of interactive AI at the University of Bristol. I also tested the physical toolkit with designers and engineers. Here is some of the feedback I received:
The value cards have too many limitations, because one term could have various definitions in different projects. It is better to keep the conversation open and give participants the freedom to define their own words.
Test it with a group of people less inclined to think about ethical issues. Besides, ask people how they would frame the values and in which contexts they would use the toolkit.
Use the result as something that people can come back to at different points in their project, to double-check whether it still applies to them or whether they want to change it. They could play the game at various stages too.
The toolkit is clear to understand, and the conversation around ethics is easy to engage with.
I would like to introduce the toolkit into workshops for more people to try it out so let me know when it will be published!
The most important part of this toolkit is not letting it become a one-time discussion. Therefore, it is important for teams to document and reflect on their framework at every stage of the project. A potential solution for continuity is to embed the toolkit into a dashboard: everyone on the team could access the board and keep the framework in mind. At every stage, they could play around with the toolkit and rearrange the values to improve the users' experience.
The final prototype was tested with five AI researchers, two designers, and one data scientist. Judging from their reactions and feedback, the toolkit achieved the outcome I expected. No matter how many players are in the workshop, the game does not run over one and a half hours, and everyone seemed to enjoy the discussion and the unpredictable pop-up settings. However, one observation is that the level of engagement can differ greatly depending on the participants' backgrounds. AI researchers, for example, are passionate about ethical issues, so they can normally generate a lot of interesting and meaningful conversation in the process. The designers I tested with might not have deep knowledge of how AI machines work, but they have the creative imagination to construct innovative scenarios. Engineers, however, need more motivation to think creatively, because they are used to focusing only on solving problems with code. Therefore, a facilitator becomes necessary in this situation to lead the conversation and provide assistance. Besides, the prototype is difficult to test directly with AI engineers who have a project in hand, because non-disclosure agreements limit the information they can share about the project. I hope that, through my connections with AI researchers, I will have the chance in the future to reach AI teams more directly, to test out the idea and iterate on the toolkit. Lastly, I want to understand more about how people interpret ethical values, and from there explore ways to make all the ethical explanations more precise and accessible.
What was the initial intention of this project?
When I worked at a healthcare company back in Taiwan, we used AI widely in our digital products just because it was a fancy new technology that every company was working on. However, the whole team seldom slowed down to reflect on why we needed AI in our product. Where was our data coming from? How could we make sure not to expose users' information? These were all questions that, at the time, I could only keep in mind. It might be unrealistic to say AI will take over our world like a science-fiction film scenario. However, we do not want to lose control of our lives or be labelled by algorithms everywhere. Therefore, as a designer, I am always curious about the intersection between tech and our preferred future. This project on ethics in AI became a good opportunity to dip my toe into the UK tech industry and to see how service designers can take the initiative in ethics.
What did I learn from the process?
Cluster and filter the insights
This is a research-based topic. At first, I was overwhelmed by all the information I had collected and struggled to narrow it down. However, it also gave me a chance to learn how to cluster and filter insights. To find my way through the piles of papers and books, I followed the double diamond method I was accustomed to, focusing especially on the cause and effect between all the insights, which is where the "aha!" moments came from. Consequently, I realized that reflection at each step is the key to pushing the project forward, because it is the point at which I began to think about what I had learned and how I could put academic theory into real practice. My final prototype, SOCIO, is the combination of all these pieces of evidence and reflection. After the whole journey, I realized that strong, high-quality research is the foundation of every project. Now, I can confidently say that I am able to move from deep, qualitative research to a problem's solution.
Facilitating the workshop
The facilitator plays a significant role in the workshop. In this project, my stakeholders include AI engineers who are not so active and talkative; they might need a push to think creatively. As a result, the facilitator needs to prompt and interact with them to raise the level of engagement. After hosting several prototype-testing sessions, I have learned a lot about how to use icebreakers so that players can relax and have fun, let participants familiarize themselves with each other, and avoid awkward silences. In the end, I was able to run the whole workshop independently and let everyone enjoy the experience. In the future, I would like to learn more about how to make events more inclusive, so that all participants can share their opinions without pressure.
The ability of storytelling
There are diverse ways of storytelling, but the main concept is to let my audience understand what I am trying to say and where they should focus. From my experience of explaining the prototype and writing the instructions, I realized that it is not easy to convey my thoughts to people who have no background in this industry. For example, I need to be cautious about the complicated words I use, avoid throwing out overwhelming information, and provide clear guidelines for participants to follow in the toolkit. I worked on this by asking people to go through my prototype instructions; from there, I could see how people interpreted my project, and keep exploring how to make it easier to understand and better at holding their attention.
How can service design be involved in this area?
After 15 months of training as a service designer, I have solid knowledge of the methodologies and toolkits we use to clarify a problem, analyze the touchpoints, and figure out potential solutions that keep users at the center. The same concept applies here, because algorithms are all about people: engineers are the ones training and feeding the machine with data, and users are the people the product is aimed at. From this point, service design tools (stakeholder maps, the iceberg model, ecosystem maps, etc.) and methodologies (design fiction, critical thinking, and so on) can provide creative ways to support responsible research and innovation and to prepare for potential situations. This project is the beginning of my journey. In the future, I would like to explore how I can bring more service design tools into the technology industry to create more user-centered AI systems.