2018 - PD manifesto for AI futures - Daria Loi, Thomas Lodato, Christine T. Wolf, Raphael Arar & Jeanette Blomberg


ABSTRACT

A deep discussion and reflection on the implications surrounding the design, development, and deployment of what are being described as artificially intelligent systems is urgent. We propose that within this context, Participatory Design not only has much to offer, it has a responsibility to deeply engage. This workshop brings together practitioners and researchers from diverse disciplinary backgrounds to discuss and reflect on the cultural, ethical, political, and social implications surrounding the design, development, and deployment of intelligent systems and to explore participatory design approaches, tools, and guidelines that should ground the design of intelligent systems.

CCS CONCEPTS

• Human-centered computing → Interaction design → Participatory design.

KEYWORDS

Artificial Intelligence; intelligent systems; ethical implications.

CONTEXT

Intelligent Systems – those that leverage the power of Artificial Intelligence (AI) – are set to transform the way we live and experience the world. Across domains as diverse as healthcare, manufacturing, transportation, finance, and agriculture, hardly a news cycle goes by without mention of novel AI applications offering the promise of a transformed future – one where lives are enriched through new knowledge and insights and relieved of the burdens of today's mundane and tedious activities.

While these AI systems are already having an impact on us through scripted automation and transactions (e.g. factory automation or health/financial transactions), the need for and focus on assistive systems targeting more complex and diverse socio-political contexts is increasing. These sophisticated systems often rely on unscripted, autonomous transactions and play active roles in a person's life. This poses serious challenges, which increase even further when such systems include affective or human-like attributes (e.g. digital assistants) and when they become an integral part of the environments we inhabit daily (e.g. home, office, school, vehicle, city).

While AI advances are purported to hold great promise for societal progress, they also demand careful consideration of the ethical questions and challenges raised in their wake. For every piece extolling a hopeful future for AI, there is an equal rejoinder debating the broader repercussions of these developments and how the impacts of AI will be felt across ecosystems of use.

Indeed, as these systems continue to be developed, overlapping concerns are accelerating and becoming mainstream, from fears of job replacement to the emergence of surveillance states and deeply unequal societies, to name a few. At the core of such concerns is the realization that AI may challenge, if not threaten, the fundamentals of human and social behavior and the very foundations of our society. As Bostrom and Yudkowsky point out, “although current AI offers us few ethical issues that are not already present in the design of cars or power plants, the approach of AI algorithms toward more humanlike thought portends predictable complication”. Additionally, while we could build a “superintelligence that would protect human values”, the “problem of how to control what the superintelligence would do” looks rather difficult and, within this context, designers and technologists have key roles, agencies, and responsibilities.

In brief, while intelligent systems will increasingly populate the global landscape, a number of social, behavioral, political, decisional, and moral questions remain unaddressed. Put simply, the design and development of AI-based intelligent systems present core dilemmas related to the fundamentals of human and social behavior. As the focus of large-scale collaborations across academia and industry, such as DeepMind Ethics & Society, the AI Now Institute, and the “Prosperity Through AI” pillar of the MIT-IBM Watson AI Lab, the ethical implications of AI (or “AI+Ethics”) present one of the most pressing contemporary problems we face today.

AI and intelligent systems are in desperate need of ethical as well as design guidelines. While AI has greatly evolved from a technical point of view, it is in its infancy as far as ethics and design processes go. The challenge of AI is therefore not only a technical one; it is first and foremost a social, cultural, political, and ethical one. Jake Metcalf articulates the issues well when he states that “more social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people (...). There aren’t consistent standards or transparent review practices”. In other words, the complexity of these systems and the obfuscation of their processes make critical reflection difficult for many, erode considered public debate, and make oversight hard, if not impossible.

At the root of the debate are a slew of questions about the roles, responsibilities, and accountabilities related to AI and intelligent systems. These include:

  • What level of autonomy/agency should these systems have?
  • What level of transparency should be provided to end users?
  • What social and behavioral contracts should underpin human-machine interaction with such systems?
  • What methods should be used to design unobtrusive, effective, accurate, respectful, intuitive and transparent systems?
  • What ethical considerations should drive designers and developers when making technical and design decisions?
  • How will these systems impact societies and the fundamentals of everyday life?
  • What is the responsibility of public institutions to understand, disclose, govern, and regulate the use, reach, and actions of artificial intelligence and intelligent systems they and others create, employ, and leverage?
  • In what contexts and through what means are artificial intelligence and intelligent systems deemed inappropriate?
  • In what ways can dialogues between diverse stakeholders across AI ecosystems be fruitfully started, especially given the different stakes, experiences, literacies, values, and priorities of those involved?

The AI ethics debate has been, and still is, dominated by two forces: data science on one side, social science and the humanities on the other. On both sides sit experts. Barocas and Boyd articulate this polarization well, adding that “the gaps between data scientists and critics are wide, but critique divorced from practice only increases them”. Barocas and Boyd’s “practice” should be extended to include end users’ everyday-life expertise: end users need ways to actively participate in the AI debate and related decision making. These two perspectives, though vital, overlook the means through which AI and intelligent systems get developed. As such, their focus is on the final form and use of such systems, rather than the relationship between this form and use and the terms and means of design.

Along these lines, the design community, and particularly the participatory design community, has the professional, moral, and ethical responsibility to engage with the debate over AI and intelligent systems. As shapers of more democratic futures, however provisional, participatory design practitioners must weigh in. Given their technical complexity, these futures are often illegible to many people who are directly affected by and implicated in such systems. A future enriched and enabled by intelligent-yet-trustworthy systems requires the careful implementation of guidelines that govern the actions of those in charge of deciding what to design, how, why, and what data to feed into a given system. This future requires not only expert critique but also the involvement of those subject to its execution.

Though different in kind, the dilemmas presented by AI and intelligent systems are familiar to the strategies and theories of participatory design. By considering the relation between designed products, services, and systems and the societies that both produce and are produced by them, participatory design provides a domain and set of techniques that actively negotiate the use, bounds, and terms of AI and intelligent systems. As such, through participatory design, AI and intelligent systems stand to gain important form, function, and clarity.

GOALS

Within the above-described context, this workshop aims at identifying the roles that participatory design could play in the design of AI systems. The workshop provides a reflective and hands-on context to debate, prototype, and develop the place of participatory design in the creation of AI and intelligent systems. Possible questions to consider include:

  • What might a participatory AI design process look like?
  • How might we measure and evaluate the impact of such a process, especially given the variable temporalities at play (e.g. user interaction stories and touchpoints; product development and releases; news and PR cycles; economic forecasting/labor earnings; legal/regulatory processes; professional/career timelines)?
  • What are PD practitioners’ roles and responsibilities in this complex and challenging context?
  • What PD approaches may advance the design and development of AI systems that are unobtrusive, effective, accurate, respectful, intuitive, and transparent?
  • What participatory approaches may help address the described issues while enabling sustainable business models and technological development?
  • What is the role of PD in AI standards development?
  • What role can PD play in educational contexts to ensure future AI designers are grounded in human-centric values?

A deep discussion and reflection on the implications surrounding the design, development, and deployment of intelligent systems is urgent; we propose that within this context PD not only has much to offer, it has a responsibility to deeply engage. The workshop aims to bring together practitioners and researchers from diverse disciplinary backgrounds to discuss and reflect on the cultural, ethical, and social implications surrounding the design, development, and deployment of intelligent systems and to explore PD approaches, tools, and guidelines that should ground the design of intelligent and affective systems.

OUTCOMES

As a key outcome of this workshop, participants will design and construct an actionable PD Manifesto for AI Futures. The manifesto will be shared with conference attendees by displaying it in a walk-by location to be agreed on with the conference chairs. Additionally, the manifesto and key insights generated throughout the workshop will be uploaded to the workshop website and advertised through diverse channels. Finally, since some of the organizers have recently run workshops on AI futures in diverse contexts and with different stakeholders, we anticipate that key insights from these varied experiences may be articulated in a scholarly article, which will be shared with the community once published. Through this multi-pronged approach, we aim to advance the field’s understanding of how these dilemmas are seen and addressed across diverse disciplines.

APPROACH, AUDIENCE AND SCHEDULE

The workshop adopts a hands-on approach to enable participants to co-design and co-construct a PD Manifesto for AI Futures. From an attendance perspective, the minimum number of participants is 6 and the maximum is 20, with flexibility to accommodate a few more if there is substantial interest. Given its topic, objectives, and hands-on nature, the workshop works best as a full-day activity. The organizers have prior experience running workshops, including PDC workshops [17, 18, 19, 20]. Recruitment will rely on a custom website designed to publicize workshop activities, outline the steps necessary for participation, and offer the call for participation as well as useful resources. Additionally, we will advertise the workshop via design, social science, and HCI mailing lists, including PHDDesign, AnthroDesign, PDWorld, Engineering4Society, and EPICPEOPLE. Furthermore, we will reach out directly to researchers and practitioners in our organizations as well as professional networks and university departments that offer classes focused on AI, machine learning, anthropology, social science, ethics, and design.

In the call for participation, each participant will be asked to submit a short (1-2 pages) position paper that includes:

  • Outline of personal interest and experience in AI and its ethical dimensions;
  • Initial response to the provocations and themes outlined in the call; and
  • Perspectives on the role that PD could play in AI design and development.

We anticipate the following recruitment timeline:

  • March 20, 2018: launch website and start recruitment
  • May 15, 2018: position papers due
  • May 25, 2018: notification of acceptance to participants
  • August 2018: workshop takes place at PDC2018

Once workshop participants are confirmed, we will facilitate pre-workshop conversations among participants, enabling them to access each other’s position statements, through the workshop website. This will establish a sense of team and help participants feel more equipped to work collaboratively and engage in co-design activities during the workshop. Additionally, to ensure that debate and activities are grounded in prior thinking, participants will be equipped with pre-workshop materials, again through the workshop website.

The workshop will start with a facilitated discussion and sharing of all position statements. This first part will include a clustering activity to organize participants’ thinking and perspectives into thematic areas that may advance subsequent hands-on exercises. After this first part, facilitators will provide a range of design tools and provocations to engage participants in hands-on co-design activities throughout the rest of the day. Specifically, participants will first engage in a curated team activity focused on addressing challenges and provocations within a specific provided context (i.e., circulated case studies that each focus on one of five themes: home, travel, work, healthcare, and learning). Provocations include reflections on the role of PD, AI biases, the potential for AI-centric discrimination, and issues surrounding the design of collaborative human-machine interactions. This first activity is designed to collaboratively generate insights, identify gaps and opportunities, and advance each participant’s understanding of the workshop themes. The knowledge and learning accumulated during the first activity will then be used by participants to co-create a manifesto with specific actions to be considered when designing AI futures with a PD-centric approach. The manifesto is meant to act as a practical list of actions (e.g. must-do, should-do, must-avoid), offered to PD practitioners who operate in AI-centric environments.

A tentative workshop schedule includes:

  • 8.45-9.15am – Introductions, ice-breaker and settle in
  • 9.15-9.30am – Outline of workshop goals
  • 9.30-10.15am – Position statements overview
  • 10.15-10.30am – Break
  • 10.30-11.30am – Emerging key themes clustering
  • 11.30am-12pm – Intro to manifesto design activities
  • 12-1pm – Lunch
  • 1-3.30pm – Co-design activities and manifesto development
  • 3.30-4.30pm – Presentations and final discussion

WORKSHOP ORGANIZERS

Daria Loi (PhD) is Principal Engineer in Intel Labs. Her work focuses on the design and user experience of intelligent systems; multimodal, tangible and ambient computing; and smart spaces – with a specific passion for spaces that foster social connectedness.

Thomas Lodato (PhD) is Research Scientist at the Georgia Institute of Technology in the Institute for People and Technology (IPAT) and the Center for Urban Innovation. His research focuses on the future of work and smart cities as well as civic technology.

Christine T. Wolf (PhD) is a Research Staff Member at IBM Almaden Research Center. Primarily ethnographic in nature, her current work focuses on the incorporation of data analytics into organizational work practices.

Raphael Arar (MFA) is a Designer & Researcher at IBM Research, Adjunct Faculty at San Jose State University and Board Member of Leonardo/The International Society for the Arts, Sciences and Technology. His work focuses on the complexities of human-machine relationships.

Jeanette Blomberg (PhD) is Principal Research Staff Member at the IBM Almaden Research Center and Adjunct Professor at Roskilde University. Her research focuses on organizational analytics and the linkages between human action, digital data production, data analytics, and business or societal outcomes.


Links

URL: https://dl.acm.org/doi/10.1145/3210604.3210614

Wayback Machine: https://web.archive.org/web/20220504200553/https://dl.acm.org/doi/10.1145/3210604.3210614
