2020 - The AI Trust Manifesto for more inclusive artificial intelligence - Carolyn Herzog

From Dominios, públicos y acceso

Text

Artificial intelligence is already making key decisions in our lives, from a smartphone adjusting its lens to snap the ideal portrait to a vehicle performing an automated emergency stop. We therefore need methods to identify and place limits on bias in computer algorithms.

The importance of inclusive AI

New applications for AI are created every day – an exciting frontier for technologists. But new developments in AI have also illuminated a novel problem: human bias reproduced in computer algorithms. At scale, these biases could contribute to an increasingly lopsided world where the benefits of a modern, digital society are not inclusive. As the General Counsel and lead for AI ethics initiatives at Arm, a foundational processor IP company, I spend a great deal of time thinking about technology, good governance and how AI could and should impact humanity. For AI's full benefits to be realised, it must be built in an inclusive way and be trusted by everyone. Governments around the world have begun to explore these considerations, and the EU has even drawn up proposals for regulating AI in situations where there is a risk of harm.

AI Trust Manifesto

We’re calling for a vigorous industry-wide effort to take responsibility for a new set of ethical design principles through the establishment of an AI Trust Manifesto. A key principle of the manifesto states that every effort should be made to eliminate discriminatory bias when designing and developing AI decision systems.

Women are the world’s largest underrepresented group, which means we need inclusive teams of people – including women of diverse backgrounds and women of colour – involved in engineering AI. According to STEM Women, the UK saw little to no change in the percentage of women among engineering and technology graduates from 2015 to 2018: only 15% of graduates in those years were women. That raises an important consideration. AI is programmed to mimic human thought and reasoning, and if it is programmed by a non-diverse workforce, the resulting blind spots can seriously hinder widespread development and adoption of the technology. Facial recognition is one example: a system trained only on Caucasian faces could misidentify people from minority groups during facial recognition scans. It is widely acknowledged that careful curation of training data is crucial to keep discrimination and bias out of AI systems; data chosen carelessly can make a system’s decisions unfair or even illegal.

Engineering change

If we are to give machines the ability to make life-changing decisions, we must put in place structures that reveal the decision-making behind the outcomes, providing transparency and reassurance. Companies must take the lead by setting high standards, promoting trust and ensuring they maintain a diverse staff trained in AI ethics.

We must continue to explore different solutions to the complex issue of ethical AI decision-making. One possibility is a review process built around the key pillars of AI ethics, including bias and transparency, so that products and technologies receive appropriate approval for adherence to ethical standards before they reach the marketplace. Such a system would help consumers trust that a technology has been checked for bias and produced with fairness and inclusivity in mind.

To realise this vision, it is critical for girls, women and the wider technology industry to use their voices and networks to increase female participation in STEM and AI. In all its forms, AI has the potential to contribute to an unprecedented level of prosperity and productivity. To do that, it must be built on a foundation of trust with the diverse range of people for whom the technology is ultimately intended – including women.

Arm’s technology touches 70% of the world’s population and together with thousands of partners, we are bringing AI to processor technology in trillions of devices to make them smarter and more trustworthy. We believe the technology sector has a responsibility to ensure AI development follows certain principles for public trust.

OUR GUIDING OBJECTIVE: AI Must Be Ethical by Design

Arm believes that the technology sector has a responsibility to ensure that the development of AI is guided by certain principles in order to generate public trust in the underlying technology and its applications. Further, Arm believes that unless the technology sector gains this trust, it will fail to deliver the full potential that we are confident AI technology can bring to individuals, businesses and societies across the globe.

Arm would like to see the technology sector come together to create an ethical framework to ensure that AI is developed in a fair and responsible way. Without such a framework, there is a risk that regulation will become onerous and fragmented, and will not allow AI to succeed.

Creating the right ethical framework requires collaboration and dialogue. We have drawn up some principles – and some challenges – which we believe could help drive the debate. We welcome others to join us in this effort.

We believe that ethics should be incorporated in the key design principles for AI products, services and components. However, at present there is no defined set of ethics to follow, and so we call for the formation of an industry-wide working group to define and standardise a set of ethics that can be adopted by anyone deploying AI technologies.

Since ethics are so critical to AI, it is essential that anyone working in the field has a solid foundation in the issues. We call on all universities and colleges that teach AI to include mandatory courses on issues relevant to ethics in AI at undergraduate and graduate levels. Further, we believe that all businesses developing AI technologies must ensure that their staff complete mandatory professional training in the field of AI ethics.

Ethical Principles of Trust in AI Systems

There are many issues that must be addressed in the development of an ethical framework for AI that enhances trust. As a starting point, Arm proposes the following principles.

1/ WE BELIEVE ALL AI SYSTEMS SHOULD EMPLOY STATE OF THE ART SECURITY

Given the risk of cyberattacks on critical AI systems causing major disruption, all AI deployments should take advantage of task-specific advanced hardware and software security. Only by ensuring an end-to-end chain of security can we truly trust AI technology and the data, actions and insights that the technology will create.

2/ EVERY EFFORT SHOULD BE MADE TO ELIMINATE DISCRIMINATORY BIAS IN DESIGNING AND DEVELOPING AI DECISION SYSTEMS

The technology sector is global, and the AI industry, in turn, serves a community with diverse customs, values and perceptions of what is ethical. Careful use of training data is widely recognised as crucial to keeping discrimination and bias out of AI systems; carelessly chosen data can make their use unfair or even illegal. Concerns about this are already playing a role in debates about using AI in areas as diverse as criminal justice and facial recognition technology. Standards must be developed both to assess the quality of training data and to enable traceability, so that systems can be linked to the data sets on which they were trained.
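By way of illustration only – this sketch is not part of the manifesto – one simple form such an assessment could take is an audit that compares a decision system's error rates across demographic groups. The group names, toy records and the 5% tolerance threshold below are all hypothetical assumptions for the sketch:

```python
from collections import defaultdict

def per_group_error_rates(records):
    """Compute a classifier's error rate for each demographic group.

    `records` is a list of (group, predicted_label, true_label) tuples,
    where `group` is whatever protected attribute is being audited.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparity(rates, tolerance=0.05):
    """Flag the audit if any two groups' error rates differ by more than
    `tolerance` (a hypothetical threshold a standards body might set)."""
    spread = max(rates.values()) - min(rates.values())
    return spread > tolerance

# Toy audit data: (group, predicted, actual)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
rates = per_group_error_rates(records)
print(rates)                  # group_a errs 0% of the time, group_b 50%
print(flag_disparity(rates))  # True: the gap far exceeds the tolerance
```

A real standard would of course need to define which error metric matters (false positives and false negatives often carry very different harms), but even a check this simple makes disparities visible rather than hidden.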

3/ WE BELIEVE AI SHOULD BE CAPABLE OF EXPLAINING ITSELF AS MUCH AS POSSIBLE: WE URGE FURTHER EFFORT TO DEVELOP TECHNOLOGICAL APPROACHES TO HELP AI SYSTEMS RECORD AND EXPLAIN THEIR RESULTS

Where appropriate, the way an AI system works should be transparent, and the decisions that result from it should be explainable to a human interrogator, including non-specialist users of AI.
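As one illustration of what "recording results" might look like in practice – a sketch, not a prescription from the manifesto – a decision system could append each outcome to a tamper-evident audit log so that a human interrogator can later inspect what was decided and why. The field names, model identifier and hashing scheme here are assumptions made for the example:

```python
import datetime
import hashlib
import json

def record_decision(log, model_version, inputs, output, rationale):
    """Append one AI decision to an audit log, with enough context
    for a human reviewer to reconstruct what happened."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # ties the decision to a traceable model build
        "inputs": inputs,
        "output": output,
        "rationale": rationale,  # human-readable explanation supplied by the system
    }
    # A digest over the entry's contents makes later tampering detectable.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(entry)
    return entry

audit_log = []
record_decision(
    audit_log,
    model_version="loan-model-0.3",  # hypothetical model identifier
    inputs={"income": 42000, "term_months": 36},
    output="approved",
    rationale="score 0.81 exceeded approval threshold 0.75",
)
print(audit_log[0]["digest"])  # 64-hex-character SHA-256 digest of the entry
```

Logging a free-text rationale is the simplest possible approach; richer explanation techniques exist, but even this minimal record gives a non-specialist something concrete to interrogate.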

4/ USERS OF AI SYSTEMS HAVE A RIGHT TO KNOW WHO IS RESPONSIBLE FOR THE CONSEQUENCES OF AI DECISION MAKING

In the future, AI systems will make safety-critical decisions, for example in autonomous vehicles or robotic surgery. The industry needs to work with standards and regulatory bodies to develop commonly accepted frameworks for mapping liability where it is a concern.

5/ HUMAN SAFETY MUST BE THE PRIMARY CONSIDERATION IN THE DESIGN OF ANY AI SYSTEM

In general, where human harm is possible, AI technologies should not be deployed unless they can demonstrate that they operate as well as, if not better than, humans.

6/ WE WILL SUPPORT EFFORTS TO RETRAIN PEOPLE FROM ALL BACKGROUNDS TO DEVELOP THE SKILLS NEEDED FOR AN AI WORLD

Opinions differ as to the likely impact of AI on jobs, but most agree that the nature of work is going to change, and the tech sector cannot be complacent. All stakeholders should look afresh at the skills training provided in schools, colleges and elsewhere, and help training providers respond quickly to the demand for new skills. This must include training people in appropriate skills across the whole of society.

Context

Authors

Sources

Links

URL:
https://www.glam-readytolead.com/post/the-ai-trust-manifesto-for-more-inclusive-artificial-intelligence
https://www.womeninstem.co.uk/breaking-stereotypes/why-ai-needs-female-developers/

Wayback Machine:
http://web.archive.org/web/20210419162352/https://www.womeninstem.co.uk/breaking-stereotypes/why-ai-needs-female-developers/#
http://web.archive.org/web/20210417075904/https://www.glam-readytolead.com/post/the-ai-trust-manifesto-for-more-inclusive-artificial-intelligence