2019 - The AI Manifesto - Malcolm Harkins


Text

We live in a time of rapid technological change, where nearly every aspect of our lives now relies on devices that compute and connect. The resulting exponential increase in the use of cyber-physical systems has transformed industry, government, and commerce; what’s more, the speed of innovation shows no signs of slowing down, particularly as the revolution in artificial intelligence (AI) stands to transform daily life even further through increasingly powerful tools for data analysis, prediction, security, and automation.1

Like past waves of extreme innovation, as this one crests, debates over ethical usage and privacy controls are likely to proliferate. So far, the intersection of AI and society has brought its own unique set of ethical challenges, some of which have been anticipated and discussed for many years, while others are just beginning to come to light.

For example, academics and science fiction authors alike have long pondered the ethical implications of hyper-intelligent machines, but it’s only recently that we’ve seen real-world problems start to surface, like social bias in automated decision-making tools, or the ethical choices made by self-driving cars.
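The social-bias problem mentioned above can be made concrete with a simple fairness audit. The sketch below is purely illustrative: the decision data, group labels, and the "four-fifths" 0.8 threshold are assumptions for the example, not anything prescribed by the article. It computes per-group selection rates for an automated decision tool and flags groups whose rate falls far below a reference group's.

```python
# Minimal sketch of auditing an automated decision tool for group bias.
# All data and the 0.8 rule-of-thumb threshold are illustrative assumptions.

def selection_rates(decisions):
    """Fraction of positive (1) decisions per group."""
    return {group: sum(d) / len(d) for group, d in decisions.items()}

def disparate_impact_ratios(decisions, reference_group):
    """Each group's selection rate divided by the reference group's rate."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved -> rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved -> rate 0.375
}

ratios = disparate_impact_ratios(decisions, "group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
# group_b's ratio is 0.375 / 0.75 = 0.5, below the 0.8 threshold,
# so this tool would be flagged for further human review.
```

A real audit would of course need far more than a ratio check, but even this toy version shows how an opaque model's outputs can be examined externally for disparate impact without access to its internals.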

During the past two decades, the security community has increasingly turned to AI and the power of machine learning (ML) to reap many technological benefits, but those advances have forced security practitioners to navigate a proportional number of risks and ethical dilemmas along the way. As the leader in the development of AI and ML for cybersecurity, BlackBerry Cylance is at the heart of the debate and is passionate about advancing the use of AI for good. From this vantage point, we’ve been able to keep a close watch on AI’s technical progression while simultaneously observing the broader social impact of AI from a risk professional’s perspective.

We believe that the cyber-risk community and AI practitioners bear the responsibility to continually assess the human implications of AI use, both at large and within security protocols, and that together, we must find ways to build ethical considerations into all AI-based products and systems. This article outlines some of these early ethical dimensions of AI and offers guidance for our own work and that of other AI practitioners.

The Ethics of Computer-Based Decisions

The largest sources of concern over the practical use of AI typically center on the possibility of machines failing at the tasks they are given. The consequences of failure are trivial when that task is playing chess, but the stakes mount when AI is tasked with, say, driving a car or flying a jumbo jet carrying 500 passengers.

In some ways, these risks of failure are no different from those in established technologies that rely on human decision-making to operate. However, the complexity and the perceived lack of transparency that underlie the ways AI makes its decisions heighten concerns over AI-run systems, because they appear harder to predict and understand. Additionally, the relatively short time that this technology has been in widespread use, coupled with a lack of public understanding of how, exactly, these AI-powered systems operate, adds to the fear factor.


Context

Authors

Sources

Links

URL: Wayback Machine: