Difference between revisions of «2013 - Data and Control--A Digital Manifesto - Wolfgang Pietsch»

Revision as of 00:58, 31 March 2022

Text

This is a plea for democratic supervision and regulation of the large data sets that are currently being collected all over the digital world — a plea driven not by fears for the privacy of the individual but by worries that a privileged knowledge of the mechanics governing the social world could allow for a one-sided and largely unrecognized control of the masses.

In the nineteenth century, in an age of unprecedented and seemingly limitless technological and scientific progress, scientists had a dream — a vision of a social science as systematic and powerful as the most developed of the natural sciences, a science of man modeled after physics. Like celestial mechanics, such a social science could be reduced to a few principles from which, under sufficient knowledge of the boundary conditions, everything else becomes derivable. It would allow for the prediction of the future course of the major actors in society just as celestial mechanics allows for the determination of the trajectory of Venus or Mars. The dream was never realized.

With obvious frustration, methodologists of the nineteenth and twentieth centuries, great and penetrating minds like the philosopher and economist John Stuart Mill, blamed this failure on the complexity of the social world and the fact that social phenomena do not fit neatly into a laboratory and are thus exempt from systematic experimentation. The social scientist cannot just exchange a democracy for a dictatorship, run history again, and record the effects of the change. Observing the fate of man in society rarely leads to causal knowledge allowing for prediction and control over social phenomena. Data collected in the social world can only disclose correlations — of which no one can be sure if and where they will hold again — not causal knowledge geared toward prediction and control. Or so it seemed.

Then came the digital age. In a world of big data and increasingly detailed recordings of the boundary conditions under which social phenomena happen, the known islands of causal knowledge are becoming larger, and the sea of ignorance retracts. Almost certainly, these islands will become continents one not-so-distant day. This is not to claim that everything in the social sphere will become predictable, but prediction and control will be possible in ever more areas. After all, we are not dealing with a problem of principle; it is just a matter of quantity of data and of processing power. Complex phenomena require extensive experimentation; incredibly complex problems require incredibly large data sets. For a scientist in the nineteenth century, it was impossible to imagine how mankind could ever process or even possess such data sets. But it is no longer so.

It was beyond the imagination of the social scientist of the nineteenth century that one day the world would be populated by little machines equipped with various kinds of sensors and instruments, autonomously collecting information to transfer it to central archives, where the data would be analyzed and scanned for reliable relations. Our computers and laptops are such machines. And as the digital age rises further, the world will become ever more populated with sensing objects — phones, cars, screens, lampposts — that record an increasing variety of information. As the network of these parameters grows tighter around our daily lives, we are becoming, of necessity, increasingly predictable and controllable by those in possession of these data sets. Crucially, the data need not be personalized to a name; it suffices to know the relevant classes to which a user belongs. In this respect, most traditional privacy debates go wrong.

The social physics of the digital age turns out a creature much different from the one imagined by our scientific forebears. It is not at all a celestial mechanics, not a neat and tidy construct, a hierarchical structure with a small number of fundamental principles. The social mechanics that is being recorded in the bodies of humming server farms is not even a coherent, connected structure but a pluralistic entity, a patchy cluster of loosely related laws, arbitrarily overlapping, sometimes contradictory or even fully disconnected. More often than not, these laws hold only locally within a highly restricted range of boundary conditions; universal laws of sufficient generality are almost entirely lacking. It is thus not a scientific theory that fits into textbooks; it is an overwhelming collection of phenomenological data sets that can only live inside the large data storages of the computer era. It is a scientific theory that is never to leave its digital cage. But even though this creature is not what most scientists expected, the theory is there and growing from day to day. And as it is getting more powerful we need to deal with it. The beast must be tamed.

Two kinds of control, two systems of laws, govern the social world. One of them is well documented in the codes of law that were built up over centuries and millennia of human coexistence in complex social and political environments. These normative laws are largely fixed by convention delimiting the freedom of the individual in his or her social interactions. They can be consciously broken by accepting the consequences laid down in law books — fines, prison sentences, or even death. Our modern democratic societies have developed a highly complex and well-regulated process for how these normative laws can be implemented and changed.

The other kind of laws is much less visible; they are veiled behind the illusion of unlimited human freedom and a mental reconstruction of the human condition, according to which all our actions are a result of rational and conscious decisions. Psychologists have long pointed out the fallaciousness of this view, but some mysterious preprogramming of our minds prevents us from internalizing this fact. This other kind of laws much resembles the causal relations known from the natural sciences. Until the digital age, no considerable body of these laws was known, only some rather isolated social and psychological facts, exploited mainly by politicians and advertising companies. These laws were often weak and unreliable and in little need of regulation. With the digital age, the situation is changing; the body of known reliable causal laws is constantly growing. Unlike the normative laws, which largely concern situations in which we decide consciously, the causal laws govern the unconscious side of our thoughts and actions. Whoever knows these laws can control us, often without being noticed, while in hindsight we may even construct a fabricated story of conscious and deliberate decision making. The dangers are obvious.

A colleague of mine recently complained about baby strollers popping up on his computer screen and distracting him from the work he was supposed to be doing. Of course, he has a young child, and he was recently on the lookout for strollers on the Internet. Almost certainly, these are just the clumsy beginnings, some indication of how the direction of your thoughts and steps may be controlled in a future world by placing triggers that increase your probabilities to think and behave in certain ways. And as the boundary between the digital and the physical dissolves, so does the boundary between advertising and unbranded reality. We are facing nothing short of the total commercialization of the social world. Gone are the times when publicity was forced into frames from which you could look away or which you could even switch off, when publicity was spatiotemporally constrained. Large Internet companies like Google or Yahoo use their causal knowledge to more or less unrecognizably steer your ways through the Internet. No matter how strong-willed and self-conscious you are, they can increase the probabilities that your stroll through the digital world takes certain turns — turns tacitly intended by these firms and their clients. The money that currently goes into the Internet advertising business is proof enough that the principles work. What will happen when this knowledge is used for other things beyond making money? What damages are already inflicted on society by the commercialization of ever more areas of our social lives, including the intrusion of advertising into our working environments? What will happen when the Internet and its principles extend to the object world, when the Internet of Things is realized?

There are no easy answers; all wholesale solutions are inappropriate. Open data leads to uncontrollable anarchy, while the current laissez-faire attitude, leaving Internet firms to handle data at their will, leads to a knowledge oligarchy dangerously dominated by an arbitrary lot of a few large companies. We need a debate.


Context

Appears at https://www.digitalmanifesto.net/manifestos/138/

Authors

Sources

Links

URL: https://read.dukeupress.edu/public-culture/search-results?page=1&q=Data%20and%20Control--A%20Digital%20Manifesto&fl_SiteID=1000071

Wayback Machine: