Artificial Intelligence, Process, and Narrative in "Time Machines"

Abstract
While today there is much discussion about the ethics of artificial intelligence (AI), less work has been done on the philosophical nature of AI. Drawing on Bergson and Ricoeur, this paper proposes to use the concepts of time, process, and narrative to conceptualize AI and its normatively relevant impact on human lives and society. Distinguishing between a number of different ways in which AI and time are related, the paper explores what it means to understand AI as narrative, as process, or as the emergent outcome of processes and narratives. It pays particular attention to what it calls the "narrator" and "time machine" roles of AI and the normative implications of these roles. It argues that AI processes and narratives shape our time and link past, present, and future in particular ways that are ethically and politically significant.


Introduction

Artificial intelligence (AI) and data science, in particular machine learning and its contemporary applications that enable automated classification, prediction, decision-making, and manipulation in many domains of human activity and society, have sparked much controversy during the past decade: there are many concerns about the (potential) ethical and societal impact of AI. In philosophy and related academic fields, this has been accompanied by a wave of publications on the ethical and political aspects of AI (Bostrom, 2014; Boddington, 2017; Eubanks, 2018; Benjamin, 2019; Zuboff, 2019; Danaher, 2019; Couldry & Mejias, 2019; Coeckelbergh, 2020; Véliz, 2020; Berberich et al., 2020; Mohamed et al., 2020; Bartoletti, 2020; Crawford, 2021; Santoni de Sio & Mecacci, 2021; Cowls, 2021).
However, less attention has been paid to the philosophical nature of AI. What are we talking about, if anything, and how can it best be conceptualized?
One way to conceptualize AI and data science, and to understand how they impact our lives and societies, is to focus on time and related concepts from the philosophical tradition such as process and narrative. The topic of time and technology has already received some discussion in philosophy of technology, for example in the Heideggerian tradition (Hui, 2012, 2017; Stiegler, 1998), in analytic philosophy of robotics (Seibt, 2017), and earlier in the work of Virilio (1977) and Mumford (1934), and narrative approaches to technology have been proposed based on Ricoeur (Kaplan, 2006; Reijers & Coeckelbergh, 2020; Reijers et al., 2021). Yet more work is needed on concrete contemporary technologies such as AI, and on the precise relations between the concepts of AI, time, process, and narrative.

This paper addresses this need by (1) exploring what it means to take a process approach to AI, (2) introducing a distinction between three different ways in which AI and time are related, and (3) showing how each of these conceptualizations has normative implications for how we evaluate the impact of AI on our lives and societies. One of the relations and conceptualizations I will propose concerns AI's role as a "time machine": I will argue that via classification, prediction, and recommendation, AI links past, present, and future in particular ways, which has normatively significant consequences. Moreover, throughout the paper I will explore what it means to see AI not as a thing or agent, but as process, narrative, and emergent outcome of processes and narratives.

I will conclude that this process-oriented and hermeneutic approach also has implications for how we understand the human and for the way we (should) do ethics and politics of AI.

AI as Object Versus AI as Process and Narrative


We are used to thinking of technologies in terms of objects. When we think of technology, we imagine material objects such as a hammer or immaterial objects such as data and software. When we think of AI, then, we imagine a computer, a robot, software, a self-driving car, or other things.

This way of seeing AI, which is in line with a long-standing tradition in Western metaphysics since Plato that sees the world as a collection of objects or substances, is reflected in philosophy of technology and has not changed in contemporary times. On the contrary, after the so-called empirical turn (Achterhuis, 2001), philosophers of technology focus on artefacts, for example in the tradition of postphenomenology: what matters is what 'things' do (Verbeek, 2005). Information and informational technologies have also been conceptualized in terms of objects. For example, Floridi has argued that the world is the totality of informational objects dynamically interacting with each other (Floridi, 2014).

However, in Western metaphysics we also find another tradition, process philosophy, according to which the world is not a collection of objects but a process of becoming (rather than being). This tradition finds inspiration in Heraclitus' doctrine of radical flux (consider the famous slogan panta rhei: everything flows), was developed further in German idealism (Hegel) and pragmatism (James, Dewey, Mead, Peirce), and is also to some extent present in Heidegger, but it found its most famous elaboration in the process philosophy of Bergson and Whitehead, which in turn influenced contemporary philosophers such as Deleuze, Simondon, and Latour.
The French philosopher Henri Bergson argued that individual intelligence emerged in a process of evolution that expresses a life force (élan vital) (Bergson, 1907). He used the term 'duration' (durée) to talk about time: already in his doctoral dissertation and in his debate with Einstein, he distinguished between time as we experience it (lived time) and the time of science, which conceives of time in terms of discrete, spatial constructs (Bergson, 1889). Bergson argued against what he saw as Kant's mistake of thinking time in terms of space, and emphasized intuition (Bergson, 1896) rather than formal knowledge and its conditions. However, duration is not just subjective, psychologically experienced time (as Einstein thought). Duration is something real, which can be experienced or transformed into something spatial. Thus, according to Bergson, there is not first objective time and then our experience of that objective time. Bergson attempted to go beyond the subject-object divide of Cartesian and Kantian philosophy: time cannot be isolated from our experience of it and, more generally, from 'the living structures in which it manifests itself' (Landeweerd, 2021, 26–28). What we call objective time is produced by our instruments: by technology. Physicists such as Einstein misleadingly make metaphysics out of this time-making. But the only metaphysics we need, Bergson argued, is one that recognizes duration and stresses emergence.
The English philosopher Alfred North Whitehead, who taught at Harvard, used the term 'process': actual existence is a process of becoming (Whitehead, 1929). Whereas Western philosophy has traditionally privileged being over becoming, process philosophy reverses this. Moreover, like Bergson, Whitehead wanted to go beyond the subject-object divide: he sought to fuse the objective world of facts with the subjective world of values. In his process metaphysics, entities and experience are both part of becoming.

What would it mean to conceive of AI as process, lived experience, and becoming? This is difficult to imagine, since we are used to seeing and imagining AI as a some-thing, a substance. For example, we may observe the results of a statistical model (which we see as a thing), see a computer with AI software, or imagine a car that is driven by an AI system. This way of perceiving AI already bifurcates the world into a perceiver and some-thing that is perceived. We may also observe AI at different times. Time is then constructed in a spatial way: as a succession of discrete moments. At time t1 "the AI" does x, at time t2 "the AI" does y, and so on. This scientific way of understanding AI can be contrasted with our personal experience of the technology. For example, driving in a self-driving car — or rather being driven by an AI system — can be experienced as a flow, rather than a succession of distinct moments. Seen from a traditional modern view, these different concepts of time clash: there is a gap between the technology and the lifeworld, between objective and subjective knowledge. However, with process philosophy, we can radically question the metaphysical basis for such a gap, and see AI, and how we relate to and experience AI, as a process rather than an object, and more specifically a process that fuses "objective" and "subjective" elements, science and lifeworld, scientific time and lived time. In the next section, I will further show what this means.

Moreover, if AI becomes part of the lifeworld at all, this is always mediated by our (human) interpretation and narration. This leads us to another interesting tradition in philosophy that is concerned with time and experience: hermeneutics, and in particular hermeneutics that focuses on time and narrative.

Here Paul Ricoeur's narrative theory, which goes back to Aristotle, is most relevant. Ricoeur relates temporality to narrativity: our understanding of time and the world is mediated by language, in particular by narrative (Ricoeur, 1980, 1983). Ricoeur, too, is interested in bringing together different ways of relating to time, but here it is narrative rather than process that does the work: he argues that 'time becomes human to the extent that it is articulated through a narrative mode' (1983, 52). This takes the form of emplotment. Inspired by Aristotle's Poetics, he describes how the plot of a narrative configures characters, motivations, and events into a meaningful whole.

Recently, Ricoeur’s theory of narrativity has inspired work in philosophy of technology. Kaplan (2006) already argued that, while Ricoeur’s own view of technology is not so interesting since it belongs to a tradition that equates technology with domination, his work on narrative can nevertheless be used in philosophy of technology: technology, like a text, gains meaning from a background (use-context) and is open to multiple interpretations (49). 

Technologies also figure into our lives in different ways; we can tell stories about this (50). Emphasizing the more active narrating role of technology, Reijers and Coeckelbergh (2020) have used Ricoeur’s narrative theory to show how technologies, similar to texts, have the capacity to tell stories and configure our lifeworld. Based on Ricoeur and in dialogue with the virtue ethics tradition, they have proposed a hermeneutic approach to ethics of technology that combines a focus on practices with narrativity theory: the way technology configures practices is linked to narratives in a community and to normative ideals.

What does this hermeneutic approach mean for AI? In what way could AI be like a text, or tell stories? And what does it mean when AI configures our lifeworld and is linked to broader narratives and norms?



One example of how to do this, which also shares the aim of trying to close the gap between technology and lifeworld, is offered by Keymolen (2021), who uses Ricoeur to analyse the AlphaGo AI as a story. By engaging with the story, she argues, we can interact with its power even before it is part of our everyday life (252). In her analysis of a documentary about AlphaGo, she shows how the game Go pre-structures the story of the AI and gives specific roles to the participants, how the story is embedded in a history of games between humans and technologies (259), and how 'tragic-like emplotment' plays a role, giving us an aesthetic experience (261–262). Explainability, then, is not just about opening the black box (giving technical insight into AI) but also about helping people to interpret and gain access to AI in a narrative, interpretative way. In other words, AI is not only a technology but also a story. This enables us to critically examine AI from a hermeneutic point of view and connect it to (other) stories available in our culture, which give meaning to AI. It also helps us to analyse the structure of the stories (in this case a game structure).

Another example is the use of AI in contexts and practices of medical diagnosis. AI, by offering a diagnosis based on image recognition, can be understood as shaping the narrative of patients and doctors, giving them roles in a story about images and probabilities and influencing meaning making in this particular context. One could also understand this as a re-shaping of a medical practice, which is itself linked to narratives in a particular medical community or patient community, and which is subject to ethical and political norms and ideals (as for example materialized in medical ethics codes and laws).

Having explored some ways in which AI can be seen in process and narrative terms and having sketched the theoretical background of this paper, let me now propose a more general conceptual framework for thinking about the relations between AI and time, which combines some insights from process philosophy and narrative theory, and which also has normative implications.





Three Relations Between AI and Time, Described by Using the Concepts of Process and Narrative


Let me distinguish three relations between AI and time, which I name as follows:

1. The time of AI
2. AI in time
3. AI-time

The Time of AI: Narratives About AI

The first relation, titled “the time of AI”, concerns the narratives we tell about AI. This can take place at a “macro” level. Consider for example the transhumanist accelerationist and Singularity narrative (Bostrom, 2014; Kurzweil, 2005; Moravec, 1988), according to which we are on the way towards a situation in which superintelligence surpasses human intelligence, takes over control, and spreads into the universe, or the Marxist history of AI-capitalism, which predicts a future without humans (Dyer-Witheford et al., 2019). 

AI is then not just a technology but also a story: a story about civilization, about a particular society, about capitalism. AI is not only developed and used, but also narrated and interpreted. This can be done by putting AI in the light of these macro narratives. But narrating AI can also be done at the "micro" level: for my life. It can take the form of a particular story about me and AI. For example, someone could tell a personal story of how an AI did not recognize her, that this was really offensive to her, and that it shows how racist her society is — which in turn may lead to telling other stories, for example stories of discrimination in the context of police intervention. In both cases, the meaning of AI takes shape and evolves against a background that helps to give it meaning. To pick up the Wittgensteinian term Ricoeur draws attention to in his article on Wittgenstein and Husserl (Ricoeur, 2014): like a text, the narration, construction, and interpretation of AI take place against the background of a 'form of life', which helps to give meaning to AI and which is in turn constituted by the specific meaning-giving activities of people — concerning AI and otherwise.


Again this means that AI is not just a “thing” (although it certainly has material aspects — see for example Crawford, 2021), but also a particular narrative or collection of narratives, which are linked to other narratives. The stories are not just “about” AI, as if AI were a fixed thing that is not influenced by the stories we tell about it.
 By telling stories about AI, humans also constitute what AI “is,” or rather, what it becomes as a result of the stories. This is literally the case, in the sense that for example transhumanist narratives might influence the actual research and development of AI. 
But if we consider AI as use and practice, narratives about AI also shape how we interact with AI, how we think about AI, how we talk about and to AI, and so on. One could also say that there is no AI-in-itself (to use a Kantian phrase); what AI becomes is shaped by narratives: the narrative about the particular AI, but also “grand narratives” about the history and future of humanity.

Consider for example Harari's (2015) narrative that re-tells the history of humankind in a way that predicts a future in which humans will be obsolete when intelligent machines and "Dataism" take over. Harari's narrative is a contribution to the mentioned transhumanist grand narrative about humans who enhance themselves and create intelligent machines, which eventually leads to a so-called Singularity and intelligence explosion: new, artificially intelligent entities take over and spread into the cosmos.

No need for humans, with their limited intelligence and limited data processing capacities. Specific AI technologies and technological events (so-called "breakthroughs", for example, or demonstrations of AI's power as in the case of AlphaGo, GPT-3, or Wu Dao 2.0) are then seen in the light of this grand narrative: as steps on the way towards the Singularity. A narrative about a particular AI is understood as part of a larger narrative, which gives meaning to particular technological breakthroughs.

Note also that these narratives are structured in a particular way. The Singularity and intelligence acceleration and explosion narrative, for example, borrows its “grammar” from Moore’s Law, which concerns the rate of growth of computer hardware and thereby speed: the observation that the number of transistors in an integrated circuit doubles every two years. 

The grand narrative of the future of humanity and the universe is structured by this: the larger narrative is itself based on a “smaller” narrative about hardware development and computer power, which is a very specific way of perceiving/constructing time and AI.
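To make the structure of this borrowed "grammar" explicit, here is a minimal sketch of the exponential form behind Moore's Law. The doubling period of exactly two years is an idealization, and the starting point (the Intel 4004's roughly 2,300 transistors in 1971) is used purely for illustration:

```python
# Minimal sketch of the exponential "grammar" of Moore's Law.
# Assumes an idealized doubling period of exactly two years;
# real transistor counts only roughly follow this curve.

def transistors(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`, starting from `n0`."""
    return n0 * 2 ** (years / doubling_period)

# Example: starting from ~2,300 transistors (Intel 4004, 1971),
# fifty years of ideal doubling gives roughly 77 billion, the order
# of magnitude of today's largest chips.
print(f"{transistors(2_300, 50):,.0f}")
```

The point is not the numbers themselves but the narrative shape they impose: a curve that always rises, against which every "breakthrough" can be plotted as a step on the way.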


AI in Time: AI and Data Science as Process

The second relation, titled “AI in time”, is again about AI as a process, rather than a thing, and refers to the use and development processes that take place in time, for example AI as data science process. “Time” here can refer to two different times in which AI takes place or develops: scientific-objective time and the time of the lifeworld, the lived time Bergson conceptualized as durée. 

In process, however, the two kinds of time merge: AI in time is then both measured/controlled and lived. It is a duration that is both experienced by humans (lived) and rendered "objective" and produced by measurements, technologies, and management techniques. The best way to understand how this happens is to consider data science processes. These processes have various steps, such as data collection, data analysis, modelling, and so on. This way of perceiving/constructing the data science process belongs to what twentieth-century philosophers would call "objective" time or modern-scientific time. It is about management and control. The steps divide up time in a way that renders it spatial. The different steps are different boxes, marking discrete chunks of time. But every step involves humans, who experience, act, and interpret. There is not only the time as shaped by the technological and scientific process; there is also human experience, and the human experience of time. In data science as a practice, both the technological-scientific process and lived time are at work. Conceptually, both kinds of time can and must be distinguished. But in process and in practice they combine.
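To make this concrete, here is a minimal, hypothetical sketch of a data science pipeline as a sequence of discrete, clock-stamped steps. The step names and toy data are invented for illustration and do not correspond to any particular system:

```python
# Hypothetical sketch: a data science process as discrete, named steps.
# Each step is stamped with "objective" clock time, mirroring how such
# processes render time spatial: a row of boxes (t1, t2, t3, ...).

from datetime import datetime, timezone

def collect() -> list[int]:
    """Step 1: data collection (stubbed with toy data)."""
    return [3, 1, 4, 1, 5]

def analyse(data: list[int]) -> float:
    """Step 2: data analysis (here simply the mean)."""
    return sum(data) / len(data)

def fit(mean: float) -> dict:
    """Step 3: modelling (stubbed as a trivial 'model')."""
    return {"predict": mean}

log = []
state = None
for name, step in [("collect", collect), ("analyse", analyse), ("fit", fit)]:
    state = step() if state is None else step(state)
    log.append((name, datetime.now(timezone.utc)))  # discrete time "boxes"

for name, t in log:
    print(f"{name} @ {t.isoformat()}")
```

What such a log cannot capture, of course, is the lived time of the people doing the collecting, analysing, and modelling; that is exactly the conceptual distinction drawn above.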

What AI "is", then, is this process, or even the outcome of the process. It is impossible to say what AI "is" a priori, before or outside the process. AI and the data science processes are connected. AI cannot be "lifted out" of time, and neither can it be disentangled from what humans do and experience. The process can be described in spatial terms (in terms of steps), but knowledge of the process is always at the same time lived. Moreover, at the same time AI also leads to the emergence of human subjects: the measurer and controller are an outcome of the measurement and control process. The data scientist is shaped by the data science process.

AI-Time: How AI Shapes Our Time and Functions as a Time Machine

The third relation, titled “AI-time”, differs from the previous ones in at least two ways:

First, here AI is neither a thing nor an object, nor just (part of) a narrative or process, but becomes itself a meaning-maker, a writer of the story: AI becomes a co-narrator, rather than just an object or even an actor in the story. In this role of narrator, AI shapes our time and is a "time machine", which links and shapes past, present, and future.

What does that mean? Let me unpack this “time machine” and more active “narrator” role. 

First, by making classifications based on historical data, AI processes may fix us in the past, thus shaping particular presents and futures. For example, if historical data from job interviews are used to train a hiring algorithm, then past ways of thinking — including potential bias — will shape present hiring and thus the future of the company and the story of the people who are (not) hired (a toy sketch of this mechanism follows below). Second, by means of prediction, which then influences human action, AI processes shape the present and future. For example, if AI predicts that there will be more crime in a particular area, then police forces may focus their activities there and prevent more crimes in that area, which changes the present and future. AI then creates a self-defeating prophecy. Third, by making decisions and by manipulating people, AI shapes the present and future. As a decision-maker, AI becomes a character in the story. As a manipulator, it becomes a co-narrator of the story. For example, if it takes on the role of a judge deciding about parole, it is an actor in the story (an automated judge), it co-writes the story of a particular prisoner, and, in the end, it shapes the history of the decisions of that particular court. This in turn feeds into future uses of AI. And if AI is used for manipulating people, for example for nudging them to buy certain products by making recommendations based on their statistical profile (consider for instance Amazon), then it changes the story of that particular consumer.
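Here is the toy sketch announced above: a "model" trained on historical hiring decisions reproduces the past pattern in present decisions, which in turn become the training data of the future. The groups, data, and decision rule are deliberately simplistic inventions; no real system works this crudely, but the temporal loop is the same:

```python
# Toy sketch of AI as a "time machine" in hiring: past decisions shape
# present decisions, which then feed the training data of the future.
# Groups, data, and the decision rule are invented for illustration.

def train(history: list[tuple[str, bool]]) -> dict[str, float]:
    """'Train' by computing the historical hire rate per group."""
    rates: dict[str, list[int]] = {}
    for group, hired in history:
        rates.setdefault(group, []).append(int(hired))
    return {g: sum(v) / len(v) for g, v in rates.items()}

def decide(model: dict[str, float], group: str) -> bool:
    """Hire only if the group's historical hire rate exceeds 0.5."""
    return model.get(group, 0.0) > 0.5

# Past (possibly biased) decisions...
history = [("A", True), ("A", True), ("B", False), ("B", True), ("B", False)]
model = train(history)

# ...fix the present: group B is rejected regardless of individual merit,
# and these new decisions become tomorrow's training history.
for group in ("A", "B"):
    outcome = decide(model, group)
    history.append((group, outcome))
    print(group, "hired" if outcome else "rejected")
```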

AI does not play this role as a “thing”, but rather as (1) a process that is structured in particular ways and as (2) a narrator of human lives. While AI is not a narrator in the same way as human beings can be narrators, since AI is not an intentional and sensemaking being (and indeed no being at all but rather a process), AI co-shapes our narratives. Given the inherent structure and functions of AI and data science as processes and narrators, AI works as a time machine. If we only use terms such as “artefacts”, “objects”, and “things”, we cannot conceptualize this: we need a process-oriented and narrative approach, which relates AI to humans and their activities, structures, and culture.

Second, however, if we further radicalize this approach in a process philosophy direction, there is no longer an opposition between AI and humans as fixed relata in processes and stories; instead, they emerge from the process itself. 

I already suggested this with regard to the "AI in time" relation, but here it becomes even clearer. First there is the story, the process, the relation. We do not start with fixed entities; what we call "AI" and "humans" emerge from the process: they become. For example, in the manipulation case, what "AI" is becomes clear in and through the data science process that leads to the manipulation; it cannot be defined separately from that process and is the result rather than an ingredient or tool. It is the cake, not (just) the mixer.
Similarly, the human in this process and story is not fixed from the beginning but becomes what she is through the process and by having received a role in and through the story: she starts with the idea that she is an autonomous individual, perhaps, but is then made into a manipulated consumer in and through the process. AI as (part of a) process and narrative (for example a marketing process and a capitalist narrative) gives her that role. And if she resists, protests, and so on, then she starts a new process and story, which connects to the existing story and may or may not lead to a different outcome. (I will soon look further into the normative significance of this analysis.)

Interestingly, this means that we are not fully in control of AI: not in the sense that AI does things without our intervention (consider again the self-driving car), but in the sense that it can get hermeneutically out of control and perhaps always is out of our full control in that sense. 

We do not fully control the meanings and roles that are the outcome of AI and data science processes. The developers may have one interpretation of what their AI “is” and “does”; but what it becomes can be very different since other interpretations are possible and since the outcome of a process is not always fully predictable.

Is what AI does and means here unique? Is it different from other technologies? And, keeping in mind the Ricoeur-inspired hermeneutic approach, is it different from text?

Yes and no. On the one hand, these relations and roles of AI are not so different from what text does. Text is also a technology we can talk about, a process, and a meaning-maker. It also has emergent properties, and we are also not necessarily in control of the meanings and roles that emerge. The author has long been proclaimed dead (Barthes, 1967); the author does not fully control the meaning of the text.

This also seems true for the developer, whose intentions may clash with what (end-)users do with the program. And as we know from the tradition of thinking about writing technology from Plato to Stiegler, technologies also constitute a kind of memory. In the Phaedrus, Plato already worried that people would cease to exercise memory because they would rely on writing. Printed text can be seen as an extended memory (Ong, 2012). Like text, AI and data science processes fix knowledge of the past. Once it is on the page (text) or in the dataset and processed by the algorithm (AI), there is no real-time change anymore. Just as in text we might get bewitched by the thoughts and stories of the past, data science processes may prevent social change by perpetuating biases of the past. At the same time, however, there is no determinism. We can offer different interpretations of the text, and we can change the algorithm, the data, and (in principle at least) human behavior.

On the other hand, there are at least three differences with AI. First, AI produces a different kind of knowledge: not text (say, linguistic knowledge) but numbers, in particular statistical knowledge such as probabilities and correlations. AI and data science processes are therefore not an exteriorization of human memory, as Stiegler (1998) saw it, but amount to a different kind of memory altogether.

AI and data science processes produce their own kind of knowledge, which is then memorized in technical ways (databases, models). Whereas the Platonic model of writing presupposes some pre-existing memory in the human, which is then exteriorized through writing and materialized on paper, AI and data science transform human thought and experience into data, and produce statistical knowledge about these data, which the humans involved do not already have and (especially in the case of big data and complex models) cannot have or produce. AI thus creates its own “memories”, which may be quite different from the content of human memory, which is based on human experience and not on data.
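A minimal sketch of this point: a short data process that turns recorded behaviour into a statistical "memory" (here a correlation coefficient) that no participant ever experienced or could have produced directly. The data are invented, and the example assumes Python 3.10+ for statistics.correlation:

```python
# Toy sketch: turning recorded behaviour into statistical "memory".
# The resulting number is knowledge about the data, not a lived memory.
# Data are invented for illustration. Requires Python 3.10+.

from statistics import correlation

hours_listened = [1.0, 2.5, 3.0, 4.5, 5.0]   # hypothetical user behaviour
items_bought   = [0.0, 1.0, 1.0, 2.0, 3.0]

r = correlation(hours_listened, items_bought)
print(f"correlation: {r:.2f}")
```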

While it is well-known in the phenomenological tradition (Husserl, 1973; Merleau-Ponty, 2002) that there is ‘sedimentation’ in the sense that past experiences influence present experiences of the same phenomenon, in the case of AI processes both the past and the present reliance on that past take a very specific shape, which is not about human experience but about the production and use of statistical knowledge.

 While humans are part of the process of AI and data science, the transfer from past to present itself is done without human experience intervening. There is no ‘sedimentation’ in the traditional, phenomenological sense of the word: instead, there is calculation. At best, there is interpretation by humans afterwards (which then can be the object of sedimentation).

There may be 'sedimentation' of the technology in the sense that the use of technology itself may recede into the background (Rosenberger and Verbeek, 2017; Lewis, 2021). This phenomenon has been well known since Heidegger and Merleau-Ponty (and later Dreyfus). In the case of AI, we may for example use a search algorithm but not be aware that it relies on AI.

(It is not certain, however, that this can be described in terms of sedimentation, since what happens there is different from the incorporation and creation of embodied knowledge that, for example, Merleau-Ponty and Dreyfus describe, and there is no gradual receding into the background since, on the part of the user, the technology may never have been in the foreground in the first place: we were never aware of it; it was hidden.)

But here I consider a different phenomenon, which has to do with transfer of knowledge from past to present. In the AI and data science process, there is no sedimentation in the sense of human experience that provides the basis of further experience. The knowledge produced and used by AI is not directly based on human experience. It is only very indirectly, via data, that human experience plays a role.

 Text, writing, and narrative seem to offer a more direct access to human experience, albeit never totally direct and always mediated. But this way of putting it is also not right: it misleads us into thinking that there is always a pure human experience in the first place. Instead, the technology of writing co-shapes the experience; it does not fully pre-exist as a fixed kind of thing that is pure and untouched by technology and/or its linguistic and narrative expression.
Similarly, both the AI knowledge and the interpretations by humans do not pre-exist but become in and through the AI and data science processes in their context of application. Just as text is not simply the mirror of pre-existing knowledge — one could say that meaning and knowledge become in the process of writing — AI knowledge becomes during the process.
Yet this becoming is not as open as what happens when humans write. During the data process, considered only in its technical aspects, there is no sedimentation in a phenomenological sense since this is not about human experience in the first place; at most, there is a technical process of memorization. The algorithm and the model themselves do not involve interpretation, although the result does. This leads me to the next point.

Second, we are used to seeing writing and text as something that requires interpretation and hermeneutic dialogue and communication: between reader and text, between readers. The meaning of the text is not limited to what the author intended, and the writing itself was already a hermeneutic process: the meaning of the written text evolved in "dialogue" with other texts and meanings available in the language and culture.

AI, by contrast, is seen as an instrument, a tool, a thing that is hermeneutically neutral: we (the developer and the user) are the ones who give meaning; AI and other technologies are supposed not to be "hermeneutically active" or "hermeneutically creative" themselves. But as I have argued, this assumption is mistaken. AI technologies, like digital technologies in general (Romele, 2020), are interwoven with meanings and also

