A VERY long interview with Annet Dekker about my Dark Archives project, on view at the mo at Het Nieuwe Instituut, Rotterdam, and online; it was understandably too long for them to publish in full which is why I’m putting it here. Short/ edited version here: Archiving our (Dark) Lives: Interview with Erica Scourti
videos here (best viewed on smartphone)
[END- text by Linette Voller, read by Kati Karki]
Could you briefly explain what the Dark Archives project is and how you set it up?
The Dark Archives project consists of two main phases, each producing a series of images, videos and texts. The first part involved research into and experimentation with auto-editing and archiving apps like Magisto, resulting in a series of short videos, shared as they were made over summer 2015. Inspired in part by me wondering what footage or pics these algorithmic edits chose to leave out and why, the second phase addressed the idea of ‘missing media’. Here, I uploaded my full media archive, going back 15 years, to Google Photos and then commissioned five writers – Jess Bunch, Christina Chalmers, Sandra Huber, Linette Voller and Joanna Walsh, all complete strangers recruited by advertising for writers with experience of working with tarot, data narrativisation and story-telling in a broader sense – to search it with keywords of their choice; words like weapon, gift and love, to give a few examples.
They were then asked to speculate on and caption what they imagined to be the missing set of media for that search term: the photos and videos that somehow evaded classification. These captions were then matched with existing media from my archive, creating a new series of videos optimised for mobile viewing (which viewers can access online), along with a slideshow of all the speculative archive images, screened continuously at Het Nieuwe Instituut.
[an image from ‘LUCK’- text by Jess Bunch]
In the first stage of the project I produced a series of ‘automatic’ videos using auto-edit app Magisto, reflecting my research into the automation of jobs once considered irreplaceable by machines. Editing, a laboriously-gained skill that was until recently considered, like many so-called ‘creative industry’ jobs, to be safe from the threat of automation, can now be outsourced to an algorithm. Within the film industry, editors were usually female, so this echoes trends of labour gendered as female being replaced by automation, and apps especially, as Helen Hester and Sarah Kember have spoken of in the context of automated assistants like Siri, Cortana and so on.
[documenting withdrawal- one of my auto-videos, 2015]
Responding to users’ need to keep unruly and ever-growing media archives in order, auto-archiving apps like Carousel, and now Google, have rushed to fill the gap. These apps seem clearly oriented towards not just sorting photos, but sharing them, echoing the social media imperative to ‘share’ and the idea that experience, once captured photographically or in video, only has real value once it’s validated by others in the network. However, if the work of sharing becomes more of a chore- because you’ve got too much stuff or can’t be bothered to make 15 sec Instagram edits- than the supposedly fun, self-affirming activity users are meant to experience it as, the platforms would be in trouble. Magisto makes its role as enabler of content-sharing explicit by offering an Instagram function, helping users to circulate themselves within a network more effectively. Google’s auto-classifying similarly creates groups, like selfies and food, whose significance reflects a ‘sharing-eye’ mentality, i.e. pics taken for or understood mainly through the expectation of their being shared. Inadvertently hinting at what Kate Crawford called ‘surveillant anxiety’, the app also selects from users’ smartphone media to make its own videos and slideshows, a supposedly helpful service which feels pretty creepy: a fluffy version of the less fluffy reality of living with a vague sense of an unknown, nonhuman gaze.
You explore the subject’s construction, or better your own, in the networked regime of the World Wide Web by means of looking into the invisible structures. What are you looking at, or for?
I would say that I’m observing myself as a subject aware of her own entanglement in sociotechnical infrastructures, using my own personal experience as a starting point. So, while my work reflects the particularities of my own specific identity, it attempts to link this to broader collective experience- without, of course, claiming some kind of universal human experience or subject that I, or anyone else, can speak for. For example, almost everyone living in the West has a relationship, even if it’s a discordant one, to photo archives, or online platforms. Exploring my own experience of them, as I’m doing in the Dark Archives project, is a way of addressing themes that most people will at least relate to, but from my own specific perspective, which reflects my own social and political context.
As for ‘looking into invisible structures’, again, for me this phrase problematically suggests standing apart from the issues being observed, and reporting back on the results from a removed position of supposed neutrality. Not only does this assume a critical distance that I believe is untenable, but it also hints at a superiority of the looker- as if artists were able to peer into structures and see things- invisible things, even!- that other, presumably more naive people, cannot. Drawing on the observer effect in science, in which the conducting of an experiment necessarily affects it, I’m more interested in fully acknowledging that I stand within the systems being investigated, and that any insights I glean are necessarily partial, and incomplete. My thinking here echoes the work of many other feminist writers like Donna Haraway (who speaks of situated knowledge) and Karen Barad, who also stress their own embodiment within the research they undertake.
[an auto-gif courtesy of Google photo]
Also, part of my current research, including for the Dark Archives project, specifically grew out of an awareness of the limitations of strategies of ‘making visible’ and/ or exposing ‘invisible structures’. While I’d agree with the argument John Durham Peters makes about what he calls ‘infrastructuralism’, namely that ‘revealing the invisible supports that hold up the world […] is clearly allied with the feminist project of revealing unpaid and unappreciated labor’, this motif has become the default explanation of almost any artwork dealing critically with technology/ surveillance etc. Trevor Paglen’s work is often described this way, and I myself used this phrase when discussing Life in AdWords (arguing that it ‘makes visible the commodification of the subject in Web 2.0’) and in my thesis on the Female Fool, where I argued that strategies of subversive mimicry in feminist art ‘make visible’ the performativity of gender identity. A quick google of these phrases brings up a few Rhizome articles, a couple of conferences, some press releases: infrastructural critique often relies on metaphors of unveiling, uncovering and exposing.
Despite their currency, rhetorics of exposure have a long heritage in Western critical thought, as Eve Kosofsky Sedgwick’s classic essay on paranoid and reparative reading makes clear. Drawing on the ‘hermeneutics of suspicion’ Ricoeur noted in thinkers like Nietzsche, Freud and Marx, she discusses the ‘paranoid’ epistemology, which places ‘an extraordinary stress on the efficacy of knowledge per se- knowledge in the form of exposure’. Paranoia invests gestures of uncovering with agency, as if simply uncovering and making visible the extent of state surveillance or racial discrimination, let’s say, is enough to make these long-entrenched systems wither away.
More recently, Wendy Chun warns of ‘code fetish’– the idea that by getting to, and then exposing, the ‘code’ lying beneath reality, various social and political problems will simply fall into place. This approach assumes that there exists an objectively verifiable reality or meaning prior to its uncovering, that just needs the right critical or artistic tools to get it out there, as if the act of uncovering itself was not entirely contingent on the person and method of doing it. As Irit Rogoff argues, while using the tools of critical analysis promises that the hidden meanings of cultural circulation can be laid bare, ‘there is a serious problem here, as there is an assumption that meaning is immanent, that it is always already there and precedes its uncovering.’ [‘Smuggling’ – An Embodied Criticality.]
With this in mind, rather than making certain structures visible, I would argue that every act of uncovering or making visible is making anew, is a form of knowledge generation that is necessarily subjective. Why not explore other forms of making which foreground faults, leaks and fictionalised spaces that haven’t been fully concretised, rather than this incessant uncovering?
In terms of artworks, particularly those that operate as infrastructural critiques on aspects of networked life- like surveillance, or data tracking- I’m also interested in the position from which the artist is making things visible. Experience has shown that a white, cis male is less likely to reflect on his position as the ‘uncover-er’, its implications of a God’s eye view perspective, or his identity in proximity to the institutions being studied, be it the NSA or global finance. Similarly, a white feminist intending to make visible female oppression is more likely to make assumptions about the ‘universality’ of women’s experience, thereby glossing over the specific forms of oppression that women of colour experience- as numerous examples in the music industry attest to. I think it’s important to be aware as an artist who you think you’re speaking for and also where you’re shining that light from.
Tung-Hui Hu also draws on Sedgwick in his recent book A Prehistory of the Cloud, arguing that our ‘faith’ in all sorts of data visualisations is a manifestation of “a paranoid worldview in which everything is hopelessly complex but, with the right (data) tools, can be made deceptively simple and explainable: a master key or representation that explains everything”. This was part of my inspiration for the Dark Archives project- rather than attempting to ‘expose’ the inner logic of Google’s photo database, or trying to uncover that master key that makes sense of it all, I wanted to adopt a more speculative angle that imagines flaws and elisions in the system, and by extension asks how inclusions and exclusions to any archive happen. Catherine D’Ignazio, discussing feminist methods of data visualisation, suggests they could ‘invent new ways to represent uncertainty, outsides, missing data, and flawed methods’, which certainly resonates with my approach. Her article is illustrated with Map to Not Indicate, 1967, by the art collective Art & Language, and her caption- ‘The map depicts only Iowa and Kentucky and then proceeds to list the many things that are NOT represented on it’- echoes the thinking behind my project, which asks others to imagine what is missing from my photo archive.
In general the purpose of a dark archive is to function as a repository for information that can be used as a failsafe during disaster recovery. Could you elaborate on what this means to you with regard to your current project Dark Archives?
The dark archive, as I understand its use in librarianship and archiving, functions as a backup copy- not necessarily of the actual contents but of the metadata, so that if natural or man-made disaster destroyed the archive, its contents could still be readable. In this way, it points to the future of archives and of a post-anthropocene (or, ‘Capitalocene’- an expression which attempts to apportion responsibility more precisely than the vague or universalizing notion of the Anthropos) encounter with Western civilization’s huge quantity of data, which is being stored in what are, after all, physical locations that are vulnerable to attack or decay. Hu notes the overlap between atomic waste storage companies and data centres (like Iron Mountain), suggesting an affinity between the contents hoarded in these facilities, which must be protected at all costs, now that data centres store bank records, entire companies, and other matters of life and death. Both also require specific geosocial attributes, like low temperatures, relatively remote locations and ‘stable’ governments, which foregrounds the materiality and fragility of networked life, and how vulnerable it is to climatic and social fluctuations. From his book, plus films I’d been watching, like Into Eternity, about attempts to build a future-proof nuclear waste bunker, I imagined a bleak scenario where current Western civilization’s most durable archives are either deadly energy waste products, or the material remains of data.
[Apple’s data servers]
Another aspect of the dark archive relates to accessibility, since one of its main attributes is being publicly inaccessible, and therefore, relatively ‘invisible’. In the context of my project, the media archive which I uploaded in full to Google’s photo service is a dark archive of sorts; accessible to me, but not to anyone else without a password- except, of course, Google. I was interested in where this places it- is it a visible (and therefore ‘bright’) archive, in so far as its contents are both accessible and intelligible to Google? Another way of putting this is to ask where the lines are drawn between the public and private sphere in the age of corporations taking on supranational powers. Alternatively, it could be considered a dark archive, since only I can access it. When ownership of digital assets is being replaced by access to them- captured in the increasing requests for ‘permission to access’ by apps, but also in the move towards what Jeremy Rifkin has called the “age of access”, where licenses and rental economies take over from singular ownership- the question of who can access what archive becomes key.
Another meaning of the dark archive refers to contents that cannot be located or retrieved and are therefore, functionally invisible: a nested archive within the main one, which nevertheless exerts a (possibly negative) force upon it. For example, Amazon could be seen as a very ‘bright’ archive; their business model is based on retrievability, which means that everything within it can be easily found and accounted for. Amazon has to battle against the ‘forces of darkness’ such as spam, algorithmically-churned similar products (e.g. t-shirts), and different products with very similar titles (for example, search Amazon with ‘the game’…) all of which threaten to obscure the contents with ‘actual’ value by making them unfindable. So, things must be retrievable otherwise the content of the archive can fall into a void- and the more stuff the archive gets, the darker it becomes…
[Lee Lozano, General Strike Piece, begun 1969]
Of course a dark archive may also be one that is kept intentionally secret, echoing an artistic approach of working with ‘methodologies of encryption’ (as a conference in 2014 put it). These play with gestures of unreadability, obfuscation and withdrawal from the viewer, as a way of avoiding instrumentalisation, of not being readily accessible to cooption by either market or institutional forces. Gestures of extreme withdrawal which cannot be recuperated for future gain, like artist Lee Lozano’s dropping out of the art world completely and permanently in General Strike Piece (begun 1969), continue to appeal partly because of their refusal of the games of readability, accessibility and visibility- all of which are expected of artists today.
So, these different valences of the dark archive all fed into my thinking around this project, addressing visibility and darkness in relation to archiving.
Doing some Google image searches for you, I am amazed how many images come up. In an earlier interview you also said that you were ‘obsessed with documenting’ (Furtherfield), already at an early age being the one walking around with the camera. What does the image mean to you?
I have an almost ritualistic attachment to images, or at least to their collection, reflecting a lifelong desire to capture, track and mark the days of life as it passes, before it passes away. Perhaps the more photos and documents there are, the more coherent the narrative of one’s life becomes, the more readable you are to yourself as the protagonist within it. As the ‘pics or it didn’t happen’ logic of socially mediated existence suggests, it’s the capturing of this life story and its sharing with known and unknown others that really validates it as a story worth telling.
However, there is an increasing amount of image traffic sent under the radar, and people are abandoning Facebook in droves, suggesting the waning of sharing affects; I have thousands of photos that I have never made public, or only swapped privately, for example through WhatsApp or email. What value do images that do not publicly circulate have- or is this a moot point, since they still technically have ‘value’ to the platform corporations they move within (remember, Facebook owns WhatsApp…)?
I’m also intrigued by the recurring science fiction trope based on the idea that if every single moment in a person’s life could be captured, that life could be recreated, or simulated, in another person or machine’s mind. This fantasy of disembodied, downloadable consciousness can be traced to the beginnings of cybernetics, and its trans-humanist or futurist desire to escape the body. As Katherine Hayles explains in How We Became Posthuman, the assumption that consciousness (mind) can be separated from the body, as if the material support- the medium- plays no active role in it, has its foundations in a long tradition of Western, Cartesian thought, which separates body and mind, as well as a whole host of other binaries (male/ female, black/ white etc).
Do you think the image still relates to memory in a system that is all about consumption, distribution and circulation? In other words, do you see a different function of images in a networked computational culture?
The idea of images having a ‘function’ at all is an interesting one- of course Visual Culture as a discipline has long argued that images have agency, as carriers of political propaganda, adverts and social norms, or historical narratives to name a few, but there is an emerging sense that images now also function as and for machines– by-passing human culture (if you can even separate the two…). Ben Bratton has argued that ‘machine vision is arguably the ascendant ‘ocular user subject’, not the human,’ with many more images being made by and for machines, rather than humans and their affective or aesthetic registers. Trevor Paglen calls these images ‘operationalised’, in the sense that their agency goes beyond shaping behaviour and actually impacts real-world situations through quantification, tracking, targeting and prediction. Personally I’m more interested in how these modes intertwine, i.e. images that traverse both human and machinic realms- and I would argue that we’ve never had ‘strictly’ meat eyes. As humans we’ve always had some kind of if not inscriptive then at least communicative or mediating technology. [thanks to Emily Rosamond for some of the insights here!]
So, while digital images do still relate to memory, they’re embedded within wider systems of functionality and value generation; the reason Google’s photo service is free of charge probably has something to do with the pure potential millions of personal photos represent to act as excellent training material for their visual analysis tools. Images- especially ones rich with metadata like geolocations, timestamps and camera brands- are incredibly valuable, not, as the users may perceive them, as reminders of times past, but as nodes within a commercial network, whose full value may take years to emerge. Maybe humanity’s current image deluge is facilitating the advancement of future intelligences, whose neural networks are being honed on the daily feed of both private and public image-sharing. In this sense, the images are ‘functional’ not as memory aides but as tools for next-generation visual analysis algorithms.
Previously you asked someone to write your memoirs, The Outage, based on your digital footprint, and it became a strangely personal yet distant narrative… In a way the new work builds on that: rather than asking a human person, you use computer programmes to create a narrative, a history, or an archive, of your online activities. This makes your work very layered. Although at first it may look random, for me, in the associations that emerge between texts and images (from metadata to comments), or in the juxtaposition of images, there is a subtlety, humour and self-mockery that seems natural; but being aware of the learning process behind it creates an eerie feeling or tension between human and machine agency. Asking the question, how much influence do you still have? Do you think that this is still a collaborative process – meaning working together towards a common goal, based on more or less equal terms – or does the one gain in agency over the other?
The tension between human and machine agency, and the increasing impossibility of making a distinction between them, has played an important role in many of my past works; as Cary Wolfe argues, ‘the human never was, never is, never will be human’ since the very condition of possibility of the human subject coming into being is a technology; as he says ‘it’s a technology variously called social behaviour, symbolic behaviours, language, communication, the semiotic in the broadest sense.’ That is, as mammals of the great ape family we have been always already entangled with semiotic technology- language, the symbolic- and this is precisely what allows the possibility of a subject to emerge.
So writing is a technology, and like every new development, its effects on humans were the subject of much hand-wringing, famously from Plato, for whom it heralded the decline of memory and its replacement ‘by means of external marks’, as he put it. However, the devices and wider networked infrastructures most people in the West live with are also actively archiving citizens, often unintentionally or without explicit consent, with the results feeding into consumer and social classifications as profiles, which does seem like a qualitative difference.
[Think You Know Me, live predictive text/ programmed performance 2015]
So, my work explores these tensions, using processes like predictive text, or profiling at a personal level to question in broader terms what non-human perception, agency and intelligence could be. The dangers of apportioning too much agency to machines can be clearly seen in the context of warfare, where delegating responsibility to drones glosses over the entirely social, political and human logic driving their calibration, and also in things like automated forms which determine what benefits, health care or housing citizens are entitled to.
A lot of my work includes working with other people in some way, almost to a point that they do things that might normally be done by a machine, or that could not be done by a machine. For example, in the project So Like You, while I initially used my images to search online, I then asked the people whose pictures came up as ‘similar’ to go through their own archives to find a similar image, thereby asking them to act almost like human search engines. With the Dark Archives, by inviting the writers to speculate on and imagine what is missing from a particular archive, I am similarly asking them to embody the algorithm and its operation, to work out what it includes and excludes.
You have stated elsewhere that the “devices we share so much intimate time with are actively involved in shaping what we consider to be our ‘selves,’ our identities” (Rhizome interview). Reflecting back on some of your previous works and the current one for Het Nieuwe Instituut, I can imagine that, even though there is a privileged position from which you are doing them, these processes affect you, as an individual?
Yes, some of my projects have had some pretty unexpected outcomes, what I think of as their ’emergent phenomena’: the affective and emotional outcomes that were not designed into the experiment. Of course there are wider emergent properties of a technologically-mediated world; for example, the affective responses to life on Twitter (anxiety is a common one!) were not necessarily anticipated beforehand and only emerged through use. Most of my work deals with these ‘psychotechnical vulnerabilities’, often performing gestures of risk in relation to them. For example in The Outage, giving my private data and online presence to a ghostwriter to fashion into my fictional memoir could be seen as confronting a wider societal fear of digital trespass, identity theft and personal data leaks.
What emerged from this project was how self-conscious it made me feel: a sense that I had been objectified, made into an image that I wasn’t in control of. As the book’s narrative involves a sort of death, there was a feeling that a version of my mediated self had been killed off, and in fact my first response to the text was a feeling of reading my own obituary, which is the one piece of text you will never, ever be able to control or correct or manage. This may sound a bit hysterical now but the dissonance was very real and sent me into a tailspin that lasted well over a year; I also got together with the writer which had/ has its own complex narrative, since folded into the wider story of the project.
[The Outage, 2014]
With the change to digital archives, sources may remain intact, as in traditional archives, but their existence is constantly changing and dynamic. This is something that is clearly visible in your project. So, what does this mean for one of the main tasks of an archive – a place to store memories? What happens when a memory vault changes into something fluid and processual? In other words, what does it mean when archives are thought of in terms of (re)production or creation systems instead of representation or memory systems?
In a sense, archives, like knowledge and autobiographies, are also potentially performative, as opposed to strictly descriptive; Derrida suggests as much when saying ‘the archivisation produces as much as it records the event.’ Perhaps every document creates (rather than describes or illustrates) the event; every search creates an archive, and every archive gives rise to a different reality. Search queries both create an archive and are potentially archival material in themselves (as the still ongoing fascination with Google’s auto-complete attests to) and as Derrida says, the archiving itself is productive of events, historical and otherwise.
This also relates to my interest in intimate data, the archive of our personal information, which is constantly expanding or contracting, and also mutable, depending on what search is undertaken. One thing that is obvious to search with now may not be fifty years from now: every historic era creates new search terms, new lenses with which to read the past. The cycles of music and fashion, the threads that carry through and are picked up years later attest to the unanticipated interpretation of contemporary life through the eyes of future generations. Every archive could be said to nest potentially limitless archives within it, lending it an unfinished or semi-fictional quality.
The dark archive seemed to encapsulate many of these ideas: a hidden, yet existing archive, whose contents may be retrieved at some future point but for now are inaccessible. What agency do these- and by extension, all other- unintelligible and obscured entities exert, if any? Is there a force in that which cannot be captured, quantified, translated? And if there is, how do we acknowledge that refusing visibility and capture is itself a privileged position, grounded in an almost Romantic/ heroic (i.e. usually coded as white, male) ideal of seeking that which can never be represented, commodified, put into words (and sold back to us as knock-off t-shirts/ lifestyle signifiers…)? This in turn resonates with a yearning for the untouched, ‘virgin’ territory of an assumed authenticity- a concept as problematic now as it was in colonial times. As a Greek I have found myself bristling at the portrayal of Athens as a sort of lawless, yet crucially authentic site of political ferment, unrest and all-over ‘realness’ (read: poverty). Moreover, from certain perspectives, being under the radar and escaping legibility itself depends on a privileged position of being socially average enough to disappear in the first place, as numerous critiques of so-called normcore pointed out last year.
Again, Sedgwick points out that for many disenfranchised minorities, it’s precisely their unwanted, unasked-for level of visibility that constitutes their ‘problem’- and the lack of agency over regulating their visibility to authority. As blockchain technologies become more widespread, meaning that a permanent record exists of any transaction made, it could be that, contrary to past fears of data being tampered with, new issues will arise out of the impossibility of deletion. If every digital asset or transaction- including identity- can be traced, there are obvious political implications around visibility and the right- or at least, desire- to be forgotten, or unseen.