Around nine months after the Empathy Deck went live as part of the Wellcome Collection’s exhibition ‘Bedlam: The Asylum and Beyond’, the live ‘twitter bot with feelings’ received a mini-makeover. To accompany its relaunch I wanted to give a little background on the project, so I set out to write what was meant to be a short essay about the subjects it touches on. Unfortunately, my first attempt came to a halt, partly due to an overwhelming sense of dread that I would fail to cover the concept of empathy from an adequate number of angles, or in sufficient depth.
What people mean when they invoke the word empathy can differ greatly; it appears frequently in journalism and cultural production, in genres as diverse as self-help, marketing, spirituality, neuroscience and politics, not to mention art and literature. Its meaning has also shifted numerous times in the hundred or so years since it first came into English usage via nineteenth-century German aesthetic theory, where the term Einfühlung, literally meaning ‘feeling-into’, described the viewer’s active participation in a work of art. In 1908 it entered the English language as the neologism ‘empathy’, coined from the Greek ‘em’ for ‘in’ and ‘pathos’ for ‘feeling’; by mid-century it was increasingly associated with interpersonal relationships, and it has been a popular topic of psychological research ever since. While I don’t go into any depth on the history of the word (Magdalena Nowak’s essay is a great primer on it), my text reflects the most widely understood current meaning of empathy as a cousin of compassion: a broadly positive psychological mechanism that enables feeling for, and with, others.
For those not on Twitter, or who don’t follow it, the Empathy Deck (@empathydeck) is a bot, made in collaboration with programmer Tom Armitage, that responds to its followers’ tweets with a one-off ‘empathy card’: a digital image combining my hand-made collages with text drawn from my diaries, plus self-knowledge and advice literature like astrology, personality tests, and dating types. A matching emoticon, key-worded descriptively or according to its healing properties and other attributes (amethyst for money problems, passion-flower for insomnia), completes the picture, creating a culturally-contingent backend of remedies and expressions. The text’s tone sits somewhere between an overly literal friend, always eager to share (or compete) with a ‘me too’ anecdote and a smattering of advice, and a demented version of the motivational quotes that feature on tea-bag tags, posters, skin (as tattoos) and social media, especially on female-dominated platforms like Pinterest.
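To give a sense of the kind of keyword matching involved, here is a minimal sketch of how an emoticon might be matched to a card; the emoji, attribute sets and function names are hypothetical illustrations rather than the Empathy Deck’s actual code.

```python
# Hypothetical sketch of emoticon matching: each emoji is keyed either
# descriptively or by folk 'healing' attributes. All entries are placeholders,
# not the Empathy Deck's real lookup table.
EMOJI_ATTRIBUTES = {
    "💎": {"money", "debt", "rent"},          # e.g. amethyst for money problems
    "🌸": {"insomnia", "sleep", "restless"},  # e.g. passion-flower for insomnia
    "🔥": {"angry", "furious", "rage"},
}

def pick_emoji(card_text: str) -> str | None:
    """Return the first emoji whose attributes overlap with the card text."""
    words = set(card_text.lower().split())
    for emoji, attributes in EMOJI_ATTRIBUTES.items():
        if words & attributes:
            return emoji
    return None  # no match: the card goes out without an emoticon
```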
The bot’s card format also brings to mind tarot decks and their new-agey cousins like oracle, goddess, healing and angel cards, often written by self-help authors like Doreen Virtue; my own ‘guidance’ cards, which I’ve been making over the years as meditation aids and as support for bad times, partly inspired the project. Drawing on these existing templates for self-advice in a broadly female-dominated marketplace, the Empathy Deck emerges from what Lauren Berlant calls an intimate public that would be recognised as part of women’s culture, which binds together disparate strangers through affective ties and a shared worldview. Magazines, chat shows, blogs and other forums all help to bolster this culture, creating a space for sharing experiences that are both particular and generic: heartbreak, family struggles, body issues and so on.
Instead of producing variations of ‘you go girl’ typical of liberal feminism and platforms like Pinterest, however, the Empathy Deck is more prone to communicating what Sianne Ngai has termed ‘ugly feelings’, signalling a refusal of the shiny, happy people of wellness culture who are normally conjured up by images of meditation, yoga, and self-help. Promoting a me-first, individualist self-care ethos, the pursuit of wellness often, problematically, locates the responsibility for feeling good (and bad) in the individual rather than in social structures. At the same time, anything that helps you live with and through bad feelings seems worthwhile, or simply necessary. The writings of Audre Lorde and Sara Ahmed propose self-care as a radical act of self-preservation in a racist and patriarchal system that favours and promotes the flourishing of some bodies over others; as Ahmed says, “We have to work out how to survive in a system that decides life for some requires the death or removal of others.”
As symptomatised by In Goop Health, Gwyneth Paltrow’s latest foray into monetising self-care, wellness lifestyles usually assume and require privileged access but tend to obscure the fact that feeling good is so much easier if you already have the time, money and entitlement to follow through with the recommended rituals and products. As some of the examples below suggest, I wanted to embrace the feelings normally banished from the wellness dreamworld, in the lineage of writers like Ngai and Ahmed, while still making something that might have a therapeutic side effect and help shake a shitty mood when it pops up in your Twitter feed.
“Sooner or later everything ends,
Why the fuck would he want to be friends”
“With Uranus well angled, you may be amazed at the positive answer you get,
But this limbo is testing my patience, I can feel it”
“My first email, my 1st tweet
I felt my heart pounding
An actual pain there, as I realised I cannot compete”
Despite my ambivalence towards self-help, the Empathy Deck’s source text is derived from a practice very much associated with it: keeping a diary, in the tradition of Julia Cameron’s morning pages. For the makeover, I added in another 40,000 words of diary text, a particularly painful process given how awful the last few months have been. Redacting the diary text was one of the project’s most challenging aspects for me: trying not to hurt the people it may seem to refer to, or hurt myself by exposing unmanageably embarrassing or painful bits, while leaving enough in for it to be a gesture of vulnerability worth making.
Unlike most bots, the Empathy Deck doesn’t reconstitute the source text but quotes directly from it. Its steady stream of verbatim diary text could therefore be seen as a collaborative autobiography or outsourced experiment; now that the cards are in their thousands, anyone who cared to could read them through as a randomly-ordered version of my internal monologues, 140 characters at a time. My experience with The Outage proved that outsourcing one’s autobiography to an actual human is a lot more complicated and emotionally risky (there isn’t much chance I’d fall in love with a bot, however wonderful its code). Unlike a human, however, the bot can’t be depended on to tactfully leave out the bits that make me sound stupid, or to intelligently weave together a narrative through an understanding of chronological time. Because it pulls from a span of six years of writing, the bot’s temporal randomness has actually served to shield me from shame, since it’s impossible to know which period of my life any given card is taken from.
While the bot generates the text, it is triggered by and therefore dependent on its followers, whose tweets prompt the scanning of the diary and the extraction of text: if nobody followed it, it wouldn’t say much. And if people tweeted about things I never write about, like football or childcare, it wouldn’t talk back much. However, if they talked about hating childcare or loving football, it probably would respond: as a ‘bot with feelings’, it’s triggered especially by emotive, affective content, which I weighted by ‘favouriting’ particular words. The outcome could be seen then as a sort of codependent, collaborative writing experiment which implicates the bot’s followers (many of whom overlap with my own) but is guided by what I’ve deemed to be worth responding to.
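As a rough illustration of how this emotive weighting might work, the sketch below scores an incoming tweet against a set of ‘favourited’ words before deciding whether to reply; the words, weights and threshold are hypothetical placeholders, not the bot’s actual configuration.

```python
# Hypothetical sketch of weighted, emotion-led triggering. The word weights
# and threshold are illustrative only.
FAVOURITE_WORDS = {
    "heartbroken": 3, "anxious": 3, "lonely": 3,   # strongly emotive
    "hate": 2, "love": 2, "tired": 2,
    "football": 1, "childcare": 1,                 # only weakly triggering
}
RESPONSE_THRESHOLD = 2

def should_respond(tweet_text: str) -> bool:
    """Reply only if the tweet's emotive score clears the threshold."""
    words = tweet_text.lower().split()
    score = sum(FAVOURITE_WORDS.get(w, 0) for w in words)
    return score >= RESPONSE_THRESHOLD

def matching_diary_lines(tweet_text: str, diary_lines: list[str]) -> list[str]:
    """Pull diary lines that share a weighted word with the tweet."""
    triggers = {w for w in tweet_text.lower().split() if w in FAVOURITE_WORDS}
    return [line for line in diary_lines
            if triggers & set(line.lower().split())]
```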
Despite the bot’s friendly demeanour, the continuous, silent scanning of tweets could also be harnessed for repressive surveillance ends, especially when the trigger words are hidden from the users. Even well-intended deployments of tweet-monitoring have been badly received by social media users, most notably Radar, a free online app launched in 2014 by the suicide prevention charity Samaritans. Alerting Twitter users if it spotted keywords that suggested that someone they knew ‘may be struggling to cope’, Radar was rapidly taken down after complaints about data protection and privacy issues. Despite some similarities between the two applications, as I explain in more detail later, the Empathy Deck screens out words associated with serious mental health struggles, in acknowledgement that, unlike Radar, it was never intended specifically as a therapeutic tool. Its acceptance by Twitter users also reflects people’s tendency to trust artists’ benign intentions or to put faith in artworks as broadly contributing to cultural good.
Throughout the Empathy Deck‘s lifetime, I’ve been tagged and mentioned numerous times in its followers’ tweeted responses, as if the bot is a sort of version, stand-in or proxy for me. As with many of my previous projects, like The Outage and Dark Archives, the Empathy Deck explores the distance between artist and artwork, performance and performer, which is a question for all artists in an era of publicly-performed profiles, but perhaps especially for those like me who often appear ‘as themselves’ in their work. Sharing many of the same followers, and speaking in similar ways, is the Empathy Deck account just a lightly fictionalised version of ‘Erica Scourti’? Perhaps a better question is why the @erica_scourti account, and by extension anyone’s Twitter account, would be considered ‘real’, not fictional, staged or artificial. For Bertolt Brecht, the theatre audience’s tendency to uncritically believe in and identify with the illusory characters on stage was facilitated by the intoxicating effects of empathy. Instead, he sought to prevent empathy, allowing for the adoption of a critical attitude towards the play (and, by extension, wider sociopolitical concerns) through a series of ‘distancing’ techniques like direct address, placards and songs. These strategies foregrounded the artifice of the theatrical medium and, crucially, the distance between actors and characters, so that spectators would be less likely to feel kinship with them as ‘real’ people.
Translated into the ‘social medium’, spectators suspend disbelief about the artificiality of speaking to a potentially limitless public as if addressing a friend with nobody watching; like the characters in a theatre production, the Twitter self is often taken to be ‘real’ despite being obviously presented on a social stage for others to watch. As a version of me, the Empathy Deck signals the extent to which all Twitter avatars are a performance, despite the audience’s habit of uncritically accepting them as unmediated expressions of selfhood. Perhaps my lack of control over it means the Empathy Deck is actually more ‘truthful’ than my own social media performance; it’s certainly more liable to make me cringe, especially when sharing anecdotes of drunken puking or petty professional envy with someone I’m trying to impress. Becoming aware of social media as a medium also helps recall its corporate nature. As is well known, but easily overlooked, Twitter and Facebook are profit-making enterprises that assume users will accept the exchange of a free service for the monetisation of their participation and information.
In an era of endless digital duplication, Hito Steyerl has highlighted the economy of presence which increases the value of artists’ physical presence at face-to-face gatherings like the talks, Q&As and workshops that make up both the events programmes of most art institutions and the revenue streams of many artists (including me). Steyerl suggests that the proxy holds out the potential for withdrawal, or strike, or subterfuge; a proxy self could provide decoy cover for a withdrawal from regimes of visibility, self-representation and promotion, allowing the artist or citizen to be both nominally present and physically absent. If I were to become permanently absent through death, as a proxy of me the bot could carry on tweeting out bits of my interior monologue forever, in a twilight of digital immortality. TV dramas like Black Mirror have explored this potential for a proxy self simulated from digital archives, and it’s easy to see how this could translate into a future service targeting the bereaved; as Oreet Ashery has spoken of in relation to her own work, death, and the eternal digital afterlife, are the new horizon of capitalist expansion.
Standing in for me as an always-on, automated proxy on Twitter, the Empathy Deck almost spares me the ‘work’ of being present on social media: responding to friends, professional acquaintances and total strangers, as well as coming out with unprompted tweets. For many people, especially those whose profession involves being a ‘cultural commentator’ of some kind, as it does for many artists, writers and curators, maintaining a social media presence can feel like real labour. Now that essay-length comments are acceptable Facebook etiquette, those who engage in lengthy, exhausting and demoralising Facebook arguments (very often people of colour) inadvertently provide a personally taxing, and unremunerated, public service by educating friends and other lurkers. Even those who avoid comment battles often feel self-conscious and cautious about their voice, or lack of it, leading to cycles of quitting and reactivation. Combined with the FOMO and self-comparison apparently endemic to social media, this constant low-level performance anxiety suggests that the ‘work’ of being a coherent communicative character online is now a stressful, boring and time-consuming undertaking, rather than the fun social activity it is sold as.

Crystal knows…
Identifying the maintenance of online presence and communication as a new area of labour, apps and software are attempting to automate aspects necessary to it, such as personality, tone and ‘voice’. Email assistant Crystal claims to speed up the process of writing nuanced emails by personality-testing both you and your recipient before suggesting what tone to use. Attempting to automate the work of authentic communication, it quantifies human interactions and tries to find a formula to replace them with. The suggested responses that now appear when you reply to Gmail emails on your phone, and predictive text apps I’ve worked with like SwiftKey, which log into users’ Gmail, Evernote, Twitter and Facebook accounts in order to learn from them, fulfil a similar desire to automate effective communication.
Empathy is also increasingly sought after by corporations trying to make interactions more ‘human’ at scale. Reflecting a growing awareness that empathy is a key tool for improving profits, an empathy ranking tool encourages businesses to view it as a quantifiable skill, particularly in their customer-facing communications and PR. In practice this often means ‘cutey-fying’ the interface through adorable avatars, sprinkling emojis across copy and adopting a chummy or emotive tone for even the most tedious correspondence. MailChimp, for example, promises ‘empathetic automation’ through delivering ‘messages that feel handcrafted and authentic to the recipient’ (along with plenty of emojis). If, as Eva Illouz argues, affect is made an essential aspect of economic behaviour within ‘emotional capitalism’, it makes sense for corporations to mobilise the production of ‘authentic’ emotion at scale. However, scaling up ‘true humanity’ for business presents monumental challenges, as the CEO of TeleTech points out, presumably since humanity itself, and qualities aligned with it like affect, empathy and emotion, cannot yet be replicated for scaling.
As a kind of automated mail art project, the Empathy Deck also asks whether one-to-one, ‘human’ size formats can be meaningfully scaled up. In previous mail art and gift-giving projects I sent postcards, gave away drawings on Freecycle and emailed friends and strangers short Instagram videos in exchange for their adding metadata; the backgrounds used in the deck also resemble the handmade cards I give to friends and family, which are like artworks in their own right. By automating both the card-making process and their sending out to thousands of recipients, the bot attempts to replicate micro-gestures of personal gift-giving at scale, so that instead of one friend receiving a unique card, 2000 Twitter users (including a few other bots) now do. But what is lost when gestures of friendship, giving and interaction are scaled up? Is it the one-to-one, singular quality of the exchange which is valued, because of its assumed ‘authenticity’ and even ‘humanity’? As Erika Balsom says, the authentic is aligned with the human and positioned against the machine, and is valued accordingly. Reflecting a ‘persistent phobia in the machinic copy in Western bourgeois culture deeply rooted in principles of private property and individual authenticity’ [Balsom, After Uniqueness: A History of Film and Video Art in Circulation], whatever can be created to formula and rolled out en masse, whether package tourism, coffee shop design or customer service scripts, is denigrated as inauthentic, robotic or ‘fake’. Generated by a bot and capable of endless iterations for a limitless audience and time-span, the Empathy Deck embraces the fakeness of the machinic copy. However, its use of personal diary content comes dangerously close to providing a template for businesses to ‘humanise’ their automated interactions.
Despite the Empathy Deck being a bot, i.e. a machine, people’s interactions with it, especially retweets and replies, suggest that it’s somewhat capable of creating moments of relatability or connection. Maybe it’s the unstaged quality of automation which is charming; unlike humans, you couldn’t really accuse a bot of trying to seduce, flatter or manipulate you. Some of the humour also derives from the bot’s occasionally bizarre confabulations of text, or its complete misreading of the original tweeter’s meaning, which creates a bit of distance from the tweet that is itself potentially therapeutic. Perhaps there’s also comfort to be had in the knowledge that bots just don’t get it, and therefore aren’t going to be replacing (all) human jobs quite so quickly. However, jobs previously seen as innately human – like therapy, care and journalism – and the skills or attributes they require – like attention, empathy and creativity – are increasingly being automated. As Helen Hester has argued, digital assistants of all sorts are being drafted in to do the work of secretaries, mums and wives, i.e. the affective and emotional labour of reminding, maintaining relationships and patient listening, traditionally gendered female. Drawing on Nina Power’s work, she points out that trying to automate this work renders these skills both so valuable that loads of money needs to be spent on replicating and scaling them, and so worthless that ‘even an app’ (or bot) could do it.
Although therapy may seem like the ultimate human job, like other forms of care work, it is not beyond the reach of automation. The original chatbot Eliza, which responded very sparely to people’s typed musings and questions, was herself a sort of therapeutic AI, or at least was perceived that way by the people who interacted with her. At a time of increasing cuts to mental health services, automated therapy has been proposed as a way to alleviate financial strain on health services, replacing costly human advisers with bots like LittleShift, Woebot (designed by a clinical psychologist at Stanford) and Tess, launched by therapeutic AI company X2AI. Described in her bio as a ‘psychological artificial intelligence’ promising ‘mental healthcare for everyone’, Tess is accessible for conversation via a smartphone app. The fact that Tess, renamed Karim for the purpose, was beta-tested in refugee camps in Beirut suggests, however, that automated therapists will be administered to service users without the means to pay for more costly, longer-term therapy; as LittleShift’s press says, ‘Therapy is expensive. Give yourself the tools to feel less anxious.’ If trials prove successful, it’s not hard to imagine bot therapy eventually replacing Cognitive Behavioural Therapy, the treatment most widely prescribed on the NHS, which was adopted as standard by the UK government in 2007 because of its apparent speed and success in getting people back to work. Both LittleShift and Woebot are based on automated CBT, seemingly turning one of the main critiques of this form of therapy, that it offers ‘a manualised, one-size-fits-all method that is imposed “top-down” on the client’ (i.e. that it’s mechanical), into a selling point.
The automation of palliative care, and care of the elderly, brings up similar ethical questions, and at least in the West, there is little appetite for actual robots replacing human carers. In Japan, however, lifelike humanoid robots (like my namesake Erica) are already being used in the care of the elderly, as well as in other roles. In his book The Sound of Culture: Diaspora and Black Technopoetics, Louis Chude-Sokei argues that the greater unease around anthropomorphic robots in the West is directly related to the history of slavery and the treatment of Black people, which has bred an underlying fear of an exploited, enslaved race rising up to take revenge against its oppressors. Drawing on author Karel Capek’s sci-fi play R.U.R. (Rossum’s Universal Robots, 1920), which popularised the word ‘robot’ (from robota, meaning ‘serf labour’ and, figuratively, ‘drudgery’ in his native Czech), he also points out the parallels between the ethical questions AI and robots raise now and the questions once asked of slaves. Do they have souls? Do they feel pain? Can we fall in love with them? And, perhaps most importantly, will they revolt against their creators? While attitudes towards AI can illuminate wider ethical issues, the continued mistreatment of actual humans by other humans remains a more urgent consideration, particularly when decisions of life and death are increasingly outsourced to machines, from military drones to computerised housing benefit assessments.
In an attempt to build a rudimentary ethics and consideration into the bot, the Empathy Deck‘s coding follows rules that allow it to operate autonomously in the public, non-art environment of Twitter. For the relaunch, I worked with Tom to tweak the bot’s original behaviour to make it more responsive to particular users, for example by ensuring that new followers get more responses to begin with, as a heads-up that it isn’t broken or ignoring them. A degree of throttling has also been introduced, so that long-term followers and prolific tweeters will experience a tailing off over time, to prevent over-familiarity fatigue; while the bot pays attention, it also demands tweeters’ empathetic attention, particularly if, as I suggested earlier, the bot is seen as a stand-in for me. Despite being the least immediately visible aspect of the project, the linguistic constraints coded into the Empathy Deck as its ’empathetic framework’ are, to me, among its most important. This framework comprises a ‘kill list’ of names of friends and frenemies, plus current and past employers, which helps avoid awkward revelations: for example, if a boss’s name popped up on one of the cards in the context of a damning anecdote, or a friend’s name exposed a secret disclosed to me in confidence.
More importantly, the kill list also contains a huge collection of offensive words that I never want the bot to come out with. This is more of a precaution, since I can be absolutely sure that I haven’t used any such language in my own diaries, and as the other texts the bot uses are self-helpy, they’re unlikely to be a source of racist, misogynistic, transphobic, homophobic, ableist or otherwise offensive language. But you never know, and rather than risk a freak conjuncture of words, the kill list guards against the bot ‘accidentally’ saying something that I would not be OK with saying myself. To me this is an obvious move, even though a corporation as huge as Microsoft didn’t think it through before letting loose their own AI Twitter bot Tay, which swiftly learnt from its followers to tweet offensive racial slurs ‘with abandon and nonchalance’. As other bot coders, like the guy behind the Appropriate Tributes bot (@godtributes), have already compiled these (horrific) lists and made them freely available, there really isn’t a justifiable reason not to use them.
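A minimal sketch of how such a kill list might screen outgoing text appears below; the list contents and function names are placeholders, and real lists (like the publicly shared ones mentioned above) would be far longer.

```python
# Hypothetical sketch of a kill-list check on outgoing card text.
# KILL_LIST would combine personal names (friends, employers) with
# publicly available slur lists; these entries are placeholders.
KILL_LIST = {"bossname", "friendname", "slur1", "slur2"}

def passes_kill_list(card_text: str) -> bool:
    """Reject any generated card containing a forbidden word or name."""
    return not (set(card_text.lower().split()) & KILL_LIST)

def choose_card_text(candidates: list[str]) -> str | None:
    """Keep drawing candidate texts until one clears the kill list."""
    for text in candidates:
        if passes_kill_list(text):
            return text
    return None  # rather than risk a freak conjuncture, say nothing
```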
Buttressing the kill list of words the Empathy Deck will never say, and duplicating much of its content, is another collection of stop words that it will never respond to. This covers very sensitive words like suicide, plus particular place names, and a comprehensive list of hate speech. Not responding to hate speech seems obvious to me, but as one audience member suggested, a racist epithet may have been used ‘as an example of what not to say’, or in order to express disapproval of, rather than agreement with, racist attitudes. By blanking apparently well-intended tweets like this one, the argument goes, the bot is coded to assume the worst of people. To my mind, though, using racist language as an ‘example’ normalises it and the political attitudes behind it, framing it as ‘just another word’ or ‘just another opinion’ under the guise of ‘free speech’. So I would prefer the bot to assume the worst.
Given the rampant misogyny and racism on Twitter, I would also assume people aren’t using this language in order to disavow it but, more likely, to harass and intimidate the people whom those words target. Responding would not only literally duplicate the word’s circulation and visibility on Twitter but would also implicitly cast the tweet as one worth engaging with. And while I do believe there is real value in challenging people’s attitudes where possible (if they are definitely not trolls), a bot couldn’t be trusted to do this, or to understand the nuances of context; somebody could be discussing the alt-right without being alt-right, but a bot wouldn’t know the difference. This is why responding to retweets, and to linked images or articles, is blocked: even if it were feasible to read the linked content, the bot couldn’t correctly grasp its political leanings, let alone come out with an adequate response.
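In code, this ‘assume the worst’ position amounts to a blunt screen on incoming tweets, something like the sketch below; the stop words shown are placeholders and the tweet structure is simplified for illustration.

```python
# Hypothetical sketch of the incoming-tweet screen: stop words, retweets
# and linked content are all grounds for staying silent.
STOP_WORDS = {"suicide", "rip", "bomb", "disaster"}  # plus hate speech, place names

def should_stay_silent(text: str, is_retweet: bool, has_links: bool) -> bool:
    """Return True if the bot should not respond at all."""
    if is_retweet or has_links:
        return True                      # can't judge linked content, so don't try
    words = set(text.lower().split())
    return bool(words & STOP_WORDS)      # assume the worst about flagged words
```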
The alternative could be to fall prey to the ‘liberal trap’ that Suhail Malik articulated while introducing the ShutdownLD50 discussion at Goldsmiths earlier this year: namely the far right’s formula for duping progressives into supporting them by crying ‘free speech’. As he suggested, this works by their making outrageous statements, for example about immigrants, then appealing to liberals’ assumed commitment to ‘free speech’ as a fundamental value of liberal democracy when pulled up on it. Casting those who correctly call it out as hate speech as the ‘real fascists’, intent on shutting down free speech and by extension democracy (this is usually where book-burning analogies come in), the far right leaves liberals in a quandary, or trap, of being unable to articulate a position that both supports free speech and has a line to draw. By ‘coding a line’ at hate speech, the Empathy Deck at least tries to build in the position that refusing to debate with, or hear the ‘side’ of, racists and fascists is perfectly consistent with a commitment to free speech, and to empathetic relations. Of course, refraining from the use of racial slurs sets the bar extremely low in terms of challenging and ending structural racism, and can only be considered an absolute starting point, which is all the more reason to ensure that this glaringly obvious step is implemented when coding linguistic, text-based bots.
The stop words list also has a function called Tragedy Mode (which is always on) that prevents the bot from responding to subjects considered too sensitive. As mentioned before, these include words like RIP, bomb, suicide and disaster, and specific names and places, which get updated to reflect unfolding events. While this measure attempts to convey respect for genuine suffering, the list of place names also reaffirms the West-centrism of social media, and of news reporting in general; thousands of deaths and tragedies occur worldwide outside of Europe and America without being widely reported or shared (Facebook’s ‘mark yourself safe’ button has been criticised for the same reason). And despite the many inbuilt precautions, the bot also has an emergency stop function, which has been deployed a few times in response to tragic events; however ‘personalised’, automated responses are clearly not appropriate for all situations. Of course, as a bot primed to respond to feelings, it’s arguable that these are exactly the sorts of situations it should be on hand for. But despite the bot’s ability (and partial aim) to contribute to its followers’ wellbeing, as the legalese attached to the project says, this is not meant to be a therapeutic tool, in the sense of an evidence-based approach to supporting a grieving, depressed or anxious person. And while it may seem that I keep pointing to the bot’s limitations, and what it can’t do or can’t be trusted with, to me this is the real work: the totally human decisions which shape any automated agent or AI, and the real humans who will be affected by it as a result.
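Tragedy Mode and the emergency stop could be imagined as a hand-maintained list plus a single switch, roughly as sketched below; again, the names and contents are illustrative rather than the project’s actual code.

```python
# Hypothetical sketch of Tragedy Mode and the emergency stop. The always-on
# tragedy list is extended by hand as events unfold; one flag silences the
# bot entirely.
TRAGEDY_WORDS = {"rip", "bomb", "suicide", "disaster"}
EMERGENCY_STOP = False   # flipped manually after major tragic events

def add_tragedy_terms(*terms: str) -> None:
    """Manually add names and places in response to unfolding events."""
    TRAGEDY_WORDS.update(t.lower() for t in terms)

def bot_may_respond(tweet_text: str) -> bool:
    """Stay silent during an emergency stop or around tragedy-related words."""
    if EMERGENCY_STOP:
        return False
    return not (set(tweet_text.lower().split()) & TRAGEDY_WORDS)
```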
With all these constraints and precautions built in, the bot starts to look a bit like an institution, or even a corporation; like any other entity that operates within a public space that is not exclusively art-related, the Empathy Deck upholds certain ethical and legal standards. Many artists disavow this kind of constraint, using the ‘mirror argument’ that claims art is meant to reflect society, not judge it; this can, however, mean passively reflecting and reinforcing existing attitudes, stereotypes and power relations rather than challenging them. Others champion an Artaud-inspired cruelty, believing that artists should shock the public by transgressing standards of decency. Depending on the artist’s intentions, I have some sympathy for these approaches, which can actively challenge the hegemony of oppressive institutions and assumptions, as in the work of Jean Genet for example. However, an artist’s desire to transgress accepted norms can also shade into replicating hostility towards already vulnerable subjects, especially if it’s the ‘liberal’ art world they aim to shock. While some gallery-goers may be genuinely shocked by white supremacist attitudes being ‘ironically’ reproduced in a gallery context, white supremacists themselves would probably just be happy.
As artists increasingly start experimenting with automation in generative artworks, I think it will become more important to ask questions around accountability and responsibility. These are not new questions: Mallarmé’s dice-throwing, John Cage’s use of the I Ching and even Sigmar Polke’s ‘poured pictures’, such as Séance (1981), all hand over a degree of authorial agency to an external process or automated method. And just as these artists or writers claim authorship over the outcomes, I believe that artists have a responsibility for what results from the scores, protocols and chance-based methods they make use of, whether that be deciding to quote verbatim from an existing cultural source, or coding a bot or live video.
Questions of responsibility are further complicated by the flow of words and images outside the art world and into wider publics via the internet. Who’s responsible for what circulates if it’s an automated agent or generative method that creates the work? It appears to me that despite more artists experimenting with both robotics and generated art forms (Jordan Wolfson, Ian Cheng and Jonas Lund to name but a few, quite often male, as it happens), the ethical, social, political and gender considerations of employing automated agents require more scrutiny (for example as offered by these responses to Wolfson’s work). With this text, I hope I’ve sketched out some of the reasons I think these are important topics for present and future discussion, while outlining some of my own research that working on the Empathy Deck has led to.