Maybe I Am Secretly A Robot
When I wrote my first fiction piece, I took the contents of my brain, distilled elements of things I have thought at different times and places, and formed them into one encounter. An Ur-encounter. And as I have OCD, the thoughts and feelings I distilled were OCD thoughts and feelings, creating an Ur-OCD-experience, an immersion into a bad OCD day. I sent it to a friend, a Sci-Fi enthusiast, who kindly read it and said, “This character is interesting, but they don’t seem to have any idea how to interact with humans. Are they supposed to be a robot?”
I was delighted and amused by this response. In part, “death of the author” is rather different when you are the author. But also I have never felt robotic, and no one has ever called me robotic. I am one of the gesticulators and the fidgeters of the world. If we are in a meeting, you are more likely to be frustrated with me for an excess of expression than a lack of it. Such behaviour is often coded as “outgoing,” gregarious, a mark of being a “people person.” Certainly not a robot. Yet there is a contradiction here. I also have a chronic anxiety condition, a deep-seated internal churn that is, for me, fundamentally connected to the difficulties of interpersonal interactions. I might act like I can deal with people, but I don’t always feel like I can.
Anxiety also does not seem like a natural fit for robots. Entirely mechanical creatures like robots, human-shaped robots like androids, and their mechanical-biological cousins, cyborgs, are typically framed as lacking emotion, as being calculating and rational. Their precursors, premodern automata typically powered by water or clockwork to perform basic movements, seem even less capable of a heightened emotion like anxiety. Robots and their kin are proverbially unemotional.
Anxiety is the opposite, an excess of emotion, as in the panic attack, when the sufferer falls into a state of collapse from sheer force of feeling. Moreover, my anxiety and OCD live inside my brain alongside bipolar, perhaps the stereotypical state of excessive emotion. Put in Sci-Fi terms, when my anxiety and mood disorder are evident to outsiders I might seem like Star Trek: The Next Generation’s empath Deanna Troi. Yet internally, perhaps, I am sometimes more like the android character Data.
What my friend reacted to was my description of awkwardness in the face of the unexplained presumptions of human interactions. Take the scenario of going to someone else’s office for a meeting, and dealing with chairs. Should you immediately sit, on the understanding that you have planned to talk for some time and sitting is practical? Or should you always wait to be invited to sit? Is the first option presumptuous and rude? Is the second stilted and overly formal? Such neurodivergent reactions to the unvoiced rules of human society, bathed as it is in neurotypicality, can seem robotic.
Prior to my friend’s comment, the link between neurodivergence and robots had not occurred to me. It took another friend, pointing out that robots are often ways of making space for neurodivergence in Sci-Fi narratives, for this to click. I remember that autistic people often write about themselves as robots, and see themselves in depictions of robots and cyborgs like Data.
I remember watching Data as a teenager, as he tried to understand odd human customs. I remember liking Data. I remember thinking his human co-workers seemed like a bunch of jerks, snickering at his literal inability to understand, his total removal from their ways of doing things. I have, I remember, something of an affection for robots.
Remembering Data, I can now rewrite the chair scenario. It is only too easy to imagine this as a Star Trek: The Next Generation scene. Data would go to Picard’s office to give a report, and would ask if he should sit or stand, as Data is curious and considerate and wants to understand what the correct, human, action is in this scenario. Picard, grumpy, as he so often is, snaps at Data that it is unimportant, he can stand or sit, it is all the same to Picard. And Data accepts this comment, and proceeds to stand, explaining that standing does not fatigue an android. Picard tersely responds that this is fine, and abruptly moves on to the issue of the report, as if the question of sitting or standing does not matter. Yet it does matter. Picard can dismiss the issue as insignificant because, as a human versed in neurotypicality, he already has a sense of the right answer. Data, a robot with no automatic connection to the unspoken rules central to neurotypicality, needs a direct answer as a way of accessing the neurotypical world. I do not feel myself to be robotic, but some of my internal monologue can easily be reframed from a human’s anxiety to a robot’s need for rules.
I know this, in part, as I teach about robots. Not the new shiny ones. Rather, the premodern automata that have become the subject of more and more scholarly works. In my course on the history of science and technology we talk about fictional ancient-world robots like Talos, a humanoid drone patrolling the shores of Crete in the Argonautica. We learn about “Monkbot,” a sixteenth-century Spanish praying monk powered by clockwork. And we watch a video of the famed eighteenth-century Peacock Clock owned by Catherine the Great, a massive gilt construction that is almost uncannily articulated.
For that matter, I also spend time in that course on Sci-Fi. We watch A Trip to the Moon, perhaps the first Sci-Fi film, from 1902. We read Sultana’s Dream, an early twentieth-century feminist Sci-Fi story. We discuss Black Panther and Afrofuturism. And I try to concisely explain posthumanism, the idea of finding humanity in the non-human and extra-human characters of Sci-Fi worlds, including aliens and robots. I would not consider myself a Sci-Fi buff or a robot expert. But I am aware of them, I read about them, talk about them, teach them. And still it took another person’s reading of my work for me to start thinking of myself in terms of robots.
And this, then, is a little unsettling. Some people self-identify as robotic. Others are aware that they are viewed by others as robots. There is a level of consciousness in both those scenarios. To suddenly discover a robotic part of yourself is something different, bringing with it the shock of unexpected self-discovery. How could I not see the secret robot within myself?
This is a particular oversight on my part, as I am familiar with this trope. In many Sci-Fi worlds, there are secret robots and cyborgs. Battlestar Galactica. Blade Runner. Even Buffy the Vampire Slayer had more than one robot-that-seems-human plotline. In any number of Sci-Fi worlds, specialist hunters track down robots, communities search their members for cyborgs, and complex tests are developed to weed out the real from the pretended human, all out of a fear of robots pretending to be humans, robots externally presenting as humans. The humans in these worlds are terrifyingly protective of their sole claim to humanity; robots-as-humans are interlopers, to be punished or destroyed.
In a number of those worlds, the secret of the robots is secret even from themselves. The classic example is Blade Runner, where the human protagonist must carefully administer a series of intricate tests to distinguish human from replicant, the movie’s version of a cyborg. The protagonist, Rick Deckard, administers such a test to the character of Rachael, eventually concluding that she is a replicant who believes herself to be human, the result of implanted memories. So robots and cyborgs can have a hidden, secret robotic nature that they themselves do not automatically know.
And here is the symbolic value of secret robots to neurodivergent humans living in a neurotypical world. The disorders commonly considered a part of neurodivergence are often diagnosed only after a long gap from the onset of symptoms. My diagnoses, OCD and bipolar, both come with a typical lag of more than a decade after symptoms begin, and mine were of the late kind. Did Blade Runner’s Rachael feel like a pre-diagnosis neurodivergent person? Did she have the sense of feeling disjointed from the world, yet not have a narrative and a framework to understand that difference? In the case of Blade Runner, we get no answer to this, because, as is often the case, we approach the robot-as-neurodivergent character squarely through the human-as-neurotypical gaze.
This is common. When we get neurodivergent characters in fiction, whether explicitly acknowledged as such or clothed as robots, we are often shown them through a neurotypical gaze. Pop culture analysis often makes use of the feminist theory of the male gaze, in which a work of fiction looks on women in a way that frames them primarily or exclusively as sexual objects for cishet male consumption. Cishet men are not the only ones who gaze. The essence of any gaze is the framing of a character in terms of their meaning to a community to which they do not belong, and by which they are stereotyped and restricted. The neurotypical gaze frames neurodivergent characters as plot devices for neurotypical characters: props that complicate their lives, or figures from whom they learn something.
We can see this neurotypical gaze in various pop culture products. The show Atypical, about an autistic teenager, was reviewed negatively by many autistic adults for focusing on the effects of autism on the neurotypical family members, and because the show did not consult autistic people during production. Community created a wonderful neuroatypical character in Abed, but we often approach Abed through the viewpoint of his neurotypical friends, and in such scenes we are presented with a view of Abed as a limitation and a problem in their lives. In contrast, Pure, a show whose main character is diagnosed with OCD, does give us a neuroatypical viewpoint: we are immediately and often uncomfortably located inside the mind of Marnie, a woman with a form of OCD that presents as mental images. Slowly, we are getting more explicitly neuroatypical characters, but only sometimes do we get their perspective on their neurodivergence.
Returning to our robot stand-ins, there is one interesting recent example of an interior view of a robot. For one episode in season two of Star Trek: Discovery, we are taken inside the mind of the cyborg Airiam, up until then a background character. Airiam’s back story is fascinating – born a human but transformed into a cyborg after an accident that otherwise would have killed her – but the visible effects of this back story are minimal. How would it feel to be so substantially transformed? What were the effects of this trauma on Airiam? How can a former human relate to humans? We get frustratingly little of Airiam’s experience. Airiam the human-turned-cyborg and Rachael the replicant-with-human-memories both come tantalisingly and frustratingly close to exploring the experiences of late-diagnosed neuroatypicals through robot stand-ins.
Wondering about the symbolism of secret robots is one path to take. Another is to explore my own version of Masahiro Mori’s uncanny valley. The fictional Blade Runner test for replicants plays with an actual concept, the famous Turing Test, based on the original work of the late great Alan Turing. We will have achieved true Artificial Intelligence, so the simplest explanation goes, when we cannot tell the difference between a real human and an AI from their answers to certain questions.
Except the original Imitation Game proposed by Turing, and the later Turing Tests inspired by it, are not so simple. In Turing’s original conception, the Imitation Game was a competition between an adult human man and a computer, each trying to appear to be an adult human woman to an observer who received only their responses and could not observe them directly. As a woman, I can’t pass or fail the original Imitation Game, because my role is not to compete, but rather to be the model for the game itself.
This aspect of competition has been lost in more recent Turing Tests. The ones we all interact with most frequently are CAPTCHA tests: Completely Automated Public Turing test to tell Computers and Humans Apart. One version has us find all the pictures featuring a bridge. This presents immediate problems to any number of humans. People with visual impairments struggle to pass such tests. The commonly used alternative, giving the subject a jumbled audio clip to transcribe, presumes that someone who cannot complete the visual puzzle must be able to complete the audio one, which is not necessarily the case. These tests are, in the literal sense, inaccessible, as they do not make space for disabled people. Ironically, tests designed to approve humans and block bots can be incredibly hard for both disabled and non-disabled humans to solve. They are then inaccessible in a more colloquial sense, too.
The proliferation of everyday Turing Tests like CAPTCHA leads to rather philosophical questions, like this one from a 2019 article in The Verge: “what is the universal human quality that can be demonstrated to a machine, but that no machine can mimic?” A different question would be: what kind of human do you have in mind? Turing’s Imitation Game had the competitors compete to appear to be a human woman. Did he envision that hypothetical woman as neurotypical, or neuroatypical? I presume that Turing had in mind a neurotypical hypothetical woman, and for that matter a neurotypical human male competitor. I do this not because I see Turing as neurotypical – he may have identified as neuroatypical – but because neurotypicality, like white supremacy and patriarchy, is the all-pervasive water in which we all swim. To escape any of them in our thinking requires a conscious effort. Did Turing make such a conscious decision to escape neurotypicality in his work? Do any of his modern followers?
I wondered, in an earlier draft of this essay, if I would pass a Turing Test. In the case of CAPTCHAs, I do pass them all the time, although sometimes with difficulty. I could not pass the original Imitation Game, as it was never designed for someone like me to actively participate in. My external, high-functioning self can often pass a CAPTCHA, even as it is excluded from the original Imitation Game. But what of the internal anxious voice that seemed potentially robotic to my friend? Would that part of me pass as human? In the most literal sense, it has already failed to do so: stripped of my high-functioning exterior, my internal voice came across as inhuman.
What, then, is the moral to this story? That the line between humans and robots has never really been about robots, but rather has always been about humans, both neurotypical and neurodivergent. Which returns us to a very human problem. Any attempt to drill down to the essence of humanity makes presumptions about what is common and central, and so also what is uncommon and marginal. As has been demonstrated many times by scholars of technology, encoding such presumptions in technology solidifies existing prejudices. My inability to consistently present as a human does not demonstrate a limit to my humanity, but rather a limit to our own conceptions of what really counts as being human.
About the Author
Clare Griffin is a historian of early modern science and medicine living with OCD and bipolar. Originally from the UK, she is now Assistant Professor at Nazarbayev University, Republic of Kazakhstan. You can find more of her work on her website, http://www.claregriffin.org/.