
Algorithmic Historicity: The past is being reprogrammed, and we’re all doom-scrolling through it
Text Maria Dragoi
Back in 2021, after a year of pandemic, I was, like most, fully sluiced in technological fluid, and I started thinking about the problem of engaging with the material world through a social media timeline. Reckoning with the detached voyeur this made me into, I wrote, about seeing something online, that a digital experience could never be truly digested and cemented in time, because it lacked sensorial fixity.
Something else was happening in 2021 – AI image-making models, which were still largely GANs (Generative Adversarial Networks), were becoming available for experimentation through Discord servers and Hugging Face. The robotic gaze was coming into its own, and, shut up in our rooms and houses, we were coaxing it, training it, out of its infancy. Writers like Shumon Basar have cited 2016 as a turning point, one “which upended traditional ideas of individuality and selfhood and exposed previously unimagined applications of personal data”. It was also the year Instagram switched from a chronological timeline to an algorithmic one. Over the past decade, Meta has slowly refined its immaculately tailored black box, aided and abetted by the countless hours of screen time a world under lockdown furnished it with.
After nearly ten years, the effects are inescapable, and the world is ever more shaped by what I can only call ‘algorithmic historicity’.
Brexit and the first Trump victory were macro versions of what would become micro occurrences, slowly re-writing the landscape of how we think about the past. The Cambridge Analytica scandal involved the non-consensual harvesting of data from millions of Facebook users to create targeted political advertisements which partly manufactured the outcome of both votes. History was decided on, paid for, and produced through an algorithm. The victims were under the illusion that their vote was an expression of their individual opinion, but they had been unknowingly corralled into a group which would produce a trend and yield a political result. The weaponised ‘age of you’ is one of techno-capitalism’s most powerful tools, and it doesn’t just affect the future – it warps the past too.
Outside the pub last week, I was talking to Jonathan Saha, a Professor of South Asian History at the University of Durham, about how students’ motivations for studying history have been changing. Saha was telling me about the rise of ‘identity-motivated study’ – students taking a specific course or module because they see themselves reflected in it. For example, young women choosing to study medieval and early modern histories of witchcraft because they themselves are practising witches. Several weeks later, I had a similar conversation with Declan Colquitt from MOTH Studios, who told me about several discussions he had had about how third-person writing is disappearing. Everything is autofiction, autotheory. Everywhere lies the burning desire to see the self reflected back with immediacy. There are obviously positive effects to this trend: being able to reflect lived experience in media widens diversity and threatens normative narratives. However, I think there’s also a less desirable side to this tendency and the way it shapes cultural understandings of empathy.
Whilst having these conversations, I noticed my Instagram feed being flooded by a slew of AI-generated videos of the ilk “You wake up as a British soldier during the Revolutionary War” or “You wake up as a peasant farmer in 1200 CE”.

These videos, generated by accounts like @histairy, @historypovzone, @the_pov_lab, and @salvagedhistoryai, aim to give the viewer a subjective, first-person experience of different historical scenarios. If I want, I can quickly jump between waking up as Columbus ‘discovering’ America, a British prisoner brought to Australia in 1788, a London resident during the Blitz, or a Karen in the USA. A few scrolls later, I wake up as “The disciple who betrayed Jesus in 33 A.D.” (maybe mentioning the name Judas is bad for the algorithm?). Interspersed are also completely imagined scenarios, like waking up in ‘Cheeseland’, or as a cockroach.
The videos are rapid repetitions of a formulaic model with different key prompts subbed in. Toenails adorned with Starbucks logos become covered in mud. Trainers get swapped out for puttees. Every time, the ‘POV’ is the same: you get up out of bed with a yawn and wander dazed through some streets, sit down to eat a meal with rigid hands, and eventually turn in for the night. The timeless universal human experience synthesised in 30 seconds. It’s the apple-juice-ification of encounter, the illusion of choice.
I return to the thought I had in 2021 – if a digital experience cannot be truly digested and cemented in time because of its lack of sensorial fixity, what happens if we ingest parodies of history on an algorithmic timeline? The answer is the paradoxical flattening effect of hyper-individualism. I think of Gilles Châtelet’s To Live and Think Like Pigs. In a fantastic review of the book from 2019, Liam Gillick writes:
“In our endless paralysis anticipating the future, we have forgotten how to speak with the past, too paralysed to look backwards as well as forwards, forever stuck in the yawning flat of the now, which is yesterday and tomorrow also.”
Châtelet wrote his scathing social critique in 1998, but the sentiment was prescient. As we anticipate endless micro-futures, each individual and instantaneous, bracing for each with a new micro-aesthetic or ‘core’ which might carve out a fresh individual identity, we rewire the way we engage with time as well as with each other. This trickles down from how we think to how we speak, predicted by Châtelet in what he felt was the snobbery and faux-engagement of phrases like “it really resonates with me… [and] in my opinion, personally…”. Combine this proleptic inward turn with the generalised content of AI models and you get a ‘nothing burger’. The brand of empathy created by the ‘history’ videos is ‘self-insert’ à la Wattpad fanfic of yesteryear, and just as fictional and fleeting.
Yasmin Rufo wrote an article about these kinds of videos for the BBC a couple of months ago in which she spoke with historians about their opinions – the worry of Elizabeth Frood, a Professor of Egyptology, was that the videos “tend to homogenise complex ancient worlds”. The AI history video is the antithesis of the archive. Black-box algorithms mean there’s no way of knowing what the model used to generate the video was trained on – it could be drawing from genuine medieval manuscripts uploaded to a website like the Internet Archive, or it could be drawing from someone’s fantasy sketches on DeviantArt. Rufo also spoke to a creator of the content, asking him about his processes and motivations. The entire research process was facilitated by AI: ChatGPT, perhaps the most famous large language model (LLM), was used to conduct the preliminary research, which was then fed into a variety of image, animation, and sound-generating models. In this case, the output is an entirely mechanical hallucination, a model dreaming in the dark, stabbing at what the past might have looked like and holding it out for us to inspect through the screen.
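To make the mechanics concrete, here is a minimal sketch of the kind of pipeline Rufo’s interviewee describes – one formulaic template with key prompts subbed in, ‘researched’ by one model and rendered by others. Every function name here is a hypothetical stand-in I’ve invented for illustration; no real API is being called.

```python
# A toy sketch of the AI-history-video pipeline described above.
# Both functions are hypothetical stand-ins, not real APIs: in practice
# llm_research would call a hosted chat model, and generate_video would
# chain image, animation, and sound-generating models together.

def llm_research(topic: str) -> str:
    """Stand-in for asking an LLM to 'research' a historical scenario."""
    return f"Daily-life notes for: {topic}"

def generate_video(script: str, style: str = "first-person POV") -> str:
    """Stand-in for rendering a 30-second video from the LLM's notes."""
    return f"<30s video, {style}, from {script!r}>"

# The formula: one template, different key prompts subbed in.
TEMPLATE = "You wake up as {role} in {year}"

for role, year in [("a peasant farmer", "1200 CE"),
                   ("a British soldier", "1776")]:
    prompt = TEMPLATE.format(role=role, year=year)
    notes = llm_research(prompt)    # model output becomes the 'research'
    print(generate_video(notes))    # which another model renders as 'history'
```

The point of the sketch is how little sits between the template and the feed: the ‘research’ is itself model output, never checked against an archive before it is rendered and posted.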
Helpful here is the metaphor of the ‘stochastic parrot’, coined by the linguist Emily Bender and her co-authors to describe how LLMs – quite apart from their huge environmental cost and potential biases – produce an “illusion of meaning”. The models themselves don’t actually understand what they ‘learn’; they simply regurgitate a generalisation. Generative models are ‘trained’ according to human moderation, which produces a feedback loop. An image model doesn’t know what a dog is, but cautiously it extends a few pixels out for a developer’s approval. The model is either rewarded (the developer decides that the pixels are indeed a dog), or asked to try again. Not only is the final outcome endowed with meaning through interpretation, but the whole process which undergirds the model’s function is a mirror of what an individual has deigned to be ‘real’, or at the very least recognisable. AI content is easy to appropriate because it does away with a concrete other, a tangible creator or source with which to contend.
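Taking that caricature literally, the feedback loop might be sketched like this: a ‘model’ that never learns what a dog is, only what a moderator will approve. This is a toy illustration under the essay’s own image, not real training code; TARGET, the moderator’s private notion of ‘dog’, is an invented stand-in.

```python
import random

# Toy illustration of the human-moderation feedback loop described above.
# The 'model' has no concept of a dog; it only keeps whatever changes a
# 'moderator' approves of, converging on the moderator's private notion.

TARGET = 0.8  # what the moderator privately recognises as 'dog-like'

def moderator_approves(new: float, old: float) -> bool:
    """Stand-in for a human judging whether the new output looks better."""
    return abs(new - TARGET) < abs(old - TARGET)

output = random.random()
for _ in range(1000):
    candidate = output + random.uniform(-0.05, 0.05)  # extend a few pixels
    if moderator_approves(candidate, output):
        output = candidate  # rewarded: the change is kept
    # otherwise: asked to try again from the previous output

print(f"Final output {output:.2f} mirrors the moderator's 'real' ({TARGET})")
```

Notice that the loop never contains a dog, only approvals: the finished output is a mirror of the moderator’s judgement, which is the essay’s point.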
Of course, our feeds are not (yet) completely made up of artificial videos. Interspersed between “brainrot” and “slop” content are videos of real life and real people, and this further problematises the algorithmic feed. The situation becomes especially difficult when videos and images of people enduring an active conflict, or a genocide, become embroiled in the timeline. Primed to digest through the stomach of selfhood, how do we make sense of a very real other, especially if the other is suffering?
Since October 2023, most feeds have reflected the atrocities of the unfolding genocide in Palestine. Traditional media failed Palestinians, so socials became a vital tool for the dissemination of information. Joining the wave started back in 2020 by the BLM movement, grassroots organisation and activism have been made possible by online platforms. However, as social media becomes the primary vehicle for discourse around Palestine, we need to pay attention to the dangerous system which becomes its mediator. The algorithm itself cannot be trusted: Meta’s “aggressive over-moderation” of content means it cannot be used as a reliable source to archive information. Information shared is at the risk of ‘algorithmic historicity’, a system-embedded memory which warps the documentation of unfolding history. As users, we are also at risk – our ability to digest, understand, and empathise is shaped by the system too. Take seeing a confronting video of the genocide: initially, the content shocks us out of the fugue state of the scroll, but our trained response is to extend an empathy mediated through a construction of our own identity. What media we share and engage with is, even when well-intentioned and effective, a form of signalling. Just like we ascribe meaning to AI content, we project our trained and crafted micro-identities onto the lives of others, affirming their agency through our own.
Uploaded to a parade of miscellanea, the power of the people in ‘real’ videos becomes contingent on a series of cryptic processes. Diffused into a sprawling digital web, they loop their content ad infinitum, visible only at the mercy of the individual user – the ‘I’ which ‘engages’. This has become part of the tactics for disseminating information, especially media related to the ongoing genocide in Palestine. Some Palestinians camouflage their videos behind three-second opening clips of a viral video, cutting to the intended content once the viewer has been lured in by the attention hook. Others make ‘day in the life’ videos in a bid to appeal to “the attention economy and the algorithm [which] want things that are entertaining”. Agency must contort itself to fit the confines of the timeline, and the historical documentation which unfolds in the present must mediate itself or else fall to the whims of a problematic platform and its black-box algorithms. This appeal to a robotic gaze, necessitated by the structural systems of world hegemons consistently failing Palestinians, sets a dangerous precedent for how information about a genocide gets created and remembered.
Sometimes, even when real content exists and demands immediacy, AI content becomes its mimic and supersedes it. Nearly a year ago, the ‘All Eyes on Rafah’ image went viral, with over 46 million shares on Instagram. The image presented “an aerial view of a camp set out in orderly rows of tents, nestled between what look like snowy peaks. In the middle, some lighter-coloured tents are arranged to spell out [the slogan]”. The tidiness of the image was a total contrast to the reality in Rafah, where an Israeli attack on a tent camp killed more than 45 displaced Palestinians.

There’s an unsettling flattening of time and space happening; an empathic drive curtailed by a desensitisation born of technological mediation. What does it mean to consume history as it happens? What does it mean to manufacture history for consumption? When I started thinking about these ideas back in 2021, I coined the word ‘apotopic’, meaning literally ‘away from skin’, to define the non-space of the algorithmic timeline. It’s this distance – from the sweat and stink of the skin of the real – which allows imagined histories to proliferate on social media. It’s this distance which represses genuine documentation of the real and the now. Any conflict, no matter how violent, can be re-packaged and re-imagined. I affirm my identity through my appropriation of others’. POV: I wake up as Judas.