“Technology can only take us so far,” said anyone who has ever lacked imagination. The depths of human creativity and resourcefulness should not be underestimated.
The use of artificial intelligence to recreate the quality of human presence and interaction is further along than many of us would expect. Everyone talks about how robots will replace humans in menial jobs, but the premise of that idea is that robots can do certain tasks better than humans precisely because they are unburdened by a life outside of productivity: their sole purpose is to execute a function or complete a set of tasks, and (barring any mechanical failure) they are never more or less than the sum of their parts. In other words, they have no distractions from what they have been programmed to do. So why would a capitalist want to create a carbon copy of a human, with all the nuances, imperfections, and frustrating unpredictabilities that come along with it?
One: marketing.
Virtual influencers are characters created through artificial intelligence to look like a unique human and act online the same way a human influencer would. Is it a symptom of how vapid the profession is that an influencer can be replaced by a computer-generated simulacrum of a person that accomplishes the same purpose? Perhaps.
The first mainstream example is Miquela Sousa, known online as lilmiquela. “She” is a Brazilian-American forever-nineteen fashion icon and celebrity activist from Unremarkable™, California. With three million followers and a slew of fashion endorsements, she is doing everything we once thought was sacred ground, reserved for the effortlessly attractive and endlessly airbrushed personalities that grace our social media feeds. What, even the creative jobs aren’t safe?
Curation is replicable, it seems. Why have a real face when an agency can do all the work anyway? Virtual influencers have a built-in audience who tune in for the novelty of a fake person with carefully constructed features and a personality just quirky enough to seem real at first glance. Whether people are paying attention out of morbid curiosity or genuine aspiration, their interest will likely wane the same way it usually does for most living internet celebrities; but in the meantime, virtual influencers require no salary, put up no fights, and are uncomplicated by the vagaries of life in the real world. They represent a clean slate that can espouse any programmable opinion, with no troubling Twitter history to result in a years-late cancellation. If a marketing misdeed leads to criticism, no real people are harmed in the public eye.
Marketing agencies are adept at manufacturing a profile that comes across as authentically human. Real people are flawed, emotional, sometimes inconsistent — and these traits are tempered by the natural desire to portray oneself on social media as a more perfect person. Generation Z in particular seems to have mastered the art of walking the line between being #real and not revealing too much; and if teenagers can do it, certainly an entire agency (even one composed of millennials) can do it too. Lil Miquela is coquettish, tactfully candid, image-obsessed to the point of parody — in other words, a cliché. Despite her adherence to the established rules of online conduct, simply knowing that she is a character introduces a feeling of transgression in the minds of many followers. There is something jarring about an account that is explicit about pretending to be someone who doesn’t actually exist; on this topic, honesty is both tired (we can blame stale platitudes about not comparing ourselves to our Instagram feeds) and disconcerting. Something is awry, eerie … uncanny.
The uncanny valley describes the unsettling territory between the obviously artificial and the convincingly human: the creepy feeling elicited by entities that are human enough but just shy of being completely convincing. Current renditions of virtual influencers are almost astoundingly realistic, so we feel less violated by their portrayal; but that doesn’t mean the phenomenon is harmless fun. Virtual people are a spillover from traditional escapism: instead of real people entering a fake world, fakeness permeates real life. It’s less of a leak and more of an infringement: at least one real person facilitated the escape of a fictive entity from its original medium and solidified its presence in reality through manufactured interaction. Miquela has interviewed and been interviewed, collaborated with real content creators, and been featured on magazine covers. She’s even dated a real boy — step aside, Pinocchio.
And fallout from her interactions affects only the real people (and companies) that work with her: when she was pictured kissing supermodel Bella Hadid for a Calvin Klein ad, the company apologized for queer-baiting. Miquela losing followers and getting hate comments was trivial compared to Hadid’s experience, especially since the controversy brought more attention to the lilmiquela empire. For virtual influencers, there truly is no such thing as bad publicity — at worst, scrap the account and create a new person.
Speaking of creating a new person: these influencers aren’t caricatures of real people. At best they’re an amalgamation of ordinary and quirky personality traits, or even truly original characters (if there is such a thing). They’re simulacra (sing. simulacrum): representations of something real that, by their very nature, distort our understanding in some way relative to the thing itself. This is not inherently bad; paintings and photographs are simulacra of people, too. But an effigy that comes so close to the real thing that it fools us into thinking it is the real thing may have psychological impacts. Imagine reaching into a McDonald’s ad and pulling out that perfect, mouth-watering burger, only to blink and find yourself holding the dull, squashed, obviously inferior real thing (analogy credit to Aleks Eror).
The presence of these obvious simulacra in our lives, as if they were more than just digital creations, blurs the lines of reality. We may know in our heads that Miquela is just a three-dimensional “anime character,” so to speak; but that doesn’t mean we give her sponsorships and advertisements any less credence. She serves essentially the same function for fans as a human influencer — except that we know the agency behind the virtual influencer was paid to post a picture with their new super-soft award-winning Casper mattress; with human influencers, we can at least pretend we are looking at an authentic endorsement until we see the #ad.
Since virtual influencers serve the same purpose as traditional (i.e., human; I just got tired of typing “human” because it made me feel weird) influencers, it’s unsurprising that their followers feel a human connection to them. In this phenomenon of parasocial relationships, fans feel a unidirectional bond with public figures; they feel as if influencers, who do not even know they exist, are their friends on a personal level. Ardent fans of virtual influencers take this bond to the extreme, because the object of their affections is not even an actual human being with any attachment to the posts they have made.
A virtual influencer is either an indiscriminate megaphone, a medium through which to advertise different products for whoever pays well, or a brand mascot, a mechanism by which a single brand (usually in fashion, but just wait for Progressive Insurance to digitize Flo) is promoted both on social media and in traditional advertising. An example of the latter is Balmain’s trio of digital influencers: Margot, Shudu, and Zhi. Margot is white (seemingly French, based on her name), Shudu is black (and considered the world’s first digital supermodel), and Zhi is East Asian. These racial characterizations are never explicitly stated, but they’re fairly obvious; and that leads to an ethical question.
Modeling is a famously exclusionary and discriminatory industry, long dominated by European-American beauty standards. Even in modern times, people all over the world use skin bleaching, “whitening” medications, laser treatments, and chemical peels to achieve lighter skin; this attitude is a byproduct of colonialism, under which the rulers were light-skinned Europeans unburdened by work in the fields. Only in recent decades have different standards of beauty emerged from their origins in specific ethnic groups into the mainstream, resulting in Western obsessions with tanning, cosmetic surgeries for a more diverse range of features, and even blackfishing (see: Rachel Dolezal and the Kardashians). Tall, slim, cisgender, and fair-skinned people are no longer the only ones considered worthy to walk a runway or model clothes for online shopping.
But people with historically marginalized identities often use their platforms to speak about social issues; and most large corporate brands, given the choice, would prefer not to have their name associated with anything even remotely controversial, lest they alienate customers. By creating virtual models who check the diversity box but never use their fame to “pick sides,” brands eliminate the need for real people and real voices in the modeling industry. The true benefit of diversity is varied perspectives; in an increasingly polarized and reactive culture, however, it’s more profitable for brands to disregard those perspectives. And for the record, Balmain’s virtual models, commissioned by creative director Olivier Rousteing, are all slender and symmetrical with sharp cheekbones and defined jawlines — nothing particularly groundbreaking in terms of aesthetics.
In fact, virtual models seem like the logical endpoint of unattainable beauty standards: when even real people with access to plastic surgery, non-invasive cosmetic treatments, expensive creams and masks, and social media filters are not beautiful enough, why not just create an artificially beautiful person? Bombarding viewers with perfectly airbrushed selfies free of such unforgivable flaws as pimples or visible pores still leaves fodder for criticism — a real person is being edited because they are not enough on their own. But with virtual models (and influencers as a whole), the entire façade is fake, conjured from nothing into perfection. AI influencers are designed from scratch with flawless appearances, eliminating the need for filters or editing and saving brands lots of time, headache, and money in the process. It’s certainly possible that we will be less susceptible to comparison when looking at a non-real person instead of a Kardashian; but that may understate the powerful effect of virtual people who, for all intents and purposes, pass as actual people.
Combining AI with increasingly competent (and “creative”) algorithmic music generators means that singers and musicians aren’t safe, either. Earlier this month, Korean record label Deep Studio Entertainment (which began, as you might imagine, as a deepfake technology developer) debuted the band Superkind, which is composed of four real people and one virtual person called Saejin. He does not appear to use artificial vocals (yet) like sensational moe anthropomorphism Hatsune Miku, nor is he a virtual avatar mirroring a real person like in the Aespa metaverse; instead, he’s a virtual star with his own unique look (the ideal male K-pop appearance with easily changeable hair colors) and presumably, personality. Virtual avatars in music aren’t new (think Gorillaz), nor are virtual musicians (both Miquela and Korean counterpart Rozy added “musician” to their repertoire after starting as influencers), but the idea of an unaging, undying artist is still perplexing. Do machine-made creations cheapen the meaning of art? Are we as a society losing something that is a fundamental part of our identity as humans?
“They’re taking everyone’s jobs!”
Hopefully not. We (perhaps naïvely) believe that real people still have an edge over digital characters. But inevitably, jobs in social media influencing, music, fashion, acting, and even porn will be partially replaced by virtual characters. How long until Miquela kissing Bella becomes them [redacted]? These replacements are already happening, accelerated by the digital revolution that the COVID-19 pandemic necessitated. No longer do you have to be in Paris for Paris Fashion Week (or, for that matter, at a Balmain store to see the Balmain inventory: Rousteing has created a virtual showroom accessible with a VR headset).
All of which is to say: there’s money to be made and difficult social conversations to be avoided in using virtual influencers that mimic humans in the most marketable ways and are otherwise a blank canvas (and teleprompter) to be programmed as needed.
So now we return to the question at hand: why else would a capitalist with dreams of maximum productivity want to be able to replicate human appearance and behavior with technology?
Two: political and social manipulation.
If we can create original online personalities realistic enough to fool the masses, nothing stops us from using the same tools to impersonate real people; these falsified videos are known as deepfakes. They use autoencoders (a type of neural network that learns compact representations of faces) and generative adversarial networks (in which a deepfake generator and a deepfake detector work against each other, constantly improving the quality of the falsification) to create videos showing real people in made-up situations. Take, for instance, the countless Donald Trump and Hillary Clinton deepfakes that circulated on Reddit boards and Facebook feeds during the 2016 American election. Or the widely reported-on Tom Cruise deepfakes created for YouTube and TikTok featuring his younger doppelgänger Miles Fisher (who engaged in such adorable curiosities as playing golf in a backyard and finding bubblegum in a lollipop).
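To make that adversarial loop concrete, here is a minimal sketch in PyTorch. It trains on toy two-dimensional points rather than video, and every architecture, learning rate, and distribution below is an illustrative assumption, not a description of any real deepfake pipeline:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake "samples" (stand-ins for fake frames).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator (the "detector"): scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Toy "real" distribution: points clustered around (2, 2).
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(2000):
    # Train the detector: push real samples toward 1, fakes toward 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: produce fakes the detector scores as real (1).
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

As the two networks improve in lockstep, the generator's output drifts toward the real distribution; scale the same arms race up to faces and video frames, and you get forgeries that are progressively harder to detect.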
Why do people do this? Simple: manipulation of the narrative.
The opportunities to spread fake news and disinformation are essentially limitless. Some can be harmless, like the Tom Cruise videos; but recall the vitriol people now have for whichever figurehead represents their enemy political party. Now imagine they have the power to fabricate lies about these people and convince others that the lies are real. Long gone are the days of captioning photos of famous people with invented quotes: now we can mislead people with videos of them “actually” speaking. The potential to wreak havoc isn’t limited to the political arena, either — researchers estimate that over 90% of deepfake videos on the Internet are nonconsensual pornography in which the face of a famous woman (usually an actress) is swapped onto a porn actress’s body in a real video. The women victimized this way are targeted for either fantasy fulfillment or revenge porn; in both cases, they feel violated and fear for their reputations.
Deepfakes in politics are the most well-known application of this new technology (despite the numbers — see the previous paragraph). Because voters are increasingly polarized into liberal or conservative camps, some element of confirmation bias is often at play. As voters increasingly distrust the media, they become less and less willing to challenge their own ideology; any reporting that contradicts existing beliefs is dismissed as fake. Even if consumers don’t fall for deepfakes, polluting the discourse in this way reinforces the perception that the media is unreliable, creating a vicious cycle of further polarization.
At first glance, porn deepfakes may seem to represent a less significant upheaval: famous women are already used to dealing with plenty of online harassment, and stories of private nude photos being leaked are not uncommon. The key difference with deepfakes is a loss of control not only over who can access private images, but over the content itself — the scenes depicted in deepfakes are often extremely violent and degrading, exposing women to reputational damage even harsher than that of leaked nudes. In an attempt to preserve relationships with friends, family, and employers and avoid further harassment, victims of deepfakes often retreat from their online presence, creating a silencing effect that may reduce awareness of the problem. Nor is the issue limited to celebrities: there have been several documented cases of deepfakes of everyday people created as revenge porn, an issue for which most jurisdictions’ lackluster and outdated legal protections (largely reliant on laws against child sexual abuse and child pornography) provide very little recourse.
There are even more applications of this technology: deepfakes have bolstered social media sockpuppet accounts, allowing them to espouse controversial views or popularize conspiracies without fear. In the corporate landscape, the Federal Bureau of Investigation has warned of people from other countries (often North Korea) interviewing for remote American jobs using deepfakes and easily searchable personally identifying information. Motives include access to foreign currency, obtaining security clearances for corporate espionage, or even gaining control over critical American infrastructure, with the ability to bring it down at any time. Deepfakes can also be used for bullying and blackmail, sometimes to financial ends: one can imagine a faked video of a corporate officer committing a heinous act being released just before an IPO, doing irreversible damage to the stock price before the video can be debunked. Deepfakes pose pitfalls for payment technology, too; ostensibly, some forms of two-factor authentication are vulnerable. With stories like that of German Defense Minister Ursula von der Leyen, whose fingerprints were reverse-engineered from high-resolution photographs and 3D-printed onto a makeshift hand, the possibilities are endless (and sobering).
The psychological impacts of nonconsensual deepfakes in either category are intense, to say the least. In the political sphere, they further distrust of the media and contribute to an already-forming post-truth society. Osmosis between the realms of truth and falsehood is a dangerous phenomenon — little did we realize that propaganda was only the first step. Deepfakes also fuel otherism: with political polarization and American jobs at risk, it’s understandable that people would prefer to remain in their own echo chambers and “safe spaces.” The psychological impacts on women deepfaked into porn have been discussed already, but it’s not difficult to imagine the threat to physical safety as well; actress Jodie Foster has been stalked multiple times solely for her engaging cinematic performances.
Earlier examples might have given the impression that deepfakes are inherently nefarious, useful only for destabilizing the political sphere and harassing women. No technology is inherently good or bad, however, and increased popularity has been accompanied by more sympathetic use cases. The idea of making immortal “clones” of deceased individuals using artificial intelligence has long been a theme of futuristic media — perhaps the most famous example is the protagonist of the Black Mirror episode Be Right Back, who obtains progressively more realistic (and unnerving) simulations of her boyfriend after he is tragically killed in a car accident. Reality has caught up to fiction in this area — in 2020, a Korean woman made headlines by using deepfakes to have a “tearful VR reunion” with her seven-year-old daughter, who had recently died of a blood disease. While relatives’ desire to see their deceased loved ones again is clearly understandable, it’s hard to imagine this being compatible with healthy grieving. Is speaking with an AI simulation of a deceased person not simply escapism, preventing the bereaved from ever finding true closure and moving on? The comfort initially provided by deepfakes may evolve into a form of eternal denial.
Some more innocuous use cases of deepfakes are in art, acting, comedy, and advertising, where creators are open about the use of technology to simulate real people. The Salvador Dalí Museum created an exhibition called Dalí Lives to allow visitors to engage with and hear from the artist himself. A Star Wars fan on YouTube created a deepfake video imprinting a young Harrison Ford’s face onto Alden Ehrenreich in Solo: A Star Wars Story; before that, Rogue One: A Star Wars Story digitally recreated Carrie Fisher and Peter Cushing to reprise the roles of Princess Leia and Grand Moff Tarkin. Perhaps a future SNL sketch will use a Donald Trump deepfake rather than bringing on an (admittedly entertaining) Alec Baldwin. Cadbury worked with Bollywood’s Shah Rukh Khan to create free deepfaked ads for Indian mom-and-pop shops affected by the pandemic. Inevitably, someone will soon sell deepfaked NFTs, which I guess is technically okay (musician Holly Herndon has already announced that people can create NFTs of songs using her deepfaked voice, minted on her DAO and sold with a fifty-fifty revenue share). But even these examples are critiqued for promoting the controversial technology.
The ethical implications of some deepfake applications are fairly clear-cut: fake news and nonconsensual porn are obviously abhorrent. However, the technology also presents more interesting ethical and philosophical dilemmas. Imagine, for example, that a deepfake video of a politician apparently confessing to corruption circulates widely, is believed to be authentic by voters, and ends the politician’s career. It has had the same impact on the world as if the politician had actually made the statements in question; how could we possibly say with full confidence that the video is fake? Widespread deepfake adoption has the potential to sow discord and incite chaos like never before. But policing it is a sticky situation: some call deepfakes parodies, or decry any limitations on free speech (of course, many of these people don’t properly understand the First Amendment, thinking that Facebook marking their anti-vax posts with a disclaimer is a constitutional violation).
Practical policing is…complicated. Technology has been improving for as long as it has existed, and the last few decades have moved exceedingly fast. The pandemic only furthered our addiction to and reliance on it. Earlier, we wrote about generative adversarial networks, in which a deepfake generator creates a video and a deepfake detector tries to determine whether it is real: this is a zero-sum game whose result is a generator that keeps improving until it fools the detector. The deepfake detectors currently used by social media platforms are not infallible either. Check out thispersondoesnotexist.com to understand just how good the technology is at conjuring up a “new” person, much less imitating an existing one — out of thirty site refreshes, only one result looked a bit wonky. MIT is even offering a free media literacy class to give people the critical analysis skills to combat the threat of misinformation. As for real policing efforts so far: Reddit has banned r/deepfakes (but still allows non-porn variations to exist on the website); California governor Gavin Newsom signed AB 730, criminalizing the distribution (within sixty days of an election) of faked audio or video damaging to political candidates, with exceptions for parody and satire; and China has banned deepfakes without disclosure of their origin altogether. The first (and thus far, only) Congressional hearing on deepfakes was held in 2019. To paraphrase the popular saying, the wheels of legislation move slowly — perhaps we can’t expect the government to police AI when even competing technologies are vulnerable. Some have suggested the use of blockchain technology (of course they have) as a tool to verify the authenticity of videos and prevent the dissemination of undisclosed, nonconsensual deepfakes; this would be an enormous and complicated undertaking.
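To make the verification idea concrete, here is a toy sketch of hash-based content authentication in Python. A publisher registers a fingerprint of a video in an append-only ledger (a plain list standing in for a blockchain), and anyone can later check a circulating file against it; every name and detail here is an illustrative assumption, not a description of any real system:

```python
import hashlib

ledger = []  # append-only record of (publisher, content hash) pairs

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw video bytes."""
    return hashlib.sha256(data).hexdigest()

def register(publisher: str, video: bytes) -> None:
    """Publisher commits to a video at publication time."""
    ledger.append((publisher, fingerprint(video)))

def verify(publisher: str, video: bytes) -> bool:
    """True only if this exact file was registered by this publisher."""
    return (publisher, fingerprint(video)) in ledger

# Hypothetical example: the byte strings are placeholders, not real footage.
original = b"raw bytes of the original interview footage"
tampered = b"raw bytes of a deepfaked re-edit of the footage"

register("news-network", original)
print(verify("news-network", original))  # True: matches the ledger
print(verify("news-network", tampered))  # False: any edit changes the hash
```

The hard parts are everything this sketch omits: re-encoding or compressing a video changes its hash, the ledger itself must be tamper-proof and universally consulted, and registration proves provenance rather than truthfulness, which is why the undertaking would be so enormous and complicated.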
We return to the question: why? Why pursue artificial intelligence applications that replicate humans rather than perfect them?
Because there’s money to be made (or, at least for now, to be thrown away). Venture capital support for artificial influencers is based on the marketing potential, while in the deepfake arena, the business of recreating people who have passed away can net a lot of money from desperate, grieving people (see again: that Black Mirror episode). There are also financial motivations for disrupting politics and creating fake porn — as we’ve read a few times since beginning research on this topic, “where there’s innovation, there’s masturbation” — though this money is likely kept out of the public eye to a large extent. At one point we may have believed that the whole phenomenon was built on hype or morbid curiosity, but people don’t seem to have a widespread problem with curated authenticity or intentional deception (at least, not enough of a problem to stop consuming Miquela’s Instagram feed or Miles Fisher’s Tom Cruise TikToks).
The phenomena of artificial influencers and deepfakes have uprooted the online social media and creative industries. They have incited debates about diversity and the extent to which brands will go to be represented by a perfect, uncomplicated agent. They have dramatically upped the misinformation and fake news game in the political arena, and exposed vulnerable people to new forms of harassment and bullying. They have enabled new kinds of national security threats and espionage. And they’re just getting started — the capabilities of AI technologies and the funding behind them are only growing. We’re in for an interesting future.