Holly Herndon’s Infinite Art

The artist and musician uses machine learning to make strange, playful work. She also advocates for artists’ autonomy in a world shaped by A.I.
Herndon believes that generative A.I. will be hugely consequential to creative fields: “How do you build a new economy around this where people aren’t totally fucked?” Photograph by Carla Rossi for The New Yorker

Last fall, the artist and musician Holly Herndon visited Torreciudad, a shrine to the Virgin Mary associated with the controversial Catholic group Opus Dei, in Aragón, Spain. The sanctuary, built in the nineteen-seventies, sits on a cliff overlooking an inviting blue reservoir, in a remote area just south of the Pyrenees. Herndon and her husband, Mathew Dryhurst, had been on a short vacation in the mountains nearby. They were particularly taken with an exhibit of Virgin Mary iconography from around the world: a faceless, abstract stone carving from Cameroon; a pale, blue-eyed statuette from Ecuador; a Black Mary from Senegal, dressed in an ornate gown of blue and gold. Moving from art work to art work, the couple discussed Mary’s “embedding.” In machine learning, embeddings distill data down to concepts. They are what enable generative A.I. systems to process prompts such as “Cubist painting of a tabby cat, wearing a hot-dog costume and eating a hot dog” or “country-club application, as a sestina.” At Torreciudad, the sculptures and paintings on display all had aesthetic and material differences, yet there was something consistent—ineffable but essential—that made the art works legible depictions of the same figure.
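
For readers who want a more concrete picture of an embedding, a minimal sketch follows, using the open-source sentence-transformers library; the model name and the example texts are assumptions for illustration, not anything Herndon and Dryhurst used. The point is only that different renderings of the same concept land near one another in the model’s vector space, while unrelated material lands farther away.

```python
# A minimal sketch of the "embedding" idea, using the open-source
# sentence-transformers library (an assumed stand-in, not the artists'
# actual tools). Different descriptions of the same figure should sit
# closer together in the vector space than an unrelated prompt.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "a faceless abstract stone carving of the Virgin Mary",
    "a pale, blue-eyed statuette of the Madonna holding her infant son",
    "a country-club application, written as a sestina",
]
embeddings = model.encode(texts)  # each text becomes a fixed-length vector

# Cosine similarity: the two Marian descriptions should score higher with
# each other than either does with the unrelated prompt.
print(util.cos_sim(embeddings[0], embeddings[1]))
print(util.cos_sim(embeddings[0], embeddings[2]))
```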

Around this time, Herndon and Dryhurst, who is also her primary collaborator, had been experimenting with the embedding of “Holly Herndon” in the data used to “train” text-to-image generators such as Dall-E and Stable Diffusion. Herndon, who is forty-three, has sea-glass-blue eyes, a round, pale face, and persimmon-colored hair; she tends to style it with bangs, a short bob in front, and a long braid in the back. The embedding of the Virgin Mary might be reduced to something involving her posture, gaze, and infant son; Herndon’s embedding is tied to her distinctive look. In 2021, she and Dryhurst began working on a series of computer-generated images, grouped under the title “CLASSIFIED,” that explored her embedding in an artificial neural network created by OpenAI. Though some of the art works are unsettling portraits of Herndonesque women rendered in the style of an oil painting, many are more playful: “x | o 40,” which used the prompt “A building that looks like Holly Herndon,” shows a stately white structure with brick-red bangs, two porthole windows, and pursed pink lips; “x | o 41” depicts a figure with buggy blue eyes and a red braid which could be fan art for “The Simpsons.” “My identity in models is determined by aggregate cliches scraped from the web,” Herndon recently tweeted. “I’m mostly a haircut!”

Herndon is perhaps best known for her experimental electronic music, and for an art practice that spans the art world, academia, and the tech industry. She has performed and shown work at the Guggenheim, the Pompidou, and the Kunstverein in Hamburg; next year, she and Dryhurst have an exhibition at the Serpentine, in London, and will be part of a “prestigious group show” this spring in New York. (When asked if the group show was the kind that happened only biennially, Herndon declined to elaborate.) In recent years, she and Dryhurst have also fought for artists’ self-determination in the era of A.I. “I always felt they were so far ahead of everybody else,” Hans Ulrich Obrist, the artistic director of the Serpentine, said. “They really think about what it does to the whole ecosystem: the artistic, the technical, the social, the economic aspects of these technologies.”

Since 2020, Herndon and Dryhurst have been refining Holly+, a machine-learning model trained on Herndon’s voice. They refer to the model as a digital twin and a “vocal deepfake,” and see it as an experiment in “decentralizing control” of Herndon’s public identity. “I’ve never really fetishized my voice,” Herndon told me. “I always thought my voice was an input, like a signal input into a laptop.” Holly+ can use a timbre-transfer machine-learning model to translate any audio file—a chorus, a tuba, a screeching train—into Herndon’s voice. It can also be used in real time or be fed a score and lyrics: last year, Herndon gave a TED talk that opened with a recording of Holly+ singing an arrangement by Maria Arnal, a Catalan musician. It was a performance Herndon could never do. “These beautiful, melismatic runs—you have to study that stuff for years,” she said. (She also does not speak Catalan.) Several months later, Herndon released a track in which Holly+ covers “Jolene,” by Dolly Parton. It’s glitchy, with oddly placed breaths and slurred phrases, and is weirdly compelling. A free version of Holly+ is available online. When I uploaded a clip of sea lions barking, it returned a grunting, stuttering, portentous motet.
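
The timbre-transfer process can be pictured, very roughly, as a pipeline that takes any recording in and renders it back out in a target voice. The sketch below is hypothetical: the class, checkpoint, and file names are invented placeholders rather than Holly+’s actual code, and the stub simply passes audio through unchanged so the example stays runnable.

```python
# A hypothetical sketch of a timbre-transfer pipeline like the one
# described above. TimbreTransferModel and the file names are invented
# placeholders, not the real Holly+ implementation.
import soundfile as sf

class TimbreTransferModel:
    """Stand-in for a trained voice model such as Holly+ (hypothetical)."""
    def convert(self, audio, sample_rate):
        # A real model would re-render the input in the target singer's
        # timbre while keeping its pitch and phrasing; this stub returns
        # the audio unchanged so the sketch can run end to end.
        return audio

source_audio, sample_rate = sf.read("sea_lions_barking.wav")  # any input sound
model = TimbreTransferModel()
converted = model.convert(source_audio, sample_rate)
sf.write("sea_lions_as_holly.wav", converted, sample_rate)
```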

Holly+ represents the future that Herndon and Dryhurst anticipate for music, art, and literature: a world of “infinite media,” in which anyone can adjust, adapt, or iterate on the work, talents, and traits of others. The two refer to the process of generating new media this way as “spawning”—an act they distinguish from well-known forms of allusion such as sampling, pastiche, collage, and homage. When a d.j. samples an audio clip from another artist, the clip is copied, then recontextualized. Neural networks, on the other hand, don’t reproduce their training data but represent its internal logic—something like a style, a mood, or a vibe. Herndon uses the phrase “identity play”—a pun of sorts on “I.P.”—to describe the act of allowing other people to use her voice. “What if people were performing through me, on tour?” she said. “Kind of like body swapping, or identity swapping. I think that sounds exciting.” Decisions about what to do with Holly+ are made by a decentralized autonomous organization—a sort of coöperative group of digital “stewards.” (Herndon retains a veto.) The musician Caroline Polachek told me, “I see it as an inevitability that voice modelling will be outside of artists’ control, that people will eventually be able to use my voice with or without my consent. Holly specifically has woken up a lot of the art and music community to this window of time we have, to determine what we want to do with that.”

In conversation, Dryhurst described Holly+ as an “abstracted fork” of Herndon’s identity—in open-source-software development, forking is the act of copying source code and then changing it. Herndon alternated between calling it “my voice” and “the voice.” “It’s not like you don’t have a relationship with that version of you,” she told me. “It’s still an emotional connection, but it’s not you.” Public identities already take on lives of their own, the couple noted; most of the publicly available images of Herndon, which “CLASSIFIED” drew from, are press photos. Years ago, while experimenting with machine-learning software, she and Dryhurst realized that all existing media could be used to train A.I. systems, an idea that now informs their art practice. “As soon as something is machine-legible, it’s part of a training canon,” Herndon told me. “And that’s very radicalizing.”

We were sitting outside their bedroom in Berlin, in a white-walled apartment so spacious, high-ceilinged, and affordable that it felt almost like a slight. Their infant son, Link, played quietly with a babysitter in the living room. A large print by the artist Trevor Paglen, titled “Tornado (Corpus: Spheres of Hell) Adversarially Evolved Hallucination,” hung over the couch; it depicted a neural network’s concept of a tornado. In the bedroom, previously Herndon’s music studio, large white acoustic panels hung from the walls and ceiling, framing a low, unmade bed and a small bookcase—Mark Fisher, Michel Houellebecq, “Baby-Led Weaning.” A towering dieffenbachia plant, inherited from an elderly neighbor who had recently died, slouched against the doorframe. Dryhurst, who is thirty-nine, bald, and bespectacled, offered to demonstrate Holly+. “See if it sounds the same with speech,” Herndon, who was wearing white overalls, instructed. Dryhurst picked up a microphone, and chatted for a moment; Holly+, processing his voice—he has an English accent—sounded drunk and a little congested. “It’s optimized for singing,” Herndon said, laughing. Dryhurst sang a sequence of notes. After a tiny lag, Holly+ began to harmonize with him, and then the real Herndon joined in. The choral effect was pleasant, if chaotic. “She’s definitely a better singer than I am,” Herndon said.

That month, a friend of Herndon’s, the artist Marianna Simnett, was putting on a “flute opera,” “GORGON,” featuring a character voiced by Holly+. “It’s a dog that is me—it’s a Holly dog,” Herndon said. (The dog, played by a male flutist, wore a braided, stepped red wig.) She pulled up an audio file on her computer and hit Play. “I’ll tell you a story,” Holly+ sang—a little stumbling, but sensational for a dog. Simnett told me that she saw Holly+ as an optimistic gesture—a “twin” that could be “liberated from the human Holly”—but she suspected that, for Herndon, the process also had a certain melancholy. “There’s something about that splitting, of the human and the double, that I find very interesting, and gothic in a way,” Simnett said.

The clip from “GORGON” sounded, to untrained ears, like a fairly simple piece of music. I asked Herndon if she had considered recording the aria the old-fashioned way. She frowned. “I would never sing this,” she said. “It’s just not something in my aesthetic universe.” She gestured toward the computer. “But she can do it.”

Artists have been experimenting with artificial intelligence for decades. In 1974, the British painter Harold Cohen débuted AARON, a software program that produced drawings—and, eventually, colorful paintings—in a freehand style, based on a complicated set of rules. (Cohen worked on AARON until his death, giving it a robotic arm and archival memory.) In the late nineties, Lynn Hershman Leeson developed Agent Ruby, a female chatbot whose mood could be influenced by Web traffic, and Ken Feingold fashioned “Head,” a realistic animatronic bust, designed to look like a friendly older man, that could engage in surreal, occasionally unhinged small talk. Feingold has said that it behaved “something like a psychotic.”

A new wave of A.I. art-making began around 2014. Advancements in generative A.I. meant that novel, increasingly realistic images, sounds, and texts could be conjured from training data, rather than from rules. The artist Refik Anadol told me that generative models trained on archival materials—“collective memories”—could reveal a “collective consciousness.” He recently exhibited “Unsupervised,” a giant digital animation, trained on works in MoMA’s collection, that generates an uninterrupted flow of new images. Some new works criticize the technology’s biases and shortcomings. “The Zizi Show—a Deepfake Drag Cabaret,” by the artist Jake Elwes, explores the difficulties that A.I. systems have in understanding bodies that defy easy categorization. Elwes created a data set of videos featuring drag artists in London. During filming, one performer diverted from the choreography and dropped into a split. This anomaly meant that, when other A.I. avatars attempted a split, they fell apart. “You see all these different performers just becoming this messy mush on the floor, their wigs kind of exploding off like a balloon, their faces disintegrating,” Elwes said. “It’s this really beautiful, queer body completely breaking one of these systems.”

The late twenty-tens brought advancements in language models, which, when combined with generative models, meant that new media could now be created using colloquial commands. Around 2022, the painter David Salle began working with a customized text-to-image model to, he said, “get beyond my own habits of mind . . . to find some imagistic language which both is and is not me.” Earlier this year, the artist Stephanie Dinkins used generative A.I. to create “Okra Continuum,” a series of images that tell the story of an okra pod’s “travels through space and time.” Experimenting with the new technology became an exploration of its limitations. Dinkins tried to produce an image of a “slave ship,” but, because of the system’s content-moderation guardrails, she received an error message. She had to use the phrase “pirate ship” instead. “We’re truncating history in a way,” she said. Christiane Paul, the curator of digital art at the Whitney, was skeptical of commercial generative-A.I. systems. “You’re dealing with the lowest common denominator of data out there,” she told me. “You’re becoming deeply embedded in the echo chamber. I think that is highly problematic for culture in general.”

Herndon grew up in Johnson City, Tennessee, in what she described as “a very Christian, super-optimistic, bubbly American context.” Her mother, Ann, was a full-time parent, and her father, Charles, was a lawyer and a volunteer preacher. (“I don’t know what the financial arrangement was, but I know that sometimes we would go home with, like, a turkey,” Herndon said.) After college, she moved to Berlin, where she briefly toured with Electrocute, an electroclash band. The band’s MySpace page, at the time, read “We are rock’n’roller skates mean guitar and electronic soup chicks who play twisted pop, iggy pot, baddass low down and dirty lolli bop music.” She tended bar at night clubs and worked at a music-licensing startup, tagging and categorizing songs with mood-based metadata. While working, she often listened to a music podcast made by the independent label Southern Records. In 2006, she e-mailed the label to ask why it had failed to release a new episode on time, and the person who responded was Dryhurst. They struck up a correspondence and met for the first time months later, at a music festival. In 2008, they eloped.

Herndon earned an M.F.A. at Mills College, then joined Stanford’s Center for Computer Research in Music and Acoustics as a doctoral student in 2012. That year, she released the album “Movement,” a collection of unsettling songs warped by homemade technical interventions. A few years later, she released “Platform,” which dealt with themes of surveillance and privacy, and incorporated what she called “net concrète”: recordings of the ambient sounds generated by her laptop as she typed, browsed, scrolled, and Skyped. The following year, Herndon and Dryhurst bought a gaming PC and experimented with A.I. Working with a programmer, Jules LaPlace, they began training machine-learning models, eventually naming the project Spawn. They assigned Spawn female pronouns, and referred to it as their “A.I. baby.” Spawn’s training data were bespoke. They included recordings of Herndon and Dryhurst, and of a roughly ten-person vocal ensemble whom the artists hired. Herndon also hosted “training ceremonies,” where audience members were conducted through simple choral arrangements. She estimates that Spawn incorporated the voices of fifty thousand people. “We were trying to communicate that it’s not just some kind of alien intelligence,” she said. “It’s all of this human activity.”

At the time, Herndon recalled, most of the conversation and research around A.I. and music focussed on things like “infinite Beethoven” and “infinite Bach”: automated composition with a clear set of references. This felt insufficient to her. “Composition is a living, breathing art form that should be responding to today, and not necessarily only trained on forms of the past,” she said. By populating Spawn’s data set with entirely novel data, she hoped to avoid a derivative output. “I knew that we were getting very close to some kind of kitsch trans-humanism by using the baby metaphor,” Herndon said. “But it still felt like the right metaphor, because a lot of it was about data provenance—this idea that you are what you eat. It’s the same kind of care that you take to teach a child to be a good human.” Initially, the software had trouble with vowels, which tend to be elongated in song. “You could really hear the logic of the neural net,” she said. “If it was doing an ‘O’ sound, it would just go ‘oh-h-h-h’ forever.”

“Proto,” the album that Herndon and Dryhurst created with Spawn, was meant, in part, to warn against the ways A.I. accelerated platform capitalism. Released in 2019, it was a critical success. “A lot of people latched on to a kind of futurism around it,” she said. “But we were trying to say that this isn’t the future—this is what’s happening now.” That year, a number of high-profile institutions, including the Barbican, in London, and the MAK, in Vienna, held exhibitions of A.I.-related art. At HeK, in Switzerland, Zach Blas and Jemima Wyman showed “im here to learn so :)))))),” a video installation, made in 2017, in which the artists incorporated machine-learning tools into an animation of Microsoft’s failed Twitter chatbot, Tay. (Microsoft had removed the chatbot within a day, after users prompted it to produce racist and antisemitic vitriol.) “Portrait of Edmond Belamy,” an image made by a generative model and printed on canvas, sold at Christie’s for nearly half a million dollars—the auction house’s first A.I. art work. It had been created by three young men from France who trained an algorithm on portraits scraped from WikiArt.

In Berlin, Herndon told me that the artistry of this kind of work derives from shaping a neural network and its data rather than from any single output. “The model is the art work,” she said. “It’s not the sculpture or the painting. It’s the model that can generate infinite art works, in any kind of medium.” She added, “How do you exhibit that? Does that create a new economy for artists? Does that require new governance structures between the institution and the artists exhibiting that work? How do we show people how exciting this is?” We were sitting at a table outside a restaurant in Schöneberg; a group of men walked past, wearing skin-tight leather ensembles and puppy-fetish bondage masks. “I think it’s wise to be wary,” she went on. “I think it’s going to unleash an endless hose of shitty media. That’s one hundred per cent going to happen. I just don’t think that’s the only thing that’s going to happen.”

In the summer of 2022, a glut of bizarre and often hilarious images began circulating on social-media platforms: Sean Penn eating a bowl of nuts, rendered in the style of Dorothea Lange; Frank Lloyd Wright’s Fallingwater in the style of a Pizza Hut; the Hamburglar in the style of Picasso’s “Guernica.” There were, of course, images that might be described as deepfakes, of politicians and other public figures in improbable situations. They were created with Midjourney, Dall-E, and Stable Diffusion, new versions of which had recently been released to the public. That fall, OpenAI released ChatGPT, inspiring a cascade of similarly inventive—and banal—computer-generated text.

Art made with A.I. had long been a province of the research and fine-art worlds; suddenly, it was poised to transform all manner of creative work. The use of artificial intelligence for screenwriting was a key sticking point in negotiations between major TV and film studios and the Writers Guild of America. Apple Books already uses computer-generated voices to narrate some of its audiobooks; a number of companies now provide A.I.-generated music for advertisements and other commercial use. OpenAI released marketing materials touting the ability to prompt a children’s story—complete with illustrations—using ChatGPT. YouTube is reportedly creating an A.I. tool that will allow content creators to use voice models of famous musicians. “There’s an almost eerie desire to have this form of immortality,” Polachek, the musician, said. “At the same time, I feel like maybe that removes some of the pressure, as an artist, to do what I can with this body while I have it.”

Midjourney, Dall-E, and Stable Diffusion are trained on data sets that are industrial in scale. Much of this data is scraped from the public Web. Although private A.I. companies have reached valuations in the billions, the people who unwittingly created their training data—writers, painters, photographers, and so on—have gone uncompensated. Last fall, Greg Rutkowski, a Polish illustrator, painter, and concept artist, discovered that a Google search for his name produced a flurry of art works in his style that he had never seen before: “Greg Rutkowski” had become a popular prompt among people using image generators to create fantasy art. (The embedding for Rutkowski, who has worked on games such as Dungeons & Dragons and Magic: The Gathering, might involve mythical creatures, courage-giving sunsets, and a pervasive sense of peril, suspense, and adventure.) Rutkowski feared that budding artists, who are often hired for entry-level work, such as mood-boarding, would be replaced by A.I. “It was built unethically, and it was built to replace us,” he said.

Earlier this fall, the Federal Trade Commission hosted a roundtable discussion with visual artists, screenwriters, musicians, and actors, many of whom emphasized the need for “consent, control, and compensation” in a world shaped by A.I. “For the first time in my life, I am worried about my future as an artist,” Karla Ortiz, a concept artist and illustrator, said. John K. Painting, representing the American Federation of Musicians, asked what should happen if he played drums on a Taylor Swift album used to train an A.I. system that could spit out new records. “I should see some form of benefit or compensation for that, because those new parts are clearly copying mine,” he said. Even that would be inadequate: “If this scenario works really well, it likely means that I’m not getting hired to record any new albums anymore.” The Authors Guild, working on behalf of writers including Jonathan Franzen and George Saunders, filed a lawsuit against OpenAI, claiming that the company’s use of their work was “systematic theft on a mass scale.” Hito Steyerl, an artist who has used A.I. to comment on the technology’s entanglement with military interests and environmental degradation, compared prompt-based generators to slot machines. She described them as “onboarding gimmicks” to normalize the technology’s use. “It tries to keep you engaged,” she said. “To spend as much time as possible with it, and adapt to its logic.”

Many A.I. companies have argued that scraping publicly available data is legal, and that copyright protections usually do not extend to style. Rebecca Tushnet, a professor at Harvard Law School, compared an A.I. model trained on existing works to a painter who is influenced by other artists. “I think the precedents are pretty favorable for the training,” she said. “As long as the output is itself non-infringing, there’s no copyright-relevant interest.” She suggested that if a piece of work gets too close to an original, the user who made it might be found at fault, but it would be hard to hold the companies involved liable. Rather than looking to the courts, Tushnet argued, it made more sense to broadly foster the conditions for artistic production, perhaps by increasing government funding or by enforcing antitrust laws. “The harder we make it to create new stuff, the more risky it is for small creators,” she said. “Warner Bros. is going to be fine.”

Herndon and Dryhurst believe that generative A.I. will be hugely consequential to creative fields. “How do you build a new economy around this where people aren’t totally fucked?” Herndon asked. In 2022, she and Dryhurst co-founded a company, Spawning, with the mission of building “the consent layer for A.I.”—a way for artists to determine whether, and how, their work is used in training data. They co-founded it with two others, Jordan Meyer and Patrick Hoepner, whom they met on a Discord server. Spawning’s first offering, Have I Been Trained?, is a Web site that allows a person to search for her work in certain training data sets, and to request to opt out if she desires. The aim is to give artists control of their own data, potentially enabling them to monetize it as they see fit—including by selling training rights back to A.I. companies. “Your data is more valuable if the only place to get it is from you,” Meyer, Spawning’s C.E.O., said. (He later added that, in general, he tries not to refer to art work as “data.” “Individual works can take years, right?” he said. “It’s weird to think of one little data point as someone’s life’s work.”)

Have I Been Trained? has facilitated the removal of about one and a half billion images from commercial training-data sets—a precedent, but hardly a dent. An E.U. copyright directive from 2019 allows content-owners to opt out of having their works used in training data (except that used in scientific research), but there is not yet a standard protocol for making this happen, and some say the law is poorly enforced. Herndon and Dryhurst hope that Spawning’s tools can serve as a model for how such a system might work. Still, policing A.I. is difficult. There are thousands of models being trained at any given moment, many of which aren’t publicly identifiable. “There’s a bit of a Whac-a-Mole situation,” Dryhurst said. “How do you know to opt out of a model that you don’t even know is being trained?” Critics of the project have argued that opt-outs capitulate to the interests of industry, and that the only ethical mechanism would be to make it mandatory that companies get artists to opt in. Margaret Mitchell, the chief ethics scientist at the A.I. company Hugging Face, defended Spawning: “They’re providing a better solution for a system that’s already problematic, and they’ve been criticized for not solving the problematic issue in the first place.”

Earlier this year, Spawning raised three million dollars in venture capital. The company is currently working on a handful of experiments. In October, it launched Kudurru, an open network of Web sites that aim to identify, and block, Web scrapers. Next year, Spawning plans to launch a kind of marketplace called Source+. An artist such as Bruce Springsteen could gather his data—demo tapes, vocal snippets—and license it to a company such as OpenAI, for training purposes. He could also create a model based on that data—Boss+—for other musicians to collaborate with, for a fee. Lesser-known artists could band together, in I.P. unions of sorts: a group of architectural photographers might create a data set large enough to appeal to corporate entities, and split the proceeds. “We think this is way better than the Spotify model,” Meyer said. “The alternative is, like, if Stable Diffusion were to give one one-millionth of a penny to everyone who was in the training data.” Currently, the Web site for Source+ displays an A.I.-generated image, rendered in the style of a painting, of a woman who looks like Holly Herndon painting a portrait of a woman who looks like Holly Herndon.

None of this would solve the broader existential threat of A.I. replacing creative workers, or driving wages down. “Capitalists love to fire people, so if they can get away with it they will,” Tushnet, the legal scholar, said. Dryhurst didn’t want to be alarmist about the threat to human livelihoods, in part because he believes that authorship still holds sway in creative communities. “The artists who are commissioned to make avatars for furries on Twitter—I would make the contestation that it means more coming from someone who’s a part of that community,” he said. Herndon rejected the idea that a generator such as Midjourney would become “the best artist ever,” dismissing it as “a ridiculous understanding” of art or its function. “I’m alive,” she said one afternoon, as we sat in her apartment. “I am constantly updating what I am, and how I respond to what’s happening around me. I have all of these sensors that are constantly taking in new stuff. That just doesn’t exist in the machine-learning world.” She leaned back in her chair. “There are cool, sophisticated systems, but they are nowhere near as sophisticated as this,” she said. She gestured toward herself. “This is remarkable.”

Last December, on her due date, Herndon went into labor. At the hospital, monitors indicated that the baby’s heart rate was rising, and Herndon underwent an emergency C-section. During surgery, a doctor nicked an artery, which was then stitched up. But afterward, holding Link in a hospital bed, Herndon sensed that the machines tracking her vitals were beeping more than usual. The nurses, seemingly unconcerned, reset them. The machines were insistent. Suddenly, Herndon’s room filled with medical personnel, and she was rushed back into the operating room. The stitches had given way; she had been bleeding out internally. Herndon turned to a nurse and asked if she was going to die. She lost nearly sixty-five per cent of her blood, and was put into an induced coma. Her first four days of motherhood were spent in the I.C.U.

The experience, which both she and Dryhurst described as deeply traumatizing, is the subject of “I’M HERE 17.12.2022 5:44,” a short video that they recently showed at the Pompidou. (They missed the video’s début, having contracted COVID at an A.I.-centric gathering held at a castle. “Kind of a polyamorous vibe,” Dryhurst said. They’d brought the baby and stayed at a hotel nearby.) In Berlin, they pulled the video up on a computer, and we sat down to watch. The piece is a computer-generated animation, trained on personal photographs and videos of Herndon and Link, including footage from the hospital. It is set to an audio file, recorded on Dryhurst’s iPhone, of Herndon recounting a dream she had while in the I.C.U. In the dream, Link is a soloist in a choir; he wears a robe and sings like the soprano Sarah Brightman. Herndon is both conducting the choir and watching the performance on television. The characters morph and multiply, in a wobbly, kaleidoscopic fashion; the baby plays a trumpet with three hands. At one point in the narration, Herndon’s voice breaks, and she begins to cry. The recording is lo-fi and muffled; Link can be heard mewing in the background.

“It seemed kind of interesting to take really sensitive footage and share it, but in a way that wasn’t—what’s the word?” Dryhurst said, when the video ended. Herndon offered, “It protects your privacy a little bit.” She added, “With Link, I wouldn’t want to take his actual face. It’s trained on his face, but it’s not his face. There’s a step of removal that makes it more comfortable.” To make “I’M HERE,” the couple had experimented extensively with prompts—“newborn baby boy plays trumpet ethereal light Lucien Freud and Alex Colville, atmospheric lighting, fantasy, Thomas Hart Benton”—to achieve their desired aesthetic. “There’s a lot of negative prompting,” Dryhurst said—telling a system what to exclude. One common negative prompt is “big boobs.” “It’s trained on the Internet, and there’s a lot of hentai,” Dryhurst said, referring to a pornographic form of anime or manga. Herndon added, “Basically, how to de-pornify any outcome.” Other negative prompts included “extra limbs,” “plaid sports bra,” “low quality,” and “facial hair.” Using A.I. was like dropping a scrim over documentary footage that was otherwise too raw. Herndon still hadn’t been able to look at the source material for the art work, the photographs and videos that Dryhurst had taken while she was in the I.C.U. “It sounds hokey to be, like, ‘Art is helping me work through my trauma,’ ” she said. “But it is, kind of.”
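
For a sense of how negative prompting works in practice, here is a minimal sketch using the open-source diffusers library and a public Stable Diffusion checkpoint. The article does not say which tools the couple used, so the pipeline and model choice are assumptions; the prompts, though, quote the ones they describe.

```python
# A minimal sketch of prompting with a negative prompt, using the
# open-source diffusers library and a public Stable Diffusion checkpoint.
# The specific pipeline and model are assumptions, not the artists' setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt=(
        "newborn baby boy plays trumpet ethereal light Lucien Freud and "
        "Alex Colville, atmospheric lighting, fantasy, Thomas Hart Benton"
    ),
    # Negative prompts tell the model what to leave out of the image.
    negative_prompt="extra limbs, plaid sports bra, low quality, facial hair",
).images[0]

image.save("im_here_sketch.png")
```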

A few evenings later, Herndon and I went to Kraftwerk, a techno club, for Berlin Atonal, a festival of experimental art and music. The sky was still light when we arrived. Young people wearing black mingled outside, smoking cigarettes. The building—a decommissioned power plant—was made significantly less imposing by the presence of a cheerily branded food truck selling empanadas in the courtyard. Artificial fog leaked from the entrance. Herndon, who wore a draped, smock-like denim jumpsuit and black Birkenstock sandals, walked into the club as if it were a room in her own home. She had been there numerous times, first as a club kid and then as a performer. She had last gone in the spring, for an N.F.T. festival that involved “live minting” (the in-person débuting of new digital art works) and generative electronic music.

The day we visited, Herndon and Dryhurst had been talking about “twentieth-century industrial structures”—record labels, galleries, publishers—and the antiquated visions of art-making that had persisted into the twenty-first century. “When we met, in Berlin, in our early twenties, we spent two hundred euros a month on our first apartment, and had friends who paid rent selling noise tapes on blogs,” Dryhurst told me. “People sold records, bought magazines.” His social world, he noted, was governed by “etiquette, norms, and habits inherited from times we had only really read about, and that we were unwittingly experiencing the tail end of.” These days, this kind of uncompromised, subcultural life style seems available mostly to the independently wealthy, and to what Dryhurst calls Tims: “temporarily illiquid millionaires.” Berlin had taken care to preserve certain kinds of cultural spaces, Herndon said, and although she appreciated this, the city could sometimes feel “like a museum.” Kraftwerk was redolent of the nineties. Down the street was a squat called the Køpi that was like being “transported into 1980.” “I don’t know if these spaces would be able to survive if they were just left to market dynamics alone,” Herndon said. Nostalgia could be a form of inertia: “We also need art that’s responding to the very unique time that we’re living in.”

At the bar, we sat down in an empty lounge area. A sequence of elegant, resonant clunks reverberated from the stage downstairs. It was a time of transition and flux. Herndon and Dryhurst were working on a relief sculpture, commissioned by a Bay Area A.I. company, that drew inspiration from the architecture of neural networks and archeological digs. The Serpentine show was still inchoate. “That’s the fucked-up thing about being an artist,” Herndon said. “You have to be really comfortable with just knowing that the idea will eventually come, and it will come on time, and it will be the right idea.” Still, starting a family introduced a new dynamic. “What if, now that you have a baby, you just can’t do it anymore?” she asked. “What if the special sauce between the two of you has fundamentally changed, because you created a new human, and he’s actually the perfect project? And now there’s no other project that you can ever do that’s as perfect?”

Later, I watched “I’M HERE 17.12.2022 5:44” again, alone with my computer. The audio was intimate and moving, swinging between joy, terror, grief, adoration. The animation—bodies and environments that fractured and multiplied psychedelically—more or less mapped to the narration, as if the images were being prompted in real time but couldn’t quite be controlled. In context, this disjunction seemed like a feature—an echo of the subject matter—rather than like a malfunction. Still, working with trendy technology, even when it glitches, runs the risk of flattering or foregrounding the tech. I found the seamlessness of the animation distracting; I imagined people seeing “I’M HERE” at the Pompidou and thinking, Sick. Every time I watched it, I found myself in tears.

On my last evening in Berlin, Herndon and Dryhurst invited me to join them at an art opening across the city, for their friend Jenna Sutela. While Herndon applied lipstick and Dryhurst packed a diaper bag, I sat alone with Link in the living room, administering a bottle of milk. As he turned his head, he looked first like one parent, then like the other—a quality Dryhurst called “heridescence.” I thought about the ways that parenthood forced and foreclosed on multiplicity. What was more of a fork than a baby?

When we got to the gallery, Herndon and Dryhurst embraced their friends, who were gathered on the sidewalk. Inside, it was pitch-dark. Strobe lights periodically illuminated three large heaps of compost, flecked with humus; a machine puffed artificial fog. Speakers played recordings of a compost pile which had been processed with a timbre-transfer model. The sounds of worms and microorganisms at work emerged as the honking peals of a saxophone. The vibe was playful, but also ominous. Worms were thundering. In a side room, a sheaf of poems, printed on edible paper, sat on a spotlighted pedestal. Visitors were invited to eat them. It was hard to know how to be. “Let’s go somewhere else,” a small child said to her father.

The previous evening, I had fallen asleep while listening to Herndon and Dryhurst on a podcast. “We need to take very seriously that our digital twins are us,” Dryhurst had told the hosts. “There needs to be serious regulatory thought about dealing with that, if we’re entering into a scenario in which our digital twins are potentially more economically productive than our physical corporeal existence.” The idea of my digital twin engaging in artistic collaborations while my corporeal self slept, or listened to podcasts, was a little haunting. But the alternative seemed worse: what if, beyond the avant-garde, there was no demand for strange, expansive work? What if the forms of culture that A.I. facilitated with the least friction—the lo-fi beats and anime aesthetics and generic prose style—were actually what most people wanted? Rather than tilting toward differentiation, the culture could become a void. When I talked to Herndon, she was more reserved about making predictions. “I don’t like to call the future at all,” she said. “That’s the beauty of living in a chaotic society—it’s always going to mutate into some weird format.”

Back outside the gallery, the light was fading; an edible poem softened in my hand. Eclectically dressed people leaned over Link’s stroller, cooing. I took a phone call and wandered off from the group. On the way back, I poked my head into a storefront, and, after a confused exchange across a language barrier, found myself sitting alone on a love seat, being treated to a private cello performance. In a world of infinite media, the successful art work makes its way to others, morphs, moves on. Time passes, and you let it go. For a few brief minutes, the cellist played Bach. ♦