
Do Androids Dream of Becoming Cover Models?

Written by Mark DiMattei | Apr 20, 2026

 AI involvement in periodicals is becoming an issue 


In early March 2026, Esquire Singapore published an interview with Japanese-American actor Mackenyu, who is currently part of the ensemble cast of Netflix's One Piece. Well…maybe "interview" is doing a lot of heavy lifting here, as the actor was unable to respond to the magazine's questions in person or via email. Instead, the author posed them to ChatGPT and Claude.

While the magazine is very open about the fact that the responses were generated with artificial intelligence (AI) ("Harnessing our creative license, we pulled his verbatim from previous interviews and fed them through an AI programme to formulate new responses"), the piece has sparked outrage from writers and One Piece fans alike. In particular, many take issue with the passage in which the AI version of Mackenyu hopes to live up to the pressure of following in the footsteps of his late father, action star Sonny Chiba.

The core issue here is that, as a periodical, Esquire carries a degree of trust from readers to adhere to the truth. We read entertainment magazines to learn more about the movies, actors, musicians, and performers that interest us…which is hard to do when there is a strong chance that the actor we are reading about in an interview may just be a collection of ones and zeros.


AI Is All the Rage Right Now

Well before Esquire decided to use AI for its interview, generative AI had already become a tool in other periodicals, specifically fashion magazines.

  • In 2023, Tatler Asia used Midjourney to generate the models and settings for a fashion spread. That same year, Glamour Bulgaria and Vogue Singapore each featured an AI-generated model in a cover shoot.

  • In 2025, American Vogue’s August issue featured a two-page ad for Guess that used an AI-generated model (the first in the magazine’s history).

On top of these more mainstream publications, some fashion magazines have been founded on the very concept of unifying fashion, art, and AI. Forget AI, a publication based in Milan, Italy, promotes itself as a magazine that "doesn't document fashion, but reinterprets, distorts, and dreams it—exploring a unique space 'in between' analog, digital, and artificial intelligence." Likewise, there is Copy Magazine, promoted as the world's first AI fashion magazine and the brainchild of Copy Lab (an AI studio based in Stockholm, Sweden).

While an argument could be made for aesthetics and ease of production in creating such spreads via AI, we can't ignore the hypocrisy of trying to create art and fashion by removing the need for models, photographers, makeup artists, and the countless other people employed to produce a traditional cover or ad spread. There is also a greater concern for poor body image in a social media world that already promotes curated perfection over reality, something an AI model prompted into exact perfection can only exacerbate.


Keypoint Intelligence Opinion

At a distance, this might all seem silly and unimportant. We have other matters to concern ourselves with, like war, a rise in authoritarianism, and the staggering cost of living…AI use in fashion magazines and actor interviews is a distraction. Right?

In its 2022 survey, the non-profit Media Literacy Now found that only 38% of US high schoolers had been taught how to analyze media for messaging and bias. In addition, in two separate studies performed at Cornell, human participants could only detect AI-generated content with about 54% accuracy, while approximately 9% of newly published news articles were found to be partially or fully AI-generated (with opinion pieces over six times as likely to feature AI content). This means there is a real danger of people being unable to distinguish reality from the AI-generated content that is increasingly creeping into sources generally assumed to be truthful at first glance.

We are establishing a precedent where media becomes less reliable with each advance in technology. If deepfakes mean we can't trust our eyes with video, and AI content creeping into news means we can't trust what we read to be factually true, then how are we, as a people, supposed to remain informed? The most straightforward way to combat this is regulation, whether through laws or social norms that clearly label any AI content so it is understood from the start not to be real. Otherwise, we may find ourselves unable to tell the dictators and warmongers from pixels and algorithms…and not just the models and actors.


Stay ahead in the ever-evolving print industry by browsing our Report Store for the latest insights. Log in to the InfoCenter to view research and studies through our Workplace- and Production-based Advisory Services. Not a subscriber? Contact us for more information.