The likeness of actress Yeom Hye Ran was used without her consent in the film The Meter Reader. Despite her never appearing in the project, her face, body, expressions, voice, and even subtle movements were recreated with AI, resulting in a production that looked as though it were her film.
The creators of the video claimed they had obtained prior consent from Yeom Hye Ran and her agency. However, that turned out to be false. Both the actress and her representatives stated that there had been no prior discussion or approval, and that they only became aware of the video after it was uploaded to YouTube. The film was eventually taken down and set to private.

The most intriguing part of this incident is that the creators made a claim that could be easily verified and disproven. Moreover, Yeom Hye Ran is an actress widely trusted by the public, not only for her acting but also for her personal image; many viewers regard her presence alone as a guarantee of quality.
Given her stature, using her likeness without any verification seems an unusually careless decision for professionals in the field. Rather than a mistake or misunderstanding, it raises suspicion of a deliberately manufactured controversy: a calculated form of viral marketing with a specific goal.

In fact, many viewers who watched The Meter Reader were shocked to learn that the figure on screen was not Yeom Hye Ran but an AI-generated imitation. Notably, the surprise stemmed less from the revelation that it wasn't her than from the blurred boundary between reality and fiction, an ambiguity so convincingly crafted that it became the core reaction to the video.
The controversy over portrait rights essentially became a powerful exposure strategy for the creators. In the AI era, where technology is widely used in content production, viral marketing may increasingly rely on provoking ethical sensitivities that audiences are highly attuned to. That is what makes this case particularly striking.
A similar incident occurred in 2024, when OpenAI introduced ChatGPT’s voice mode “Sky,” which sounded remarkably similar to the voice of Scarlett Johansson, who portrayed the AI assistant Samantha in the film Her. The situation escalated into a legal dispute, though OpenAI maintained that the voice was not a replica but performed by a different professional voice actor.
In that case as well, the public was less concerned with whether the voice had been directly copied than fascinated by how closely it resembled Samantha, bringing a fictional character into reality. If this had been intended as viral marketing, it would have been undeniably successful.
Source: Daum


