
Self-dubbing and self-stunting. When artificial intelligence informs audiovisual coherence: could John Wayne speak Italian?

  • Writer: Romina Marazzato Sparano
  • Apr 29, 2018
  • 2 min read

Coherence in any message rests on three pillars: message clarity, discourse quality, and audience understanding. In the audiovisual world, these three axes are shaped by experiential factors such as scenery, music, and a character's physique and voice.

Artificial intelligence applied to these factors opens up the possibility of creating worlds virtually indistinguishable from reality. It makes it possible not only to suspend disbelief but to create a new kind of belief. This in turn presents challenges in the realms of both aesthetics and ethics. As the adage goes, "just because something can be done doesn't mean it should be done…"

When thinking about artificial intelligence, we tend to think about sophisticated robots that will surpass human capabilities, and we neglect to acknowledge the very real impact of imitation technology. An example of such technology is Lyrebird, a Canadian startup that recently announced a product capable of cloning the human voice.

Using an identity-through-contrast method, its software can create a digital version of a person's voice after listening to only a few minutes of real voice recordings. The beta version is still a bit crude, but Lyrebird insists that the digital voice will eventually speak fluently.

This could allow actors to upload their own voices and dub themselves, making a belief of Italian grandmothers come true: they grew up watching dubbed versions of John Wayne films thinking he spoke Italian, and now they could expect to hear his very own digital voice.

The use of machine learning techniques in the graphics world has also startled the experts. The digital magazine Motherboard caught a Reddit user by the name of 'deepfakes' superimposing celebrity faces onto porn stars' bodies very convincingly. The practice quickly spread to using friends' faces as well.

Applying neural networks to capture the fluidity of human movement is also becoming more and more credible, with developments such as the motion capture system built jointly by the University of Edinburgh and Method Studios.

This technology helps us imagine Emma Zunz editing guilt over innocence to support her case. You might recall she is the Borges character who avenges her father's death by claiming rape (after seeking a sexual encounter with an unsuspecting sailor) and killing the killer in 'self-defense.' Let's also remember that, although only in short scenes, the Walker brothers helped digitize Brian O'Conner in Furious 7 after Paul Walker's tragic death before the end of filming. With these new technologies, Paul Walker could not only appear in selected scenes but continue to be an integral part of the series.

How does the audiovisual industry change if actors can dub themselves, or simply stop acting while stunt doubles or digital doubles do the work, to be superimposed later in editing? When we falsify reality and legitimize falsehood, technology is certainly presenting us with challenges worth debating.


