
The first sound from the future

With Hatsune Miku, Crypton Future Media’s voicebank built on Yamaha’s Vocaloid software, imitation is no longer merely a question of mimicking intellectual capabilities. The 3D-animated, 16-year-old pop star with her computer-generated voice has had a massive cultural impact: She performs live concerts, and according to Crypton, her voice is featured in over 100,000 released songs. Being virtual and disembodied seems to be of no concern to her fanbase. And yet, the crowd’s excitement is unmistakable the moment their idol appears before them. Her vast appeal stems from her adaptability, much like Turing’s description of the digital computer:

Of course the digital computer must have an adequate storage capacity as well as working sufficiently fast. Moreover, it must be programmed afresh for each new machine which it is desired to mimic. (Turing, p. 8)

In the same way, each musical composition is yet another programming of Hatsune Miku. New songs are constantly composed, and she can perform in any genre. Hatsune Miku is therefore not just imitating the modern pop star: She is a collaboratively constructed figure, molded according to the wants and desires of her fans.

References

  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.

2 Comments on "The first sound from the future"

Maria O'Connell

It’s really interesting to see this example. Electronic pop stars shouldn’t have to look like sexy teenagers, but here we are. It is an intriguing approach to embodiment. Why do our AIs have human-like bodies, even virtual ones? It shows how much our expectations around intelligence and creativity also revolve around a particular expectation of appearance.

Alexander Wilson

This is an interesting example, not least because it challenges our traditional notions of art and artistry. Note how even “real” pop stars in the years leading up to Hatsune had been using Auto-Tune to treat their vocals, making everything sound glassy and synthetic. We may think of this as a precursor to AI’s eventual replacement of humans: humans will voluntarily submit to the AI, will want to be replaced, synthesized, outsource their capacities to the machine, until eventually even their acts of wanting, of desiring, of aesthetic judging, will themselves be replaced. If this continues, eventually we will…