Mona Lisa animated by a networked emotion engine, a project that also included teaching.
An emotion engine for Mona – AI – Lisa
At the end of 2014, during a Parisian video game event, the EIGD (European Indie Games Days), Carole Faure, whom I had met before, put me in touch with two of her students who were facing technical issues with Unity. Being something of an expert (I had already been using it for seven years), I took some time to understand their problems and showed them which path to follow to fix them. Those were the first lessons I gave to students of the IIM school in Paris (Institute for the Internet and Multimedia), and more followed: Unity3D, technical management, R&D, game jams.
The following year, the school started internal projects for external clients, one of them for Florent Aziosmanoff, with Mona Lisa as its subject. The goal was to bring the Leonardo da Vinci painting to life, and the author's ambition was to preserve the original emotional presence of the Mona Lisa and make visitors feel it.
An intimate relationship with Mona Lisa
The students started the project with Jean-Claude Heudin as their tutor (then director of the IIM and already an Artificial Intelligence expert) and Florent Aziosmanoff as the client. The students from the 3D creation department were also mentored by their department's director, and finally Jean-Claude asked me to supervise the students in charge of coding the software in Unity3D.
The fair is in two months, you have to move fast!
The students had their regular classes at the same time, and the first few weeks of the project made the limit clear: we would never finish in time for the fair! One (small) quote later, there I was, hired to make the project move faster: instead of teaching and leading the students, I would show them how to do it by doing it with them.
one Mona Lisa, two Mona Lisas, three Mona Lisas
Florent’s idea was not only to create an interactive painting reacting to the public, but a virtual persona who feels emotions based on what she perceives from as many places as possible: if ten Mona Lisas are exhibited in ten places, all of them are connected, so that Mona Lisa's mood is influenced by all of them, by everything happening everywhere. She is made up of a central server linked to a database, an emotion engine built on a neural network that takes in the inputs from every sensor, and a standalone application (a computer executable) that handles both displaying Mona Lisa at full scale (on a 4K screen in portrait orientation) and sensing the environment (with a Kinect 2 sensor): how many people are close, are they looking at her, are they smiling?
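To make the architecture concrete, here is a minimal sketch of the kind of perception report each exhibition station could send to the central server. The actual project was written in C# for Unity3D; this Python version and all its field names are assumptions for illustration, not the project's real protocol.

```python
# Hypothetical perception report sent from one station to the central server.
# Field names are illustrative assumptions, not the project's actual schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class PerceptionReport:
    station_id: str      # which of the exhibited Mona Lisas sent this report
    people_nearby: int   # head count from the Kinect 2 sensor
    looking_at_her: int  # how many faces are oriented toward the screen
    smiling: int         # how many detected smiles

# One station's snapshot, serialized as JSON for the central server.
report = PerceptionReport("station-01", people_nearby=4, looking_at_her=2, smiling=1)
payload = json.dumps(asdict(report))
```

Because every station pushes reports like this one to the same server, the shared persona can be influenced by all exhibition sites at once.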
The neural network is fed with data and parameters that represent Mona Lisa's persona (is she introverted or not, and so on), and it turns her perceptions into levels of pseudo neurotransmitters: dopamine, norepinephrine and serotonin. These three virtual neurotransmitter levels give a position in a cube, each of whose eight corners matches an emotion: anger, joy, interest, surprise, disgust, fear, distress and shame; and of course, every position inside is something in between (see Lövheim's cube of emotion).
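The cube lookup can be sketched in a few lines. This is not the project's code (which was C# in Unity3D); it is a minimal Python illustration of the Lövheim cube idea, mapping a point given by three neurotransmitter levels to its nearest corner emotion. The function name and the nearest-corner strategy are my own simplifications.

```python
# Minimal sketch of a Lövheim-cube lookup (illustrative, not the project's code).
# Corners are keyed by (serotonin, dopamine, norepinephrine), 0 = low, 1 = high.
LOVHEIM_CORNERS = {
    (0, 0, 0): "shame",
    (0, 0, 1): "distress",
    (0, 1, 0): "fear",
    (0, 1, 1): "anger",
    (1, 0, 0): "disgust",
    (1, 0, 1): "surprise",
    (1, 1, 0): "joy",
    (1, 1, 1): "interest",
}

def nearest_emotion(serotonin: float, dopamine: float, norepinephrine: float) -> str:
    """Return the corner emotion closest to a point inside the unit cube."""
    point = (serotonin, dopamine, norepinephrine)
    return min(
        LOVHEIM_CORNERS.items(),
        key=lambda item: sum((c - p) ** 2 for c, p in zip(item[0], point)),
    )[1]

print(nearest_emotion(0.9, 0.8, 0.2))  # high serotonin and dopamine → "joy"
```

A richer version would blend the corner emotions by distance instead of picking only the nearest one, which matches the text's point that every interior position is "something in between".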
a standalone Mona Lisa, that's good too
Once the fair ended, we drew our conclusions: as expected, the internet connection was not stable enough, so we ended up installing the server, the database and the application on the same computer, the one presenting the Joconde (the Mona Lisa) to the public.
And as I said earlier, a large tablet version was also developed, like a small painting you can put at home.
Throughout my studio's life, this project attracted a great deal of attention: newspapers, customers, associations. Despite very limited and entirely indirect interactions (it is not the person standing in front of the painting who makes Mona Lisa react, but all of the people who have passed by her), the project inspired admiration in its public. Every year we had requests to show her. Everyone was fascinated by this Leonardo da Vinci painting and those very strange interactions.
I remember that once, someone said to me:
She talks to us, you know.
- Development (applications + AI adaptation to C#): Frédéric Rolland-Porché
- Development (IIM students): Théo Maudet, Antoine Poujaud
- 3D modeling and animations (IIM): Lisa Berthet, Valérian Telli