Wednesday, May 14, 2008
A total of 57 people took part in the questionnaire in the period from May 10th to May 13th, 2008. Thirty-three participants were male, 21 were female, and three did not indicate their gender. The majority of participants (34) were between 21 and 30 years old; eleven were between 11 and 20, ten were between 31 and 40, and two were between 61 and 70.
The first emotion people had to guess was anger. Fifty of the 57 participants identified the emotion correctly when it was displayed on the head of the character alone. This number dropped to 48 with the inclusion of the body. Consequently, the majority of participants (34) indicated that the inclusion of the body did not clarify the emotion.
The next emotion displayed on the character was sadness. When shown on the head alone, 31 participants guessed the emotion correctly, while a significant number (13) mistook it for disgust. With the inclusion of the body, however, 47 people identified sadness correctly and only two mistook it for disgust. Not surprisingly, 46 said that the inclusion of the body clarified the emotion.
The third basic emotion portrayed was fear. When fear was displayed on the head alone, 23 people identified it correctly, while the majority (30) mistook it for surprise. When it was portrayed with the body, however, 51 participants got the emotion right and only two mistook it for surprise. Accordingly, fifty people indicated that the inclusion of the body helped them to guess the emotion.
The fourth of the six basic emotions in this questionnaire was surprise. When the emotion was displayed on the head alone, 34 people guessed it correctly, while 16 mistook it for fear. Once again, the inclusion of the body clarified the emotion portrayed: 45 participants indicated as much, and with the body included only 4 people mistook the emotion for fear, while 49 identified it as surprise.
For the fifth question the emotion disgust was portrayed. When it was shown through facial expressions alone, a large majority of 50 people guessed the emotion correctly. With the inclusion of the body, however, this number dropped to 36, and a surprising 14 mistook the emotion for surprise. Consequently, a majority of 30 participants believed that the inclusion of the body was not beneficial.
The last emotion portrayed was happiness. Almost everybody (54) guessed the emotion correctly when it was shown on the head alone. But as with question five, this number dropped to 42 when happiness was portrayed with the body as well; people started mistaking the emotion for surprise. The majority (42) believed that the inclusion of the body did not clarify the emotion.
Cross-referencing gender with the percentage of incorrect answers shows that only 19% of the male respondents' answers were wrong, whereas 24% of the female respondents' answers were incorrect. A cross-reference of age against the percentage of incorrect answers cannot yield scientifically insightful results due to the small sample size in each age bracket.
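The gender cross-reference above boils down to simple arithmetic: total wrong answers per group divided by total answers per group. A minimal sketch of that calculation follows; the sample records are invented placeholders for illustration, not the actual questionnaire data.

```python
# Sketch of the gender cross-reference: percentage of incorrect answers
# per gender group. The records below are invented, not the real data.

# Each record: (gender, number of wrong answers out of the 6 questions)
responses = [
    ("male", 1), ("male", 2), ("male", 0),
    ("female", 2), ("female", 1),
]

def percent_wrong(records, gender, questions_per_person=6):
    """Wrong answers as a percentage of all answers given by one gender group."""
    group = [wrong for g, wrong in records if g == gender]
    total_answers = len(group) * questions_per_person
    return 100 * sum(group) / total_answers

print(round(percent_wrong(responses, "male")))    # 3 wrong of 18 answers -> 17
print(round(percent_wrong(responses, "female")))  # 3 wrong of 12 answers -> 25
```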
With all six assets completed, the animations need to be tested to establish whether the head can convey each emotion alone, or whether head and body need each other to allow an audience to identify the emotion being animated.
The participants are asked a series of questions revolving around the six basic emotions corresponding with the research assets: sadness, happiness, fear, anger, surprise and disgust.
For each question the questionnaire requires the participants to watch a short clip portraying the emotion on the face of the character. The participant then tries to read the emotion and ticks the corresponding box in the questionnaire. After that, the same emotion is portrayed with the inclusion of the body. The viewer is asked to watch the clip, guess the emotion, and indicate whether the inclusion of body movement has strengthened the expression of the emotion or not.
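The procedure above yields one answer pair per emotion, which can be sketched as a simple record; the field names here are my own illustration, not Autoform's actual format.

```python
# Illustrative record for one participant's answers to one question pair.
# Field names are hypothetical, not taken from the actual questionnaire.
from dataclasses import dataclass

@dataclass
class EmotionAnswer:
    emotion_shown: str      # emotion actually portrayed in the clips
    guess_head_only: str    # participant's guess from the head-only clip
    guess_with_body: str    # guess after seeing head plus body
    body_clarified: bool    # did body movement strengthen the emotion?

answer = EmotionAnswer("fear", "surprise", "fear", True)
# Head-only guess was wrong; the body clip corrected it:
print(answer.guess_head_only == answer.emotion_shown)  # False
print(answer.guess_with_body == answer.emotion_shown)  # True
```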
I included the variables of gender and age to see if they had an influence on the results. Is there a pattern in the viewers' ability to read the emotions related to their age and gender?
The questionnaire is produced with Autoform. Originally written by Andy Sutton of Nottingham Trent University’s Social Sciences department, it is now managed by Kristan Hopkins and Andy Sutton, and it is a free service for use within Nottingham Trent University. Autoform generates a unique SPSS syntax file for each questionnaire, which automatically sets up the entries for variable values, missing values and variable labels in the SPSS data file.
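I have not reproduced Autoform's actual output here, but the kind of codebook its generated SPSS syntax sets up (variable labels, value labels, missing-value codes) can be sketched as follows; all codes and labels below are my own assumptions, not Autoform's.

```python
# Hypothetical codebook mirroring what an auto-generated SPSS syntax file
# sets up: variable labels, value labels, and missing-value codes.
# None of these codes or names come from Autoform itself.
codebook = {
    "gender": {
        "label": "Gender of respondent",
        "values": {1: "male", 2: "female"},
        "missing": [9],          # 9 = not indicated
    },
    "q1_head": {
        "label": "Emotion guessed from head-only clip (anger)",
        "values": {1: "anger", 2: "sadness", 3: "fear",
                   4: "surprise", 5: "disgust", 6: "happiness"},
        "missing": [9],
    },
}

def decode(variable, code):
    """Translate a raw numeric code into its value label (None if missing)."""
    entry = codebook[variable]
    if code in entry["missing"]:
        return None
    return entry["values"][code]

print(decode("gender", 2))   # female
print(decode("q1_head", 9))  # None (missing value)
```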
The questionnaire will be distributed via gamer networks, Facebook, forums and StudiVZ, a German student equivalent of Facebook.
The overall aim of the questions is to determine whether facial emotions displayed on the 3D animated characters are recognisable by an audience through the face alone, or whether the body is needed to clarify the emotion displayed.
Saturday, May 10, 2008
The sixth and final asset for this research study is the emotion of happiness. Happiness can be easily recognised even from a 2D line drawing of a smiley face. However, I did not simply want to give the look of a smiling face; in this asset I tried to bring the whole face alive with the smile. I wanted the character's mood to show the emotion of happiness without fail from the outset, with such a strong representation that the audience might even mimic what they see.
In this model I brought the eyes to life, with the eyelids and brows becoming unthreatening. I felt the smile should be an innocent one (I did play around with more sinister smiles, but felt the emotion could then be muddied with others). This representation needed to be pure, simple, and straight to the point: a happy face.
I felt the harder part of this emotion was how to show happiness with the body. With the face itself being such a strong ambassador for the emotion, what does the body do when we are smiling? I settled on making the body unthreatening, with a welcoming pose.
In the case of happiness within 3D animation, in relation to facial and body dynamics, I feel this emotion will be a strong candidate for the face being able to stand alone in its message delivery. The body is not as striking as that all-important smile.
I had to settle on markers for what a surprised face would look like. However, some people show surprise differently, and the level of surprise can also affect the facial features, from a simple semi-closed mouth pout (“oooh”) to a wider-mouthed “aargh, you made me jump”. For this asset I decided to aim for a middle range: a mouth more pout than wide-mouthed shriek, eyes wide, eyebrows raised, eyelids open.
I still felt that surprise, as an emotion, resembles fear so closely in its characteristics that, without the body's emotive response, the difference between the two emotions is blurred. By adding body movements, both fear and surprise gain more distinction as separate emotive expressions.
This asset, I feel, is a strong candidate for the need of a body to help exaggerate the difference between what is fearful and what is surprised. While a fearful response brings the hands in to defend the body or face, surprise seems to reverse this: the arms and body appear more open as the body decides whether fight or flight is the better choice. It is my belief, with regard to fear and surprise, that you need more information than just a facial expression. In this instance, the body is needed along with the head to allow the message of emotive expression to be clearly delivered upon a 3D animated model.
(Above) Old Asset Surprise Head
The next basic emotion I chose to portray was fear. By watching horror movies and animations such as manga and Bugs Bunny cartoons, I was able to mimic the key emotive features. While trying to get an authentic picture of what a fearful expression on my model would look like, I found that my previous modification of the eyelids was not enough to convey such a strong emotion as fear. A fearful look is mostly achieved through a widening of the eyelids to reveal more ‘white of the eye’, as seen in 2D animation with the eyes bulging out of the head in shock. To gain more empathy with the eyes and to give them a greater amount of believability, I made completely new eyes in Photoshop with a smaller pupil and iris.
Based on my research question, I tried to investigate whether the emotion ‘fear’ can be recognised through the expression of the face alone, or whether the body is needed as well in order to correctly guess the emotion. When I tested my animation on other people, I found that the head alone showing a fearful emotion could sometimes be misread as ‘surprise’. The inclusion of the body (hands raised, back arched away from the threat) really clarified the emotive state. Tests on viewers confirmed what I could see myself: the emotion was clearly fear and no longer mistaken for surprise. For this particular asset the body was of great significance in bolstering the emotion, keeping in mind that the wrong emotive state displayed within an animation could have dire consequences for a storyline.
Due to similarities between fear and surprise, I chose surprise as the emotion explored within the next asset.
For the third asset of this research, the emotion I picked was sadness. For inspiration on how to illustrate this emotion I again referenced films, animation, and people whom I asked to act out the emotion.
Creating the emotion of sadness was not too difficult; with such an easily recognisable emotion, I found very little was needed for an audience to recognise the emotive state. However, I wanted the feeling of sadness to create empathy with the viewer. After including the full range of eyebrow raises, the downturned mouth and half-closed eyes, I found that people started to mimic the face. This was when I realised that the character was indeed generating empathy with the viewers.
In this instance the head is very strong at conveying its emotion alone; without fail, people were able to identify with the mood of the character. With the inclusion of the body the mood was strengthened, but the body was not needed to show the emotion. However, if the features of a character are distant, the posture set by the body would still allow a viewer to acknowledge the emotive state so clearly illustrated upon the face.
This asset is a strong candidate for the argument that the body is not needed to illustrate an emotion. I believe this emotion stands apart from many others due to its individual characteristics: the downturned mouth is drummed into us from an early age, even in stick-figure doodles of an unhappy face. For this reason I believe that some emotions can indeed stand alone, and I look forward to exploring the other emotive states.
(Above) Old Asset Sad Head
(Above) Old Asset for Sad with Body
“Anger”
Anger was the first emotion I chose to express, due to its strength as a sentiment. To animate a believable expression of anger, I studied existing animations and film clips in which the emotion of anger was expressed fluently. The problem I was confronted with was that even though anger is one of the strongest emotions, it can be portrayed in many different ways. To tackle the question of what anger looks like on a person’s face, I also started looking at nature: a dog snarls and bares its teeth when it gets angry, and I found that this is a facial expression many humans mimic in their own anger. At this stage of my research it helped me a lot to take photographs of people displaying their own anger. I took it for granted that these would only be watered-down expressions of their real angry faces, for anger is such a strong emotion that it is hard to mimic when not actually angry. However, taking the photographs gave me a lot of inspiration and help.
In addition, to be able to express anger correctly, I tested Osipa’s theory that the brow pinch is paramount to conveying emotion. With my first model, the eyebrows were too “busy”, with an untidy mesh, to express a viable picture of the emotion of anger. By optimizing the model’s mesh (removing 200,000 excess polys), the improved model both expresses the emotion better and receives a much stronger and more instant reaction from an audience than the previous model.
I personally found that such a complicated model will only get tidier and more expressive through practice and experience with animating models. With each animation I do, I become more skilled, more efficient, more competent and quicker at optimizing a model and producing a believable creation with the synthesis of life. The expression of the emotion of anger is working now, as can be seen in the following two pictures.
The Old Model (above)
The New Model (above)