Wednesday, May 14, 2008

Research Methodology

With all six assets completed, the animations need to be tested to see whether the head and body can each stand alone or whether they need each other for an audience to identify the emotion being animated.

The participants are asked a series of questions revolving around the six basic emotions corresponding to the research assets: sad, happy, fear, anger, surprise and disgust.

For each question, the participant watches a short clip portraying the emotion on the face of the character, tries to read the emotion and ticks the corresponding box in the questionnaire. The same emotion is then portrayed again with the inclusion of the body. The viewer watches this second clip, identifies the emotion once more and indicates whether the inclusion of body movement has strengthened the expression of the emotion or not.
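As a rough sketch of how each pair of clips could be scored once the answers are in (the field names and coding here are my own illustrative assumptions, not Autoform's output), each response boils down to the emotion ticked for the face-only clip, the emotion ticked for the face-and-body clip, and the participant's yes/no judgement on whether the body strengthened the expression:

```python
# Minimal scoring sketch for one emotion clip pair.
# Field names and coding are illustrative assumptions, not Autoform's output.

EMOTIONS = ["sad", "happy", "fear", "anger", "surprise", "disgust"]

def score_response(intended, face_only_guess, face_body_guess, body_strengthened):
    """Return whether the emotion was read correctly in each condition."""
    return {
        "intended": intended,
        "correct_face_only": face_only_guess == intended,
        "correct_face_body": face_body_guess == intended,
        "body_strengthened": body_strengthened,  # participant's yes/no judgement
    }

# Example: a participant misreads 'fear' from the face alone,
# but identifies it once the body movement is added.
print(score_response("fear", "surprise", "fear", True))
```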

I included the variables of gender and age to see whether they had an influence on the results: is there a pattern in the viewers' ability to read the emotions that relates to their age or gender?
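Looking for such a pattern comes down to comparing recognition rates across demographic groups and viewing conditions. A minimal sketch, assuming each scored answer has been reduced to a (gender, age band, condition, correct) record:

```python
from collections import defaultdict

# Hypothetical scored responses: (gender, age_band, condition, correct).
responses = [
    ("female", "18-25", "face_only", True),
    ("female", "18-25", "face_body", True),
    ("male",   "26-35", "face_only", False),
    ("male",   "26-35", "face_body", True),
]

# Tally recognition rate per (gender, age band, condition) group.
totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for gender, age_band, condition, correct in responses:
    group = (gender, age_band, condition)
    totals[group][0] += int(correct)
    totals[group][1] += 1

for group, (correct, total) in sorted(totals.items()):
    print(group, f"{correct}/{total} recognised")
```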

The questionnaire is produced with Autoform. Originally written by Andy Sutton of Nottingham Trent University’s Social Sciences department, it is now managed by Kristan Hopkins and Andy Sutton. It is a free service for use within Nottingham Trent University. Autoform generates a unique SPSS syntax file for each questionnaire, which automatically sets up the entries for variable values, missing values and variable labels in the SPSS data file.
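The metadata that such a syntax file sets up can be pictured as mappings from coded answers to human-readable labels, plus a reserved code for unanswered questions. The codes and names below are illustrative assumptions rather than Autoform's actual scheme:

```python
# Illustrative sketch of the metadata an SPSS setup syntax encodes:
# variable labels, value labels and a missing-value code per question.
# Codes and labels are assumptions, not Autoform's real output.

VARIABLE_LABELS = {
    "q1_face": "Emotion read from the face-only clip",
    "q1_body": "Emotion read from the face-and-body clip",
}

VALUE_LABELS = {
    1: "sad", 2: "happy", 3: "fear",
    4: "anger", 5: "surprise", 6: "disgust",
}

MISSING_VALUE = 9  # code used when a participant skips a question

def decode(answer_code):
    """Turn a raw coded answer into its emotion label, or None if missing."""
    if answer_code == MISSING_VALUE:
        return None
    return VALUE_LABELS.get(answer_code)

print(decode(3))  # -> 'fear'
print(decode(9))  # -> None (treated as missing)
```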

The questionnaire will be distributed via gamer networks, Facebook, forums and StudiVZ, a German equivalent of Facebook for students.

The overall aim of the questions is to determine whether the emotions displayed on the 3D animated characters are recognisable by an audience through the face alone, or whether the body is needed to clarify the emotion displayed.
