Friday, April 17, 2009

In a nutshell.

My functional prosthetic device:

My prosthetic device is simply a plaster cast of my face. It symbolizes both the physical and metaphysical sides of me. Physically and visually it is recognizable as a face, from which you can read my internal emotions. You are human and you are in my physical space, hence you can understand, relate and respond to the emotions conveyed through my face.

But this face is white, stark and blank and reveals no emotion. A computer in my physical space knows nothing of my facial expressions or what they mean and represent. Even when I have programmed the computer to respond to my physical gestures with this face, it only understands what I have told it to understand, as it cannot possess, let alone understand, what these emotions mean.

I however have formed an emotional bond with my laptop. It is named Mrkusich and I say good morning to it and share my emotions with it. I give it human characteristics and pretend it responds. Hence, I feel a bond and a trust which leads me to use my computer as an expression of my emotions, available for anyone around the world to access, bound not by the physical but rather by the vastness of universal emotion, whether my fleeting thoughts on Twitter, my photos and status updates on Facebook, my artwork on DeviantArt and Flickr, or my blog entries. My virtual identity has become an integral part of how I express myself, and my plaster cast represents the disembodiment of this expression from the restraints of the physical.


When we are in different physical spaces and communicating through a computer, you cannot read the emotions I am conveying physically through my body language, but you have access to the emotions I have chosen to convey virtually. You know nothing of my physical existence merely from my virtual identity, as I can choose to share whatever I want, perhaps hence focusing more on my inner self than my physical self.

To bridge these gaps, my prosthetic reads and understands the physical triggers I input, each linking to a primary emotion. It then triggers a stream of images and words relating to this emotion and the resulting secondary and tertiary emotions. Any human who sees these images can relate to their connotations, triggering in them the same emotion and empathy, hence conveying my mood and enhancing our communication.

I chose four of the six primary emotions to convey, explored how we express these using just the face and a hand, and then wired up corresponding triggers on my plaster face.


The moving stream of images and words relating to each emotion spins at an increasing or decreasing rate, depending on the duration of the connection or the number of times the connection is made, with the emotions crossfading in and out of each other.
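To sketch how this behaves outside of the actual Max MSP patch, here is a rough Python approximation; the trigger point names, speed limits and step size are placeholder assumptions of mine, not measurements from the prosthetic:

```python
# A rough sketch of the nutshell behaviour above (plain Python, not the
# Max MSP patch): each touch point maps to a primary emotion, repeated or
# held touches spin that emotion's image stream faster, and switching
# emotions resets the rate. The mapping and numbers are placeholders.

TRIGGERS = {
    "brow": "anger",
    "cheek": "happiness",
    "mouth": "sadness",
    "temple": "fear",
}

class EmotionStream:
    def __init__(self):
        self.current = None   # emotion currently on screen
        self.rate = 1.0       # relative speed of the spinning image/word stream

    def touch(self, point):
        emotion = TRIGGERS.get(point)
        if emotion is None:
            return
        if emotion == self.current:
            self.rate = min(4.0, self.rate + 0.25)   # repeated contact intensifies
        else:
            self.current, self.rate = emotion, 1.0   # crossfade to the new emotion
        print(f"showing '{self.current}' stream at {self.rate:.2f}x speed")

stream = EmotionStream()
for point in ["cheek", "cheek", "brow"]:   # two touches of 'happiness', one of 'anger'
    stream.touch(point)
```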



Full Image Credits
BIG thanks to all those who provided permission to use their images and for kind wishes on my project.

Joy of Happiness - Mani Babbar
Happiness - Swamibu
Just a matter of happiness - Stefano Corso
Children, full of happiness - Hamed Masoumi
Happiness - Sabrina Campagna
The Pursuit of Happiness - Dima Bencheci
Happiness! - IIsaías Campbell
Perfect Happiness - Robyn Flett
Joy.Youth.Sky.Blue.Sun.Shine. Sunshine.Happiness - Irina Iordachescu
Rainbow Umbrella - La Valina

Medo / Fear - Xaime
Deamon of Fear, Fear - H Koppdelaney
I feel cold as razor blade - Anna Visini
Fear of the Dark - Stuart Anthony
do u fear love ? - Ahmed Saeed AlDhaheri
Fear - Christophe Dessaigne
No Fear - Bernat Casero
Teenage Fear, Teenage Fear - Michael
Fear - Meredith Farmer
Fear - Sara Pirovano

Sad Old Woman - Hamed Saber
Sad Eye - Rafa Puerta
I get Sad When I'm Alone - Adam Foster
Dora Looking Sad - Jon Burney
don't be sad little bear - T.SC
sad bokeh friday - Harold Lloyd
Sad snowman on Commonwealth Ave. - Mick T.
The Sad Sight of the City - Gúnna
Sad Thoughts - Angelo Domini
Sad Flower - Chris Jones
Sadness - Pascal Steichen

Sad Little Friend - Sarah Azavezza

Anger!!
anger. hostility towards the opposition - Sascha
Anger, Anger 2
Joy or Anger? - Geries Simon
Anger - Ferran Jorda
Anger - Evan Leeson
Yelling - Adam Herdman
Stress 39/365 - Mike Hoff
day 12 - artistic frustration - Michael Verhoef
Stress - Leeroy
Anger Management - Sam Scholefield

Thursday, April 9, 2009

If you're happy and you know it

The final stages of pulling my project together saw a few problems, mostly in the process of wiring up my prosthetic. The plaster face proved to be quite delicate, despite my using up the remainder of the plaster to try to reinforce it and stick in the circuit board from my key-hack. Further problems were encountered when some vital wires came out and required re-soldering.

The programming actually continued to become easier as I went, and I found solutions in my previous patches which I could then integrate and manipulate with the Jitter example patches. From the very first patch we made to control Animata with the Apple Remote, I took the sequence Ryan and I figured out, where we had two metros inputting values to control one overall value, either increasing or decreasing it. I used this to control the rate of a movie, speeding it up or slowing it down depending on whether the switch was open or closed, to represent the intensity of emotion.
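For the record, the logic boiled down to something like the following; this is a loose Python stand-in rather than the patch itself, with the two metros collapsed into a single loop and the step size and rate limits picked arbitrarily:

```python
# A loose Python stand-in for the patch logic (not Max MSP): while the
# switch is closed the value is nudged up each tick, while it is open it
# is nudged down, and the value drives the movie's playback rate.
# The limits, step and tick interval are arbitrary placeholders.

import time

RATE_MIN, RATE_MAX = 0.25, 4.0   # assumed playback-rate limits
STEP = 0.1                        # how much each tick changes the rate

def run(switch_is_closed, ticks=5, rate=1.0):
    """Simulate a few metro ticks and return the final playback rate."""
    for _ in range(ticks):
        if switch_is_closed():
            rate = min(RATE_MAX, rate + STEP)   # contact held: intensify, speed up
        else:
            rate = max(RATE_MIN, rate - STEP)   # contact released: ease back down
        print(f"movie playback rate: {rate:.2f}")
        time.sleep(0.1)                          # stand-in for the metro interval
    return rate

# Example: pretend the switch stays closed for five ticks.
run(lambda: True)
```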

It seems logical and was probably intended that way, but I like how, with each feed-in project and patch created, it became easier to adapt and build on what I'd already experimented with in Max MSP. I mentioned when I first started using it that it was unlike anything I'd used before and I felt like I was working blind, but I definitely feel I have more of a grasp of what is possible with it now. As with the NXTs, it is incredible to imagine that there is so much possible with the software and we only really know a small fraction of it. I liked how Josh mentioned that it was really amazing how everyone came up with something so completely different even though we were all essentially working with the same software and materials, on the same brief and deadlines. This was something I first noticed when we started with Animata: everyone's individual interests, backgrounds and talents were coming through in what they were creating, and this is definitely what I feel is a driving force behind the whole Creative Technologies course and the industry we've been studying in our theory paper (Intro to CT).

With the introduction of Jitter, I chose to use crossfades to alternate between my four videos, though for the longest time I wasn't sure whether it was even possible to crossfade four. In the end, it was achieved by crossfading two lots of two videos and then crossfading between those two results. Essentially, when a key was triggered, say happiness, it would increase the crossfade value between the 'happy' and 'sad' videos until it reached the maximum, so only the happy video was visible. It would also trigger a decrease in the second crossfade value so that, again, only the happy video was visible. This would then output into a movie window so that at any given time only one video and one emotion was depicted between crossfades. I chose to use crossfades as they could depict the transition between emotions, which can often be unexpected and sudden, but also how we can feel 'mixed emotions' over something.
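To make the nesting clearer, here is a minimal sketch in plain Python of the same structure, with the frame 'blending' reduced to a weighted average; the fade directions and values are illustrative only, not taken from the actual patch:

```python
# A minimal sketch of the nested crossfade (plain Python, not Jitter):
# two pairs of videos are crossfaded, then the two results are crossfaded
# into the single output. Frames are just lists of numbers here.

def blend(frame_a, frame_b, x):
    """Crossfade two frames: x = 0.0 shows only A, x = 1.0 shows only B."""
    return [(1 - x) * a + x * b for a, b in zip(frame_a, frame_b)]

def output_frame(happy, sad, fear, anger, fade_pair1, fade_pair2, fade_final):
    pair1 = blend(happy, sad, fade_pair1)    # 0 -> happy, 1 -> sad
    pair2 = blend(fear, anger, fade_pair2)   # 0 -> fear,  1 -> anger
    return blend(pair1, pair2, fade_final)   # 0 -> pair1, 1 -> pair2

# Triggering 'happiness' here means driving fade_pair1 and fade_final to
# the end that leaves only the happy video visible in the output window.
frame = output_frame([1.0], [0.0], [0.0], [0.0],
                     fade_pair1=0.0, fade_pair2=0.5, fade_final=0.0)
print(frame)  # -> [1.0]: only the happy 'video' contributes
```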

Even though for my presentation everything worked as it was supposed to (which I was immensely relieved about - no unsoldered wires, no plaster falling apart etc.), I still don't feel it was entirely successful overall. I feel that my ideas were too complex and perhaps a bit hard to grasp. I think my concept would've come across better if people could have seen the videos closer up, to get the full interaction and emotional impact of the words and images, so the nature of the presentation was something I should've considered. I put time into planning and developing my concept, prosthetic and patch, but next time I will allocate time to how I will actually present it all, to ensure that the hard work I put into the other three is best shown off.

The nature of the presentation is something I feel we all still need to adjust to. One thing in particular I realised often let the presentations down - and yes, I'm guilty too! - is the confession of 'I ran out of time / if I had more time...' What I learnt from doing a lot of performing arts and stage shows is that so long as you look confident, keep going and present as if you yourself thought it was great, it makes a bigger impact.

Otherwise, after thinking about the feedback on my prosthetic, if I were to refine it I would look at projecting the video so as to make the pictures and words more visible, to help get my conceptual idea across. At the beginning, I thought about ideas around personal space and boundaries around touch, and a suggestion got me thinking about physically removing myself and instead having others go up to interact with my prosthetic, physically touching it to trigger the switches. This sort of interaction would also help get across the message, as it forces them to confront it and then see the result of their interaction. The sensor points would probably have to be relocated, as I also feel these were unclear in conveying the body language I wanted. Perhaps I could introduce other elements of the body, disembodied from the confines of a literal body, to represent the disembodiment of emotion from the physical, as happens when we communicate through virtual means. This could also link to meanings around the personal space barriers we place around ourselves, still relating to body language and physical communication, and the cultural differences around touch and physical interaction.

This would require a lot more plaster.

The documentation of my final result is still a work in progress. All the images I used were licensed under Creative Commons, but I am just seeking the final okay from all owners of the images to repost them. No one has had any problems so far, but as I myself post a lot of my photography online, in their shoes I would always prefer people ask, especially when dealing with images that could be heavy with personal emotional significance.

So for now, here is my completed patch which I am actually quite proud of.

Sunday, April 5, 2009

Getting plastered in the name of art

From my conceptual ideas come the visuals you will see before you on Wednesday. The idea for my prosthetic came from the idea of the face as an integral part of communication, and from one of my favourite artists from studying art history last year. George Segal is known for his sculptures made from direct plaster casts of live models, which are left rough and white for "its special connotations of disembodied spirit, inseparable from the fleshy corporeal details of the figure." He would then place these life-sized sculptures into contexts outside the gallery for people to interact with and respond to. Where more than one figure was part of a work, they often do not interact, leaving the viewer to contemplate their relationship.

Image: Three Figures and Four Benches, 1979

The concepts he explored around gestures, stances and statures, as well as inner psychology or spiritual condition, I felt linked nicely to my conceptual ideas around communicating through body language and disembodying and representing the inner psychology of emotions. I have chosen to visually represent emotions and to adapt a method similar to Segal's by creating a plaster cast of my face to act as my prosthetic. As it will be moulded to my face, I feel it will represent a part of me.

Image: Girl Resting, 1970

So following discussion with James and Ryan, a trip to Gordon Harris for some plaster, and a bit more research online, I was ready to get completely plastered on Saturday. Unfortunately this is no one-man task, and due to a lack of communication on my part, my parents misunderstood exactly what I wanted, thinking I was wanting to make a mould of my face (i.e. a negative) where I wanted more of a mask (a positive). Regardless, it was an interesting and perhaps slightly gross experience which involved covering my face in Vaseline, my eyes in Glad wrap, and breathing through straws in my nose for half an hour while the plaster-soaked gauze strips set on my face. The final result required a slight haircut to detach from my face and is mostly a strange, rough-textured mass of plaster, perhaps a more abstract representation of the face. It is not an experience I am in a hurry to redo... yet I regardless found myself with a faceful of plaster again tonight. Fun for the whole family! Round two was more successful and just needs to be refined and sculpted a bit.

Where the classical whiteness and gesture of Segal's sculptures suggest isolation and solitude, I chose to use them to represent the universal underlying emotions of humans. The face is something all people can identify and associate with, where the whiteness leaves it unspecific and ambiguous, serving as a blank slate free of connotations to act as my prosthesis.

It is then rather my videos which act to evoke the emotion which the face doesn't. I was inspired by a project called 'We Feel Fine' - "An Exploration of Human Emotion." This was something I discovered and felt inspired by about a year or so ago. It scours the internet every 10 minutes, searching blogs for the statement "I feel". It then identifies the sentence with a pre-identified emotion based on adjectives and adverbs, and links the emotion to the person's age, gender and locality, and the weather conditions at that locality, which it then organizes into six 'movements'.
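As a toy illustration of the 'I feel' idea (this is not how We Feel Fine is actually implemented, and the keyword list is entirely made up), the matching might look something like this:

```python
# A toy illustration of the "I feel ..." matching idea behind We Feel Fine
# (not their actual implementation): find "I feel" sentences in a block of
# text and tag each with an emotion from a small, made-up keyword list.

import re

EMOTION_WORDS = {               # assumed keyword-to-emotion mapping
    "happy": "happiness", "joyful": "happiness",
    "sad": "sadness", "lonely": "sadness",
    "scared": "fear", "afraid": "fear",
    "angry": "anger", "furious": "anger",
}

def tag_feelings(text):
    """Return (sentence, emotion) pairs for every 'I feel ...' statement."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        if "i feel" in lowered:
            for word, emotion in EMOTION_WORDS.items():
                if word in lowered:
                    results.append((sentence.strip(), emotion))
                    break
    return results

print(tag_feelings("Exams are over. I feel so happy today! I feel a bit lonely though."))
# -> [('I feel so happy today!', 'happiness'), ('I feel a bit lonely though.', 'sadness')]
```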

Each movement is a different visual representation of emotions expressing different ideas, such as 'madness', which depicts each emotion as a coloured particle, all swarming around the screen as a 'bird's eye view of humanity', which I feel conveys the dynamic range of emotions felt at any given time all over the world. 'Mobs', 'metrics' and 'mounds' present a more organized system of data around frequency and sample population, which helps draw meaning and conclusions from the findings.

The two movements which I have chosen to work from are 'murmurs' and 'montage'. Murmurs presents a scrolling list of human feelings in strict formal constraints, which I feel reflects my idea of the difficulty of conveying emotion in a purely textual medium. Montage, rather, is a more effective movement as it displays the images posted with the sentences to visually depict the emotion.

To adapt these ideas, I sourced a variety of Creative Commons images on Flickr; by browsing keywords in tags, I collated evocative images for four of the six primary emotions. Ryan introduced me to another piece of software, 'Motion', which helped me bring them together with text and movement to help convey the emotions. Text was sourced from my friends' status updates on Facebook as an example of people openly expressing their thoughts and emotions through a virtual medium, open to be read and commented on. This sort of open communication, I have found, can form a support structure for those who are feeling negative, or can spread positive emotions, having what can be a positive impact.

With these videos finished, I have started working with the programming, trying to find ways to overlap and integrate these emotions in different intensities, depending on how the switches are controlled...

Saturday, April 4, 2009

Concept and Meaning

So after a day or two of getting sucked into (or perhaps bogged down by?) research and my conceptual ideas, I think I have my ideas more or less figured out. I had my conceptual ideas first but then had some trouble translating them into some way to visually represent them, but finally had my 'ta-da', lightbulb inspiration moment on the bus home one day. Ultimately, upon presentation I will want my ideas to speak for themselves, but for documentation purposes (and also because writing it all out helped me get my ideas properly formulated before I begin construction of my prosthetic), here it is.

The concept:

To explore and understand the relationship between the physical and the digital, human and technology, I have looked at the relationship between human to human and human to computer.

Taking the purpose of the prosthetic to be to serve as an addition, a replacement, an extension, an augmentation or an enhancement, we can in fact say that computers perform all these functions and thus become prosthetics to humans. I think that regardless of how integrated these prosthetics become in our lives, they remain only prosthetics, as we are defined as humans by the capability to have and register emotions and to function according to them. This is an idea I thought about when we first started working with the NXTs.


We do however form emotional, sentimental bonds with technology, which then lead to the assignment of human characteristics to these technologies as they become part of our everyday lives. I, for example, have named my laptop Mrkusich after one of my favourite artists, hence transferring some of the sentimental value from the name, and I always respond to the noise it makes on start-up as if it were saying 'good morning'. These emotional bonds are not limited to any one piece of technology but rather to the existence of technology as a whole, as it enhances aspects of our lives. For example, I have also named my mp3 player (Masaccio), camera (Leonardo) and even my USB flash drive. All after artists of course.


So with this connection, we become more comfortable using technology as an outlet to express emotions. As the nature of communication changes with the introduction of more technology, our human-to-computer relationship begins to act as a barrier to our human-to-human relationships. The nature of human communication lies strongly rooted in body language as a means of visual communication understandable between humans, with most of what we communicate subconsciously conveyed through gesture, posture and expression. As humans, we naturally react and respond accordingly based on what emotions are conveyed, but when we aren't in the physical presence of someone, we find other means of visually expressing ourselves to communicate effectively. As we choose more and more to communicate through digital means, we have developed the use of colour, images, fonts and writing styles such as 'net speak' and emoticons to get the right tone across in what we are trying to say.

This sort of communication is open to bias, as we only convey about ourselves what we choose, whereas with physical communication (body language) it is often a lot more subconscious. Many take advantage of this and therefore seek to use the prosthetics of technology not merely as extensions of their physical existence but rather as escapism from it, with the promises of cyberspace and virtual realities. The technology knows nothing of your thoughts or emotions and operates on a simpler input/output method: you determine what you want to put in, and that's what the computer will know. On the other hand, that's not to say that technology is completely oblivious to our emotions.


Essentially, emotions trigger feelings in the body - tingles, hot spots, muscular tensions - which cause us to respond physically and often unconsciously, giving the indicators of said emotions (hence, body language). We can program technology to some degree to understand emotions based on inputs triggered by human emotions, for example heart rate and temperature. Regardless, even though it can make sense of this input to perhaps trigger a corresponding output if it has been preprogrammed as such, it does not yet have the ability to possess the same emotions or to empathise. Among humans, we pick up on each other's emotions in a way which can then affect our own, known as social contagion.


So, I therefore want my prosthetic to be a bridge between physical and virtual communication. It represents a very real and integral part of any human, that which we read a lot from in any physical interaction: the face. Without the face, as with online communication, we lack a vital key to understanding. I am using the face as it is something we naturally look to for meaning, and often we feel uneasy when unable to see a face in communication. This is enhanced also by leaving it rough and white, evoking this same uneasiness and neutrality of meaning, as well as the idea of emotion as universal among humans; the face, too, is distinctive of humans, real and physical. The computer can be programmed to understand the physicality of body language through touch and which emotion it correlates to, which it will then present in a visual stream of images and words which others have used to express those emotions online.


Hence, let's say I am communicating online with someone who is not in the same physical space as me. They cannot read my body language and facial expressions, which we are naturally attuned to do. My prosthesis however is in the same physical space as me, so it can trigger a response from my gestures. The prosthetic and the computer can only produce a predetermined response in a visual representation I have selected. It cannot make its own judgement as it lacks the ability to feel these emotions itself. However, the visual representation triggers an emotional response in the person with whom I am communicating. The prosthetic thus serves as an extension of my emotion, a replacement for my physical presence, and an enhancement to my message and to the understanding of the recipient.


So what?

A recent issue of Time (April 6, 2009) had an article on learning to be an optimist - A Primer for Pessimists by Alice Park. It mentioned that we are social beings, influenced to no small degree by our friends and family. Dr. Nicholas Christakis undertook a study on the effect of social contagion by looking at Facebook. He noted that people who were smiling in their profile pictures were more likely to have friends who smiled. As social networking becomes increasingly popular, another bit of technology integrated into our lives (surely I'm not the only one who checks my Facebook at what may perhaps be an unhealthy frequency?), it can perhaps utilise the nature of social contagion. As with the example, if we choose to present a positive image of ourselves, then we will attract or influence the presence of positive people around us. So I felt this linked nicely to my own example, where the technology as a prosthesis was utilised as a means to an end, as opposed to an end in itself, helping to break the communication barrier as opposed to being the barrier.