Thursday, November 11, 2010

The Show

Exhibition night was upon us, and in the two and a half hours leading up to it, what seemed like at least half of the 2nd and 3rd year BCT students were rushing around prepping the studio: vacuuming the floor, setting up drinks, moving tables, touching up the partitions with white paint, cleaning the computer screens, fixing up projects and exhibits. It was a really good vibe seeing it all come together, going from a university assessment to a more formal exhibition.

Once people started coming in, it was exciting going around with fresh eyes and showing off not just my project, but those of others, to friends and family. It was fantastic to see the final products which emerged from the briefs pitched to us only a few months ago, especially the ones I had seen only small bits and pieces of over the semester.

I had a few bugs getting Unity Remote up and running, as the connection is quite finicky sometimes, so in the end I just left it running. Whenever people tried it out and it worked, I always received a positive response; something immediately appealed to people about being able to use the iPhone to control navigation through a 3D space on the iMac.

I was a bit surprised that the iPhone app itself seemed to have less uptake. This was possibly because the phones were a bit hard to see just sitting on the plinth, and people seemed hesitant to pick them up and interact with them. From my conversations, if people were encouraged by a member of the team, or saw others using the phones, they were more likely to engage.

That did, however, seem to be the case for most of the exhibits. This being my fourth exhibition during my time in the BCT, I am becoming more aware of what makes a successful exhibit and tried to pay attention to which elements contributed. The animatronic dragon worked well as people were almost forced into interacting with it: it responded to movement via input from sensors, which quite often shocked and surprised people. The duo of third years who made a 3D comic book did well by having copies people could take away, and also by having it running on a large screen. The various flying devices caught people's attention because they were quite visually imposing, but they really relied on being demonstrated to show off what they actually did.

This exhibition was a bit different from previous ones as each group made presentations. I was up first presenting on behalf of our group.




My speech is also viewable by clicking here.

The next morning we had our final crit / question and answer session to meet with the tutors and discuss any other questions they might have. The main point discussed was each team member's role in the group, which led to reflections on how the team worked as a whole. In particular, the international students reflected upon how the nature of the BCT differed from what they were used to.

I found that particularly interesting to hear about, as they are used to a more formalised structure and had some trouble adapting to the more casual nature of the studio paper. At the beginning we had attempted to create a formalised structure and timetable with deadlines, but it wasn't adhered to. I too work better with a structure, and have almost found the balance between keeping to deadlines and exploring those little side roads and detours along the way. I think finding this balance is the trick to this paper, as it results in a richer, better explored project, but one which is also completed on time and not rushed at the end. In saying that, this project too was a little rushed towards the end.

I felt my response was a bit different to the rest of the team's. Everyone talked about their role in the technical side and bringing it together; I mentioned this too, but also talked about my interest in the theoretical and research side, which was one of the main driving forces behind why I chose this project. This, however, was something I only started to get more into towards the end of the project and wish I'd explored earlier.

Overall, the tutors seemed pleased with the outcome and the feedback felt positive. We were told that more time should've been spent on the theoretical / research side, and James summed up by asking us this: "How do you measure the success of a project - by what you've learned or what you've made?"

I feel that this project had elements of success in both these areas; we created a functioning app to fulfil the brief, and we learned a lot of skills to get there. Personally, I didn't learn quite as much as I would've liked, as I chose to step back from the programming side of things and focus on other aspects required to bring the project as a whole together. I learnt a lot of the fundamentals needed to get started in iPhone development, which was one of the things I wanted out of this project. Over the summer I would like to keep working on this and get an application in the App Store early next year. It is a skill I am looking at possibly taking into project work next year too.

I struggled a bit working in a large team, and reflecting on my two years doing this degree, I work better in a smaller group of 2-3, but not on my own. My final reflection and contextual statement is viewable by clicking here.

At the end of the day, it was a good project and I am pleased with the outcome. All that awaits now is the grade! But regardless of that, I feel it was a strong and rewarding project and I am glad I chose it. Next year I am definitely looking at doing a self-directed project and pulling together a strong team. I am going to the CreateWorld Conference run by the Apple Universities Consortium (AUC) and am hoping it will be an inspiring experience to drive me through into next year.

Bring on third year!


Tuesday, November 9, 2010

Exhibition Set Up


Our deadline has come and gone, and tomorrow night will be the big exhibition, presentation and assessment. The requirements for our project were to have the iPhone simulator running the app on an iMac, the Unity simulation running on an iMac, and iPhones running the app.

We decided we would set up our space like a gallery, with the artworks on the walls tagged with numbers so people could try it out like an actual audio tour.

Due to size, space and financial constraints, the works were all printed at A4, despite the fact that many of them are a lot larger, they are all of different scales, and some of them are sculptures / installations rather than something that can actually be framed.

The visual result of this turned out quite well and perhaps makes an interesting comment on the nature of the reproduction of art. I had lots of helpful feedback from James while setting up, and in the end decided on clustering the artworks, as it plays on the fact that they are pretending to be something they're not and makes it obvious that they are, in fact, not actual artworks or accurate depictions thereof.

Regardless of the accuracy (or lack thereof), I think it makes for an eye-catching exhibition and will draw people over. It will be the first proper time we'll have people outside the project and the BCT interacting with our project and app, so it will be a good chance to gather user feedback on usability and aesthetics.




In the end we set up three iMacs: one running the iPhone simulator, one running Unity iPhone interfacing with Unity Remote (on the iPhone), and one running the standard version of Unity with keyboard and mouse navigation and content interaction.

I am looking forward to showcasing and presenting our project, as it has always generated a lot of interest when I've talked about it over the semester.

Finally, below is the documentation I have collated for this project, summing up its key areas and ideas from over the duration of the semester. Some of it is taken from this blog, refined and brought together in a more cohesive manner. It can be downloaded to be viewed at full res (originally intended for A3 size) or zoomed in.

Sunday, November 7, 2010

The App



With the app nearing its finished state, it has undergone a visual revamp, and what we have achieved over the duration of a semester-long project is an app which implements the integral and fundamental components of an audio tour. I feel it has fulfilled the primary aims of being aesthetically pleasing and easy to use, as demonstrated below.





Reflecting back on the original brief, it is important at this stage to think about other features which would enhance the app to meet the desires of the client and fulfil the long-term brief. These features would work to make the experience even more impressive and interactive.

We have begun to tap into some of these, but beyond the scope of a semester-long project there is much more to be achieved past the initial pilot prototype.

Such features include:

Information tailored to user-specific interests. This would involve research and trials into each of the user profiles to find out what would make the gallery experience enjoyable for them.



Access to collateral material such as video, images, audio and text. Reconfigured content would be required to fit within the different user profiles, and additional images and videos would offer more insight into each artwork by showing the viewer more of what is behind the creation of the work, e.g. influential images / art movements, initial working sketches, other works by the same artist.



Connecting with other visitors and the gallery community as a whole, through a social or participatory experience. Social media is embedded into many websites, giving users the ability to connect with existing profiles such as Twitter and Facebook. These would enable users to leave comments, messages and ideas for visitors in the space and those who come after. Discussions around ideas, topics and personal interpretations might reveal even more to a user, or explain a work in a way they might not otherwise have understood.

Tagging works and sending links to their email for later reinforces a 'post-gallery experience': the ability to continue to interact and learn even after they've left the space. Such a feature would encourage return visits.

Playing an interactive activity or educational game, perhaps a scavenger hunt or quiz, would reinforce learning and add the element of a challenge, especially for younger children who may be initially uninterested in the gallery visit.

Access to multiple layers of information. For example, an artwork referencing a specific artistic movement or historical context could allow the viewer to access a section explaining that background. Providing a deeper and wider perspective on the context in which art is created helps the user form a better understanding of what an artwork means. It also means that if a viewer is not interested in the context and only wants to know about the medium of the work, this too is possible.

Hearing multiple perspectives and opinions about artworks. Especially within the context of New Zealand art, hearing a Maori perspective would be a different experience and viewpoint from a Pakeha perspective and reveal more about the works' cultural significance and New Zealand heritage.

Navigating the building is an essential function which we have begun to explore new ways of providing. We are no longer restricted to providing a linear tour, but it is also challenging to consider how the user may want to approach, enter and explore the space. Ideally, this navigation would integrate seamlessly with the artworks. That is, the device would automatically be able to detect not only the user's location but which artwork they are standing in front of, and bring up the appropriate content.



Such a feature would require extensive technical research, a budget and experimentation. Possible technologies would be QR codes, Wi-Fi, Bluetooth or image recognition software.

The client has also discussed embedding commercial applications; for example, if a user 'likes' an artwork, it could suggest they buy a print, and if they've been in the gallery for a while, it could suggest they go to the cafe.

Conclusion
Emerging technologies will continue to change the role smart devices play in our everyday lives. In the long-term execution of this plan, it is important to take into consideration that devices become outdated very quickly, so the content must always be king: regardless of the platform, once the novelty has worn off, it is the UI, the content and the experience that keep the user coming back.

Friday, November 5, 2010

Unity Remote

Once the gallery space was set up in Unity and looking reasonably realistic, it was time to add navigation features. With Unity iPhone, the scripting had to be adapted from the inputs we had been using (touch inputs versus keyboard and mouse inputs), but we also had to consider the practical usability of how best to implement navigation with these different inputs.

This poses an interesting conflict: how do you interact with a 3D environment through a 2D touch interface? With the touch interface, you have no keyboard, mouse, joystick or controller, the traditional components used to move around and interact. This is where the limited screen size and the touch interface become crucial factors.

With a touch screen device, the screen is the one and only form of input, and this method of direct manipulation impacts how the user interacts with the device. The user has both zero and many possible locations on the screen: if you have no fingers on the screen, you have no location. And the hardware is the content: an application becomes the entire device for the duration it is running.
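In Unity's JavaScript this plays out quite literally: there is no persistent cursor to poll, only a list of touches which may be empty. A minimal sketch (assuming the unified input API from Unity 3; this is illustrative, not code from our project):

    // On a touch screen there is no cursor, only zero or more active
    // touches each frame, so every case has to be handled explicitly.
    function Update () {
        if (Input.touchCount == 0) {
            return; // no fingers on the screen means no location at all
        }
        for (var touch : Touch in Input.touches) {
            if (touch.phase == TouchPhase.Began) {
                Debug.Log("New touch at screen position " + touch.position);
            }
        }
    }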

As a starting point, we had a look at Penelope, an iPhone app available in the App Store which was built using Unity 3D. The source code, files and a tutorial are available on the Unity website for learning how to integrate UI elements into Unity applications for iPhone.

It offers three different ways to navigate through the space, two of which are based around an on-screen 'joystick' offering the option of camera-relative or player-relative control. The third option is a tap control, where the figure moves to where the player taps.

Unlike this game, for our app the user will not be controlling a ‘character’ on screen, but rather they will be navigating themselves through the space, hence the camera will act as the ‘player’ that the user controls. As this is not a game, the joystick interface is not a relevant one.

The tap to move is the most applicable. However, as we have other elements on screen that the user needs to interact with, it could get too difficult to accurately select where the user wants to move to.

Instead, the other feature of the iPhone we can utilize is the accelerometer. This way, the user can move through the space by simply tilting the iPhone in the desired direction of movement, and tap on the screen only when they want to interact directly with the space (i.e. the 'bubbles').

For our application, this is more intuitive than the joystick and more functional than tap to move, but it needs to be calibrated carefully to allow for natural movement as the user holds the phone, so it doesn't move around when the user wishes to stay still.




To implement this through code, the online documentation and scripting reference were extremely helpful. I was given the code for reading the accelerometer input from the iPhone and using it to move an object. From there it was a matter of calibrating the movement so it seemed natural in the gallery space, and setting thresholds for the values so that the user could hold the iPhone naturally and it would only move when the user intended it to. I also ran into issues where, once this code was applied, the person controller stopped reacting to the colliders on the walls and would move straight through them, and this had to be fixed.
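For anyone curious, a minimal sketch of the idea (in Unity's JavaScript; not our exact script, and the speed, dead zone and resting tilt values are placeholders that would need calibrating by hand):

    // Tilt to move, with a dead zone so the phone can be held
    // naturally without the camera drifting.
    var speed : float = 3.0;
    var deadZone : float = 0.15;  // tilt below this threshold is ignored
    var restTilt : float = -0.5;  // assumed 'neutral' tilt when held naturally

    private var controller : CharacterController;

    function Start () {
        controller = GetComponent(CharacterController);
    }

    function Update () {
        var acc : Vector3 = Input.acceleration;
        // Offset by the resting tilt so 'held naturally' reads as zero.
        var x : float = acc.x;
        var z : float = acc.y - restTilt;
        if (Mathf.Abs(x) < deadZone) x = 0.0;
        if (Mathf.Abs(z) < deadZone) z = 0.0;
        var move : Vector3 = transform.TransformDirection(Vector3(x, 0, z)) * speed;
        // Moving via the CharacterController (rather than setting
        // transform.position directly) is what keeps collisions with
        // the gallery walls working.
        controller.Move(move * Time.deltaTime);
    }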

The swipe to rotate was a little more difficult. I was working off code which took the swipe movement across the screen and used it to translate an object. I spent a few days trying to recode this to rotate the camera instead, to give the user the ability to 'look around', and after building up the calculations purely through code, I found a function which did all the maths for me and reduced my code down to about half a dozen lines. It was, however, a good learning experience.
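Roughly, the final version looked something like the sketch below (Unity JavaScript; Transform.Rotate stands in for the function that did the calculations for me, and the sensitivity value is a placeholder):

    // Rotate the camera from a horizontal swipe across the screen.
    var rotateSpeed : float = 0.2; // degrees per pixel of swipe

    function Update () {
        if (Input.touchCount == 1) {
            var touch : Touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Moved) {
                // Swipe right to look right, swipe left to look left;
                // Transform.Rotate handles the rotation maths internally.
                transform.Rotate(0, touch.deltaPosition.x * rotateSpeed, 0, Space.World);
            }
        }
    }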

The video below shows a demo of this in action.



Unfortunately, some of the other code we tried to adapt, which created the bubble behaviors and revealed the content and interaction with the artworks, generated a lot of errors. A lot of what Unity handles in the background has to be done manually in Unity iPhone, and with the time constraints, we weren't able to adapt and integrate it.

So for the final hand in of this component of the project, we have two standalone aspects which we will present: Unity iPhone running on an iMac hooked up to an iPhone which enables the user to navigate through the space, and another iMac running standard Unity and interacting with the bubbles and content through mouse and keyboard.

If this were to be developed further, the next stage would be to successfully amalgamate the two and have it running on the iPhone as an app (currently it is only communicating over WiFi and acting as a remote). From there, it could be embedded into the audio tour iPhone app as an option for navigation, with the 'bird's eye' view navigation / basic map accessible as well, since not everyone would be comfortable using the 3D interface and some would prefer to stick to the familiar 2D map.

However, I am pleased that we will be able to exhibit these two components at our final presentation.

Wednesday, October 20, 2010

Audio half of the audio tour


Our audio is recorded and edited! I met with Ben again today to follow up on the recording session we had last week and to assist in editing it. Although it was really him doing the editing, it was useful to see how it was done and get an understanding of the process.

So we now have our audio, and it is now a matter of splitting it up into the different 'sections', i.e. in the Unity application, the 'sub-bubbles' will give access to different sections of the content if the user is only interested in, for example, the medium of the work. The PDF below shows how the content has been divided up into the different headings which make up the sub-sections of accessible audio content.






Realistic Dimension

This week I have been attempting to make our Unity recreation of the gallery space more realistic. If deployed, this part of the app would create a more augmented gallery experience, combining the basic requirement of navigation with a more abstract, simultaneous exploration of the concepts and information around the artworks.

To do this, I have been working to get the artworks and gallery to a more realistic scale. I mocked up the artworks to scale in Illustrator and was surprised at how much they actually varied.

As we hadn't been given a scale or any measurements with the floor plan of the gallery which we'd used to build up the 3D model of the space, I had to judge by eye what looked right, using the animated tour of the gallery as a guide. However, once the artworks too had been made to relative scale, some of them seemed ridiculously small.

I debated this situation for a while because, having studied art, I know that the size of an artwork impacts its meaning. For example, studying abstract expressionist paintings in high school from small-scale reproductions in books and online didn't compare to seeing them in real life at such a scale that they completely encompass one's field of view. The size of an artwork is interrelated with its medium and meaning, which is why I was stressing about getting this accurately depicted within the Unity recreation.

So for the purposes of this application, it makes more sense to have the artworks at a size where they are visible, especially in the context of the iPhone as a platform, where you have a much smaller screen size than, say, a computer. Taking usability into account, the artworks also act as clickable 'buttons', hence they need to be a reasonable size for the human finger to touch without difficulty (the ideal minimum button size for the human finger is 44x44 pixels). In terms of visual aesthetic, it is important the viewer can actually get a good view of the artworks within the space, so they can see what links are being made without having to fiddle too much with the navigation to get 'closer' to the work.

As the primary purpose of the app is to be used in the gallery itself, the user would have the artworks in front of them anyway and would be able to make the visual connection. So at this point, usability is more important. The scale of the gallery itself is a huge improvement as the initial mock up had seemingly small rooms, high walls, narrow doorways and no ceiling.


(Click to enlarge)

I then worked on trying to get the lighting right. In the animated tour, the gallery has more of a cream-coloured light, which I tried to recreate. The difficult part is getting not only the tone right but also the lighting itself. Currently it looks quite dim, but it is an improvement as it looks a bit less like a 'cold' 3D render.

This part of the project will likely not be integrated into the final app itself, but will rather be a standalone component which supports the project as a whole. It is important to deploy it and start testing it on the iPhone itself so we can start getting user feedback. It makes sense to us because we have been working with it so closely, but it has been a difficult concept to explain to others; we are hoping that if we create it right, it will make sense to the user upon being presented with it.

The creation phase needs to start drawing to a close so we can finish testing and documenting this project for our final presentation and the end of year exhibition. Feedback we can get now will inform our reflections and give us a greater depth of understanding to discuss in our final critique.

Sunday, October 17, 2010

Navigation in 3D Space

Even though we are not sure we will be able to implement the Unity 3D navigation audio tour interface as an integrated part of the final app, we are still aiming to have it as a functional app in itself for this project.

Once the 3D model of the space had been built in Maya from the floor plan of the gallery we were provided, it was imported into Unity 3D and the artworks added onto the walls as game objects (the scale and positioning of these artworks is not precise).

Game objects in Unity can then have scripts and behaviors added to them. Trixi made an initial version to give an idea of what we want to achieve in our plan to visually connect the ideas, audio content and artworks within the 3D space. The video below shows this: when an artwork is clicked, a bubble pops up. When this bubble is clicked, related bubbles appear with related ideas to link to other works within the space. A 'play' bubble also appears which, when clicked, triggers the audio content to start and changes to a 'stop' bubble.



As I hadn't done Javascript before, my initial experimentation was the movement you see when the bubble starts 'bouncing' while the audio is playing.
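A stripped-down sketch of that bounce behaviour (Unity JavaScript; it assumes the bubble has an AudioSource and a collider attached, the click handling is simplified, and the values are placeholders):

    // While the attached audio source is playing, bob the bubble
    // up and down on a sine wave; otherwise sit still.
    var bounceHeight : float = 0.2;
    var bounceSpeed : float = 4.0;

    private var startPos : Vector3;

    function Start () {
        startPos = transform.position;
    }

    function Update () {
        if (audio.isPlaying) {
            var offset : float = Mathf.Abs(Mathf.Sin(Time.time * bounceSpeed)) * bounceHeight;
            transform.position = startPos + Vector3.up * offset;
        } else {
            transform.position = startPos;
        }
    }

    // Clicking the bubble toggles play / stop. OnMouseDown works for
    // mouse input in standard Unity; Unity iPhone needs touch handling.
    function OnMouseDown () {
        if (audio.isPlaying) audio.Stop();
        else audio.Play();
    }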

From here, I created the below flowchart to plan out exactly how these bubbles should behave in reaction to each other and to the user's interactions. From there, it is a matter of figuring out how to code it.


(click to view larger)

We will probably also face a few difficulties when it comes time to convert it over to an Xcode project. We have been working on the standard trial version of Unity, so we will have to adapt it to the iPhone version, which will involve changing 'clicked' actions over to iPhone-based gestures.

What I've had more trouble with is getting my head around yet another new language. I briefly learnt and last used Java a year ago, and with this and a general understanding of programming, I can look at scripts and documentation for Unity and get an idea of what is going on. But I am having trouble writing it myself and figuring out how to make it do what I want, so I am getting stuck, especially as I've spent the last two or so months learning and focusing on Objective-C.

Though it isn't overly complex in itself, setting up the relationships will be more fiddly than anything else. Hopefully once I've discussed it with the tutor, it should become clearer and be ready to deploy onto the iPhone next week.

Saturday, October 16, 2010

From code to iPhone

Version 1.0 of our app has been deployed! This version is the initial integration of the various components each team member has been working on: the base navigation structure was mine, the map view was Ryan's, and loading the content from the database was Dodo's. The three components were integrated by Trixi.



It is of course still not without its flaws (and has crashed on occasion while I have been showing people), but it was a good and important experience to learn how to actually get an app onto the iPhone.

From here I have made the decision to step back from the coding aspect to focus on the other important aspects of this project. From the outset we realised we would run into difficulty in that we had six people in the group and everyone wanted to do coding. As a result we have almost forgotten that we need audio content as well, among other things.

I have made this decision as I feel content having spent some time gaining an understanding of the language and building something functional which has been deployed, even though the final version will most likely be far from this. Instead I will be focusing on getting the Unity 3D navigation component up and running to be deployed on the iPhone too, and getting the content ready. This involves making sure that the audio is recorded and cut up to fit with how we want to use it, along with any other content we may require.

Additionally, there has been the idea of modifying the app for exhibition purposes. For the end of year BCT exhibition, we aim to have both the Unity application and the iPhone application ready and set up for demonstration purposes, but to make it more interesting, James suggested we modify it so that the content is an audio tour of the BCT exhibition itself.

As we only have material for nine artworks for the gallery tour, this would be a much better way of demonstrating what the application is for and how it behaves, as well as a good way to get valuable feedback on the usability of the app. It would also be a good way to provide people from outside BCT with more information about the projects on display and the BCT as a whole.

I am excited about the potential of doing this, as in previous exhibitions I haven't felt particularly enthusiastic about showing off my work; none of my exhibits have been particularly engaging, as they've all been fairly static. I am looking forward to having the opportunity to produce something more interactive to exhibit.

Thursday, October 14, 2010

On Air

Two hours in the radio recording booth and we managed to get all our audio recorded. One thing we quickly realized was the difference between reading something in your head and reading it out loud. A lot of simple grammar or spelling errors which we automatically correct and understand become big obstacles to trip over. A few times we had to stop and restructure the sentences or add or remove words.

A few interesting things came up which we hadn't considered. For example, a lot of the descriptions had Maori words in them, and it was important to get these right. As the tour would be documenting the permanent collection of the gallery, many words have links to Maori culture and significance, especially as the collection tells the story of New Zealand. Ashlee was familiar with the language so could make sure the pronunciation was correct, and Leif's European language background was useful in getting the names of some of the more foreign artists right.

On this note, we also made the observation that many people from overseas wouldn't understand Maori terminology, and in a lot of the works the usage isn't explained. This creates the opportunity to build into the app a glossary of Maori terms, so that within the written text the user can tap on such a word and get a definition or explanation. This works only to enrich the gallery experience we are creating, as it reinforces the meaning of the work in relation to the historical and cultural context of New Zealand art and heritage.

Ben and Ashlee were very professional and familiar with the equipment and the process, which ensured it all went smoothly. Their voices were authoritative and friendly, and they kept a good pace: clear enough for Leif to understand but not so slow as to be boring for native English speakers. It was also good to differentiate between the voice used for the descriptions and the voice used for the quotes, as the quotes were meant to sound more like a conversation with the artist so didn't have to be as formal.

Even though I'd read through the descriptions of the artworks before, it was completely different hearing them read out. It is too easy to skim when reading words on a page; it makes more sense when you have a voice telling you about it. I feel Ashlee and Ben did this really well; I could've listened to their voices all day.

Even though I've been interested in art for a few years and been to many galleries and exhibitions, I have never taken an audio tour. I found myself considering what the reason for this might be and realized it is because I like to explore and learn at my own pace and whim. I've learnt about art by studying it through books and discussions with friends, teachers and classmates. That, I feel, is the most enriching experience, and I've never felt the appeal of an audio tour. So it was interesting that by recording the audio for our tour, I already felt like I was experiencing the works in a different way. It made me look forward to actually going to see the works when the gallery opens.

By having to voice the audio, I think Ben and Ashlee also got a better understanding of the project. At one point when Ashlee was reading and Ben was recording, the description referenced another artwork than the one being talked about. We explained to Ben that in cases such as this, the referenced artwork would be linked to within the app (if it was also part of the exhibition) or an image would be shown. Ben described it as 'hyperlinking' in real life, and I think this is a very good way to describe and approach it.

So with the audio recorded, it will be cut together into the appropriate sections, ready to be put into the app. Ben and Ashlee have gone above and beyond to help us out, and it has been good learning a bit about how their department works and what they do. They even have a radio station, Static 88.1, so have a listen and support them!

So with the audio side of the audio tour well underway and the app being pieced together, the 3D Unity navigation interface also needs to be completed and implemented. With one week left until study and exam weeks, the end of this project looms ever nearer.


Wednesday, October 13, 2010

Putting the 'Audio' into Audio Tour.

As we enter a new phase of this project, recording the audio, we once again need to consider the implications of any decisions we make at this point. Upon meeting with the two people from communications who are helping us with this aspect of our project, I found it was good for us to have to explain the project to someone outside the sphere of the project and the creative technologies department altogether. During our meeting, it was highlighted that collaborating between the two disciplines would be interesting: they're coming from a disciplinary culture whose industry has a very 'top down' mechanism, whereas BCT culture has a different, more collaboration-based directorship model.

Rather than having them just "be in" our project as the voices, we want them to be a part of it; bringing their knowledge and expertise to it makes it a richer project. Rather than "it's your project, your assignment, how do you want it done", we want their input, as they have the knowledge of what makes good audio, and we want to tap into their professional advice, all the while still considering the requirements of the client, to whom we answer; ultimately the project and its outcome are our responsibility.

Audio is not something any of us have worked with in depth, so their technical expertise is definitely something we can utilize, especially as they have the resources and the equipment and know how to use them.

Looking over the written content we were provided brought up the differences involved in converting written content into spoken content, as the two differ, but Ben and Ashlee, our two voice artists, seemed adept at this. We decided it would be a good idea to take one of the international students in our group along to the recording to ensure it was recorded in such a way that they understood it clearly, as a lot of 'self-improvers' would probably be tourists from overseas whose first language might not be English.

Important issues are raised in terms of the effect the audio will have on the project, especially within the scope of the long-term project. For this shorter component of the wider project, we are targeting specifically 'self-improvers', so we need to consider what sort of voice they would want to hear, while also considering, for the future scope, what 'character profiles' we have and what sort of voice they'd want to listen to - what is that 'voice' to different people? This is part of breaking away from the 'one size fits all' approach of the traditional audio tour guide, with the authoritative voice telling you what to do and where to go, where everyone gets the same experience; it is very linear, very prescriptive, it's not about you. Rather, art should be about your personal experience and interaction with it and what you want to get out of it.

Ben and Ashlee are used to talking in a 'radio' voice, and while that might not be exactly the right tone, they're used to the concept of having to sit down and pretend they're having a conversation with a person who isn't there. That more casual, conversational voice is going to be desirable for some of the character profiles and for most self-improvers, as it is familiar and friendly. We're aiming for friendly yet informative, and after talking with them, it seems they are able to do the whole range of tones.

Finally, there was the issue of how to split up the content into the male / female parts. It was decided that where the artist of a work was quoted, it would be appropriate to have the female voice quoting a female artist, with the rest of the dialogue in the male voice; this also works vice versa where a male artist was quoted. This resulted in a fairly even balance between the two voices.

Voice and audio is not something I've had to consider as part of a project before so it is good to have the assistance of people who know a lot about this area.


Wednesday, October 6, 2010

Creating

Phase Five - Creating
23rd August Onwards

The creation phase is well and truly in progress! We have spent a month or so working on picking up the language used for developing iPhone applications. We have varying amounts of programming skill across Processing, Arduino, Java, Javascript and C++, so if nothing else, we are familiar with the fundamentals of programming.

For me, jumping into Objective-C and object-oriented programming was a bit of a challenge. Most of the last month or so was spent working through examples and tutorials and sitting in with the 1st year BCT students, who started learning from scratch about two weeks after we had. Coming back after the mid-semester break, it was time to start trying to write our own app.

For my first attempt, I tried to make a series of custom buttons to reflect the interface on the Auckland City Art Gallery Website and have them lead to separate views which would ultimately contain content.

As the client didn't specify what sort of visual aesthetic they were interested in, we decided to stick with just trying to reflect the aesthetic of the website.

The Flash interface which forms the main component of the homepage is clever and aesthetically pleasing, but redesigning it for a touch interface poses new challenges, as it is a different visual language to the web. The obvious initial limitation is size: those three simple buttons running down the side already take up about a third of the screen. This led to the debate of whether our app should run primarily in landscape or portrait mode.

The arguments for landscape were the strongest: a larger proportion of artworks are landscape, and buttons and content could be more easily arranged to be aesthetically pleasing.

The arguments for portrait mode were that table views (which we are highly likely to be implementing in one, if not many, forms) work better in portrait. I argued that it feels more instinctive to hold an iPhone in one hand and, hence, in portrait, while holding it in landscape feels unnatural in one hand and almost demands to be operated with two.

The conclusion seemed to be that, overall, landscape worked better for the visual aesthetics and content, while portrait worked better in terms of physical usability. Obviously we could have it operational in both modes, but this would require more intensive coding and might be less instinctive to use, as the user would have to switch between them and think about how they have to hold it. There is also the added delay in the time taken for the screen to rotate.

In the end we decided to work towards a landscape orientation for the time being and spend some time mocking up some visuals to reflect these ideas. From here we began to build our visual mockups.



Here I attempted to amalgamate the Auckland City Art Gallery visual aesthetic with the iPhone UI features. It is not properly to scale (if scaled down to an iPhone screen, the text and buttons would be far too small). I tried to make the buttons look more 'pushable' by adding a gradient and drop shadow. The black buttons represent the functionality of the UITabBar commonly found in iPhone apps.



Dodo's mock-up takes scale into account and the aesthetic still reflects the existing gallery's style. I argued that it looks too flat, with no obvious visual cues as to what is pushable. We also discussed the issue of the buttons being too small to 'push'.



Daniel's is again a bit different, taking on board the colour theme and general style without recreating it exactly. It makes good use of the limited space by sticking to smaller buttons. In all cases, I think the semitransparent black text boxes make effective use of space.

Bones, flesh and organs - building up the app.

With the end of semester drawing near fast, it was time to make a plan as to what components needed creating for this app and to actually get it running on an iPhone for testing. A formative assessment with two tutors provided positive feedback on where we were at and suggestions on how we can work more cohesively as a team.

As we all want a part in the coding, we delegated different sections of our proposed app among the six of us. Our task list was as follows:

- create a Singleton and access it from another view
- navigation for the application (blank views)
- view that loads details from database
- create a structured database and fill it with content
- figure out audio content, record and edit it
- play Audio content
- 2D Map Navigation
- User interface & Graphics

My task was to create the bones of the app: the structure and navigation that the other sections would be integrated into. Despite having initial trouble getting my head around root controllers, view controllers, table view controllers and navigation controllers, I found a tutorial on YouTube very helpful, and once I'd created this, I was able to start building on it.

It doesn't look like much yet, but I felt proud of what I'd achieved. We'd agreed that the default aesthetics of the tab bar, table view and navigation controller weren't particularly exciting, but there are ways to customize them. This here is the bare bones.

Dodo was working on creating and loading content from a database, which she successfully got working. This too is in the early stages of development, and this sort of content loaded from the database forms the vital organs of the app.

Ryan is working on the scrollable 2D map with buttons using MapKit and UIScrollView, which has been a challenge. The 3D map, which Trixi and I have been building using Unity, poses its own set of challenges too.

With only a few weeks to go, there is much more creating to do!

Sunday, August 15, 2010

Phase Three: The Idea

Phase Three - Planning
9th - 15th August

As with the design of anything that is to be used by a variety of people, we were faced with the inherent challenge of how best to create it so that it is functional and user-friendly. In a paradox, it almost has to be both simple and complicated: complicated in that it performs a wide variety of functions and offers a lot of features, but packaged in a way that is intuitive, so the user is unaware of the complexity. Each of us attempted to devise a flowchart-style visualization of how our app could be structured and navigated.



(Click for larger Images)

Phase Four - The Idea
9th - 15th August

In our case, in order to create a successful app, we need to devise a way of navigating simultaneously through the physical space and the content. The idea came when we came across online visual brainstorming and thesaurus tools.


Visuwords and Visual Thesaurus demonstrate ways of visually exploring concepts and ideas and the links between them. This prompted the idea: what if we could create this sort of visual concept navigation in combination with the spatial navigation? Such a solution offers a way for the user to navigate through the space as they browse the topics, themes, artworks and artistic elements they are interested in, while immediately seeing them placed in the physical space.

To achieve this, we plan to integrate Unity 3D. This software enables us to take a 3D model of the gallery space, made in 3D modeling software (e.g. Maya, 3DS MAX), and set up camera movement and other features commonly integrated into 3D games. What's more, it can easily be converted to an Xcode project to deploy to the iPhone. Hence, this would enable us to take advantage of the hardware features of the iPhone for navigating through the space, such as tilting to navigate via the accelerometer, or the touch gestures.

The visual brainstorm would take the form of 'bubbles' in the 3D space. The user can 'follow' the ideas and bubbles, which would lead them to works and then reveal more about those works.


For the initial deployment of this project, it would act independently of the user's actual physical location. In the long-term implementation, ideally some sort of real-time location tracking would make the navigation more natural. That is, it would be able to determine which room the viewer was standing in via Bluetooth, GPS, IR, RFID or similar, and in combination with hardware such as the gyro and accelerometer, the map could update in real time to accurately reflect location.

Even at this stage, it offers the viewer the ability to skip ahead and plan out which direction they want to head in. For those confused by the interface or the bubbles, there is the option of narrowing down which strings of bubbles they want to see, turning them off completely to see just the artwork titles, or switching to a simple overhead view for a more traditional mode of navigation.

The effect of augmenting the gallery space in such a way creates an interaction and dialogue not just between the viewer and the works, but also between the works and the space. It is not often one gets to see the relationships between the works when in fact, a lot of work has gone into curating exhibitions around such dialogues and connections. We have been told by the client that the works in the permanent collection tell the story of New Zealand and so by presenting the works in such a way, it deepens the understanding of the collection as a whole, rather than just as standalone works.

The next phases are in fact to be the most time-consuming. There are many elements we have to bring together, in particular the Unity 3D interface and learning the language of iPhone apps. The conceptual framework and technical structure are both vital to realizing this project.


Sunday, August 8, 2010

The initial phases

Phase One: Ideas and Brainstorms
26 July - 01 August 2010


Our initial exploration of the task of creating an interactive audio / multimedia tour guide for the Auckland City Art Gallery posed more questions than answers. The key ideas to explore were who the gallery-goer and target audience are, what we should consider in regards to the usability of the hardware, user interface and content, and what the purpose of an audio tour is in traditional and modern contexts.

These answers we began to find through discussion within our group, with lecturers and with the client, as well as through research.

Phase Two: Research
02-08 August 2010

Why an audio tour?

Audio tours as an accessory to a museum or gallery visit reveal more and deeper levels of information about an artwork than may otherwise be available to, or understood by, the gallery-goer. Most galleries have small placards of text alongside each work, but an audio tour offers a greater depth of information than could physically fit. They encourage visitors to spend more time at the gallery and engage in more of a learning experience.

Why an interactive / multimedia device?
Interactive devices deliver this content in a method that is more appealing and engaging; hence people can learn more while enjoying it.

The client has identified the key aim of such a device as helping users engage with artworks in a meaningful way, which works to build and strengthen the relationship between visitors and the gallery.

Who is the target audience?
Characteristics of the gallery goer to consider:
- Are they a student?
- Have they been to a gallery before?
- How old are they?
- What culture are they from?
- Are they tourists?
- Are they by themselves or in couples / groups?

The client has identified their target audience for the device based on a visitor profile developed through research by the Tate Museum, which looked at audience behaviors and motivations for visiting the gallery. They have decided to target self-improvers, who are looking for ways to broaden their knowledge and skills. The device would serve this target audience by acting as a learning tool that helps them build skills for looking at art and strategies for understanding it.

What sort of content should it deliver?
Many people are put off traditional audio guides because of the connotation of the droning authoritative voice. However, this can be likened to never going to the cinema again after having seen one bad movie. The audio guide is just the platform; the content itself is vital to the user experience.

The content itself should offer various depths of information, giving the user the choice of how much they want to access. As well as the traditional audio, users can access images, video and navigation options, and expand upon the information by delving deeper into specific aspects such as the historical / cultural context, symbolism, the artist, the art style / movement, and related works.


The content will be provided to us by the gallery, and for the initial stage of this project, which we are undertaking this semester, we are focusing on eight works from the New Zealand permanent collection that is the overall focus of the wider project. It has been discussed that for the longer term, a connection would be set up to Vernon, a database which offers collections management software for museums, galleries and other cultural heritage sites.

How should the content be delivered?

The initial platform we are using is the iPhone. We discussed the pros and cons of web-based content versus a self-contained app, and for the initial project, we are focusing on a self-contained app. In the long term, we will aim to deploy content that is web-based and available on any mobile device, so that visitors to the gallery can use their own devices.

The cellphone platform offers a way to attract new audiences. Jane Burton, curator of interpretation at Tate Modern, said: "I was particularly interested in finding out whether it could reach new audiences who wouldn't have considered taking a traditional audio tour. I suspected it might."

Museums in the US have found that using the cell phone as a platform reduces the infrastructure and staffing costs of audio tours. However, visitors can be put off by potentially incurring high data fees from mobile service providers, especially foreign visitors incurring roaming fees.

The advantage of an iPhone app is that it provides a more or less consistent platform; where a general mobile solution would have to cater to a wide range of screen sizes, resolutions, functions, etc., the iPhone offers a set of consistent features, and so by using it as a platform, we can utilize the features it offers, such as the swipe and gesture functions and the hardware.

It is important that if it is deployed on a device such as the iPhone, the content is what draws the visitor to keep using it after the novelty of the device has worn off.

What has been done already?



Such devices are becoming more common at museums and galleries around the world. The existing iPhone applications we downloaded and had a look at were standalone guides that worked outside the gallery. What we're wanting to create would enhance the gallery experience and offer something more when used in the gallery space.

The overall aesthetic of these apps was quite plain, with many of them following the standard iPhone app template, even the one MoMA recently launched.

We think we can go above and beyond what is already available. What is achievable this semester will serve as the building blocks for what it will become. Our goal is to create an app ready to launch which provides the fundamental components of the overall aim, with the potential to expand further into the other components that we and the client have identified. We plan to explore all these avenues so that the groundwork for them is laid and reinforces what is initially executed.

The next step is to begin planning how it is to be structured, and to identify and learn the skills we need to create it and put it together.

Research sources:
A Museum Electronic Guide in Real Use
Cells and Sites: How Historic Sites are Using Cell Phone Tours
The learning experience with electronic museum guides
London Museum Releases Cool Augmented Reality App
New Plymouth Museum Puke Ariki launches iPad based visitor experience
When In Roam: Visitor Response To Phone Tour Pilots In The US And Europe
"I never take audio guides. I can't stand them!"
Visitors want to know 'Why?' (museum handheld guides)
The eyes want to have it: Multimedia Handhelds in the Museum (an evolving story)
Designing Visitor Experiences with Mobile Platforms in Museums
MoMA iPhone App Puts a Museum in Your Pocket

Monday, August 2, 2010

Auckland Art Gallery Audio Guide Project

Take Leonardo da Vinci painting the Last Supper, Masaccio painting the frescos of the Brancacci Chapel, or Paolo Uccello painting The Battle of San Romano hundreds of years ago; it would've been completely incomprehensible to them to be handed an iPhone and be able to scroll through, zoom in and out of, share and collect the works they spent months, even years, working on.

So of course, that is my project for the semester. As part of the Auckland Art Gallery Developmental Project, we will be working on developing a multimedia audio guide to use as an interactive learning tool which helps visitors develop strategies for understanding art. The gallery reopens its main building in 2011 following development and reconstruction.

The client has come to us wanting a technology partnership as a way of exploring ideas and technologies beyond their expertise, delivering engaging gallery experiences through an innovative platform, and incorporating the perspectives of professionals working outside the museum / gallery industry.

They are wanting to create a visitor experience that will:
• Provide inspiring, imaginative and enjoyable experiences
• Nurture curiosity and foster life-long engagement with the arts
• Create opportunities for dialogue and debate through interaction and participation
• Encourage visitors to make meaningful personal connections with the collection

Given my previous interest in studying art and art history, this project immediately appealed to me, and with the potential of launching this sort of application on the iPhone as a platform, I jumped at the chance to learn iPhone coding and development through a practical application.

What they are initially wanting from us (which will take us through the duration of this semester) is to investigate possible ways to approach this, develop a working prototype and explore how the initial device will fit into plans for its long-term development.

I am excited about this project as it is a more real-world application of our studio work. For the first time in this course, I am beginning to find an application for my interests and skills that I could work towards as a possible path after completing the BCT. There are a lot of other exciting opportunities coming up during this semester to network with people in the industry and potential future clients, namely the Semi-Permanent Conference and the Creative Tech Conference. Specifically, I hope to become adept at the iPhone platform and app development. So all in all, it should be a busy semester ahead.

Tuesday, June 29, 2010

Developmental Process and Construction





Contextual Statement:
A suitcase, even simply as an object, is inseparable from a wide raft of connotations and associations: travel, holiday, movement, mystery. The original brief encouraged the exploration of the suitcase as a symbol of modern life, in a society where we move around a lot more, are more globalized, and often pack up our entire lives in a suitcase, whether by choice (migration) or necessity (refugees). Increased mobility in a changing world has also impacted the suitcase; it is often the object of scrutiny, regarded as a potential threat until proven otherwise - which suitcase contains the condensed and compartmentalized life of a humble traveller, and which contains a terrorist threat?

I chose several of these elements to draw upon. The intrigue of this banal, everyday object lies in the mystery and uniqueness of its contents. Though my suitcase is open, most of the internal section is concealed, and with it all the mechanics. The viewer is confronted only with the sleek reflective black surface and weighty metal cubes.

The interactive nature of the work is that the viewer is asked to unpack and repack the suitcase. It is a strange paradox of control where their actions determine the output, yet they aren't consciously aware of how they are affecting it. How often is this the case with modern technologies, where we aren't fully aware of exactly what the implications of our interactions are? And yet, we can interact with them without fully understanding.

It is in fact these technologies which have had an impact in shaping what we call modern life. I chose to look at the work of the Futurists, who 100 years ago were exploring the implications of their own modernity with the emergence of the mechanical age. The immense social and economic change stimulated by the technical achievements of the modern age produced the founding manifesto of Futurism, which declared that the world's magnificence had been enriched by a new beauty: the beauty of speed. The Futurists celebrated the dynamic synergy of man and machine and the inescapable presence of speed in modern life.

Since then, technologies have exponentially developed and our concepts of time have been greatly influenced by networks that offer instantaneous information and connectivity. The speed of modern life dictates that we must always be on the move, always be accessible and must be in touch with these technologies to not get left behind.

Futurists tried to represent concepts of time in static forms through sequences of movement sweeping across a single composition. I chose the medium of photography as it can represent the accelerated pace of modern life by recording in sharp, frozen detail a minute slice of movement too quick for the eye to see. The pressure sensors act as the interface, the metaphorical 'pressures' of modern life which dictate the pace at which we must live our lives.

My final work, though not yet fully resolved, explores the movement of a figure, represented within the suitcase, manipulated by the contents of the suitcase and their arrangement. The suitcase represents a life determined by movement and mobility, condensed and compartmentalized. The pace and way we move are controlled by external elements outside our control. Modern life has sped up to the point where clarity is lost, moments are fleeting, and it all becomes a blur of movement.

Are we no more than a suitcase, a vessel within which we pack our lives, only to be moved around on the conveyor belt, out of our control?