Thursday, November 11, 2010

The Show

Exhibition night was upon us, and in the two and a half hours leading up to it, what seemed like at least half of the 2nd and 3rd year BCT students were rushing around prepping the studio: vacuuming the floor, setting up drinks, moving tables, touching up the partitions with white paint, cleaning the computer screens, fixing up projects and exhibits. It was a really good vibe to see it all coming together, going from a university assessment to a more formal exhibition.

Once people started coming in, it was exciting going around with fresh eyes and showing off not just my project, but those of others, to friends and family. It was fantastic to see the final products which had emerged from the briefs pitched to us only a few months ago, especially the ones I'd seen only small bits and pieces of over the semester.

I had a few bugs getting Unity Remote up and running, as the connection is quite finicky sometimes, so in the end I just left it running. Whenever people tried it out and it worked, I always received a positive response; something immediately appealed to people about being able to use the iPhone to control navigation through a 3D space on the iMac.

I was a bit surprised that the iPhone app seemed to have a little less uptake. This was possibly because the phones were a bit hard to see just sitting on the plinth, and people seemed hesitant to just pick them up and interact with them. From my conversations, if people were encouraged by a member of the team, or if they saw others using the phones, they were more likely to engage.

That did, however, seem to be the case for most of the exhibits. This being my fourth exhibition during my time in the BCT, I am becoming more aware of what makes a successful exhibit, and I tried to pay attention to which elements contributed to one. The animatronic dragon worked well, as people were almost forced into interacting with it: it responded to movement via input from sensors, which quite often shocked and surprised people. The duo of third years who made a 3D comic book did well by having copies people could take away, and also by having it running on a large screen. The various flying devices caught people's attention because they were quite visually imposing, but they really relied on being demonstrated to show off what they actually did.

This exhibition was a bit different from previous ones, as each group gave a presentation. I was up first, presenting on behalf of our group.




My speech is also viewable by clicking here.

The next morning we had our final crit / question-and-answer session to meet with the tutors and discuss any other questions they might have. The main point discussed was each team member talking about their role in the group, which led to reflections on how the team worked as a whole. In particular, the international students reflected on how the nature of the BCT was different to what they were used to.

I found that particularly interesting to hear about, as they are used to a more formalised structure and had some trouble adapting to the more casual nature of the studio paper. At the beginning we had attempted to create a formalised structure and timetable with deadlines, but it wasn't adhered to. I too work better with a structure, and have almost found the balance between keeping deadlines and exploring those little side roads and detours along the way. I think finding this balance is the trick to this paper, as it results in a richer, better-explored project, but one which is also completed on time and not rushed at the end. In saying that, this project too was a little rushed towards the end.

I felt my response was a bit different to the rest of the team's. Everyone talked about their role in the technical side and bringing this together; I mentioned this, but also talked about my interest in the theoretical and research side, which was one of the main driving forces in why I chose this project. This, however, was something I only started to get into towards the end of the project, and I wish I'd explored it earlier.

Overall, the tutors seemed pleased with the outcome and the feedback felt positive. We were told that more time should've been spent on the theoretical / research side, and James summed up by asking us this: "How do you measure the success of a project - by what you've learned or what you've made?"

I feel that this project had elements of success in both these areas: we created a functioning app to fulfil the brief, and we learned a lot of skills to get there. Personally, I didn't learn quite as much as I would've liked, as I chose to step back from the programming side of things and focus on other aspects required to bring the project together as a whole. I did learn a lot of the fundamentals to get started in iPhone development, which was one of the things I wanted out of this project. Over the summer I would like to keep working on this and get an application into the App Store early next year. It is a skill I am looking at possibly taking into project work next year too.

I struggled a bit working in a large team; reflecting on my two years doing this degree, I work better in a smaller group of 2-3, but not on my own. My final reflection and contextual statement is viewable by clicking here.

At the end of the day, it was a good project and I am pleased with the outcome. All that awaits now is the grade! But regardless of that, I feel it was a strong and rewarding project and I am glad I chose it. Next year I am definitely looking at doing a self-directed project and pulling together a strong team. I am going to the CreateWorld Conference run by the Apple Universities Consortium (AUC) and hoping it will be an inspiring experience to drive me through into next year.

Bring on third year!


Tuesday, November 9, 2010

Exhibition Set Up


Our deadline has come and gone, and tomorrow night will be the big exhibition, presentation and assessment. The requirements for our project were to have the iPhone simulator running the app on an iMac, the Unity simulation running on an iMac, and iPhones running the app.

We decided we would set up our space like a gallery, with the artworks on the walls tagged with numbers, so people could try it out like an actual audio tour.

Due to size, space and financial constraints, the works were all printed at A4, despite the fact that many of them are a lot larger, that they are all at different scales, and that some of them are sculptures / installations rather than something that can actually be framed.

The visual result turned out quite well, and perhaps makes an interesting comment on the nature of the reproduction of art. I had lots of helpful feedback from James while setting up, and in the end decided on clustering the artworks, as it plays on the fact that they are pretending to be something they're not and makes obvious that they are, in fact, not actual artworks or accurate depictions thereof.

Regardless of the accuracy (or lack thereof), I think it makes for an eye-catching exhibition and will draw people over. It will be the first proper time we'll have people outside the project and the BCT interacting with our project and app, so it will be a good chance to gather user feedback on usability and aesthetics.




In the end, we set up three iMacs: one running the iPhone simulator, one running Unity iPhone interfacing with Unity Remote (on the iPhone), and one running the standard version of Unity with keyboard and mouse navigation and content interaction.

I am looking forward to showcasing and presenting our project, as it has always generated a lot of interest whenever I've talked about it over the semester.

Finally, below is the documentation I have collated for this project, summing up key areas and ideas from the duration of the semester. Some of it is taken from this blog, refined and brought together in a more cohesive manner. It can be downloaded to be viewed at full res (originally intended for A3 size), or can be zoomed in on.

Sunday, November 7, 2010

The App



With the app nearing its finished stages, it has undergone a visual revamp, and what we have achieved over the duration of a semester-long project is an app which implements the integral and fundamental components of an audio tour. I feel it has fulfilled the primary aims of being aesthetically pleasing and easy to use, as demonstrated below.





Reflecting back on the original brief, it is important at this stage to think about other features which would enhance the app to meet the desires of the client and fulfil the long-term brief. These features would work to make the experience even more impressive and interactive.

We have begun to tap into some of these, but beyond the scope of a semester-long project there is much more to be achieved past the initial pilot prototype.

Such features include:

Information tailored to user-specific interests. This would involve research and trials into each of the user profiles to find out what would make the gallery experience enjoyable for them.



Access to collateral material such as video, images, audio and text. Reconfigured content would be required to fit within the different user profiles, and additional images and videos would offer more insight into each artwork by showing the viewer what lies behind the creation of the work, e.g. influential images / art movements, initial working sketches, other works by the same artist.



Connect with other visitors and the gallery community as a whole, through a social or participatory experience. Social media is embedded into many websites, giving users the ability to connect with existing profiles such as Twitter and Facebook. These would enable users to leave comments, messages and ideas with visitors in the space and those who come after. Discussions around ideas and topics and personal interpretations might reveal even more to a user or explain it in a way they might not have understood.

Tagging works and sending links to the user's email for later reinforces a 'post-gallery experience': the ability to continue to interact and learn even after they've left the space. Such a feature would encourage return visits.

Playing an interactive activity or educational game, perhaps a scavenger hunt or quiz, would reinforce learning and add the element of a challenge, especially for younger children who may be initially uninterested in the gallery visit.

Access to multiple layers of information: for example, an artwork referencing a specific artistic movement or historical context could allow the viewer to access a section explaining that background. Providing a deeper and wider perspective on the context in which art is created helps the user form a better understanding of what an artwork means. This also means that if a viewer is not interested in the context and only wants to know about the medium of the work, this too is possible.

Hear multiple perspectives and opinions about artworks. Especially within the context of New Zealand art, hearing a Maori perspective would be a different experience and viewpoint from a Pakeha perspective, and would reveal more about a work's cultural significance and New Zealand heritage.

Navigating the building is an essential feature which we have begun to explore new ways of providing. We are no longer restricted to a linear tour, but it is also challenging to consider how the user may want to approach, enter and explore the space. Ideally, this navigation would be seamlessly integrated with the artworks; that is, the device would automatically detect not only the user's location but which artwork they are standing in front of, and bring up the appropriate content.



Such a feature would require extensive technical research, a budget and experimentation. Possible technologies would be QR codes, WiFi, Bluetooth or image recognition software.

The client has also discussed embedding commercial applications; for example, if a user 'likes' an artwork, it could suggest they buy a print, and if they've been in the gallery for a while, it could suggest a visit to the cafe.

Conclusion
Emerging technologies will continue to change the role smart devices play in our everyday lives. In the long-term execution of this plan, it is important to take into consideration that devices become outdated very quickly, so the content must always be king: regardless of the platform, once the novelty has worn off, it is the UI, the content and the experience that keep the user coming back.

Friday, November 5, 2010

Unity Remote

Once the gallery space was set up in Unity and looking reasonably realistic, it was time to add navigation features. With Unity iPhone, the scripting had to be adapted over from the inputs we had been using (touch inputs versus keyboard and mouse inputs), but we also had to consider the practical usability of the best way to implement navigation with these different inputs.

This poses an interesting conflict: how do you interact with a 3D environment through a 2D touch interface? With the touch interface, you have no keyboard, mouse, joystick or controller, the traditional components used to move around and interact. This is where the limited screen size and touch interface become crucial factors.

With a touch screen device, the screen is the one and only form of input, and this method of direct manipulation shapes how the user interacts with the device. The user has both zero and many locations on the screen - if you have no fingers on the screen, you have no location.
The hardware is the content - an application becomes the entire content for the duration it is running.
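To make that concrete, here is a tiny illustrative snippet (not from our project) of how Unity's scripting exposes this in JavaScript: a mouse always reports exactly one position, but the touch list can be empty or hold several points at once.

// Illustrative only: a touch screen reports zero, one or many contact points.
function Update () {
    if (Input.touchCount == 0) {
        return; // no fingers on the screen means no 'location' at all
    }
    for (var i : int = 0; i < Input.touchCount; i++) {
        var touch : Touch = Input.GetTouch(i);
        Debug.Log("Touch " + i + " at screen position " + touch.position);
    }
}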

As a starting point, we had a look at Penelope, an iPhone app available in the App Store which was built using Unity 3D. The source code, files and a tutorial are available on the Unity website to learn how to integrate UI elements into Unity applications for iPhone.

It offers three different ways to navigate through the space, two of which are based around an on-screen 'joystick' offering the option of camera-relative and player-relative control. The third option is a tap control, where the figure moves to where the player taps.

Unlike this game, in our app the user will not be controlling a 'character' on screen; rather, they will be navigating themselves through the space, so the camera will act as the 'player' that the user controls. As this is not a game, the joystick interface is not a relevant one.

The tap to move is the most applicable. However, as we have other elements on screen that the user needs to interact with, it could get too difficult to accurately select where the user wants to move to.

Rather, the other feature of the iPhone we can utilize is the accelerometer. This way, the user can move through the space simply by tilting the iPhone in the desired direction of movement, and tap on the screen only when they want to interact directly with the space (i.e. the 'bubbles').

For our application, this is more intuitive than the joystick and more functional than the tap to move, but the accelerometer needs to be calibrated carefully to allow for natural movement as the user holds the device, so the view doesn't drift when the user wishes to stay still.




To implement this through code, the online documentation and scripting reference were extremely helpful. I was given the code for reading the accelerometer input from the iPhone and using that to move an object. From there it was a matter of calibrating the movement so it seemed natural in the gallery space, and setting thresholds for the values so that the user could hold the iPhone naturally and it would only move when the user intended it to. I also ran into an issue where, once this code was applied, the character controller stopped reacting to the colliders on the walls and would move straight through them, which had to be fixed.
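As a rough sketch of the approach (not our exact script; the speed and threshold values here are made up and would need the kind of calibration described above), attached to the object carrying the CharacterController:

// Accelerometer navigation sketch; values are hypothetical.
var speed : float = 2.0;
var threshold : float = 0.15; // dead zone so a naturally held phone stays still

private var controller : CharacterController;

function Start () {
    controller = GetComponent(CharacterController);
}

function Update () {
    var tilt : Vector3 = Input.acceleration;

    // Ignore small tilts inside the dead zone so the view doesn't drift
    var x : float = (Mathf.Abs(tilt.x) > threshold) ? tilt.x : 0.0;
    var z : float = (Mathf.Abs(tilt.y) > threshold) ? tilt.y : 0.0;

    // Move relative to the direction the camera is facing.
    // CharacterController.Move respects the wall colliders, whereas
    // translating the transform directly tunnels straight through them.
    var move : Vector3 = transform.TransformDirection(Vector3(x, 0, z));
    controller.Move(move * speed * Time.deltaTime);
}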

The swipe to rotate was a little more difficult. I was working off code which took the swipe movement across the screen and used it to translate an object. I spent a few days trying to recode this to rotate the camera instead, to give the user the ability to 'look around', and after doing all the calculations purely through my own code, I found a built-in function which handled them and reduced my code down to about half a dozen lines. It was, however, a good learning experience.
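The sort of thing it collapses down to (a sketch, not our exact code; transform.Rotate is one built-in that handles the rotation maths, and the sensitivity value is a placeholder), attached to the camera:

// Swipe-to-rotate sketch; sensitivity is hypothetical and needs tuning.
var sensitivity : float = 0.2; // degrees of rotation per pixel of swipe

function Update () {
    if (Input.touchCount == 1) {
        var touch : Touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Moved) {
            // A horizontal swipe turns the camera around the vertical axis
            transform.Rotate(0, touch.deltaPosition.x * sensitivity, 0);
        }
    }
}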

The video below shows a demo of this in action.



Unfortunately, some of the other code we tried to adapt over, which created the bubble behaviors and revealed the content and interaction with the artworks, generated a lot of errors. A lot of what Unity handles in the background has to be done manually in Unity iPhone, and with the time constraints we weren't able to adapt it over and integrate it.

So for the final hand in of this component of the project, we have two standalone aspects which we will present: Unity iPhone running on an iMac hooked up to an iPhone which enables the user to navigate through the space, and another iMac running standard Unity and interacting with the bubbles and content through mouse and keyboard.

If this were to be further developed, the next stage would be to successfully amalgamate the two and have it running on the iPhone as an app (currently it is only communicating through WiFi and acting as a remote). From there, it could be embedded into the audio tour iPhone app as an option for navigation, with the 'birds eye' view / basic map accessible as well. Not everyone would be comfortable using the 3D interface; some would prefer to stick to the familiar 2D map.

However, I am pleased that we will be able to exhibit these two components at our final presentation.

Wednesday, October 20, 2010

Audio half of the audio tour


Our audio is recorded and edited! I met with Ben again today to follow up on the recording session we had last week and to assist in editing it. Although it was really him doing the editing, it was useful to see how it was done and get an understanding of the process.

So we now successfully have our audio, and now it is a matter of splitting it up into the different 'sections' of audio; i.e. in the Unity application, the 'sub-bubbles' will give access to different sections of the content if the user is only interested in, for example, the medium of the work. The PDF below shows how the content has been divided up into different headings, which make up the sub-sections of accessible audio content.






Realistic Dimension

This week I have been attempting to make our Unity recreation of the gallery space more realistic. If deployed, this part of the app would create a more augmented gallery experience, combining the basic requirement of navigation with the more abstract exploration of the concepts and information about the artworks simultaneously.

To do this, I have been working to get the artworks and gallery to a more realistic scale. I mocked up the artworks to scale in Illustrator and was surprised at how much they actually varied.

As we hadn't been given a scale or any measurements with the floor plan of the gallery which we'd used to build up the 3D model of the space, I had to judge by eye what looked right, using the animated tour of the gallery as a guide. However, once the artworks too had been made to relative scale, some of them seemed ridiculously small.

I debated this situation for a while because, having studied art, I know that the size of an artwork impacts the meaning of the artwork itself. For example, studying abstract expressionist paintings in high school from small-scale reproductions in books and online didn't compare to seeing them in real life at a scale where they completely encompass one's field of view. The size of an artwork is interrelated with its medium and meaning, which is why I was stressing about getting this to an accurate depiction within the Unity recreation.

So for the purposes of this application, it makes more sense to have the artworks at a size where they are visible, especially in the context of the iPhone as a platform, where you have a much smaller screen size than, say, a computer. Taking usability into account, the artworks also act as clickable 'buttons', hence they need to be a reasonable size for the human finger to touch without difficulty (the ideal minimum button size for the human finger is 44x44 pixels). In terms of visual aesthetic, it is important the viewer can actually get a good view of the artworks within the space, so they can see what links are being made without having to fiddle too much with the navigation to get in 'closer' to the work.

As the primary purpose of the app is to be used in the gallery itself, the user would have the artworks in front of them anyway and would be able to make the visual connection. So at this point, usability is more important. The scale of the gallery itself is a huge improvement as the initial mock up had seemingly small rooms, high walls, narrow doorways and no ceiling.


(Click to enlarge)

I then worked to try to get the lighting right. In the animated tour, the gallery has more of a cream-coloured light, and I tried to recreate this. The difficult part is getting not only the tone right but also the lighting itself. Currently it looks quite dim, but it is an improvement as it looks a bit less like a 'cold' 3D render.

This part of the project will not likely be integrated into the final app itself, but will rather be a standalone component which supports the project as a whole. It is important to deploy it and start testing it on the iPhone itself so we can start getting user feedback. The concept makes sense to us, as we have been working with it so closely, but it has been difficult to explain to others; we are hoping that if we create it right, it will make sense to the user upon being presented with it.

The creation phase needs to start drawing to a close so we can test and document this project for our final presentation and end-of-year exhibition. Feedback we can get now will inform our reflections and give us a greater depth of knowledge to discuss in our final critique.

Sunday, October 17, 2010

Navigation in 3D Space

Even though we are not sure we will be able to implement the Unity 3D navigation / audio tour interface as an integrated part of the final app, we are still aiming to have it as a functional app in itself for this project.

Once the 3D model of the space was built in Maya from the floor plan of the gallery we were provided, it was imported into Unity 3D and the artworks added onto the walls as game objects (the scale and positioning of these artworks is not precise).

Game objects in Unity can then have scripts and behaviors added to them. Trixi made an initial version to give an idea of what we want to achieve in our plan to visually connect the ideas, audio content and artworks within the 3D space. The video below shows this: when an artwork is clicked, a bubble pops up. When this bubble is clicked, related bubbles appear with related ideas to link to other works within the space. A 'play' bubble also appears which, when clicked, triggers the audio content to start and changes to a 'stop' bubble.



As I hadn't done JavaScript before, my initial experimentation was the movement you see when the bubble starts 'bouncing' while the audio is playing.
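A minimal sketch of that bubble behaviour (not our exact script), assuming the bubble game object has a collider for the click detection and an AudioSource holding the tour clip; the bounce numbers are placeholders:

// Play/stop bubble sketch: clicking toggles the audio, and the bubble
// 'bounces' on a sine wave while the clip is playing.
private var startY : float;

function Start () {
    startY = transform.position.y;
}

function OnMouseDown () {
    if (audio.isPlaying) {
        audio.Stop();
    } else {
        audio.Play();
    }
}

function Update () {
    if (audio.isPlaying) {
        // Bounce height (0.2) and speed (3.0) are placeholder values
        transform.position.y = startY + Mathf.Abs(Mathf.Sin(Time.time * 3.0)) * 0.2;
    } else {
        transform.position.y = startY;
    }
}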

From here, I created the flowchart below to try to plan out exactly how these bubbles should behave in reaction to each other and to the user's interactions. From there, it is a matter of figuring out how to code it.


(click to view larger)

We will probably also face a few difficulties when it comes time to convert it over to an Xcode file. We have been working in the standard trial version of Unity, so we will have to adapt it over to the iPhone version, which will involve changing 'clicked' actions over to iPhone-based gestures.

What I've had more trouble with is getting my head around yet another new language. I briefly learnt Java a year ago and last used it then; with this and a general programming understanding, I can look at scripts and documentation for Unity and get an idea of what is going on. But I am having trouble writing it myself and figuring out how to make it do what I want, so I am getting stuck, especially as I've spent the last two or so months learning and focusing on Objective-C.

Though it isn't overly complex in itself, setting up the relationships will be more fiddly than anything else. Hopefully once I've discussed with the tutor, it should become more clear and ready to deploy onto the iPhone next week.