MA Interim Show – 3rd – 5th April

I think our interim show last week went really well : ) It was really interesting to see everyone’s work – in most cases for the first time – and I thought there were some really strong projects developing across all of the MA disciplines. As an experience, it was the first time I have ever had to present work in that way, and I found it really exciting to be part of such an impressive show!

I exhibited a basic interface containing five shapes, each a graphical representation of a sound loop. These could be dragged and dropped onto a playing area; the user could then adjust the volume and pan of each sound loop by scaling and positioning its shape.
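To make the mapping concrete, here is a minimal sketch of how shape scale and position could drive a loop’s volume and pan – written in TypeScript with the Web Audio API purely for illustration (this is not the project’s own code, and the `ShapeState` type, value ranges and smoothing times are all assumptions):

```typescript
// Sketch: mapping a shape's scale and position to a loop's volume and pan.
// All names and ranges here are illustrative, not taken from the project.

interface ShapeState {
  x: number;      // horizontal position, 0..1 across the playing area
  scale: number;  // 0 (silent) .. 1 (full size / full volume)
}

class LoopShape {
  private gain: GainNode;
  private panner: StereoPannerNode;

  constructor(private ctx: AudioContext, source: AudioBufferSourceNode) {
    this.gain = ctx.createGain();
    this.panner = ctx.createStereoPanner();
    // loop source -> volume -> pan -> speakers
    source.connect(this.gain).connect(this.panner).connect(ctx.destination);
  }

  /** Re-apply the audio parameters whenever the shape is moved or scaled. */
  update(shape: ShapeState): void {
    const t = this.ctx.currentTime;
    // Bigger shape = louder loop.
    this.gain.gain.setTargetAtTime(shape.scale, t, 0.05);
    // Left edge of the playing area = hard left, right edge = hard right.
    this.panner.pan.setTargetAtTime(shape.x * 2 - 1, t, 0.05);
  }
}
```

The key idea is simply that every time a shape is moved or resized, the same geometric values are re-applied to the audio parameters.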

I was also hoping to include a new feature I’ve been working on, which takes the form of a low-pass filter effect. I originally intended to map this to the rotation of the shapes, although I couldn’t quite get it working in time for the show. I did, however, leave the rotation function in for each shape so that there was an added interaction available.
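The rotation-to-filter idea could be sketched along the same lines – again in TypeScript with the Web Audio API, with the cutoff range and exponential sweep being assumptions rather than anything from the project:

```typescript
// Sketch: mapping a shape's rotation to a low-pass filter cutoff.
// The frequency range and exponential mapping are assumptions for illustration.

function rotationToCutoff(rotation: number): number {
  // Normalise the rotation (in radians) to a 0..1 fraction of a full turn.
  const turn = (((rotation % (2 * Math.PI)) + 2 * Math.PI) % (2 * Math.PI)) / (2 * Math.PI);
  // Sweep the cutoff from 200 Hz to 8 kHz exponentially, which roughly
  // matches how changes in brightness are perceived.
  return 200 * Math.pow(8000 / 200, turn);
}

function attachFilter(ctx: AudioContext, source: AudioNode): BiquadFilterNode {
  const filter = ctx.createBiquadFilter();
  filter.type = "lowpass";
  source.connect(filter).connect(ctx.destination);
  return filter;
}

// Called from the rotate interaction, so the audible change accompanies the visual one.
function onShapeRotated(ctx: AudioContext, filter: BiquadFilterNode, rotation: number): void {
  filter.frequency.setTargetAtTime(rotationToCutoff(rotation), ctx.currentTime, 0.05);
}
```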

As well as being an opportunity to present and get feedback on my work, the show proved to be a really valuable research activity, as I had the chance to watch people use the experimental interface that I have been developing. It was really interesting to see how people used the interface and how successful they were with the different interactions. I observed a number of different people, all of different ages, genders and technical abilities. The most interesting and revealing observations I made related to the way in which people gradually learnt (or struggled with) how they could use the different features of the interface.

For instance, more people than I had anticipated found it difficult to move shapes from the bar along the bottom of the screen onto the area above in order to start the sound loop. One thing I did notice is that several people were attempting to drag each shape by its solid white area – which makes complete sense: of course you would try to drag the shape itself! My mistake here is that, until now, I have been assigning the drag handle of each shape to its centre, where there is nothing obvious to grab. As a result, many found that it was possible to rotate and scale each of the shapes on the spot, but they seemed to have no idea that it was possible to move the shapes from the bottom bar, as it isn’t made very obvious.
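The fix implied here is to make the whole visible shape the drag handle rather than a small central hotspot. A minimal hit-testing sketch (assuming circular shapes; all names are illustrative, not the project’s code):

```typescript
// Sketch: treating the whole visible shape as the drag handle,
// rather than a small invisible hotspot at its centre.

interface Circle {
  cx: number;
  cy: number;
  radius: number;
}

/** True if the pointer is anywhere on the solid area of the shape. */
function hitTest(shape: Circle, px: number, py: number): boolean {
  const dx = px - shape.cx;
  const dy = py - shape.cy;
  return dx * dx + dy * dy <= shape.radius * shape.radius;
}

/** On pointer-down, start a drag if any part of a shape was grabbed. */
function onPointerDown(shapes: Circle[], px: number, py: number): Circle | null {
  // Check front-most shapes first so overlapping shapes behave predictably.
  for (let i = shapes.length - 1; i >= 0; i--) {
    if (hitTest(shapes[i], px, py)) return shapes[i];
  }
  return null;
}
```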

This initial problem made it clear that there is a definite need in digital interaction design to compensate for a lack of physicality – or what is often referred to as the ‘tangible’ attribute of a real object. The interactions must be accompanied by a visible handle, and by some form of attraction and/or feedback to encourage and inform the user.

Equally, it is important in an interactive experience which offers sound as a reward for successful manipulation of objects not to mislead the user by providing a visual change where there is no audible change. This is particularly true in the case of the rotation feature – people were able to see that they could rotate each shape, and noticed that its opacity increased or decreased as they did so, but there was no audible change to coincide with the graphical change.

The other major problem I noticed was that people were having difficulty scaling the shapes. I believe this is because, at the current stage of development, I am using a very basic scaling process which relies on a specific area being clicked and dragged – if the user rotates the shape, there is a very high probability that they will lose track of where this area is. The simple solution would be to extend the scale function to operate when the user clicks anywhere on the white area of the shape; that way, it could be scaled by dragging towards or away from the centre of the shape, and rotated by dragging in a circular fashion around the centre. Rather than being an issue of difficulty with user interaction, this is more a case of the interface still being in development.
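One way that combined gesture might be resolved is to take the scale from the radial part of the drag and the rotation from the angular part, both measured from the shape’s centre. A rough sketch (illustrative names, not the project’s code):

```typescript
// Sketch: deriving scale and rotation from a drag anywhere on the shape,
// by comparing the pointer's distance and angle from the shape's centre
// before and after each pointer move.

interface Point { x: number; y: number; }

interface Transform { scale: number; rotation: number; }

function applyDrag(centre: Point, prev: Point, curr: Point, t: Transform): Transform {
  const prevDist = Math.hypot(prev.x - centre.x, prev.y - centre.y);
  const currDist = Math.hypot(curr.x - centre.x, curr.y - centre.y);
  const prevAngle = Math.atan2(prev.y - centre.y, prev.x - centre.x);
  const currAngle = Math.atan2(curr.y - centre.y, curr.x - centre.x);

  return {
    // Dragging away from the centre grows the shape; dragging towards it shrinks it.
    scale: t.scale * (currDist / Math.max(prevDist, 1e-6)),
    // Dragging around the centre rotates it.
    rotation: t.rotation + (currAngle - prevAngle),
  };
}
```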

The other key problem I found with the interface at this stage is that even if the user was able to drag a shape onto the playing area, they then had to scale it in order to produce a sound. Again, this is something I had somewhat overlooked, though it could easily be rectified in future by offering a preview of the sound loop when the user rolls over a shape, or by automatically scaling the shape by a small amount, maybe 10%, when it is dropped on the playing area, in order to provide the initial clue that the shape needs to be scaled to produce sound.
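The auto-scale idea is simple to express; a hedged sketch (the 10% figure comes from the paragraph above, while the `DroppedShape` type is a hypothetical stand-in for whatever the real shape object looks like):

```typescript
// Sketch: giving the user an initial audible clue when a shape is dropped,
// by nudging its scale up slightly so the loop is immediately heard.

interface DroppedShape {
  scale: number;                    // 0 = silent, 1 = full volume
  applyScale(scale: number): void;  // re-draws the shape and updates its gain
}

function onDrop(shape: DroppedShape): void {
  if (shape.scale === 0) {
    // Start the loop quietly rather than silently, hinting that scale = volume.
    shape.applyScale(0.1);
  }
}
```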

Interestingly, I also noticed that some people were making use of previous knowledge of computer interaction to try to manipulate objects within my interface. For example, several people attempted to double-click on shapes to get a response when, unbeknown to them, there is no double-click feature. I found this to be a very telling sign: although a person may never have used my interface before, they have previous knowledge of interactions to rely on in order to provoke some form of response from a new interactive experience. This does, however, indicate that it must be very difficult to introduce new gestures and interactions in software, as people will be naturally inclined to test a system with what they already know.

As an observer, I found it very difficult not to intervene – I really didn’t want to show people how to use the interface, although I admit that I did on a few occasions! In terms of my own response to seeing people try to use the interface, I felt bad for those who struggled and were unable to get anywhere, but equally, I felt really satisfied when I heard the sounds playing and saw people experimenting and becoming more engaged with it. I think the most rewarding experience was seeing one particular person spend ages trying to figure the interface out; when they finally did, the excitement of overcoming that initial challenge led them to go and get a friend so they could demonstrate it working.

I certainly learnt a lot from these observations, and I think they represent a brief testimony to the scale and complexity of the challenges faced by interaction designers. In terms of developing my work to be more user-friendly, and hopefully more intuitive, I will need to provide a lot more assistance throughout the process of interaction. Some of the ideas I am considering include the use of short instructional animations which give the user clues about interactions that can be used. If I am able to develop it effectively, I’d really like to pursue this idea in conjunction with a very basic AI/rule-based learning feature within the interface: it would start the user off with animations that demonstrate the basic functions of the interface, and as the user successfully carries out each of these functions, maybe a few times, additional, more advanced features would then be ‘unlocked’.
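A very simple version of that rule-based unlocking could look something like the following – the feature names and thresholds here are guesses purely for illustration:

```typescript
// Sketch: a simple rule-based progression, where an advanced interaction
// is unlocked once the basic ones have been carried out a few times.

type Feature = "drag" | "scale" | "rotate" | "throw";

class ProgressTracker {
  private counts = new Map<Feature, number>();
  private unlocked = new Set<Feature>(["drag", "scale", "rotate"]);

  /** Call whenever the user successfully completes an interaction. */
  record(feature: Feature): void {
    this.counts.set(feature, (this.counts.get(feature) ?? 0) + 1);

    // Rule: after the user has dragged and repositioned shapes a few times,
    // unlock the 'throw' interaction (and, elsewhere, play its instructional animation).
    if (!this.unlocked.has("throw") && (this.counts.get("drag") ?? 0) >= 3) {
      this.unlocked.add("throw");
    }
  }

  isUnlocked(feature: Feature): boolean {
    return this.unlocked.has(feature);
  }
}
```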

For example, one of the features I am thinking of implementing is a ‘throw’ function, whereby the user will be able to literally throw a circle, causing it to travel across the screen under its own inertia, bouncing off the sides of the screen and other shapes. Within the context of the intuitive learning aspect of the interface, this would be a feature that is unlocked after the user has successfully learnt that they are able to drag, position and re-position a shape several times. I think this will take a lot of careful planning to put together, but it would be a really interesting aspect of the interface to develop, and certainly one which may be needed!
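A rough sketch of the throw behaviour – velocity carried over from the drag at the moment of release, with bouncing off the screen edges (collisions with other shapes are left out for brevity; the friction value and units are assumptions):

```typescript
// Sketch: a 'throw' interaction where the release velocity carries the shape
// across the screen and it bounces off the edges of the playing area.

interface Thrown {
  x: number; y: number;   // position in pixels
  vx: number; vy: number; // velocity in pixels per second, set from the drag at release
  radius: number;
}

const FRICTION = 0.98; // proportion of velocity kept each frame

function step(s: Thrown, dt: number, width: number, height: number): void {
  s.x += s.vx * dt;
  s.y += s.vy * dt;

  // Bounce off the left/right and top/bottom edges of the playing area.
  if (s.x - s.radius < 0 || s.x + s.radius > width) s.vx = -s.vx;
  if (s.y - s.radius < 0 || s.y + s.radius > height) s.vy = -s.vy;

  // Gradually slow down so the shape eventually comes to rest.
  s.vx *= FRICTION;
  s.vy *= FRICTION;
}
```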

One of the things I am finding most challenging at the moment is that there are so many things to think about with regard to how the interface could be developed – over the next month or so I’m going to be spending time really considering where I want the development to go. Key things that I want to think about are:

  • Audience – who is this interface being targeted towards?
  • Breakdown of interactions – will all shapes have the same features, or would it be more appropriate to use different shapes for different sounds, changing their controls to suit?
  • What further interactions can be developed, how can they be combined for maximum control and manipulation of sound loops?
  • How can the interface be made more user-friendly? – Learning system/animations
  • How will the sound aspect of the interface be developed/chosen? – will there be an option to load different sound loop sets?
  • Look for ways in which to provide more feedback to the user…

As ever, lots of things to think about but only ever 24 hours in a day…

Here’s a great pearl of wisdom that I came across in Bill Moggridge’s book, Designing Interactions:

“The risk of failure puts up a barrier to trying the first idea, as we know that they are almost bound to be bad or wrong in some way. If we can shrug off the fear of that risk and get into the habit of trying things out as soon as we can, we will fail frequently, but the reward is that we will succeed sooner.” Moggridge (2006, pp. 684–685)
