
Friday, 2 October 2015

FMP: Wake-Up Light by Philips

To make this clear, my idea came before I researched any existing products on the market, only for me to realise how similar it is to this lamp from Philips: the Wake-Up Light.



The Wake-Up Light does much the same as my idea: it simulates a sunset to compensate for the effects of artificial light at night. Although I am not sure whether the circadian rhythm can really be adjusted by this means, the light was designed with that in mind. When the function is turned on, the lamp slowly dims down over a 30-minute period.

In the morning, the lamp also simulates the sunrise by gradually brightening the light and shifting its colour the way an actual sunrise does, along with various default nature sounds to wake the person up (you can supposedly choose any music from your own library; I am no expert, but I highly doubt that waking up to heavy metal is anywhere close to natural).
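Just to picture how such a sunset/sunrise simulation might work under the hood, here is a minimal sketch in Python of a linear 30-minute ramp, assuming a hypothetical set_lamp_brightness call; it is only my own guess, not Philips' actual implementation.

```python
import time

def set_lamp_brightness(percent):
    # Placeholder for whatever call drives the actual lamp hardware.
    print(f"brightness -> {percent:.1f}%")

def ramp_brightness(start, end, duration_s, step_s=60):
    """Linearly ramp brightness (0-100) from start to end over duration_s seconds.
    A sunset simulation ramps 100 -> 0, a sunrise ramps 0 -> 100."""
    steps = int(duration_s / step_s)
    for i in range(steps + 1):
        level = start + (end - start) * i / steps
        set_lamp_brightness(level)
        time.sleep(step_s)

# 30-minute sunset: dim from full brightness to off
# ramp_brightness(100, 0, duration_s=30 * 60)
```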



It can also track your sleep quality and pattern, although the accuracy is questionable. There are two common ways to track sleep, which I explained in the previous post. In this product, I think (but am not sure) it uses the accelerometer inside your phone to track your movement during sleep. However, the phone in this case sits on a docking station, not on the bed, so it is very unlikely to detect any movement that occurs on the mattress, yet this is one of the key features of the mobile app Philips provides.
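For reference, this is roughly what accelerometer-based movement detection (actigraphy) boils down to: a minimal sketch assuming the phone actually lies on the mattress and that readings come in as (x, y, z) values in g. The 0.05 g threshold and 30-sample epochs are illustrative numbers, not Philips' parameters.

```python
import math

def movement_epochs(samples, threshold=0.05, epoch_len=30):
    """Flag epochs in which the acceleration magnitude deviates from 1 g
    by more than the threshold, i.e. the sleeper probably moved.
    samples: list of (x, y, z) accelerometer readings in g."""
    flags = []
    for start in range(0, len(samples), epoch_len):
        epoch = samples[start:start + epoch_len]
        moved = any(abs(math.sqrt(x*x + y*y + z*z) - 1.0) > threshold
                    for x, y, z in epoch)
        flags.append(moved)
    return flags  # True = restless epoch, False = still epoch
```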

So the question now is: how do I differentiate my own idea from the Philips Wake-Up Light? There is some similarity between the two, but their main focus is not the same. The Wake-Up Light is not designed to change a person's behaviour, but to suit their lifestyle.


The product lets the user customise many features, but with my idea I mainly wish to change people's behaviour. The sunset simulation is close to my idea, where the light dims and the head droops, but the Philips simulation starts when the phone is docked on the station in the evening, which only happens when the person is already ready to sleep. My project, on the other hand, is for reminding people who have NOT yet gone to sleep. You see the difference? One purely suits the user; the other changes the user, and this links back to my thesis, where Marc Hassenzahl stated that "convenience will not instill change but friction will."







Friday, 21 August 2015

FMP: Designing Meaning

In this post I am writing about my own thoughts, and probably won't reference much to back up the ideas in my mind; as this is not an academic piece of writing, I suppose that is okay from time to time.

Henry Ford said: "Every object tells a story if you know how to read it." This quote explains the idea fairly well: everything that has been produced carries some kind of meaning with it once it has been sold to a customer. Yet the meaning of that very object depends on how it is used by its owner. I understand it is kind of cheating to say that everything has a meaning but that it relies on a person to define it, but I honestly think this is fairly true. An object that is meaningful to one person will not necessarily be meaningful to another. Meaning, by definition, is "what is intended to be, or actually is, expressed or indicated; signification; import".

From a designer's view, something that is important may not be important to the consumer. This goes back to Marc Hassenzahl's experience model.

Meaning is a personal thing; there are no rules or guidelines for meaning, it is generated purely inside one's mind. We do not all get attached to the same things; everyone has their own preferences and their own story, so everyone is going to be different.

So what do I mean by bringing meaningful experiences to consumers? Since nothing is meaningful to everyone, how can you possibly design an interaction in this regard? To be honest, I can't. I can't do it for everything and for everyone; I know how complex this topic is. The reason I chose it is that I feel IoT has become quite a buzz on the internet and in the market. People enjoy the thought of having everything automated, or of monitoring and controlling things from far away, this illusion of living in the future. Many manufacturers produce such products only because they are a new trend and for the sake of earning money. Even the company I worked for asked me to develop a "cheap" but "reliable" project for people to turn on a light in London from New York.

I am not criticising those companies as being purely in it for the money; what I want to say is that we should start thinking about bringing things with much deeper meaning to the market. Now that we have a better understanding of many of the problems we should solve, designing things that tackle a problem at its core is one definition of meaningful I believe in. Back to the topic: even though I can't design objects that are meaningful to everyone, I could design an object that is meaningful enough to the matter itself.

By meaningful enough to the matter itself, I mean the object is directly related to, and has a direct effect on, the thing it is designed to address. Everything is designed for a purpose, whether it is for someone in love or in pain, for better living, for easier operation of certain tasks, or just for entertainment. If that particular object addresses its matter at the core, I would say it is a meaningful design.

For instance, take persuasive technology in my thesis. The reason I don't think Carla Diana's examples are meaningful enough (they still carry a meaning, just not quite enough) is that they only express "information". Like most commercial products, they only show information and hope the users will change their behaviour solely based on the information they have been shown. Studies suggest that this type of persuasion only has a short-term effect, and the small number of participants whose change lasted long term were changing their behaviour for the sake of numbers, achievements and competitions. In other words, they didn't do it for the purpose of better health or a better world, but for an irrelevant purpose.

The Nike FuelBand is one example of number-driven activity.

So I think it is pretty clear that, in terms of persuasive IoT objects, I am trying to figure out a different approach to the matter: solving an issue from the heart of a person rather than showing numbers, or forcing people and praying they will truly accept the alternatives. People really don't like change, including me, as it is easier not to change. To achieve this "change", numbers by themselves will not be enough, and are therefore, in this context, not meaningful enough.


There are still many things that need to be proved or disproved in order to support this theory, and I understand it could be wrong on many points, but this is what I am thinking at the moment, and I hope to support the argument through my project and thesis rather than using this post as support for my paper.


Wednesday, 12 August 2015

FMP: Operant Conditioning

I mentioned operant conditioning in the last post, saying that some products or services use this technique to change one's behaviour. The principle seems pretty simple, but I think it is still worthwhile to explore the technique further.





Operant conditioning is a learning process in which behavior is sensitive to, or controlled by, its consequences. For example, a child may learn to open a box to get the candy inside, or learn to avoid touching a hot stove. In contrast, classical conditioning causes a stimulus to signal a positive or negative consequence; the resulting behavior does not produce the consequence. For example, the sight of a colorful wrapper comes to signal "candy", causing a child to salivate, or the sound of a door slam comes to signal an angry parent, causing a child to tremble. The study of animal learning in the 20th century was dominated by the analysis of these two sorts of learning, and they are still at the core of behavior analysis.


In other words, behaviours have consequences, and there are two main types of consequence: reinforcement and punishment, each of which comes in two forms: positive and negative. Reinforcement increases the tendency of the target behaviour to occur. Positive reinforcement adds something to increase that likelihood, e.g. rewards; negative reinforcement takes something away to increase the tendency of the target behaviour, e.g. completing the target behaviour removes a warning sound or light.

Punishment, on the other hand, decreases the likelihood that the target behaviour will occur again. As with reinforcement, positive punishment adds something, while negative punishment takes something away.

So which products use operant conditioning to change your behaviour? The Nike Run app is a pretty good example. The app tracks your workout duration, distance and pace, and when you reach a certain level you unlock "trophies" to honour your effort. This, to me, is straightforward positive reinforcement: rewarding you with "achievements" (trophies) to increase the tendency of the "target behaviour" (working out).
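As a toy illustration of that trophy mechanic (the thresholds and trophy names below are made up for the example, not Nike's actual ones), the positive reinforcement is simply a reward that gets added whenever the target behaviour passes a milestone:

```python
# Cumulative running distance (metres) -> trophy awarded at that milestone.
TROPHIES = {
    10_000: "First 10K",
    50_000: "50K Club",
    100_000: "100K Legend",
}

def trophies_unlocked(total_distance_m, already_unlocked):
    """Return trophies newly earned at the current cumulative distance."""
    return [name for threshold, name in TROPHIES.items()
            if total_distance_m >= threshold and name not in already_unlocked]

# Example: 52 km logged in total, "First 10K" already earned.
print(trophies_unlocked(52_000, {"First 10K"}))  # -> ['50K Club']
```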

Tuesday, 11 August 2015

FMP: Aesthetic of Friction - Marc Hassenzahl


There is a famous psychology experiment in which a small child is given one marshmallow and told that if they can resist eating it, they will get two later on. The results mainly show that the children try to resist eating the marshmallow but mostly fail in the end. The tendency behind this is what psychology calls "temporal discounting".



Temporal discounting is the tendency to give greater value to rewards as they move away from the temporal horizon and towards the "now". A preference reversal occurs when both rewards are pushed into the future by a small amount: offered £100 in a week or £150 in a week and a half, people usually choose to wait the week and a half. But when the smaller reward becomes available now, the preference may reverse again. Pigeons show the same tendency.
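A common way to model this tendency (my own addition here, not something from the talk) is hyperbolic discounting, V = A / (1 + kD), where A is the reward amount, D the delay and k a discount rate. With an illustrative k of 0.2 per day, the £100 / £150 example above reverses exactly as described:

```python
def discounted_value(amount, delay_days, k=0.2):
    """Hyperbolic discounting V = A / (1 + k*D); k = 0.2 per day is illustrative."""
    return amount / (1 + k * delay_days)

# Both rewards in the future: waiting the extra half week for £150 wins.
print(discounted_value(100, 7), discounted_value(150, 10.5))   # ~41.7 vs ~48.4

# The smaller reward available right now: preference flips to the immediate £100.
print(discounted_value(100, 0), discounted_value(150, 3.5))    # 100.0 vs ~88.2
```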

So what does this have to do with interaction design, or the design world in general?

Let's say we want to improve someone's health by convincing them not to consume too much sugar. In this case the small reward is eating sugar, and the large future reward is a healthier body. Yet the idea of better health is abstract and vague; you may be able to persuade a person in the short term, but in the long run persuasion has to turn into self-regulation. With the uncertainty of the future reward (a vague idea of health), the person may well prefer the small immediate reward (eating sugar). And this is where persuasive technology comes in.


Persuasive technology is designed to change attitudes or behaviours of the users through persuasion and social influence, not through coercion.


The models behind persuasive technology are fairly simple: they usually provide basic information and feedback, or employ a very simple model of conditioning behaviour.


     

(Conditioning Behaviour / Operant Conditioning)

Although persuasive technology offers feedback or operant conditioning, humans are much more complicated than pigeons, even if we show the same time discounting. Humans have insight into the world and cannot be forced into being motivation machines. They need something else to help them transform, to trick them into motivation.

To achieve this, we must forget the socialised culture of design, which is about making things easy and convenient, because an aesthetic of convenience does not instill change. What we actually need is an aesthetic of friction: through this friction, not coercion, people start to behave in the way the object is meant to encourage.

Although Marc Hassenzahl did not mention anything about meaningful experience in his presentation, I feel this could still be a definition of it, and it is very different from the one Carla Diana gave. Here he focuses more on the psychological side of interaction, basing the design on psychological studies to shape a person's behaviour for the better, or not!





What is interesting is that the projects he shows all have a very clear statement of what they are trying to express, yet the user still has the choice of accepting the message or not. In other words, the devices are not trying to force people into motivation; just as he said, humans cannot be forced to become something, all they need is a trigger to help them develop the behaviour themselves.
That the devices make a suggestion and still offer a choice is what draws me to his thinking.






Sunday, 11 January 2015

The Abovemarine – ‘Small Freedom’ vehicle for José, the Betta splendens




The Abovemarine is a fish vehicle designed to let a fish control the movement of its own tank (it is more that the tank follows the fish's movement than that it is directly controlled by the fish). A camera is set above the tank, tracking José (the Siamese fighting fish), and omni wheels at the bottom then move accordingly. The creator, Adam Ben-Dror, actually trained his fish to follow his hand as he moved it around the tank.
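My guess at the kind of control loop behind this (a sketch only, not Ben-Dror's actual code, and I am assuming a three-wheel omni layout) is that the fish's offset from the centre of the camera image becomes a desired velocity, which is then mapped onto the wheels:

```python
import math

def wheel_speeds(fish_x, fish_y, img_w, img_h, gain=0.5):
    """Map the fish's offset from the image centre to three omni-wheel speeds.
    Wheel angles and gain are illustrative, not taken from the real Abovemarine."""
    # Offset of the fish from the image centre, normalised to [-1, 1].
    vx = gain * (fish_x - img_w / 2) / (img_w / 2)
    vy = gain * (fish_y - img_h / 2) / (img_h / 2)
    wheel_angles = [math.radians(a) for a in (90, 210, 330)]
    # Standard three-wheel omni kinematics, ignoring rotation of the tank.
    return [-math.sin(a) * vx + math.cos(a) * vy for a in wheel_angles]

# Fish detected towards the right edge of a 640x480 frame:
print(wheel_speeds(600, 240, 640, 480))
```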






This is an interesting concept: creating an environment for interaction with no human directly involved. The interaction in this project is between the fish and its surrounding space. It is like an ultimate revenge for the fish, which is finally able to travel on land and explore what it could never explore before.





This could lead us to think outside the box: does interaction design really have to involve a human to be called "interaction design"? In the "What is interaction design" presentation we did last year, everything we looked at was human to human, or human to a system (machine). Yet Adam Ben-Dror has shown that the relationship in this project has no human in the loop (he did train the fish to follow his hand, but the main interaction is still between the fish and the world). Perhaps we could start thinking about how to allow our creations to communicate with animals in a meaningful way. What if humans no longer matter the most in interaction design, what could it become?




This project is also remarkably similar to a project based on the "iRobot Create", a Roomba kit made for hacking. In that project, they put a hamster inside a ball attached to the Roomba; the hamster then "controls" the Roomba's movement by running in different directions inside the ball.





In conclusion, I think what we can start exploring is the interaction between animals and systems: what to expect when the interaction no longer centres on us but on other animals in general, and which role we will play in designing it if that eventually happens.