As virtual reality (VR) gear gets better, cheaper, and easier to use, there is renewed interest in figuring out how best to make a virtual environment feel real. The typical framing for VR is in the name: it's about creating the illusion of reality, a virtual world. Programmers even talk this way; they describe creating virtual (pretend) environments, objects, spaces, etc. From this point of view, VR is an attempt to create a stable illusory experience by faking a world that doesn't really exist.
Of course, VR programmers don't make worlds; they make information. This makes folding VR into the ecological approach a natural move, and I propose that, ecologically, VR development is actually an attempt to design an optic array containing information that can support certain behaviours. It's less virtual reality and more virtual information. This matters because the nature of the information we are using explains the form of the behaviour we are controlling. Your goal as a developer is therefore not to create tricks and illusions, but to provide information capable of supporting the behaviours you want to be possible.
As a first step towards an ecological understanding of VR, I will follow the path Gibson laid down, taking the science of perception away from illusions and towards information. I'll then consider some of the implications of taking an ecological approach to VR design. Virtual reality needs our theory of perception to become the best it can possibly be, and I hope this post serves as an entry point for designers to become aware of what we have to offer them.
Wednesday, 8 October 2014
Limits on action priming by pictures of objects
If I show you a picture of an object with a handle and ask you to make a judgment about that object (say, whether it's right side up or not), you will be faster to respond if you use the hand closest to the handle. This is called action priming (Tucker & Ellis, 1998), and there is now a wide literature using this basic setup to investigate how the perception of affordances prepares the action system to do one thing rather than another.
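To make the structure of that setup concrete, here is a minimal sketch in Python of the congruency logic behind a Tucker & Ellis-style design. It is not the code from any published study: the orientation-to-hand response mapping, cell counts, and names are illustrative assumptions (in real experiments the mapping is counterbalanced across participants).

```python
# Illustrative sketch of an action-priming trial list (not from any published study).
# Factors: handle side (left/right) x object orientation (upright/inverted).
# The responding hand follows from an assumed orientation-to-hand mapping;
# a trial is "congruent" when that hand is the one nearest the handle.
import itertools
import random

HANDLE_SIDES = ("left", "right")
ORIENTATIONS = ("upright", "inverted")
RESPONSE_MAP = {"upright": "right", "inverted": "left"}  # assumed; counterbalanced in practice

def build_trials(reps_per_cell=10, seed=1):
    """Return a shuffled trial list with congruency coded for later analysis."""
    trials = []
    for handle_side, orientation in itertools.product(HANDLE_SIDES, ORIENTATIONS):
        response_hand = RESPONSE_MAP[orientation]
        for _ in range(reps_per_cell):
            trials.append({
                "handle_side": handle_side,
                "orientation": orientation,
                "response_hand": response_hand,
                "congruent": response_hand == handle_side,
            })
    random.Random(seed).shuffle(trials)
    return trials

if __name__ == "__main__":
    trials = build_trials()
    n_congruent = sum(t["congruent"] for t in trials)
    print(f"{len(trials)} trials, {n_congruent} congruent")
    # The action-priming prediction: mean RT on congruent trials < incongruent trials.
```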
There is, of course, a problem here. These studies all use pictures of objects, and pictures are not the same as the real thing. These studies therefore don't tell us anything about how the perceived affordances of objects make us ready to act on those objects. This is only a problem because affordances are what these researchers think they are studying, which means they don't pay attention to the nature of their stimuli. The result is a mixed bag of findings.
For example, a recent article (Yu, Abrams & Zacks, 2014) set out to use this task to ask whether action priming was affected by where the hand had to go to make a response. Most tasks involve a simple button press on a keyboard, so they were interested to see whether asking people to respond using buttons on the monitor might enhance priming. The logic was that the spatial location of the response would be an even stronger match or mismatch to the location of the object's handle. However, they accidentally discovered that a) action priming is not reliably replicable and b) the factor that seems to determine whether it shows up is a confounding task demand. This again highlights just what a problem this experimental setup is.
Friday, 26 April 2013
The Information Available in Pictures
I've become fascinated with the problem of pictures and how they relate to the things they are pictures of. One reason is the regular use of pictures of objects to study how the affordances of those objects might ground cognition; this, I think, is a major problem.
A more positive reason is that, like language, pictures contain information about something they themselves are not (see Sabrina's information taxonomy). I have a hunch that an ecological study of picture perception might help guide an ecological study of language: picture perception can take more direct advantage of the work already done on how we perceive meaning in events via ecological laws, while still acting as a bridge, a point along the way to the conventional world of linguistic meaning.
Finally, the topic seems to be woefully understudied in the ecological approach. There is some work, however. In the comments section on my rant about using pictures to study affordances, I was pointed to the work of John Kennedy (a Gibson student, now emeritus at the University of Toronto). I have downloaded his 1974 book, 'A Psychology of Picture Perception', and am working my way through it. Matthieu de Wit then linked me to an archive of an exchange of papers between Gibson and Ernst Gombrich about picture perception. I thought I'd start with Gibson (1971), The Information Available in Pictures, to begin to sketch out what we know and what we don't.
The question at hand is this: can pictures provide the same information as the things they depict?
Thursday, 28 February 2013
The affordances of objects and pictures of those objects
People interested in how perception and action affect cognition have begun talking about affordances. This should be great news; the ecological approach takes affordances to be the perceivable properties of the world that enable us to control our actions, so if you are interested in how action can ground, say, memory or language, then discussing affordances should enable real progress.
The term 'affordance', however, is a technical term, and it refers to very particular properties of an organism's environment. There are methods for experimentally identifying exactly how these properties are composed, and there are methods for testing our perception of them. If you aren't using these methods, and if you aren't using the term correctly, then you aren't studying affordances.