A Pilot Study into the Usability and Efficiency of Using a Head-Mounted Microdisplay Compared with Traditional Methods in a Psychomotor Task


James Kelly, Research Student
Epicentre
University of Abertay, Dundee
E-mail: [email protected]


Abstract


This pilot study investigated the usability and efficiency of head-mounted microdisplay instructions for building abstract models, compared with desktop and paper instructions. Six participants took part, three males and three females, all of whom used computers regularly in their respective professions. They were asked to build three models from play bricks using the same instructions under three conditions: paper instructions, desktop instructions and microdisplay instructions. They were timed building the three models in each condition. After the experiment they were asked questions in a semi-structured interview to elicit their views on the study. The hypothesis was that there would be an overall difference in the completion times for all the models between conditions. The hypothesis was not supported; however, feedback from the semi-structured interviews revealed that participants had perceptual problems following the instructions in the microdisplay condition. These findings are discussed in relation to the way information is presented on microdisplays.


Introduction

According to Ockerman, Najjar, Thompson, Treanor and Atkinson (1996), the modern workplace is an environment in constant flux. In recent years there has been an increase in downsizing and automation, and this constantly changing work environment requires a more flexible workforce that can be retrained in new skills very quickly. This situation often necessitates training that is more efficient than traditional methods, which are viewed as costly and time consuming. Traditional training has also been criticized for being removed from the context of the job, creating problems with employees transferring what they learn to the workplace, and for being geared towards the achievement of classroom goals rather than job performance (Carr, 1992, cited in Ockerman et al., 1996).

Many researchers in education and instructional technology see the wearable computer as a panacea for the problems of traditional retraining. As defined by Mann (1996), a wearable computer is a portable display that can be attached to the user in a natural fashion and allows them to engage in other activities. These displays are head mounted and some permit the viewer to see the world through the display. It is claimed that wearable computers can train workers in task-specific skills, such as a psychomotor skill or learning a specific computer package. However, the usability of this nascent technology for the general population has been taken for granted. As pointed out by Roscoe (1993), military aircraft pilots have worn some form of head-mounted display for several years and have experienced various physiological difficulties, such as eyestrain and altered perceptions after prolonged use. Given that these pilots have above-average eyesight, the question for human factors research is what effect wearing such systems would have on the general population, whose eyesight shows great individual differences.

This pilot study investigates whether there are any differences in efficiency and usability between a wearable computer and traditional methods. Six participants will be asked to build three abstract models from play bricks in three conditions. In one condition the instructions are simple paper instructions. In the other two conditions the paper instructions are converted into an interactive computer programme: in one the instructions are presented on a desktop monitor, and in the other they are presented on a head-mounted microdisplay. The participants will be timed building the three models, which represent different levels of difficulty, in the three conditions. After the experiment the participants will be asked about their views on the experiment in a semi-structured interview. The experimental hypothesis is that there will be an overall difference in the completion times for all the models between conditions.


Method

Design

This pilot study was in two parts, a quantitative section and a qualitative one. The quantitative section employed a within-subjects design. The independent variable was the type of display used to present the instructions for building the models. The dependent variable was the time taken to construct the models. The order in which each participant received the instruction conditions was counterbalanced. The qualitative section involved a semi-structured interview with set questions to obtain feedback from the participants about the experiment.
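As a rough illustration of how such an ordering can be generated (this is not the assignment actually used in the study, which is reproduced in appendix 5), the sketch below enumerates the 3! = 6 possible orderings of three conditions so that each of the six participants could receive a unique order of presentation.

```python
from itertools import permutations

# The three instruction conditions used in the study.
conditions = ["paper", "desktop", "microdisplay"]

# With three conditions there are 3! = 6 possible orders, so each of the
# six participants can be given a unique order of presentation.
orders = list(permutations(conditions))

for participant, order in enumerate(orders, start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```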

Participants

There were six participants, three males and three females. Four of the participants were involved in some capacity in research at the University of Abertay, Dundee. The other two participants had daily contact with computers in their respective professions. Only two participants, a male and a female, wore corrective lenses. Participants were selected using convenience sampling.

Materials and Apparatus

Materials in this study comprised two computers and a Sony Glasstron binocular LCD head-mounted display that could show output from the PCs (see appendix 1), together with play bricks for constructing three models (see appendix 2). The models in the study were referred to as "Model A", "Model B" and "Model C". Each model had 22 bricks and represented a different level of difficulty in construction. The constructions were designed so that Model A would be the easiest to build, Model B somewhat harder, and Model C the hardest. The level of difficulty was based on several factors, such as the angle of view of the model in the pictorial instructions, the type of bricks and the complementary instructions. For example, Model C was expected to be the hardest to build, since its screenshots were taken from a greater distance than those of the other two models; the angle of view made the construction of parts of the model ambiguous, and there were fewer complementary instructions (e.g. "2x6" to indicate which size of brick was needed). There were three sets of paper instructions for assembling the models (see appendix 3). An interactive programme using the same instructions as the paper guide was used in the desktop and microdisplay conditions (see appendix 4). This comprised a split-screen presentation, with an animation of the model in one panel and a frame-by-frame guide in the other. Participants could go to any stage in the animation or move through the assembly frame by frame. A stopwatch was used to time the participants constructing the models, in seconds.

Procedure

The order in which participants were assigned to the conditions was counterbalanced (see appendix 5). The three display types in which the instructions were presented to the participants were paper, desktop and microdisplay. The same instruction programme was used in both the desktop and microdisplay conditions. Each participant was timed building the three models in each of the three conditions, thus building a model nine times in total. One PC was used for the desktop condition and the other for the microdisplay condition. Before the participant used the desktop and microdisplay instructions, the experimenter carefully explained the control system. After the participant had completed the test battery, the experimenter asked a set of questions about the experiment (see appendix 6).


Results

Figure 1. Graph showing completion times for Model A, Model B and Model C in each instruction condition: Paper, Desktop and Microdisplay.



Figure 1 shows the mean completion times for the three models in each instruction condition. In the paper instruction condition, Model B had the highest mean completion time at 120.17 seconds; Models A and C had almost the same completion times, 103.67 and 103.66 seconds respectively. The mean completion times in the desktop condition were fairly similar: Model A had a mean completion time of 107.83 seconds, while Model B's completion time of 105.83 seconds was almost the same as Model C's 105.33 seconds. In the microdisplay condition, Model A had a mean completion time of 138.83 seconds, much higher than Models B and C. The mean completion times of the latter two models were similar, at 122.83 seconds for Model B and 121.33 seconds for Model C.

Figure 2. Graph of the collective mean completion times and standard error of the three models in each of the three instruction conditions, paper, desktop and microdisplay.



Figure 2 shows the mean completion times and standard errors for all the models in each of the three types of instruction. The mean completion time for the paper instructions was 109.17 seconds, with a standard error of 7.45 seconds. The mean completion time for the desktop condition was slightly lower at 106.33 seconds, with a standard error of 4.64. The highest overall mean completion time, 127.67 seconds, was for the microdisplay instructions, with a standard error of 8.76. To test these observed differences, the data were analysed using a two-way analysis of variance (ANOVA) for repeated measures. There was no significant effect of type of display instructions, F(2,10) = 2.936, n.s., no significant effect of type of model, F(2,10) = 0.524, n.s., and the display type × model type interaction did not reach significance, F(2,10) = 0.477, n.s.
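For readers wishing to reproduce this kind of analysis, the sketch below shows how a two-way repeated measures ANOVA and the descriptive statistics in Figure 2 could be computed with statsmodels and pandas. This is not the analysis script used in the study; the file name, column names and long-format layout are illustrative assumptions.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format layout: one row per participant x display x model,
# with the completion time in seconds.
data = pd.read_csv("completion_times.csv")  # columns: participant, display, model, time

# Two-way repeated measures ANOVA: display type and model type are both
# within-subjects factors, completion time is the dependent variable.
anova = AnovaRM(data,
                depvar="time",
                subject="participant",
                within=["display", "model"]).fit()
print(anova)

# Descriptive statistics of the kind reported in Figure 2: mean and standard
# error of the mean for each display condition, collapsed across the models.
summary = data.groupby("display")["time"].agg(["mean", "sem"])
print(summary)
```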


Feedback from Semi-Structured Interviews.

Participants were first asked which model they found easiest and which they found hardest to build. Four participants found Model A the easiest to build and one found Model C the easiest. Three participants found Model C the hardest to build; however, one participant found Model B the hardest to construct and another found Model A the hardest. Some participants found no difference in difficulty between some models. For example, one participant found Model A the hardest to build but thought the other two models were of a similar level of difficulty.

All the participants except one found the paper instructions the easiest to use; the one exception found the desktop instructions the easiest. Most of the participants did not think that having the two formats of instruction (animation and still frame) in the desktop and microdisplay conditions was very useful. In fact, all the participants used the animated instructions to construct the models in these conditions. One participant commented that the movement of the animation drew the eye to it. All the participants except one found the animation too fast at the beginning of the experiment, but once they had adjusted to the speed they found it too slow; one participant thought it was too fast throughout. All the participants thought the control system for the animation was easy to use, although one participant suggested that the links to each stage should be placed down the side rather than crowded together at the bottom, since this made it difficult to pick out the link to a particular stage.

After answering the set questions the participants were invited to give general comments about the experiment. One participant experienced eyestrain using the microdisplay because they were forced to concentrate more, staring into the screen to count the nodules on the play bricks, and had to scan the image to take in all the instructions. Another participant found the head-mounted display uncomfortable at first but got used to it eventually; this participant had to put the microdisplay right up to their eyes to see the instruction programme and felt that the image should be sharper. Two participants complained about the poor resolution of the microdisplay compared to the desktop, even though both were set to the same resolution, 800x600. They had more difficulty seeing the size and orientation of the bricks on the microdisplay than on the desktop, especially the black pieces. One participant found the instructions seen through the microdisplay too small and had to keep adjusting the display to see the whole screen.

Discussion

From the results it can be seen that the hypothesis that there would be an overall difference in completion times for all the models between conditions was not supported. This finding contrasts with the post-test interviews, in which participants reported difficulties using the microdisplay instructions: they found the instructions hard to see clearly, too small or poorly resolved. Some participants also reported difficulty discerning the orientation of the bricks in the microdisplay condition, and some found the component parts and details hard to make out. These perceptual problems may indicate that the way in which the instructions were presented was not compatible with viewing on a head-mounted microdisplay. The findings may also reflect the large individual differences in eyesight within the general population. The fact that not all participants found Model A the easiest to build and Model C the hardest may reflect differences in the way people perceive objects.
It must be noted that this study used only a small participant sample; larger numbers may affect the significance of the observed differences. A future extension of this research paradigm could look at ways to overcome the perceptual problems encountered with the head-mounted microdisplay and enhance the instructional information so that it is compatible with such a display. It appears from this experiment that information presented as paper or desktop instructions is not necessarily seen the same way on a microdisplay. Changes could be made so that information presented to the user of the microdisplay is shown clearly and unambiguously. On a more positive note, some participants said that although they found the head-mounted display awkward and difficult to use at first, they eventually got used to it. The fact that most showed a preference for the paper instructions may simply indicate that this is the format of instructions they are used to.
In sum, this pilot study found no significant difference in the overall mean completion times for the models between the three conditions for a psychomotor task. In general, participants found the instructional information harder to use on the head-mounted microdisplay. A possible extension of this study would be to enhance the information presented on the microdisplay so that it can be viewed more easily by the user.

References

Carr, C. (1996) in Ockerman, J.J., Najjar, L.J., Thompson, J.C., Treanor, C.J. & Atkinson, F.D. FAST: A Research Programme for Educational Performance Support Systems. http://mime1.marc.gatech.edu/mime/papers/edmedia97_demo.html

Ockerman, J.J., Najjar, L.J., Thompson, J.C., Treanor, C.J. & Atkinson, F.D. (1996) FAST: A Research Programme for Educational Performance Support Systems. http://mime1.marc.gatech.edu/mime/papers/edmedia97_demo.html

Mann, S. (1996) Wearable, Tetherless, Computer-Mediated Reality (with possible future applications to the disabled). http://hwr.nici.kun.nl/pen-computing/wearables/steve-mann/

Appendices


Appendix 1
The Sony Glasstron Head-Mounted Microdisplay.




Appendix 2
Abstract Models
Model A



Model B



Model C



Appendix 3
Example of Paper Instructions. (These were presented in full colour).





Appendix 4
Desktop and microdisplay instruction programme.



Screenshot labels: Animation panel | Frame-by-frame panel | Navigation buttons


These instructions comprised a set of unpublished web pages created with the Liquid FX v3.1 HTML editor. The stages of construction of the models were produced in Lego Creator: the model was built in the program, and each stage was saved as a JPEG file using the screen-capture facility of Paint Shop Pro v.6. The JPEG files were then turned into an animation using Animation Shop v.2, which ran at a speed of one frame every 4.5 seconds. The right-hand panel contained the instructions in a frame-by-frame format. The left-hand panel contained a full animation of the model; beneath the animation were links to each stage in the animation. The current piece to be used is shown in the left-hand corner of each panel, with an arrow indicating its position on the model. The blue "Restart Animation" button starts the animation from the beginning.
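As a rough modern equivalent of the Animation Shop step (purely illustrative; the study itself used Animation Shop v.2), a set of stage JPEGs could be combined into an animation running at one frame every 4.5 seconds as sketched below. The directory and file names are assumptions.

```python
from pathlib import Path
from PIL import Image  # Pillow

# Stage screenshots captured from Lego Creator, one JPEG per construction step.
# The directory and file-name pattern here are illustrative assumptions.
frames = [Image.open(p) for p in sorted(Path("model_a_stages").glob("stage_*.jpg"))]

# Save as an animated GIF at one frame every 4.5 seconds (duration is given
# in milliseconds), looping indefinitely, mirroring the timing used in the study.
frames[0].save("model_a_animation.gif",
               save_all=True,
               append_images=frames[1:],
               duration=4500,
               loop=0)
```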

Appendix 5 Counterbalancing

Subject 1 A E I 1 2 3

Subject 2 F G A 2 3 1

Subject 3 B H D 1 3 2

Subject 4 G F B 3 2 1

Subject 5 E C I 2 1 3

Subject 6 H C D 3 1 2


Appendix 6

Post Test Interview


1) Which figure did you find easiest/hardest to build?

2) Which of the instruction formats, paper, desktop or microdisplay did you find easiest to use?

3) In the desktop and microdisplay conditions, did you find the option of the two types of instructions (animation and still frame) useful?

4) In the desktop and microdisplay conditions, which instructional format, did you use the most, animation or still frame?

5) In the desktop and microdisplay conditions did you find the speed of the animation either too slow or just right?

6) In the desktop and microdisplay conditions did you find the control system easy to use?

7) Do you have any further comments about the test?

