Input Devices for Virtual Reality
  • archivist
  • Approved 2005.07.01 12:01

The last article examined display systems for the various human sensory modalities used as output devices in virtual reality systems. Here, we take a look at the typical sensors used as input devices for virtual reality. Remember that interaction (both input and output) is key to making human users feel that they are part of, and "in," the virtual environment.

Input sensors can be divided into three broad types: continuous, discrete and combined. Continuous input devices sample physical properties or quantities of the real world, such as position, orientation, acceleration, velocity and pressure. Discrete input devices generate one event at a time upon a designated user action, such as pressing a button or making a pinch gesture. Continuous input devices are usually used in combination with discrete ones (as a mouse combines motion sensing with buttons), and in conjunction with event-generating recognition software (for the recognition of gestures, voice commands and body movements).

The most important continuous input device in virtual reality systems is the tracker, which senses and tracks a designated position and orientation in 3D space. It is as central to virtual reality as the mouse is to desktop computing, because tracking 3D position and orientation is essential for natural interaction. Trackers come in many different flavors, differing in operating principle (which in turn determines accuracy and susceptibility to distortion), wired or wireless operation, degrees of freedom, and operating range.
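The continuous/discrete split described above can be made concrete with a short sketch. Everything here is hypothetical (not a real VR API): a continuous device, a 6-DOF tracker, is polled every frame, while a discrete device, a button, delivers exactly one event per user action.

```python
import random
from dataclasses import dataclass

# Hypothetical 6-DOF tracker sample: position (x, y, z) plus
# orientation (yaw, pitch, roll) -- a continuous device is polled
# every frame rather than waiting for user actions.
@dataclass
class TrackerSample:
    position: tuple
    orientation: tuple

def poll_tracker():
    """Simulate one continuous sample from a 6-DOF tracker."""
    return TrackerSample(
        position=tuple(random.uniform(-1.0, 1.0) for _ in range(3)),
        orientation=tuple(random.uniform(-180.0, 180.0) for _ in range(3)),
    )

events = []

def on_button_press(button_id):
    """Discrete input: append exactly one event per designated action."""
    events.append(("press", button_id))

# Per-frame loop: continuous state is re-sampled, discrete events drained.
for frame in range(3):
    sample = poll_tracker()   # continuous: a fresh value every frame
    on_button_press(0)        # discrete: one event for one action
    while events:
        print("frame", frame, "event:", events.pop(0))
```

In a real system the recognition software mentioned above would sit between the continuous stream and the event queue, turning gesture or voice patterns into discrete events.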
Magnetic trackers consist of a source that emits a low-frequency magnetic field and sensors that determine their position and orientation relative to that field. They are relatively inexpensive, with a reasonable operating range (up to 30 ft.) and accuracy (0.1 inch / 0.1 deg.), but suffer significant distortion when metal objects are present in the environment.

Acoustic trackers use sound waves and their travel time to triangulate the position and orientation of the sensor. Because of this operating principle, the line of sight between the sensor and the sound source must remain clear. Acoustic trackers are inexpensive but usually have low accuracy (~1 cm) and limited range (~2 m).

Mechanical trackers rely on sensing the joint movements of a linkage, as in a manipulator-like robot, and are therefore highly accurate. Inertial trackers compute the distance or orientation traveled by integrating acceleration or angular-rate values obtained from accelerometers or gyros. Because of this integration, errors accumulate over time and the system must periodically be reset.

Camera-based tracking offers an inexpensive way to make tracking wireless and convenient to use. It is becoming popular thanks to the increased capability and reduced cost of PCs and associated hardware such as digital signal processors, but it still has relatively low accuracy unless markers or a known static background are used. There are also special-purpose devices for tracking gaze, fingers (e.g. glove-like devices), body postures and human limbs.

Figure 1: Various tracking sensors. From top to bottom, left to right: (a) magnetic trackers, (b) ultrasonic 3D mouse, (c), (d) 3D mouse (the cursor moves in six degrees of freedom by manipulating the isotonic ball), (e) finger trackers (glove), (f) camera-based tracking.

There is a variety of discrete event generators used for virtual reality systems.
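The drift problem of inertial trackers noted above can be illustrated numerically. Position comes from integrating acceleration twice, so even a tiny constant accelerometer bias produces an error that grows roughly as 0.5 · bias · t²; the bias value below is purely illustrative.

```python
# Why inertial trackers drift: with the true acceleration held at zero,
# a small constant sensor bias (0.01 m/s^2 here, an illustrative figure)
# is double-integrated into a position error of ~0.5 * bias * t^2.
def integrate_position(bias, dt=0.01, steps=1000):
    """Double-integrate a measured acceleration of (0 + bias)."""
    velocity = 0.0
    position = 0.0
    for _ in range(steps):
        accel = 0.0 + bias        # measured = truth + constant bias
        velocity += accel * dt
        position += velocity * dt
    return position

drift_10s = integrate_position(bias=0.01)     # 10 s of operation
print(f"drift after 10 s: {drift_10s:.3f} m") # roughly half a meter
```

This is why, as the article notes, inertial systems must be reset periodically; in practice they are also fused with an absolute reference such as magnetic or optical tracking.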
From the viewpoint of interaction, we describe them by the part of the body used to initiate the events. The most typical interaction is carried out through the hand. Button devices are the most common hand/finger-activated event generators. Another possibility is to mount pressure sensors on the fingertips of gloves and use finger-pinch actions to generate many different events. Hand (motion) gestures are also often used; however, recognizing hand gestures is generally difficult because hand and finger postures and movements must be tracked, typically with mechanical sensors that are ergonomically awkward to wear.

The foot has been used primarily for navigation control. Custom-made buttons or pressure sensors mounted on mats or floors (or even a stepper machine) detect footsteps, which are then interpreted as navigation commands (e.g. direction and speed). Figure 2 shows such natural foot-gesture-based navigation.

Figure 2: The "Walk-in-Place" metaphor used as a navigation interface. The natural walking gesture of the user is recognized to control navigation in the virtual environment.

Voice and speech recognition represents another natural method of interaction in virtual reality systems, as humans use speech every day in conjunction with other modalities. While the technology is advancing rapidly, voice and speech recognition is reliable only for isolated-word recognition. It is still speaker-dependent, requires training, and typically suffers from a low recognition rate under significant ambient noise, in which case special microphones may be required.

VR systems employ a number of input sensors to realize a 3D multimodal interface. However, because of their various operating principles and external conditions, these sensors often exhibit large errors, which result in an incorrect reflection of user input and distorted output, making it difficult for the user to accomplish a given task and causing user discomfort.
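The foot-driven navigation described above can be sketched as a simple threshold detector. This is an assumed minimal design, not a published algorithm: footsteps are rising edges in a floor-mat pressure signal, and the step rate is mapped to forward speed (the sensor trace, threshold and stride constant are all made up).

```python
# Walk-in-place sketch: count footsteps as rising edges where the mat
# pressure crosses a threshold, then map step rate to forward speed.
def detect_steps(pressure, threshold=5.0):
    """Count rising edges where pressure crosses the threshold."""
    steps = 0
    above = False
    for p in pressure:
        if p > threshold and not above:
            steps += 1
            above = True
        elif p <= threshold:
            above = False
    return steps

def navigation_speed(steps, window_s, stride_m=0.7):
    """Map step rate (steps/s) to forward speed in the virtual world."""
    return (steps / window_s) * stride_m

pressure_trace = [0, 2, 8, 9, 3, 0, 7, 8, 2, 0, 9, 1]  # simulated mat readings
steps = detect_steps(pressure_trace)
print("steps:", steps, "speed:", navigation_speed(steps, window_s=1.2), "m/s")
```

Direction would come from a separate sensor (e.g. a head or torso tracker), with the step detector supplying only the speed component.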
One way to battle such intrinsic sensor errors is to calibrate them: correcting the sensor output by adjusting it to match or conform to a dependably known and unvarying measure.

To summarize, sensors are as important as display systems: they allow users to convey their intentions so as to control and access the virtual world. After all, interaction is a two-way street! In the next article, we will look at various cases of non-traditional 3D multimodal interfaces that utilize some of the displays and sensors presented in the last two articles. These 3D multimodal interfaces help users experience content more directly.
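To illustrate the calibration idea mentioned above, here is a minimal sketch under invented data: a linear correction (scale and offset) is fitted by least squares so that raw tracker readings match known, surveyed reference positions.

```python
# Sensor calibration sketch: fit reference = scale * raw + offset by
# ordinary least squares, then apply the correction to new readings.
# All readings below are invented for illustration.
def fit_linear_calibration(raw, reference):
    """Least-squares fit of reference = scale * raw + offset."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    sxx = sum((x - mean_x) ** 2 for x in raw)
    scale = sxy / sxx
    offset = mean_y - scale * mean_x
    return scale, offset

# Raw tracker readings vs. surveyed ground-truth positions (cm):
raw = [10.2, 20.1, 30.5, 40.3]
truth = [10.0, 20.0, 30.0, 40.0]
scale, offset = fit_linear_calibration(raw, truth)

def calibrate(x):
    """Apply the fitted correction to a raw reading."""
    return scale * x + offset

print("corrected reading for raw 25.0:", calibrate(25.0))
```

A linear model only removes scale and offset errors; real trackers (magnetic ones especially) often need higher-order or lookup-table corrections to handle spatially varying distortion.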


  • Masthead: Korea IT Times. Copyright(C) Korea IT Times, All rights reserved.