- Title: Multimodal Interface Design for Ultrasound Machines
- Publisher: Hamad bin Khalifa University Press (HBKU Press)
- Source: Qatar Foundation Annual Research Conference Proceedings, Volume 2016, Issue 1, Mar 2016, ICTSP2476
Abstract
Sonographers, radiologists and surgeons use ultrasound machines daily to acquire images for interventional procedures, scanning and diagnosis. Current interaction with ultrasound machines relies entirely on physical keys and touch-screen input. Beyond lacking a sterile interface for interventional procedures and operations, the machine requires the clinician to remain physically close enough to reach its keys, which restricts free movement and the natural postures needed to apply the probe to the patient, often forcing uncomfortable ergonomics for prolonged periods. According to surveys on the incidence of work-related musculoskeletal disorders (WRMSDs) conducted continuously over the past decade, up to 90% of sonographers across North America experience WRMSDs during routine ultrasonography. Repetitive motions and prolonged static postures are among the risk factors for WRMSDs, and both can be significantly reduced by an improved ultrasound interface that does not rely entirely on direct physical interaction. Furthermore, most physicians who perform ultrasound-guided interventions hold the probe with one hand while inserting a needle with the other, which puts ultrasound machine parameter adjustment out of reach without external assistance. Similarly, surgeons' hands are typically occupied with sterile surgical tools, so they cannot control ultrasound machine parameters independently. Depending on an assistant is suboptimal, as it is sometimes difficult for the operator or surgeon to communicate a specific intent during a procedure.
Introducing a multimodal interface for ultrasound machine parameters that improves on the current interface and supports hands-free interaction can benefit all clinicians who use ultrasound machines: it will help reduce strain-related injuries and the cognitive load experienced by sonographers, radiologists and surgeons, and provide a more effective, natural and efficient interface.
Owing to the need for sterile, improved and efficient interaction and the availability of low-cost hardware, multimodal interaction with medical imaging tools is an active research area. Numerous studies have explored speech, vision, touch and gesture recognition for interacting with both pre-operative and interventional image parameters during interventional procedures or surgical operations. However, research targeting multimodal interaction with ultrasound machines remains insufficiently explored and is mostly limited to augmenting one interaction modality at a time, such as existing commercial software and patents enabling ultrasound machines with speech recognition. Given the wide range of settings and menu navigation required for ultrasound image acquisition, there is room to improve the interaction by extending the existing physical interface with hands-free input methods such as voice, gesture, and eye-gaze recognition. Specifically, the system's ability to recognize the user's context through these additional modalities will simplify the image-settings menu navigation required to complete a scanning task. In addition, the user will no longer be restricted to a physical interface and will be able to interact with the ultrasound machine completely hands-free, as explained earlier for sterile environments in interventional procedures.
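As an illustration of the context-recognition idea above, the sketch below fuses a spoken verb ("increase" or "decrease") with an eye-gaze target so the command applies to whichever on-screen parameter the clinician is fixating. Every name here (`MachineState`, `dispatch`, the parameter table and step size) is a hypothetical assumption for illustration, not part of any existing ultrasound system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical adjustable parameters and their (min, max) ranges.
ADJUSTABLE_PARAMS = {"gain": (0, 100), "depth": (1, 30), "frequency": (2, 15)}

@dataclass
class MachineState:
    settings: dict

    def adjust(self, param: str, delta: int) -> int:
        # Clamp the new value to the parameter's allowed range.
        lo, hi = ADJUSTABLE_PARAMS[param]
        self.settings[param] = max(lo, min(hi, self.settings[param] + delta))
        return self.settings[param]

def dispatch(voice_command: str, gaze_target: Optional[str],
             state: MachineState) -> Optional[int]:
    """Modality fusion: voice supplies the verb, gaze supplies the object."""
    if gaze_target not in ADJUSTABLE_PARAMS:
        return None  # no fixation on an adjustable control: ignore speech
    if voice_command == "increase":
        return state.adjust(gaze_target, +5)
    if voice_command == "decrease":
        return state.adjust(gaze_target, -5)
    return None  # unrecognized verb

state = MachineState({"gain": 50, "depth": 10, "frequency": 5})
print(dispatch("increase", "gain", state))  # gaze on gain: 50 -> 55
print(dispatch("increase", None, state))    # no gaze context -> None
```

The same pattern generalizes: any menu item the user is looking at becomes the implicit object of a short spoken verb, avoiding deep menu navigation.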
Field studies and interviews with sonographers and radiologists were conducted to explore potential areas of improvement in current ultrasound systems. Typical machines used for routine ultrasonography have an extensive physical interface, with keys and switches for all possible ultrasonography contexts co-located in the same area as the keyboard. Although the keys are arranged by their typical frequency of use in common exams, sonographers tend to glance at them repeatedly during a routine session, which interrupts their focus on the image. While duration varies by exam type, a typical ultrasound exam averages 30 minutes and requires capturing multiple images. For time-sensitive tasks, such as imaging anatomical structures in constant motion, coordinating the image, key selection, menu navigation and probe positioning can be both time-consuming and distracting. Interviewed sonographers also reported discomfort with repeated awkward postures and a preference for a hands-free interface in cases where they must position the probe far from the machine's physical control keys, as with immobile patients or patients with high BMI.
Commercial software already exists that addresses repeated physical keystrokes and the need for a hands-free interface. Some machines provide a context-aware solution in the form of customizable software that automates steps in ultrasound exams, reported to decrease keystrokes by 60% and exam time by 54%. Other machines offer voice-enabled interaction to reduce the uncomfortable postures sonographers adopt when positioning the probe while reaching for the machine's physical keys. Interviewed sonographers frequently used the context-aware automation software during their exams, which shows the potential of the context-aware features that multimodal interaction systems can offer. On the other hand, sonographers preferred not to use voice commands as a primary interaction modality alongside the existing physical controls, because an ultrasound exam involves extensive communication with the patient, and relying on voice input risks misinterpreting sonographer-patient conversation as commands directed at the machine. This points to a need for voice-enabled systems augmented with other interaction modalities, so that voice can be used efficiently when needed without being confused with external speech.
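One way to avoid misinterpreting sonographer-patient conversation is to gate voice input behind a second, deliberate modality. The minimal sketch below is an assumed design, not an existing product feature: an utterance is accepted as a command only if a hypothetical attention signal, such as a gaze fixation on the control panel or a foot switch, fired within the last two seconds; speech outside that window is discarded:

```python
from typing import Optional

ATTENTION_WINDOW = 2.0  # seconds the attention signal remains valid (assumed)

class GatedVoiceInput:
    """Accept speech as commands only shortly after a deliberate gating signal."""

    def __init__(self) -> None:
        self._last_attention = float("-inf")

    def attention_signal(self, now: float) -> None:
        # Called when the gating modality fires (e.g. gaze fixation, foot switch).
        self._last_attention = now

    def on_speech(self, utterance: str, now: float) -> Optional[str]:
        # Treat the utterance as a command only inside the attention window;
        # otherwise it is assumed to be conversation and dropped.
        if now - self._last_attention <= ATTENTION_WINDOW:
            return utterance
        return None

gate = GatedVoiceInput()
gate.attention_signal(now=10.0)
print(gate.on_speech("freeze image", now=11.0))       # gated -> command accepted
print(gate.on_speech("take a deep breath", now=20.0)) # conversation -> None
```

A real system would also need confidence thresholds and a small command vocabulary, but the gating idea alone already separates directed commands from patient-facing speech.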
This study aims to explore interfaces for controlling ultrasound machine settings during routine ultrasonography and interventional procedures through multimodal input. The main goal is to design an efficient, time-saving and cost-effective system that minimizes repetitive physical interaction with the ultrasound machine and provides a hands-free mode to reduce WRMSDs and allow direct interaction with the machine under sterile conditions. This will be achieved through additional field studies and prototyping, followed by user studies to assess the developed system.