Redefining the Human Machine Interface
Author: Geoff Cady
Copyright: 9-1-1 Magazine, Feature Content.
Originally published in our June 2004 issue.
Technological advances during the past decade have transformed even the smallest communication center from a lone phone and microphone to adjustable workstations that place more technology at the fingertips of telecommunicators than the consoles found in NASA’s Mission Control Center. These technologies include powerful decision support tools for caller interrogation, computer-aided dispatch (CAD), computer telephony integration (CTI) of phone and radio systems, and digital audio-logging and playback systems. And while these technologies have resulted in a 10-fold increase in the quantity of information collected to support telecommunicator decision-making, little progress has been made in improving the ability of users to manage and digest the information presented to them. Some advances in the display and organization of information using graphical user interfaces (GUIs) and point-and-click command functions have improved dispatch tasks, but there is still a long way to go.
In the January/February 2004 issue of 9-1-1 Magazine I wrote about multitasking/task switching research being conducted by NASA and the Federal Aviation Administration (FAA) in an attempt to improve the ability of fighter pilots to maintain situational awareness while receiving and processing information through multiple sensory channels (e.g., visual, auditory, tactile, etc.). In these studies the investigators found that some channels were more effective at managing simultaneous inputs than others. For example, simultaneous visual stimuli were more easily managed than simultaneous auditory stimuli. Given these cognitive processing limitations, avionics and weapons systems have been designed to optimize the reception and management of information through visual and auditory channels by using human machine interface (HMI) technologies (e.g., heads-up displays, voice prompts, etc.).
Simultaneous auditory stimuli (e.g., overlapping radio and telephone transmissions) are a common occurrence in the 9-1-1 communication center. When two verbal transmissions arrive at once and compete for the telecommunicator’s attention and comprehension, critical information may be missed or lost as the telecommunicator switches between tasks. To better manage competing auditory activity, the telecommunicator needs a process to receive, analyze, and organize this activity in such a way that competing or simultaneous transmissions can be “sequenced.” Such a process would reduce or eliminate competition between radio and phone transmissions. By combining existing “voice-to-text” audio logging and computer telephony integration technologies, a process I’ll call “intelligent audio sequencing” (IAS) could improve the HMI between radio and phone systems.
Intelligent audio sequencing would capture, analyze, and store a wireline or wireless transmission until a pause occurred in the active auditory stream (e.g., the end of a statement or series of spoken words). Once a pause was detected, the system would prompt the user with a tone and insert the competing transmission. To ensure that high-priority radio transmissions (e.g., officer down, shots fired, 10--, etc.) preempt an existing auditory stream, the system would convert radio transmissions to text and search for key words or phrases indicating the priority of the transmission. In the event the IAS detected a transmission that must preempt an existing auditory stream, the system would alert the telecommunicator with a tone that an emergency transmission had been received and play back the transmission. The delay between receipt, interpretation, and playback would be imperceptible.
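As a rough sketch of the sequencing logic described above (the class, phrase list, and method names here are invented for illustration, not drawn from any vendor’s product), the buffering and preemption behavior can be modeled as a priority queue keyed on keyword detection in the voice-to-text transcript:

```python
import heapq
from itertools import count

# Hypothetical priority phrases; a real system would make these agency-configurable.
EMERGENCY_PHRASES = ("officer down", "shots fired", "mayday")

def is_emergency(transcript: str) -> bool:
    """Flag a transmission whose speech-to-text transcript contains a priority phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in EMERGENCY_PHRASES)

class AudioSequencer:
    """Toy model of intelligent audio sequencing (IAS).

    Competing transmissions are buffered and released one at a time:
    routine traffic waits for a pause in the active auditory stream,
    while an emergency transmission jumps to the front of the queue.
    """

    def __init__(self):
        self._queue = []          # min-heap of (priority, arrival, transcript)
        self._arrival = count()   # tie-breaker preserves arrival order

    def receive(self, transcript: str) -> None:
        """Buffer an incoming transmission; emergencies get priority 0."""
        priority = 0 if is_emergency(transcript) else 1
        heapq.heappush(self._queue, (priority, next(self._arrival), transcript))

    def on_pause(self):
        """Called when the active stream pauses; return the next buffered
        transmission for playback, or None if the queue is empty."""
        if self._queue:
            _, _, transcript = heapq.heappop(self._queue)
            return transcript
        return None
```

In this sketch, a “shots fired” transmission received after routine traffic would still be played back first at the next pause, which is the preemption behavior IAS is intended to provide.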
While this may sound like a recent episode of TV’s Alias, all of the technology necessary to create IAS exists. At the NAED Navigator conference this May, I discussed the proposed IAS technology with several audio-logging vendors. Representatives from Stancil Recorders and Voice Print International believed the idea had merit and confirmed that existing technology could be used to accomplish the goal of eliminating auditory channel competition.
Voice Print [now VPI], for example, has recently partnered with a company and product called CallMiner, which converts voice to text and “evaluates calls based on a user definable set of rules and criteria.” This software was designed to perform business intelligence as well as to automate some quality assurance activities. Used in IAS, CallMiner could continuously analyze radio transmissions, searching for phrases, words, or codes that identify priority transmissions. Combined with noise-canceling headsets, all audio transmissions, whether they originate from across town or across the console, would be processed through IAS to manage auditory inputs.
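A minimal sketch of that kind of rule-driven transcript analysis might look like the following; the rules, patterns, and scores are invented examples for illustration, not CallMiner’s actual criteria:

```python
import re

# Illustrative, user-definable rules pairing a regex pattern with a score.
# Patterns and scores are hypothetical, not taken from any real rule set.
PRIORITY_RULES = [
    (r"\bofficer down\b", 100),
    (r"\bshots fired\b", 100),
    (r"\b10-\d\d\b", 80),            # any two-digit ten-code in the transcript
    (r"\brequesting backup\b", 60),
]

def priority_score(transcript: str) -> int:
    """Return the highest score among matching rules (0 = routine traffic).

    The transcript is assumed to come from a voice-to-text engine; a score
    above some agency-defined threshold would flag the transmission for
    immediate playback rather than queued sequencing.
    """
    text = transcript.lower()
    return max(
        (score for pattern, score in PRIORITY_RULES if re.search(pattern, text)),
        default=0,
    )
```

Because the rules are just data, an agency could tune the phrase list and thresholds to local radio codes without changing the sequencing logic itself.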
Other attention-competing inputs that could benefit from HMI technologies include the display and management of information appearing on CAD and CTI monitors. One widely available product with considerable promise is the “head-mounted” display. Commonly used to create three-dimensional virtual reality (VR) environments, this technology could eliminate the growing problem of shrinking console real estate. Furthermore, the ability to create an infinite number of keyboard, touch-screen, or point-and-click user configurations could significantly reduce both equipment costs and the repetitive motion injuries that result from a “one configuration fits all” approach. From the user’s perspective, head-mounted displays could create a scene similar to one in the movie Minority Report for interfacing with communication center technology. And while waiting for the next call, head-mounted displays offer an infinite number of environments in which to decompress; you could “go” to the beach, with all of its sounds and sights. Although this may sound over the top, head-mounted displays would eliminate the need for expensive monitors, provide three-dimensional graphics that let alarms leap off the screen, and permit users to create any number of configurations to improve the ergonomics and organization of their work environment.
While all of this sounds pretty cool, Chris Maloney, President and CEO of Tritech Software Solutions, a leading public safety software firm, pointed out that the real opportunity for HMI lies in how software developers take advantage of computing power. Chris believes we have yet to fully use computing power to help the telecommunicator organize and prioritize the tasks that must be completed to maximize the efficiency of deployed resources. Chris is not alone in this observation. A growing number of system administrators and communication center managers recognize that, without changes in how information and sensory inputs are analyzed, organized, and presented, we are rapidly reaching the limit of human capacity to process the enormous amount of information we now capture and display.
At the time this column was written, Geoff Cady was the Fire/EMS Analyst for the Gilroy (CA) Fire Department, and a longtime communication center and EMS consultant. He subsequently became a Deputy Director for the San Jose Fire Department Bureau of Support Services. Geoff is also a member of the College of Fellows of the National Academy of EMD. He regularly lectures and writes on EMS and communications-related topics.