POSTED: December 31st, 2022
POSTED IN: EM Pulse - The Official Newsletter of MOCEP, November/December 2022
Timothy Koboldt, MD, FACEP
President-Elect, Missouri ACEP
Experiential learning, and simulation in particular, has become a mainstay of curricula throughout medical school and residency training for physicians. The future of this experiential learning has already started to arrive. Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (XR) have gone from science fiction to something you can purchase online and run on consumer-grade hardware. Existing technologies already allow us to recreate virtual environments, simulate clinical scenarios, collaborate and learn remotely, and teach concepts in both instructor-driven and learner-driven experiences. This allows for continued incremental improvement, both in the technology and processing power behind VR and in our own teaching methodologies and approaches. The foundation has been laid, and we can now build upon it to offer ever more realistic and engaging experiences to a wide range of learners.
Most people have at least heard of VR, but it may be helpful to delineate what constitutes VR versus augmented reality (AR) or mixed reality (XR). It is easiest to think of these as a continuum with VR on one end and AR on the other. In general, a VR application is a completely virtual environment without any real-world inputs: you put on a headset, cannot see the real world, and interact only with virtual stimuli. Augmented reality, on the other hand, means you can still see the real world but with something virtual overlaid on it. This would be like “Terminator” vision for those of us old enough to have seen the movie, or like an Instagram filter for the younger crowd. It is often, but not always, used for procedural or surgical training. Finally, there is mixed reality (XR), which combines real-world and virtual elements into a single scenario. A great example is the Vimedix AR trainer by CAE, which overlays ultrasound images and anatomy on a manikin torso to help learners visualize the 2-D ultrasound image in the context of the 3-D anatomy. There is often some overlap among these distinctions, so if you are unsure of what type of reality you are in, just call it XR and you will at least appear to understand.
The past few years have seen an explosion of healthcare-related applications and uses of XR. The power of XR to experience things in three dimensions and at scale is hard to describe without putting a headset on; it is more than just seeing things on a screen like a 2-D video game or animation. That immersive three-dimensionality made anatomy and radiology obvious early use cases. Organon VR was one of the most popular early applications, allowing learners to explore human anatomy up close in immersive 3-D, and applications like Medical Holodeck allow viewing of real cross-sectional CT or MRI imaging in virtual reality.
Simulated patient encounters and virtual simulation manikins are some of the most exciting and relevant applications available to date. There are both individual and team-based simulations, which can be experienced in a typical instructor-driven format akin to high-fidelity manikin simulation, or in an asynchronous, self-paced, or AI-instructor format. Each approach has distinct advantages. Team-based simulation allows for a more collaborative learning process and fits well with existing simulation-based education. At our institution it has been incorporated into the typical simulation station rotation using the SimX platform: a group of learners works through a simulated patient encounter in real time in virtual reality while an instructor runs the case on a PC. It is a familiar framework for simulation education even if the format is different. Individual and asynchronous applications allow for flexibility in timing and individualized feedback. Applications like Health Scholars VR let learners lead a computer-generated team through a high-stakes ACLS- or PALS-type encounter using only voice commands. While not perfect, the voice recognition works much better than typical digital assistants and is more realistic to how a team leader runs a resuscitation (no pressing buttons, but rather giving orders and directing care verbally). The standardization and repeatability allow for more meaningful feedback and tracking of performance over time. An unexpectedly powerful benefit of technologies like these is the ability to use them remotely or asynchronously. If, for example, a pandemic makes it impossible to gather in large numbers or creates a sudden need for remote learning, these technologies can be deployed so that teams work together remotely, or so that smaller groups, staggered in time and location, work through the same learning experiences.
If the future is now, it is exciting to consider what comes next. There will invariably be advances in computing hardware, with faster and smaller processors making standalone and wireless headsets more powerful. The evolution of related technologies may have an even bigger impact. Cloud computing and ultra-high-speed connectivity will allow further untethering of headsets from computers (nothing breaks immersion faster than tripping over a wire). Haptic feedback combined with mixed reality will eventually revolutionize how procedures are taught: if the resistance, feel, and force feedback of a procedure are accurately simulated, learners get a more immersive experience and can actually build muscle memory for its fine motor aspects. On the software side, imagine an open-sandbox platform that allows instructors, without coding knowledge or high-level technical expertise, to create simulation cases and modules that run either synchronously, with the ability to adapt the case on the fly, or asynchronously on demand, with scoring and feedback provided to the learner. Instructors could then share or exchange cases on the platform to avoid duplicated effort and to further disseminate high-quality cases and teaching modules. A variety of inputs, such as voice recognition and hand tracking, should be available to avoid spending time teaching learners complicated controller schemes.
It may still take a long time to reach full holodeck or Ready Player One OASIS-level mixed reality for immersive learning. Along the way, I look forward to continuing to leverage the capabilities of existing technologies in the short and medium term to help train the medical professionals of the future.