Hello! My name is Matt Gottsacker. I am a third-year Computer Science PhD student at the University of Central Florida. I do research in the SREAL Lab advised by Prof. Greg Welch. My research interests include virtual/augmented reality, 3D user interfaces, collaboration, and embodiment. I take an interdisciplinary approach to research, drawing mostly on philosophy, psychology, and media studies to both inspire research questions and design experiments to understand related phenomena. I design and create software and hardware to investigate these questions. I collect and analyze both quantitative and qualitative data to get us closer to answering them.

I made a comic book that functions like a personal statement and describes how my interests in the humanities and computers have come together into my current academic studies in human-computer interaction and virtual reality. I generated all the images with OpenAI's DALL-E 2 except for some photos of the VR hardware on the last page.

Some profiles elsewhere on the web:

[ Google Scholar ] [ Curriculum Vitae ] [ LinkedIn ]

Email: gottsacker [at] knights.ucf.edu


  • September 2022: Conference paper accepted at ACM Symposium on Virtual Reality Software and Technology. "Effects of Environmental Noise Levels on Patient Handoff Communication." pp. 1-10. Link to pre-print.
  • August 2022: Workshop paper accepted at IEEE International Symposium on Mixed and Augmented Reality (ISMAR). "Towards a Desktop–AR Prototyping Framework: Prototyping Cross-Reality Between Desktops and Augmented Reality." pp. 1-8. Link to pre-print.
  • August 2022: Poster accepted at IEEE ISMAR. "Exploring Cues and Signaling to Improve Cross-Reality Interruptions." pp. 1-6. Link to pre-print.
  • Summer 2022: I started a summer research visit at the Computer Graphics and User Interfaces Lab at Columbia University. I worked with Dr. Steve Feiner on a collaborative mixed reality application. We have researched user interface challenges that arise when co-located AR users share and transition among each other's perspectives. Work is ongoing.
  • February 2022: I was named a finalist for the Meta PhD Fellowship.

research highlights

My PhD research focuses on computer-mediated interactions and transitions across different realities, or cross-reality interactions/transitions. This is most easily illustrated using Milgram and Kishino's Reality-Virtuality (RV) Continuum, a seminal piece of virtual reality (VR) research that introduced the figure below to classify immersive virtual systems. On one end of the continuum is the physical world with no virtual content displayed in it. On the other end is a fully virtual environment (i.e., VR). These extremes are set on a continuum because they can be mixed in interesting ways. If your primary environment is the physical world, and you add virtual content to it, you get augmented reality (AR) (think: Pokemon Go). If your primary environment is a virtual world, and you add elements of the physical environment to it, you get augmented virtuality (AV) (imagine being in a VR headset and seeing real-time video of the physical world through a portal).

Milgram and Kishino's Reality-Virtuality Continuum
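The continuum's four reference points can be summarized as a toy classifier based on two questions: which environment is primary, and is content from the other end mixed in? This is only an illustrative sketch; the enum and function names are mine, not from Milgram and Kishino's paper.

```python
from enum import Enum

class Reality(Enum):
    """Reference points along Milgram and Kishino's Reality-Virtuality Continuum."""
    PHYSICAL = "physical world, no virtual content"
    AR = "augmented reality: virtual content added to the physical world"
    AV = "augmented virtuality: physical content added to a virtual world"
    VR = "fully virtual environment"

def classify(primary_is_physical: bool, has_mixed_content: bool) -> Reality:
    """Classify a system by its primary environment and whether content
    from the other end of the continuum is mixed into it."""
    if primary_is_physical:
        return Reality.AR if has_mixed_content else Reality.PHYSICAL
    return Reality.AV if has_mixed_content else Reality.VR
```

For example, `classify(True, True)` yields `Reality.AR` (think: Pokemon Go), while a VR headset showing a real-time video portal of the physical world would be `classify(False, True)`, i.e., augmented virtuality.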

In my research, I have investigated using AV and other techniques to improve interactions between people on opposite ends of the continuum, specifically focusing on when an immersed VR user is interrupted by a non-immersed person nearby in the user's physical environment. Additionally, I am exploring methods for transitioning augmented reality (AR) users' perspectives in a collaborative context, and how users might interact before and after perspective transitions. Each AR user's view on the world is their reality, so these transitions are examples of crossing realities as well.

Some of my research projects in this area:

Figure: the five study conditions: baseline (no virtual avatar), UI notification + passthrough view, non-diegetic avatar, partially diegetic avatar, fully diegetic avatar.

InVR Interruptions: Toward Seamless Cross-Reality Interruptions of VR Users with Diegetic Representations of Interrupters
VR headsets block users' view of the physical environment. How should an interrupter be brought into the VR user's virtual world when needing to interact with them?

[ Paper PDF ] [ IEEE Xplore ] [ Presentation ] [ Google Scholar ]

When someone puts on a virtual reality headset, they are completely isolated from their physical environment, including the people around them. This is by design: VR aims to be the most immersive computing technology. However, there are many cases in which a person wants or needs to interact with someone immersed in VR. Some examples, where "you" are wearing a VR head-mounted display:

  • You are playing a VR game in your home, and your roommate needs to ask what kind of pizza you want.
  • You are working on a 3D design task in VR, and your collaborator needs to tell you about a specification update.
  • You are passing time on a plane in a peaceful virtual environment, and your seat neighbor needs to move around you to use the onboard lavatory.

With the current state of the art, the interrupter cannot fully interact with the VR user unless the user takes off the headset. That process involves a lot of friction, so there should be a communication channel that does not require the user to doff the headset. Additionally, the interruption is often jarring for the user: they are immersed in another world, and when someone taps them on the shoulder or speaks to them, their physical environment abruptly calls them back. This process could be smoother.

With the help of some awesome people in my lab, I designed, implemented, and ran a human-subjects experiment to examine ways to facilitate this interaction. Our work was published in the proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR) in October 2021. You can read the full paper here: Diegetic Representations for Seamless Cross-Reality Interruptions.

The word "diegetic" comes from the study of narrative media. A story element is diegetic if it comes from within the context of the story world itself. For example, a sound in a movie is diegetic if it is produced by something in the scene (e.g., a radio playing a song). A sound is non-diegetic if it is added to the scene purely as a narrative element (e.g., a musical score over a scene where no orchestra is plausibly nearby). I made a 2-minute creative explanation of this concept in my video presentation of this paper. You can watch the whole "live" presentation from that link if you want; my presentation starts around the 20:20 mark.

VR complicates the common definition of diegetic because the virtual world completely surrounds the user. We can talk about audio in mostly the same diegetic terms, but the lines between diegetic and non-diegetic visuals blur a little. We explored how one might vary the diegetic degree of an avatar's appearance to represent a non-VR interrupter to a VR user, and the effects that might have on the VR user's virtual experience and cross-reality interaction experience.

I built a virtual office environment and tasked participants with stacking virtual blocks in a couple different formations—just an easy task that could help them fall into a rhythm pretty quickly. The experimenter interrupted them part-way through each block formation. The way the interrupter was represented to the VR user changed each time:

  • Baseline: The interrupter tapped the user on the shoulder, and the user took off the headset to interact with the interrupter.
  • UI + passthrough view: A UI notification alerted the user that someone nearby wanted to speak with them; the user then activated the headset's passthrough view to see the interrupter through the headset's external cameras.
  • Non-diegetic avatar: The interrupter was represented in the virtual environment as a floating green sphere.
  • Partially diegetic avatar: The interrupter was represented in the virtual environment as an avatar in business clothes with a glowing green outline.
  • Fully diegetic avatar: The interrupter was represented in the virtual environment as an avatar in business clothes.

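The five conditions above differ only in which visual elements the system shows inside the headset. A minimal sketch of that mapping, assuming hypothetical names of my own (this is not the study's actual implementation, which was built in a game engine):

```python
from enum import Enum, auto

class InterrupterRepresentation(Enum):
    """The five study conditions for representing the interrupter."""
    BASELINE = auto()            # no avatar; the user doffs the headset
    UI_PASSTHROUGH = auto()      # UI notification + camera passthrough
    NON_DIEGETIC = auto()        # floating green sphere
    PARTIALLY_DIEGETIC = auto()  # business-clothes avatar with green outline
    FULLY_DIEGETIC = auto()      # business-clothes avatar only

def representation_elements(condition: InterrupterRepresentation) -> list:
    """Return the in-headset visual elements shown for a given condition."""
    return {
        InterrupterRepresentation.BASELINE: [],
        InterrupterRepresentation.UI_PASSTHROUGH: ["notification", "passthrough_view"],
        InterrupterRepresentation.NON_DIEGETIC: ["green_sphere"],
        InterrupterRepresentation.PARTIALLY_DIEGETIC: ["avatar", "green_outline"],
        InterrupterRepresentation.FULLY_DIEGETIC: ["avatar"],
    }[condition]
```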
Based on a Cross-Reality Interaction Experience questionnaire we wrote, we found that participants rated the interaction experience with the interrupter highest for the partially and fully diegetic avatars. We also found that these avatars afforded a reasonably high sense of co-presence with the real-world interrupters, i.e., participants felt they were with a real person as opposed to a purely digital one. We found that participants more often preferred the partially diegetic representations. Their qualitative responses suggest why: several stated that the green outline helped them distinguish the avatar from the rest of the virtual environment; the outline suggested the avatar was not just an NPC (non-player character in a video game). I am interested in further exploring methods for representing cross-reality interactors, especially for interactions that may occur for longer periods (as opposed to brief interruptions).

Additionally, we asked participants about their place illusion, or their sense of "being there" in the virtual environment before, during, and after the interruptions. We found that the avatar conditions led participants to experience a consistent and high sense of place illusion throughout the interruption, whereas the conditions that caused participants to take the headset off led to a drop in place illusion that did not recover immediately after the interruption. I am interested in investigating further how VR users' senses of presence move and change throughout a VR experience.

Figure: Virtual Activity Cues: green = low, yellow = medium, red = high virtual activity.

Interrupting VR Users: Exploring Cues and Signaling to Improve Cross-Reality Interruptions
VR headsets block interrupters' ability to make judgments about VR users' mental state that they typically use to decide when to interrupt. What information should be displayed to interrupters, and how should it be displayed to them?

[ Paper PDF ]

When someone we want to interact with is wearing an HMD, our usual ability to attribute mental states (e.g., busyness, engagement, openness) to the other person is inhibited. We perform this attribution quite often in social interactions—in psychology it is referred to as developing a theory of mind about the other person. VR makes this process difficult because the VR headset blocks much of the user's face and does not share any information about their current task. This presents a problem for people in the VR user's physical environment who need to interact with (i.e., interrupt) the user. The interrupter does not have access to the cues they usually use to determine when and how to interrupt, nor the methods they traditionally use to signal an interruption. For example, they cannot read the VR user's full facial expression to determine how engaged they are with their task, and they cannot announce their intention to interrupt by slowly approaching. This disconnect is a problem that will become increasingly common as VR is used by more people, in more settings (e.g., the workplace), for more purposes, and for longer durations.

To improve this interaction from the interrupter's perspective, I am designing and implementing both hardware and software prototypes, and running user studies to test them. I aim to produce simple modifications to VR headsets that restore these broken communication links and improve these interactions for all parties. To this end, I have explored attaching visual cues to a headset to communicate the VR user's virtual activity level (i.e., their engagement with their virtual task). I have also explored signaling mechanisms such as a gesture sensor mounted on a headset to allow an interrupter to initiate an interruption through a wave gesture.
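At its core, the visual cue maps an estimate of the user's engagement onto the prototype's three colors: green for low, yellow for medium, and red for high virtual activity. A minimal sketch, assuming a normalized activity estimate and thresholds of my own choosing (the actual prototype's inputs and thresholds may differ):

```python
def activity_cue_color(activity: float) -> str:
    """Map a normalized virtual-activity estimate in [0.0, 1.0] to the
    headset-mounted cue color: green = low, yellow = medium, red = high.
    The 1/3 and 2/3 thresholds here are illustrative, not the prototype's."""
    if not 0.0 <= activity <= 1.0:
        raise ValueError("activity must be in [0.0, 1.0]")
    if activity < 1 / 3:
        return "green"
    if activity < 2 / 3:
        return "yellow"
    return "red"
```

An interrupter glancing at the headset could then read, say, `activity_cue_color(0.9)` as a red light: the user is highly engaged, so now is a bad time to interrupt.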

I presented a poster at IEEE ISMAR 2022 about an initial prototype and pilot study for this project. Through quantitative and qualitative data analysis, the pilot study suggested that the Virtual Activity Cues were helpful in providing interrupters with information about the VR user's mental states and useful for deciding when to interrupt. The gesture system did not seem to be helpful. Going forward, I am planning a user study to understand the information essential for interrupters to understand the VR user's mental states and interruptibility. Then, I will iterate on the design of this prototype to home in on cues that are most useful for communicating when and how one should interrupt.

recent collaborators

  • Robbe Cools (KU Leuven)
  • Dr. Steven Feiner (CGUI Lab, Columbia University)
  • Dr. Pamela Wisniewski (Vanderbilt University)
  • Dr. Stephanie Carnell (VAR Lab, University of Central Florida)
  • Dr. Nahal Norouzi (Meta Reality Labs)
  • Raiffa Syamil (VAR Lab, University of Central Florida)
  • Hiroshi Furuya (SREAL, University of Central Florida)
  • Zubin Choudhary (SREAL, University of Central Florida)

contact me

e-mail me at matt [dot] gottsacker [at] gmail [dot] com. Feedback and collaboration ideas are welcome.

user data

The only cookie used for this website checks whether you have already seen the banner about user data. If you have seen it, you won't see it again (unless you clear your cookies).

I use the free and open source GoatCounter for web analytics. I don't store any data on you, but I like seeing from where in the world people visit this webspace. In fact, you can see everything that I can see by visiting this website's public stats site.