College of Social and Behavioral Sciences
109 Visual attention and memory for landmarks during real-world navigation
Lillian MacKinney
Faculty Mentor: Cory Inman (Psychology, University of Utah)
I reviewed, edited, and re-reviewed my application to SPUR. I knew I wanted to be part of Dr. Cory Inman’s lab. His lab focuses on memory formation, and I would be working on a project examining how visual attention during real-world navigation influences the development of sequential memory. Although I knew the premise of the research, neither Dr. Inman nor I knew yet how the data analysis would be executed.
Our study went like this:
Five participants implanted with NeuroPace Responsive Neurostimulators, which recorded local field potentials (LFPs) from their temporal lobes, were tasked with learning a complex 0.75-mile route well enough to navigate it in the opposite direction. We cataloged 150 landmarks that were visible along the route. Participants walked the route seven to eight times across two days, with the first walk guided (encoding) and the remaining six to seven walks navigated by the participants themselves (retrieval). They then completed a landmark recognition task, in which they were asked to distinguish the 150 landmarks that appeared on the route from 150 similar landmarks that did not. For our analysis, we compared participants’ visual attention to each landmark, measured by fixation duration, saccade frequency, and number of fixations, to their performance on the recognition task. We created dynamic areas of interest in the first-person videos and began data processing and analysis for each participant. If our hypotheses are supported, our findings would show that the amount of visual attention paid to landmarks during real-world navigation influences subsequent memory and modulates medial and lateral temporal lobe activity during real-world memory encoding.
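To make the analysis concrete, here is a minimal sketch, in Python, of the kind of comparison described above: aggregating fixation events into per-landmark attention metrics, then contrasting attention for landmarks that were later recognized versus missed. This is an illustration only, not the lab's actual pipeline; the field names and toy data are hypothetical.

```python
# Illustrative sketch (not the lab's actual code): aggregate per-landmark
# gaze metrics and relate them to recognition-task outcomes.
from collections import defaultdict


def summarize_attention(fixations):
    """Aggregate fixation events into per-landmark attention metrics.

    `fixations` is a list of dicts with hypothetical keys:
    'landmark' (area-of-interest name) and 'duration_ms' (one entry
    per fixation event on that landmark).
    """
    summary = defaultdict(lambda: {"n_fixations": 0, "total_duration_ms": 0})
    for fix in fixations:
        entry = summary[fix["landmark"]]
        entry["n_fixations"] += 1
        entry["total_duration_ms"] += fix["duration_ms"]
    return dict(summary)


def compare_by_recognition(summary, recognized):
    """Mean total fixation duration for recognized vs. missed landmarks."""
    hit = [s["total_duration_ms"] for lm, s in summary.items() if lm in recognized]
    miss = [s["total_duration_ms"] for lm, s in summary.items() if lm not in recognized]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(hit), mean(miss)


# Toy example: two fixations on one landmark, one on another.
fixations = [
    {"landmark": "red_mailbox", "duration_ms": 400},
    {"landmark": "red_mailbox", "duration_ms": 250},
    {"landmark": "mural", "duration_ms": 120},
]
summary = summarize_attention(fixations)
hit_mean, miss_mean = compare_by_recognition(summary, recognized={"red_mailbox"})
```

The hypothesis sketched here predicts that `hit_mean` (attention to later-recognized landmarks) would tend to exceed `miss_mean` across participants.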
When I arrived at SPUR, a graduate student in the lab, Lensky Augustin, guided me through the beginning stages of the research. Though Lensky was new to exploring the data himself, he was in it with me. We both wanted to understand the program we were using to extract the data.
To even begin understanding how visual attention was associated with memory, we first needed to outline every landmark in the videos we had collected. That was my job. I spent countless hours making sure the outlines accurately captured the entire landmark in every frame of the video. The program we were using, Blickshift Analytics, would then compute a long list of statistics: fixation duration, number of fixations, saccade length, saccade frequency, and much more. But Blickshift was in its beta phase, having come out only four months prior, and the program was unreliable. Shutdowns would occur every ten minutes, or every time a landmark was renamed. The program would glitch, losing track of landmarks altogether. Once, after I had completed annotations for two videos (about a week's worth of work), the program updated and lost all prior work because the update was not backward compatible. As time went on, the Blickshift team released encouraging updates that sped up the process, making the end goal feel more reachable. Blickshift wasn't the only issue I ran into: the server I logged into to access it would sometimes become inaccessible for hours at a time, slowing my progress.
I know it sounds like I am endlessly complaining. But this experience made me realize something I hadn't encountered before. Research will always come with setbacks like these, whether they are slow data analysis programs or not recruiting enough participants for data collection. While timelines and deadlines help structure research, those timelines sometimes need to be extended due to unforeseen circumstances. Most importantly, that's okay! As a stickler for deadlines, I find it hard to shift timelines, but it's something I need to accept as I continue my journey through research. By experiencing these hiccups firsthand, I can better understand what realistic research looks like, how to set expectations, and how to adjust them along the way.
When I began this research this summer, the goal was to get all 32 of the videos annotated and analyzed. Now, as I leave, I have completed six video annotations. I'm proud of what I was able to achieve in this lab, even though I didn't quite reach that initial goal. I learned so much working with Dr. Inman and his team, and about my own research process. I'm incredibly grateful for such a wonderful opportunity and a wonderful group of people to get to know. I especially want to thank Lensky Augustin for sticking by my side and teaching me new things every step of the way. The other SPUR students are also one of a kind. Everyone, with their different perspectives and histories, brought so much joy, learning, and growth to this summer. I am forever thankful for them.