Winner of the Spring 2022 Capstone Dean's Choice Award: This poster design was presented at UT Austin's School of Information Capstone Open House. It includes an overview of the experiment I conducted and some selected findings. If you would like to know more about this project and my full process, continue to scroll.
Secondary Research Data Analysis
I started this project by collecting 30+ sources across three areas of research: 1) evolution of reading; 2) information processing & reading comprehension; 3) UX of reading digital text on a screen. The sources came from academic & scientific journals, conference proceedings, dissertations, and academic/UX-related websites. Since I wanted to evaluate how reading in the digital environment changed over time with the advancement of technology, my sources ranged in publication date from 1981-2021. This cross-evaluation across four decades of research led me to discover a subcategory within my third research topic, which I labeled: screen inferiority and the introduction of skim/scan techniques. Research discussing screen inferiority was common in the early 2000s; however, as screen display standards advanced with technology (2013-present), research discussing screen inferiority has become less common.
Screenshot of sources I collected and evaluated in order to begin brainstorming and construct an affinity map (tool used: Zotero)
Screenshot of the affinity map I constructed to organize research findings, theories, and concepts discussed in research to develop my literature review (tool used: Miro)
Experimental Research Process
With the support of Dr. Randolph Bias, I designed and conducted a quantitative experiment using a UX user-testing method. This process included creating a test study plan for UT's Institutional Review Board (IRB), preparing experiment materials, developing a moderator script, and recruiting 10 participants for user testing of the Mid-Word-Graying (MWG) text design.
Screenshot of the Excel task_id table created as the answer key
Recruitment flyer posted on social networks
Project Takeaways
Industry Takeaways
Since I didn't have extensive background knowledge in how human information processing occurs or in the types of research studies typically designed in the field, I decided to approach this project with a literature review first. A literature review is a secondary research method used to develop a detailed summary of previous studies conducted on a given topic. Even though a literature review is typically used in academic research settings, I've learned that variations of this method are used in applied research settings as well. For example, market and comparative/competitive analyses are industry iterations of the academic literature review: in both cases we compile and analyze secondary data to synthesize insights when developing proposals, grants, and recommendation reports or slide decks.
Data Nerding
The most exciting part of this research project was the data analysis. For the secondary research data analysis, I used two tools (Zotero and Miro) to help organize and conceptualize the data I was gathering. A UX method I incorporated was affinity mapping, so I could visualize my thought process and determine how to proceed with designing the experiment. Once all user sessions were complete, I had a lot of fun working with the quantitative data and learning how to connect data tables using ID keys to analyze the data more efficiently. The tools I used for data analysis were Excel spreadsheets and Tableau. I'm a geek when it comes to finding data patterns, so I was beyond excited to make connections between participants and create data visualizations to help convey my findings and lead the discussion of recommended next steps.
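The table-joining idea above can be sketched in code. This is a minimal illustration in pandas, not the actual study workflow (which used Excel and Tableau); all table names, column names, and values here are hypothetical stand-ins for the real answer key and session data:

```python
import pandas as pd

# Hypothetical answer key, keyed by task_id (analogous to the Excel answer-key table)
answer_key = pd.DataFrame({
    "task_id": ["T1", "T2", "T3"],
    "correct_answer": ["B", "A", "C"],
})

# Hypothetical per-participant responses with completion times
responses = pd.DataFrame({
    "participant_id": ["P01", "P01", "P02"],
    "task_id": ["T1", "T2", "T1"],
    "answer": ["B", "C", "B"],
    "time_sec": [42.0, 51.5, 38.2],
})

# Connect the two tables on the shared task_id key, then score each response
scored = responses.merge(answer_key, on="task_id")
scored["correct"] = scored["answer"] == scored["correct_answer"]

# Aggregate per participant: error rate and mean completion time
summary = scored.groupby("participant_id").agg(
    error_rate=("correct", lambda s: 1 - s.mean()),
    mean_time_sec=("time_sec", "mean"),
)
```

The same pattern (a lookup table joined to observation rows via a shared ID column) is what VLOOKUP/XLOOKUP accomplishes in Excel and what a data-source relationship accomplishes in Tableau.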
Overcoming Challenges
As most student-led projects play out, I encountered some unexpected challenges that I had to learn to navigate as a researcher. The most challenging (and stressful) task was recruiting. Finding people to volunteer for the study was challenging in itself; however, I did not anticipate having two last-minute no-shows (without warning). This left me scrambling in the final days to find and schedule replacement participants in order to meet my intended quota. Even though I sent out reminders and confirmations before every session, I learned to always schedule a couple of back-up volunteers in case they're needed.

I also encountered a data collection problem during one participant's session. The participant misinterpreted a crucial screener question in the pre-study survey (a question that indicated whether participants qualified for the study). The participant's answer indicated that they passed the screener, so they were scheduled for a session. During the session, however, something was off: their time and error rates looked like outliers (far from average), and without knowing the reason, I couldn't stop or interrupt the session. At the end, the participant disclosed having an eye condition, which was the subject of the misinterpreted screener question. Since this would have disqualified the participant from the study in the first place, I had to omit their data and find another replacement participant. This taught me a valuable lesson: never assume that recruited participants will fully understand screener questions, so when in doubt, explain the question further and give more details or examples.
