How might we design an online experience that helps the visually impaired learn about homes on the market?
Because home buying is a predominantly visual process, we decided to focus on creating an accessible online platform that lets visually impaired users search for homes efficiently and effectively. This includes effective navigation and flow when using screen reader software or other assistive web tools, inclusive use of text and alt text, and functional information architecture.
Client: Experience Director of Rocket Homes
Role: UX Designer
Tools: Figma, Webflow, Miro, Microsoft Office
Timeline: 8 Months (September 2023 - April 2024)
The competitive landscape in the real estate market consists of competitors' digital products: their websites and apps. The team examined the website and app interfaces of Rocket Homes' largest competitors to see how they compare to Rocket Homes' products. We walked through each competitor's home search process and ran screen-reading tests with NVDA to see how they handle accessibility.
Research participants were scouted through personal connections within the team. The target user group we looked for had the following criteria:
We met with 10 participants. Each session lasted between 30 and 105 minutes and was conducted remotely or in person, depending on the participant's preference.
The primary user research method was moderated, semi-structured user interviews. This enabled the team to meet directly with the target user group and begin to truly empathize with them. We chose interviews because we needed qualitative data to start understanding our users' needs, wants, pain points, and the challenges they face. The finalized questions the team decided upon ranged from:
After synthesizing the data from our interviews, we began to discover pain points and opportunities.
An empathy map was created after synthesizing the interview comments, sorted into pain points, positive points, comments, and suggestions. The map shows many pain points under Says, Thinks, and Feels, but many positive notes under Does. This suggests that our user group has workaround tactics, yet they still feel the underlying system itself has problems.
Personas and journey maps were created after user research. These artifacts helped us better visualize our user group and their needs. They also let us see which parts of the home search process were more difficult than others.
After gathering information on our users and the problems they faced, we were finally ready to begin prototyping. The team began by sketching the priority components that the users would interact with the majority of the time. This meant focusing on the search features, listing information, information architecture, and overall smoothness of navigation through the site.
Research participants wanted their search filters on the same page where they type their search query, so we created a basic search: a search bar where users type a location, with a few filters they can add to their search below it. In contrast, we also created a guided search that takes users through each filter one at a time to further narrow down their home search.
Instead of traditional card sorting and a more standard usability test with a lo-fi prototype, we opted to run a usability study through the framework of a UX Sailboat activity. A UX Sailboat activity is typically a team-oriented agile exercise that analyzes the current state of the design to uncover themes to improve upon.
The UX sailboat activity consists of:
A boat: This is your team/project.
An island: This is the goal you're working towards. It can be the specific features designed in a sprint or a more operational-level goal.
Wind in your sails: What propels your team/project forward.
Rocks: The risks your project faces in the future as it reaches the goal.
Anchor: The problems and challenges that delayed the sprint/project.
Instead of using this framework to reflect on the project as a whole, we used it as a guideline to come up with interview questions that helped us understand the target population’s strengths and difficulties when navigating online experiences.
With this approach, the team thought of scenarios to brief users on; this helps differentiate user research interviews from usability testing interviews.
An example question would be:
When actively searching for a home on a real estate platform using the search function…
- What features on the interface could help you achieve this task more easily? (Goal)
- In what order would you like those features to appear? (Wind/Open card sort)
- What makes this task difficult for you? (Anchor)
After completing both prototypes, we reached out to our participant pool to conduct a round of open, moderated usability testing sessions. For this round, we decided it was best to let users roam freely within the prototypes while the facilitator asked follow-up questions to their responses. We had found that our users wanted a space to voice their feedback and concerns, a common theme in our past sessions.
While prototyping our design in Figma, we found that the software is not compatible with screen reader technology. The most common screen readers are JAWS and NVDA, and we needed to test with at least one of the screen readers our users actually rely on. To address this, the team selected Webflow as our primary testing tool because it can create screen-reader-friendly designs that communicate effectively with users. Figma was still used to create a more visually polished prototype, and both prototypes maintained the same information architecture.
We created a template in Miro, incorporating screenshots of each page or section of our prototype. Notes were added to specific elements of the screenshots to capture user feedback, categorizing it as positive, negative, neutral, or containing questions/suggestions. The group then organized these notes, which helped uncover themes that informed our design decisions.
The guided and basic searches confused users because they contained the same information, adding unneeded complexity to the interface. The filters themselves were simple to use; they just weren't needed in two places. While 4 out of 5 users said the guided search was fine, there were concerns about why it warranted its own separate section. One user, who is fully blind, stated:
“What’s the difference between the landing page where I can search, and this guided search that asks me the same questions?”
The intent was to break the basic filter process into one question at a time, limiting the possibility of error or getting lost. But we discovered that when a form is communicative and properly formatted, a user is unlikely to have issues. Viewing the design through our own sighted eyes, limiting the options and page size seemed simpler; in reality it created more interaction points that drew the process out.
After narrowing down the search and proceeding to results, participants were concerned that the extended filters sat between the descriptive heading and the results shown. While the extended filters are valuable, they should be placed in an expandable component that hides its headings from the screen reader when collapsed, and also copied onto the landing page alongside the other filters, to avoid confusion and let users proceed to viewing results.
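A minimal sketch of what such an expandable filter section could look like. The function and element names here are hypothetical, not taken from the actual build; the key idea is that the `hidden` attribute removes the collapsed content, including its headings, from the screen reader's reading order, while `aria-expanded` announces the toggle's state.

```typescript
// Hypothetical markup generator for an expandable "More filters" section.
// When collapsed, the hidden attribute removes the headings and controls
// inside the panel from the accessibility tree entirely.
function renderMoreFilters(expanded: boolean): string {
  return `
    <button aria-expanded="${expanded}" aria-controls="more-filters">
      More filters
    </button>
    <div id="more-filters" ${expanded ? "" : "hidden"}>
      <h3>Extended filters</h3>
      <!-- extended filter controls go here -->
    </div>`;
}
```

This follows the common disclosure pattern: the button announces "collapsed" or "expanded," and collapsed content is skipped rather than read over on the way to the results.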
Alt text was not provided in some aspects of the design, and participants expressed their concern. This reiterated past concerns from previous research and ideation, and once again brought to our attention the importance of alt text. Many participants expressed their desire for highly descriptive alt text in pictures of homes and agents.
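As a sketch of the difference participants described, the image path and both descriptions below are invented for illustration, not copied from the site:

```typescript
// Invented example contrasting terse alt text with the highly descriptive
// alt text participants asked for on photos of homes and agents.
const terseAlt = `<img src="listing-01.jpg" alt="House">`;
const descriptiveAlt = `<img src="listing-01.jpg"
  alt="Two-story brick home with a covered front porch, white trim,
       and a paved driveway leading to a two-car garage">`;
```

For a screen reader user, the first version conveys almost nothing about the listing; the second carries the visual information sighted users get from the photo.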
Users mentioned that they expect the "saved" button to be in the navigation. The only critical feedback concerned the term "saved" itself: users were unsure what that page would contain. Once they clicked onto the page and realized it held their saved searches and listings, the reaction was positive.
Many interaction components fell short of our participants' standards. The main concern was the filter interactions and the data entry used to narrow down a search. Some inputs did not make sense for the question being asked, such as allowing negative numbers in fields that required a range. The specificity of kilometers versus meters in filters that required units was another concern, and the ability to switch between the two was suggested. Multiple participants also noted that buttons in the prototype were announced as links by screen readers, which confused them and gave the impression of leaving the site.
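The link-versus-button confusion comes from role semantics: an `<a href>` element maps to the link role, so screen readers announce it as a link and imply navigation away from the page, while a native `<button>` is announced as a button. A minimal illustration, with hypothetical labels and a numeric input constrained against the negative values participants were able to enter:

```typescript
// Illustrative markup only; labels and attributes are hypothetical.
// Styling an anchor to look like a button does not change how screen
// readers announce it; a native <button> carries the correct role.
const announcedAsLink = `<a href="#" class="btn">Apply filters</a>`;
const announcedAsButton = `<button type="button">Apply filters</button>`;

// Constraining a numeric range input with min="0" blocks the negative
// values that confused participants during testing.
const distanceFilter = `<input type="number" min="0" step="0.1"
  aria-label="Search radius in kilometers">`;
```

Using native elements for their intended purpose gives screen reader users the correct role announcement for free, without extra ARIA attributes.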
Below are a few major screens with notes on the design decisions made.