This week I focused on wrapping my head around how this auditory social media feed would work. How would the user move through the space, listening to their feed and recording their own posts?
I began with a user flow. The way I imagine this working is by mapping it to an experience the user is already extremely familiar with: Instagram. The user would open the app to their audio feed, free to listen immediately, using the built-in features of their smartphone’s headphone remote: click to play, click again to pause, double-click to skip, triple-click to repeat. This is hopefully especially useful for my visually impaired target users, since I’m aiming to minimize visual controls. However, because this app is meant to be inclusive, I’ll still include visual controls in the UI.
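To make the remote mapping concrete, here’s a minimal sketch of how click counts could translate into feed actions. The names (`FeedAction`, `remoteClickToAction`) are my own placeholders, not any real headphone API:

```typescript
// Hypothetical mapping from headphone-remote click counts to feed actions.
type FeedAction = "play" | "pause" | "skip" | "repeat";

function remoteClickToAction(clicks: number, isPlaying: boolean): FeedAction | null {
  switch (clicks) {
    case 1:
      return isPlaying ? "pause" : "play"; // single click toggles play/pause
    case 2:
      return "skip";   // double-click skips to the next post
    case 3:
      return "repeat"; // triple-click replays the current post
    default:
      return null;     // ignore anything else
  }
}
```

The nice thing about this shape is that the visual buttons in the UI can call the exact same actions, so sighted and non-sighted users share one control model.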
If the user’s intention is to immediately record and post (let’s say she just saw the most amazing shiba inu and needs to gush), she can click the add button to begin recording, saving her feed for later. There is also an option to re-record if she happens to be haunted by the sound of her own voice, like most of us often are 😬
From there, I moved into wireframing. I wanted to keep the UI extremely rudimentary, mainly because I don’t want the visuals to be a focus at all. I don’t want my visually impaired users to feel like they might be missing out on something. The feed timeline will be chronological (sorry, Instagram algorithm that everyone despises). The most recent post will play first, automatically transitioning to the next one. The audio will also state the username before each recording, delineating one post from another for visually impaired and sighted users alike. The user can pause, play, repeat, and skip at any time. Each post is capped at 15 seconds.
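The feed logic above can be sketched as a simple playlist builder. Everything here (the `Post` shape, the "says:" announcement wording) is an assumption for illustration, not a finalized design:

```typescript
// Hypothetical sketch of the chronological feed queue.
interface Post {
  username: string;
  audioUrl: string;
  createdAt: number;   // Unix timestamp, used for chronological ordering
  durationSec: number; // posts are capped at 15 seconds
}

const MAX_POST_SECONDS = 15;

// Most recent first; each entry is prefixed with a spoken username announcement
// so listeners can tell where one post ends and the next begins.
function buildPlaylist(posts: Post[]): { announcement: string; audioUrl: string }[] {
  return [...posts]
    .filter((p) => p.durationSec <= MAX_POST_SECONDS)
    .sort((a, b) => b.createdAt - a.createdAt)
    .map((p) => ({ announcement: `${p.username} says:`, audioUrl: p.audioUrl }));
}
```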
Users can also like posts, search for users, and return to the feed at any time. I’m hoping to implement voice controls for this. For example, the user could say into the mic: “Search for Claire Kearney-Volpe”.
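A rough sketch of how a spoken transcript might map to those actions, assuming the speech-to-text step already happened. The command phrasings here are my own guesses, not a finalized grammar:

```typescript
// Hypothetical voice-command parser over a speech-to-text transcript.
type Command =
  | { kind: "search"; query: string }
  | { kind: "like" }
  | { kind: "feed" }
  | { kind: "unknown" };

function parseVoiceCommand(transcript: string): Command {
  const text = transcript.trim().toLowerCase();
  const search = text.match(/^search for (.+)$/);
  if (search) return { kind: "search", query: search[1] };
  if (text === "like") return { kind: "like" };
  if (text === "feed" || text === "go to feed") return { kind: "feed" };
  return { kind: "unknown" };
}
```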
To record, the user can simply tap the add button or hold down the headphone remote. The app will never share anything to the feed without confirmation, which should help the user feel more secure.
Finally, after the user has approved their recording for public sharing (they can listen, edit, and re-record), they can add it to their feed.
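The record-review-confirm flow above can be captured as a tiny state machine, where confirming is the only transition that reaches the public feed. The state names and events are my own sketch:

```typescript
// Hypothetical recording flow: nothing is posted without explicit confirmation.
type RecState = "idle" | "recording" | "review" | "posted";
type RecEvent = "record" | "stop" | "rerecord" | "confirm";

function nextState(state: RecState, event: RecEvent): RecState {
  switch (state) {
    case "idle":
      return event === "record" ? "recording" : state;
    case "recording":
      return event === "stop" ? "review" : state;
    case "review":
      if (event === "rerecord") return "recording"; // haunted by your own voice? try again
      if (event === "confirm") return "posted";     // the only path to the public feed
      return state;
    default:
      return state; // once posted, this flow is done
  }
}
```

Invalid events (like confirming before anything was recorded) simply leave the state unchanged, which keeps accidental taps from ever publishing audio.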
It’s a unique experience to design something with visually impaired users at the forefront, where most of the UX is auditory. I think that element benefits all users, who can’t be looking down at their phones when they’re on the go, crossing busy streets. However, I still felt the comfortable pull of a visual interface, and I think my main challenge will be finding a happy medium between the two.