As a product designer, my biggest fear is having my work reduced to its veneer.
I have lost count of how many times my designs have received a “Looks great!” remark from an engineer. Despite the enormous strides we’ve made to establish ourselves as custodians of the end-to-end user experience, we’re still frequently judged by our ability to deliver UIs, interactions, and whatever else is apparent to the eye. Our final deliverables are still largely pixel-based mockups, and the rise of Dribbble, Behance, and picturesque UI shots only reinforces this surface-level understanding of the work that we do.
However, times are changing. With the rise of voice and generative design, it is no longer enough for designers to just design experiences that are appealing and easy to use. We should seek to go past intuitive design and move towards assistive design. How might we design experiences that not only satisfy, but also inform and predict user needs?
The assistive nature of great software
Great software asks for little but picks up a lot. Instead of requiring users to take an explicit action—clicking a button, typing in a field, or interacting with the interface—great software draws patterns from existing user behaviors and uses that data to help users achieve their goals. Sometimes, the software’s assistance is so great that users barely have to deliberate. Take Gmail’s Smart Reply, for example:
[Image: Gmail’s Smart Reply suggesting responses to an email. Source: Google’s The Keyword]
Without going too in-depth, Smart Reply uses a machine learning approach that combines the hierarchical and semantic structure of language to predict possible responses to cues in the message and synthesize them into final suggestions. Consider the above image: Smart Reply can pick up on the word “great,” which sets a positive tone for possible responses. It can also understand the sequencing of the email body text, with the phrase “follow up” followed by “Monday or Tuesday,” which informs its subsequent suggestions.
The results are the short, simple reply options you see at the bottom of the page, saving you the time of drafting a response. With very little UI, Smart Reply is thus able to provide valuable assistance to users. It also blends natively into the conventional reply flow, as the chosen smart reply becomes the email’s body text before you hit send. This unostentatious feature drove the response rate up 12% upon its initial release in Inbox by Gmail.
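To make the idea concrete, here is a toy sketch of cue-based reply suggestion. This is not Google’s actual approach (which relies on hierarchical neural models over the full message); the cue phrases and canned replies below are illustrative assumptions only.

```python
# Toy cue-to-reply table. In a real system these would be learned,
# not hand-written; this is purely an illustrative assumption.
CUES = {
    "monday or tuesday": ["Monday works for me.", "Let's do Tuesday.", "Either day works!"],
    "great": ["Sounds great!", "Thanks so much!"],
}

def suggest_replies(email_body: str, limit: int = 3) -> list[str]:
    """Return up to `limit` canned replies whose cue phrases appear in the email."""
    body = email_body.lower()
    suggestions = []
    for cue, replies in CUES.items():
        if cue in body:
            suggestions.extend(replies)
    return suggestions[:limit]
```

Even this crude version shows the underlying shape of the feature: the software asks the user for nothing, infers intent from what is already there, and offers a shortcut.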
The assistive nature of great software is not a new concept (remember Clippy?), but it has been applied to great success in recent years. I attribute its success to the rise of small screens and, especially, screenless modes of human-computer interaction. The reduction and eventual absence of visible design elements forces designers to reckon solely with the logic of user flows and devise experiences that are assistive and predictive in nature. What else can you do when you can’t ask someone to take action? You make an educated guess at their intentions instead.
Leaning on simple heuristics
If you work in a small to medium-sized product team like I do, you most likely don’t have a team of machine learning scientists at your disposal. Don’t let that be a blocker to designing great assistive software. You can still start thinking about the traces of data that users leave behind in their journey. You can then connect these traces to user attributes and form informed hypotheses about when and why a behavior occurs. With some very light data analysis, you can start identifying valuable usage patterns.
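The “very light data analysis” can be genuinely light. Here is a minimal sketch, assuming a hypothetical event log of (user segment, action) pairs pulled from basic analytics; the segment and action names are invented for illustration.

```python
from collections import Counter

# Hypothetical event log a small team might export from basic analytics.
# Segment and action names are illustrative assumptions, not real data.
events = [
    ("first_time_creator", "used_superlative"),
    ("first_time_creator", "used_superlative"),
    ("repeat_creator", "used_superlative"),
    ("first_time_creator", "neutral_title"),
    ("repeat_creator", "neutral_title"),
    ("repeat_creator", "neutral_title"),
]

def rate_by_segment(events, action):
    """Share of each segment's events matching `action`: a light-touch pattern check."""
    totals, hits = Counter(), Counter()
    for segment, act in events:
        totals[segment] += 1
        if act == action:
            hits[segment] += 1
    return {seg: hits[seg] / totals[seg] for seg in totals}
```

A simple rate comparison like this is often enough to surface a hypothesis worth testing, no data science team required.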
Developed by Kickstarter’s System Integrity team, the superlative spotter came about this way.
We observed that many creators used superlatives like “world’s best,” “world’s fastest,” or “the best” to name their projects. This definitive language created unrealistic backer expectations that the final product was guaranteed. Through data analysis, we related this manual observation to other important platform attributes like backer trust and success rate. Using a third-party library that detects parts of speech, we built the superlative spotter, which flags exaggerated language and makes appropriate suggestions in real time as a creator types in a project name. This small but mighty feature reduced the use of superlatives in Kickstarter project titles by 80%.
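A feature like this can start from very simple rules. Below is a minimal, rule-based sketch of a superlative spotter; Kickstarter’s actual implementation used a third-party part-of-speech library, so the regex patterns and function names here are purely hypothetical.

```python
import re

# Hand-written patterns covering the superlative phrases mentioned above.
# A real implementation would lean on a part-of-speech tagger instead.
SUPERLATIVE_PATTERNS = [
    r"\bworld'?s\s+\w+est\b",   # "world's best", "world's fastest"
    r"\bthe\s+best\b",
    r"\b\w+est\s+ever\b",
]

def flag_superlatives(title: str) -> list:
    """Return the superlative phrases found in a project title."""
    matches = []
    for pattern in SUPERLATIVE_PATTERNS:
        matches.extend(m.group(0) for m in re.finditer(pattern, title, re.IGNORECASE))
    return matches

def suggest(title: str):
    """Suggest a softer title if superlatives are detected, else None."""
    flagged = flag_superlatives(title)
    if not flagged:
        return None
    phrases = ", ".join(repr(f) for f in flagged)
    return f"Consider removing {phrases}: definitive claims can set unrealistic backer expectations."
```

Note how the design work here is not in the pattern matching but in the moment of intervention: the suggestion appears in real time, as the creator types, before the expectation-setting title ever reaches backers.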
Lately, I find myself reviewing many portfolios from friends and acquaintances trying to break into the UX field. They all showcase a great deal of creative visual design capabilities, with beautifully illustrated UIs and fancy animation prototypes. To them, I say, “Never design just pretty little apps.”
The hallmark of a good design is an experience so frictionless that users barely have to interact with it. Sometimes, this means that the design goes unnoticed because the designer has successfully stripped away the input, the click, and all the processing work that’s normally visible to the eye. When crafting a digital experience, we should constantly strive to automate user flows and steer away from traditional form input modes of interaction. This is how we deliver platform-agnostic, timeless design solutions.
What comes after a decade of pixel-pushing? Voice User Interface (VUI) has been a major player for some time, but it still remains uncharted territory for many product designers. Voice is a natural progression of human-computer interaction modes. Most importantly, voice ushers in a new era of designing with machines, a practice designers aren’t often associated with. It’s high time we explore other frontiers of user experience.