Trella: Post-Load Rating
In-Depth Post-Shipment Feedback
Enhancing post-shipment feedback to unlock a vital avenue for drivers to share their experiences and give us actionable input.
CONTEXT
Trella is a trucking marketplace that removes the trucking hassle by connecting shippers with qualified carriers to move and manage loads seamlessly.
MY ROLE
Responsible for the discovery and design (including prototyping & user testing).
THE PROBLEM
The initial post-load rating approach wasn't flexible enough to let our carriers specify which areas negatively impacted their experience. It also didn't provide internal teams with actionable insights.
We needed to change the experience from the previous binary thumbs up/thumbs down to a scale-based one, where users could also select from a list of options to provide more detail. The new approach would help us identify what our carriers liked and, more importantly, what they didn't like and where the improvement opportunities were.
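To make the new model concrete, here's a minimal sketch of what the revamped rating payload could look like. This is for illustration only: the case study doesn't specify the app's stack, and every name and category below is hypothetical.

```kotlin
// Hypothetical model for the revamped rating: an overall 1-5 score
// plus the issue categories the carrier selected (empty when the
// rating is positive and no follow-up step is shown).
data class LoadRating(
    val loadId: String,
    val stars: Int,                        // 1 (worst) to 5 (best)
    val issues: List<IssueCategory> = emptyList()
)

// Illustrative categories only; the real list would be drawn from
// the issues internal teams need to act on.
enum class IssueCategory {
    PICKUP_WAITING_TIME,
    DROPOFF_WAITING_TIME,
    PAYMENT_DELAY,
    FACILITY_TREATMENT
}
```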
Many of the driver app's users aren't tech-savvy, and some even struggle with literacy. One of the biggest design challenges in revamping the experience was therefore making sure our users didn't find it confusing and weren't overwhelmed with too much content. The new experience also had to be quick to complete, so users wouldn't skip it out of frustration every time.
THE OLD APPROACH
The initial design used at Trella featured a dialog that asked users to give either a thumbs up or thumbs down to 3 specific categories.
However, this method didn't give internal teams enough data to identify all the issues that might have occurred throughout the load. It also didn't let users tell us what else they disliked beyond the 3 categories they were asked about.
We knew we needed to revamp the experience to include a scale-based input and to ask users to specify which areas they were unsatisfied with.
EXPLORATIONS
iteration #1: stars vs emojis
We drafted 2 versions of scale-based inputs to test with our users: the first used a scale of emojis ranging from sad to happy, and the second used stars representing a scale of 1 to 5.
Our user tests showed a relatively small difference in clarity in favour of the stars scale, but overall we found no significant difference in how users understood or interacted with the two scales.
iteration #2: chips vs blocks
After collecting the overall load rating, we needed more specific data about the main issues our carriers might have faced throughout the load. We decided that users who selected 3 or fewer stars would be prompted to provide more information.
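As a minimal sketch of this rule (assuming a Kotlin codebase, which the case study doesn't specify), the follow-up step could be gated like this; all names are hypothetical:

```kotlin
// Ratings of 3 stars or fewer trigger the issue-selection step;
// 4 and 5 stars submit immediately.
const val FOLLOW_UP_THRESHOLD = 3

fun needsFollowUp(stars: Int): Boolean {
    require(stars in 1..5) { "stars must be on the 1-5 scale" }
    return stars <= FOLLOW_UP_THRESHOLD
}

fun main() {
    println(needsFollowUp(5)) // false: submit straight away
    println(needsFollowUp(2)) // true: ask which areas went wrong
}
```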
To keep user effort minimal, we'd show users a list of categories to choose from. We experimented with 2 designs for this section: the first used horizontally scrollable chips, while the second used larger blocks/buttons.
At this point, we conducted a second round of user tests, testing one version with each cohort and monitoring how users interacted with it. A key factor to note is that all the user tests were conducted outdoors at pickup/drop-off facilities, where sunlight affected the screen's legibility. This was intentional, to ensure the tests were performed in the same environment as the real situation.
The results showed that users in both groups behaved similarly, with one key difference: none of the users in the blocks cohort submitted the rating without selecting an issue. The larger tap targets on the blocks meant the bottom sheet took up most of the screen's vertical space, and the choices were clearer and easier to scan.
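For illustration only, here's roughly how the winning blocks variant could be expressed, assuming a Jetpack Compose UI (an assumption; the case study doesn't name the framework):

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.heightIn
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.OutlinedButton
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// Sketch of the "blocks" variant: full-width buttons with a generous
// minimum height, so each choice is a large, easily scannable tap
// target, even in bright outdoor conditions.
@Composable
fun IssueBlocks(options: List<String>, onSelect: (String) -> Unit) {
    Column(modifier = Modifier.fillMaxWidth().padding(16.dp)) {
        options.forEach { option ->
            OutlinedButton(
                onClick = { onSelect(option) },
                modifier = Modifier
                    .fillMaxWidth()
                    .heightIn(min = 56.dp) // large target vs. small chips
                    .padding(vertical = 4.dp)
            ) {
                Text(option)
            }
        }
    }
}
```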