After two years of use, the company could tell that the previously introduced paradigm had improved the product's overall usability, but users were still confused about the workflow.
After tackling the welcome experience, we launched into a complete overhaul of the UX, moving from a left-nav Master-Detail paradigm to a top-nav Subway-Map approach.
The main challenge with rolling out a new UX was that it had to happen in both a Rails and a Merb application. The two do not share a layout paradigm, so the approach had to be adapted to each and yet kept general enough not to incur tech debt beyond what was already present.
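To make the divergence concrete, here is a rough sketch (with generic controller code, not the actual apps'): Rails declares and resolves layouts one way, Merb another, so anything shared had to be bridged per framework.

```ruby
# Rails side: layout templates live under app/views/layouts/ (plural)
class ApplicationController < ActionController::Base
  layout "application"  # app/views/layouts/application.html.erb
end

# Merb side: layout templates live under app/views/layout/ (singular)
class Application < Merb::Controller
  layout :application   # app/views/layout/application.html.erb
end
```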
I served as Tech Lead: I architected a solution, led junior engineers through the implementation, and managed interactions and expectations with Product and Design.
Following is a general impression of where we began and where we wound up.
Results
Led engineering efforts around a complete UX overhaul of the company’s most important customer interaction.
Our NUX had seen a revamp in the previous major refresh, but we tackled it for a re-design in order to:

- give our newly-onboarded Visual Designer an opportunity to get his hands dirty crafting a visual identity direction, and
- get a sense of the effort involved in introducing a new layout, since it would eventually affect two very heterogeneous apps.
What it was like before
Here’s a first pass in which I took the Designer’s vision and implemented it as a new layout, complete with a Zendesk API integration that creates a ticket on form post.
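A minimal sketch of that integration, assuming the official zendesk_api gem (the credentials, field names, and helper shown are placeholders, not the actual Kinsights code):

```ruby
require 'zendesk_api'

client = ZendeskAPI::Client.new do |config|
  config.url      = "https://example.zendesk.com/api/v2"  # placeholder subdomain
  config.username = "agent@example.com"                   # placeholder agent
  config.token    = ENV["ZENDESK_API_TOKEN"]
end

# Invoked from the controller action handling the form POST
def create_support_ticket(client, params)
  ZendeskAPI::Ticket.create!(
    client,
    subject:   params[:subject],
    comment:   { value: params[:message] },
    requester: { name: params[:name], email: params[:email] }
  )
end
```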
To facilitate a key account's migration from Yahoo Groups to Kinsights' group functionality, I created a Command Line Interface (CLI) tool that took a paginated approach to parsing and importing the member list, as well as parsing and importing the archived messages (covering the previous seven years).
Many iterations were necessary, and several heuristic methods were applied, to ensure the cleanest possible import of the messages.
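In condensed, hypothetical form (the Member class, file format, and the single heuristic shown are illustrative stand-ins for the real ones), the paginated import loop looked something like:

```ruby
require 'csv'

PAGE_SIZE = 100

# One of several heuristics for the undocumented ways addresses were
# encoded, e.g. "Jane Doe <jane@example.com>" vs. a bare address.
def extract_email(raw)
  candidate = raw[/<([^>]+)>/, 1] || raw.strip
  candidate if candidate =~ /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/
end

members = CSV.read("members_export.csv", headers: true)
members.each_slice(PAGE_SIZE) do |page|
  page.each do |row|
    email = extract_email(row["email"].to_s)
    next warn("skipping unparseable: #{row['email']}") unless email
    # Member.import_from_yahoo is a stand-in for the real importer
    Member.import_from_yahoo(email: email, name: row["name"])
  end
end
```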
Additionally, I created the activation flow for the transition, partially by piecing together signup components from the existing codebase, partially by crafting new logic in the business layer.
Results
Created CLI tool to ingest 1,000s of Yahoo Groups users given numerous different, undocumented email schemas.
Did not actually implement social signup; merely checked for clicks on the buttons (using Optimizely to show/hide them based on whichever variation the user was assigned).
With social signup options above:
With social signup options below:
Results
Tested conversion hypotheses with social signup buttons.
Following is a representative control version of the landing page.
I whiteboarded several concepts, gaining buy-in from the chief product stakeholder, and then implemented them, using Optimizely to set the parameter determining which variation a user saw; below you will see the iterations.
Unfortunately, none performed better than control.
Results
Tracked performance of inbound traffic from Facebook, Twitter, Google AdWords, and StumbleUpon and designed/implemented various landing pages to improve signup rates.
Contributors, as they are called, are the 5+M people around the world who do work on CrowdFlower's platform. The application that enables them to do that work is one of the company's most heavily trafficked as well as most complicated, blending a Rails backend with MooTools, jQuery, and RequireJS on the frontend.
The application’s UX
…had largely stayed the same for the previous five years. In Q1/2014, we decided to enhance it by making it more interactive, aiming to engage our users more and convey just how much work there is in our system.
Working with the Product Manager and an external Designer, we came up with the following high-resolution mock.
Because the application is so heavily used, we knew we couldn't merely throw the switch on a new design overnight, both from a community-management standpoint and in terms of application performance. Instead, we chose a strategy that introduced a first at the company: using A/B testing to arrive at a design that would perform as well as, if not better than, the original.
Our key metric in that regard had to do with contributors' performance after being exposed to the new UX, particularly the messaging around our forthcoming gamification and the introduction of Levels. In the beginning, we did not have the infrastructure to determine the value of that metric, so we settled on 'clicks' as a (conversion) proxy to understand whether the new design was having an impact.
Infrastructure
Without an A/B testing framework in place, I needed to choose one. As the requirements were not yet concrete, I did some due diligence vetting several options, writing up a review of A/B testing frameworks for Rails.
It became obvious that Vanity was best suited to our needs. (Since it doesn’t yet have the ability to throttle a percentage of the traffic receiving experiments, I augmented it with Flipper.)
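In broad strokes, the wiring looked like the sketch below. The experiment, metric, and feature names are illustrative rather than the production ones, and it assumes ApplicationController already calls use_vanity :current_user (so Vanity can tie assignment to the user) and that current_user responds to flipper_id (so Flipper can gate on it).

```ruby
# experiments/new_dashboard.rb: the Vanity experiment definition
ab_test "New contributor dashboard" do
  description "Original dashboard vs. the redesigned, more interactive one"
  alternatives :control, :redesign
  metrics :clicks               # defined in experiments/metrics/clicks.rb
end

# app/controllers/dashboard_controller.rb
class DashboardController < ApplicationController
  def show
    # Flipper gates who is exposed to the experiment at all (ramped up
    # with Flipper.enable_percentage_of_actors(:dashboard_experiment, 10));
    # Vanity then assigns gated users to an alternative.
    if Flipper.enabled?(:dashboard_experiment, current_user) &&
       ab_test(:new_dashboard) == :redesign
      render :redesign
    else
      render :show
    end
  end

  # Records the 'clicks' conversion proxy described above
  def track_click
    track! :clicks
    head :ok
  end
end
```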
Once that was in place, we could begin iterating on the design, knowing with confidence how we were impacting the user experience.
Server-side
We knew we wanted the experience to be snappy, but completely replacing the existing experience with a Rich Internet Application was far out of scope for the first month, particularly as there were infrastructure changes to be made to retrofit the stack with A/B testing. We decided to make progress iteratively over several sprints.
In our first test, we pitted the control (the original) against a bare-bones implementation of the high-resolution mock as the new design.
original
The new version out-performed control (in terms of clicks) 21.3% vs 20.3% (at 95% confidence), so I continued to iterate on the implementation, coming up with the following.
Calculating the overall satisfaction of other contributors with a task (denoted by the stars) proved too inefficient in this iteration; that variation wound up losing.
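For a sense of why (with hypothetical models; the real schema isn't shown in this write-up): computed naively, the star rating costs an aggregate query per task on every render, where a denormalized column would be cheap to read.

```ruby
# Naive: one AVG query per task, per page render (an N+1 of aggregates)
tasks.each { |task| task.ratings.average(:stars) }

# Cheaper: denormalize the average onto tasks and refresh it on write
class Rating < ActiveRecord::Base
  belongs_to :task
  after_save do
    task.update_column(:avg_stars, task.ratings.average(:stars))
  end
end
```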
Client-side
On the assumption that we needed a snappier experience to drive engagement, it was obvious we would need more (and faster) interaction, and therefore an interactive client-side implementation.
As it was essentially a completely parallel product, leveraging only some of the infrastructure the server-side rendition was utilizing, I began to flesh out the following.
Further refinement (and actual data) was necessary to get it looking more like the high-res mock (and like its server-side-rendered peer).
At this point, we implemented and integrated our own homemade badging solution, beginning to display badges in the following iteration.
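A minimal sketch of what a homegrown badging scheme along these lines can look like; the models and rules here are hypothetical, not the actual CrowdFlower implementation.

```ruby
class Badge < ActiveRecord::Base   # name, description, icon_path
  has_many :awards
end

class Award < ActiveRecord::Base   # join between contributor and badge
  belongs_to :badge
  belongs_to :contributor
end

class BadgeAwarder
  # Each rule decides whether a contributor has earned a given badge
  RULES = {
    "First Judgment" => ->(c) { c.judgments_count >= 1 },
    "Sharpshooter"   => ->(c) { c.accuracy && c.accuracy >= 0.95 }
  }.freeze

  def self.award!(contributor)
    RULES.each do |name, earned|
      next unless earned.call(contributor)
      badge = Badge.find_by(name: name)
      Award.find_or_create_by(badge: badge, contributor: contributor)
    end
  end
end
```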
Testing the impact of particular messaging was also of interest, so we added a Guiders variation as well. At this time we also leveraged Google Analytics Events on the Guider buttons to track how far the user got in our messaging.
Letting the experiments run a few days with sufficient traffic, we found that the client-side-rendered version performed no worse than the server-side-rendered version (23.9% vs 22.9%) and that having guiders did not perform significantly worse (23.1% vs 23.7%), so we decided to keep both.
By that time, the new version was out-performing control (the original design) 22.2% vs 20.7% (at 99% confidence), so a decision was made to roll out the new experience to 100% of contributors, doing some polishing (copy/styling) work before finally settling on the following.
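For intuition about those confidence figures, here is a back-of-the-envelope two-proportion z-test on the final numbers; the sample sizes are invented for illustration, since the actual traffic counts aren't given in this write-up.

```ruby
# Pooled two-proportion z-test: is 22.2% significantly above 20.7%?
def z_score(p1, n1, p2, n2)
  p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
  se = Math.sqrt(p_pool * (1 - p_pool) * (1.0 / n1 + 1.0 / n2))
  (p1 - p2) / se
end

# With (invented) 50,000 visitors per arm, z comes out around 5.8,
# comfortably past the ~2.33 cutoff for one-sided 99% confidence.
puts z_score(0.222, 50_000, 0.207, 50_000)
```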
Results
Used A/B testing to upgrade company's most highly-trafficked page (5+M views/month), increasing user engagement by 5% and saving $2K/month (in Bunchball costs) by rolling our own simple badging solution.