Following is a representative control version of the landing page
I whiteboarded several concepts, gained buy-in from the chief product stakeholder, and then implemented them, using Optimizely to set the parameter that determined which variation a user saw; the iterations appear below.
Unfortunately, none performed better than control.
Results
Tracked performance of inbound traffic from Facebook, Twitter, Google AdWords, and StumbleUpon and designed/implemented various landing pages to improve signup rates.
Contributors, as they are called, are the 5M+ people around the world who do work on CrowdFlower’s platform. The application that enables them to do that work is one of the company’s most heavily trafficked as well as one of its most complicated, blending a Rails backend with MooTools, jQuery, and RequireJS on the frontend.
The application’s UX
…had largely stayed the same for the previous five years. In Q1 2014, we decided to enhance it: make it more interactive, engage our users more, and convey just how much work there is in our system.
Working with the Product Manager and an external Designer, we came up with the following high-resolution mock
Because the application is so heavily used, we knew we couldn’t merely throw the switch on a new design overnight, both from a community-management standpoint and from an application-performance one. Instead, we chose a strategy that was a first at the company: using A/B testing to arrive at a design that would perform as well as, if not better than, the original.
Our key metric in that regard was contributors’ performance after being exposed to the new UX, particularly the messaging around our forthcoming gamification and introduction of Levels. In the beginning, we did not have the infrastructure to measure that metric, so we settled on ‘clicks’ as a conversion proxy to understand whether the new design was having an impact.
Infrastructure
Without an A/B testing framework in place, I needed to choose one. As the requirements were not yet concrete, I did some due diligence vetting several options, writing up a review of A/B testing frameworks for Rails.
It became obvious that Vanity was best suited to our needs. (Since it didn’t yet have the ability to throttle the percentage of traffic receiving experiments, I augmented it with Flipper.)
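Roughly, the wiring looked like the sketch below. The experiment, metric, and feature names are illustrative rather than the production ones, and the Flipper setup is simplified to an in-memory adapter.

    # experiments/metrics/clicks.rb: the Vanity metric used as the conversion proxy
    metric "Clicks" do
      description "Clicks through to a task from the contributor dashboard"
    end

    # experiments/dashboard_redesign.rb: the A/B experiment itself
    ab_test "Dashboard redesign" do
      description "Original contributor dashboard vs. the new design"
      alternatives :control, :redesign
      metrics :clicks
    end

    # config/initializers/flipper.rb: a real setup would use a Redis or
    # ActiveRecord adapter; Memory is just for the sketch
    require "flipper"
    require "flipper/adapters/memory"
    FLIPPER = Flipper.new(Flipper::Adapters::Memory.new)

    # app/controllers/dashboard_controller.rb
    class DashboardController < ApplicationController
      use_vanity :current_user # bucket by the signed-in contributor

      def show
        # Flipper throttles who is in the experiment at all; Vanity then
        # splits the enrolled users between the alternatives.
        if FLIPPER[:dashboard_experiment].enabled?(current_user) &&
           ab_test(:dashboard_redesign) == :redesign
          render :redesigned_dashboard
        else
          render :dashboard
        end
      end

      def click
        track! :clicks # record the conversion proxy
        redirect_to task_path(params[:id])
      end
    end

    # Ramping from a console (current_user must expose a flipper_id):
    FLIPPER[:dashboard_experiment].enable_percentage_of_actors(10)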
Once that was in place, we could begin iterating on the design, knowing with confidence how we were impacting the user experience.
Server-side
We knew we wanted the experience to be snappy, but completely replacing the existing experience with a Rich Internet Application was far out of scope for the first month, particularly as there were infrastructure changes to be made to retrofit the stack with A/B testing. We decided to make progress iteratively over several sprints.
In our first test, we pitted the control (original) against a bare-bones implementation of the high-resolution mock as the new design.
original
The new version out-performed control (in terms of clicks) 21.3% vs 20.3% (at 95% confidence), so I continued to iterate on the implementation, coming up with the following
Calculating the overall satisfaction other contributors had with a task (denoted by the stars) proved too inefficient in this iteration; that variation wound up losing.
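As an aside on reading the confidence numbers above: significance for a click-through comparison like 21.3% vs 20.3% comes from a two-proportion z-test, which is essentially what Vanity reports. A quick worked check, with sample sizes invented purely for illustration:

    # Two-proportion z-test, the standard significance check behind CTR
    # experiment readouts. The sample sizes below are hypothetical; only the
    # 20.3% / 21.3% rates come from the experiment described above.
    def z_score(conversions_a, n_a, conversions_b, n_b)
      p_a = conversions_a.fdiv(n_a)
      p_b = conversions_b.fdiv(n_b)
      pooled = (conversions_a + conversions_b).fdiv(n_a + n_b)
      standard_error = Math.sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
      (p_b - p_a) / standard_error
    end

    # 4,060 of 20,000 control users clicked (20.3%) vs 4,260 of 20,000 (21.3%).
    puts z_score(4_060, 20_000, 4_260, 20_000)
    # => ~2.46, and anything above 1.96 clears the 95% confidence bar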
Client-side
On the assumption that we needed to make the experience snappier in order to drive engagement, it was clear that we would need more (and faster) interaction, and therefore an interactive client-side implementation.
As this was essentially a parallel product, leveraging only some of the infrastructure the server-side rendition was utilizing, I began to flesh out the following
Further refinement (and actual data) was necessary to get it looking more like the high-res mock (and like its server-side-rendered peer)
At this point, we implemented and integrated our own homemade badging solution, beginning to display badges in the following iteration
Testing the impact of particular messaging was also of interest, so we added a Guiders variation as well. At this time we also leveraged Google Analytics Events on the Guider buttons to track how far users got in our messaging.
Letting the experiments run a few days with sufficient traffic, we found that the client-side-rendered version performed no worse than the server-side-rendered version (23.9% vs 22.9%) and that having guiders did not perform significantly worse either (23.1% vs 23.7%), so we decided to keep both.
By that time, the new version was out-performing control (the original design) 22.2% vs 20.7% (at 99% confidence), so a decision was made to roll out the new experience to 100% of contributors, with some polishing (copy/styling) work before finally settling on the following
Results
Used A/B testing to upgrade the company’s most highly-trafficked page (5M+ views/month), increasing user engagement by 5% and saving $2K/month in Bunchball costs by rolling our own simple badging solution.
This was an enormous effort to overhaul a product whose UX had not been altered much in five years.
We took a piece-by-piece approach to swapping out components because of the complexity of the legacy behemoth. First, we refreshed the views in the legacy app, which involved changing styling in three different places: having grown “organically” over the years, the app had taken on three different styling paradigms, with styling defined in custom stylesheets, in Less, and inline.
In parallel, part of the team started building out the new peer Rails 3 app, the eventual destination for all views, complete with the company’s brand-new proprietary SSO solution (also built in parallel). Finally, routing was updated to send all traffic to the Rails app.
Forming
Between August and September of 2013, we coalesced as a team under the project champion, the company’s CTO, and began formulating what the new UX should be and do.
Below is a screenshot of an example of the dashboard as seen by the end user (Merb, built in 2008)
Below is a screenshot of the progress of a microtask job, also as seen by the user (sensitive information redacted)
Norming
Between September and October of 2013, we cranked out the new experience.
Based on a design concept by the other F2E on the team, we began restyling low-risk interfaces of the system. The new design was not simply a reskin; it introduced a similar-yet-improved information architecture, an example of which can be seen below
Following are a few more example screenshots demonstrating the evolving look-and-feel
Configuration Panel
As we were tackling the UX, a backend engineer on a peer team was working in parallel to create a custom role-based SSO system that we would leverage to enforce authentication and authorization in a new way for the company.
Shortly before the conference, a decision was made to go with a second design concept, not entirely different from the original, but a little more polished. A designer was requisitioned to provide the new design. From that point forward to product launch, we mostly fine-tuned the details.
The following screenshot demonstrates not only the new design but also the new SSO solution in use: certain UI elements are disabled based on the user’s permissions
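Mechanically, that disabling was a view-level concern. A minimal sketch, assuming a hypothetical can?-style permission check exposed by the SSO client (the real API differed, but the shape was similar):

    # app/helpers/permissions_helper.rb
    module PermissionsHelper
      # Render an action as a live link when the SSO role check passes,
      # or as a disabled element when it does not.
      def permission_link(label, path, permission)
        if current_user.can?(permission) # `can?` is illustrative, not the real API
          link_to label, path, class: "btn"
        else
          content_tag :span, label, class: "btn disabled",
                      title: "Your role does not permit this action"
        end
      end
    end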
To QA the new experience, we ran it in alpha against production data repositories just prior to the conference.
Performing
After the launch, we maintained the product, adding features we had not been able to squeeze in.
Below is an example screenshot of how the final product shaped up
Results
Consolidated multiple styling paradigms for new UX ahead of company-sponsored conference.
In August of 2012, given my expertise with Ember at the time, I was tapped by the Yeoman team to head the project. I contributed by adding features and documentation, as well as by ensuring Pull Requests met Yeoman quality standards.
Results
Developed and maintained the original scaffolding solution for Ember apps.
The CrowdFlower platform is consumed via a number of microtasking sites. Each site registers and maintains its own users, but to better track unique identities across the platform, we built a Single Page App in Ember.js to associate users from partner microtasking sites with one unique identifier in CrowdFlower.
Results
Implemented a CRUD tool for managing users in Ember.js, iterating with the Product Manager as requirements changed.
Test Questions are used as the gold standard of quality in the CF platform, but they can be laborious to create, particularly for work that’s periodically repeated.
As no templatized solution existed, a team of three of us (me as F2E, Product Manager, and Backend Engineer) tackled creation of an internal product to simplify the workflow.
The user flow was to create “Cases” of Test Questions that were sent to jobs as “Batches,” with the composite idea of a “Mold” encompassing all “Cases” and “Batches” for a particular set of target jobs.
(“The Forge” was the product’s original name, derived from a time when “Test Questions” were known as “Gold.”)
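The domain model behind that flow looked roughly like the ActiveRecord sketch below; the class and association names are my reconstruction for illustration, not the production schema.

    # A Mold bundles everything for one set of target jobs.
    class Mold < ActiveRecord::Base
      has_many :cases   # templates for generating Test Questions
      has_many :batches # generated sets, each sent to a target job
    end

    class Case < ActiveRecord::Base
      belongs_to :mold
      has_many :test_questions
    end

    class Batch < ActiveRecord::Base
      belongs_to :mold
      belongs_to :job          # the job this Batch was sent to
      has_many :test_questions # the concrete questions delivered
    end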
One of the more challenging aspects of the project was testing the app. Selenium has long been a robust solution for testing even JS-heavy experiences, but given its heft, we used Poltergeist instead.
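Wiring Poltergeist in is only a few lines of Capybara configuration; the feature spec below is illustrative.

    # spec/spec_helper.rb: requiring capybara/poltergeist registers the
    # PhantomJS-backed :poltergeist driver with Capybara
    require "capybara/rspec"
    require "capybara/poltergeist"

    Capybara.javascript_driver = :poltergeist

    # spec/features/mold_creation_spec.rb: hypothetical flow and selectors
    RSpec.describe "creating a Mold", type: :feature, js: true do
      it "lets the user add a Case" do
        visit "/molds/new"
        fill_in "Name", with: "Weekly relevance batch"
        click_button "Add Case"
        expect(page).to have_content "Case 1"
      end
    end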
The product was to eventually be made available externally but never was.
The app is CrowdFlower’s most highly-trafficked. It is also one of the company’s most technically complex, given its history.
Its architecture is that of a Rails app wrapping a Gem that extracted the business logic from the company’s legacy (original) Merb app. The Gem contains all the logic around rendering, styling, and providing interactivity for CML, the basis for abstracting microtasks in the platform.
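In code terms, the split looked something like the sketch below. The gem source and entry point are illustrative stand-ins (CML itself is real; the render call is my invention).

    # Gemfile of the wrapper Rails app: the business logic extracted from the
    # legacy Merb app lives in a gem (source URL is illustrative)
    gem "cml", git: "git@git.internal:crowdflower/cml.git"

    # app/controllers/tasks_controller.rb: the Rails app stays thin,
    # delegating CML parsing/rendering to the gem
    class TasksController < ApplicationController
      def show
        @task = Task.find(params[:id])
        # Hypothetical entry point: turn the job's CML into HTML plus the
        # hooks that make the microtask form interactive
        @form_html = CML.render(@task.cml_source)
      end
    end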
The app was built (before my time) in order to bring a richer, more interactive experience to those doing microtasking work. When the original architect departed only weeks after I joined the company, maintenance and feature implementation fell to me.
Results
Supported the site’s most highly-trafficked, revenue-generating UI (allowing for custom JS and CSS).
Iterated with Product to scope, implement (Rails w/MySQL), and A/B-test pixel-perfect sign-up/refer-a-friend/search/browse/opt-out/profile experiences
Drove conversions in the form of signups, virality, and clicks for not only our flagship web and email products (used by 4M+ users) but also eight new product launches
Quickly integrated into a small, fast-moving, startup engineering team
Became proficient in all things Rails
Ensured quality through code reviews, TDD, unit/functional/integration/regression tests under continuous integration, test plans, and mentoring/pairing to deliver functionality, fix bugs, refactor legacy code, and transfer knowledge
Assumed lead (primarily frontend) responsibilities while reporting directly to the CTO
Established F2E guidelines and best practices
Architected the company’s newest product, an Ember.js-based Single Page Application
Leveraged Facebook, Twitter, and Pinterest APIs to increase our social reach (including the use of Facebook Connect and the Like Button during signup and tell-a-friend experiences)
Prototyped iPhone app for user to navigate item stream during in-house Hackathon
Contributed improvements to our Nokogiri-based data-harvesting framework.
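For flavor, a minimal sketch of the kind of harvesting that framework did; the URL and selectors are invented for illustration.

    require "nokogiri"
    require "open-uri"

    # Fetch a listing page and pull structured fields out of the markup.
    doc = Nokogiri::HTML(URI.open("https://example.com/deals"))

    deals = doc.css(".deal").map do |node|
      {
        title: node.at_css(".title")&.text&.strip,
        price: node.at_css(".price")&.text&.strip,
      }
    end

    deals.each { |deal| puts "#{deal[:title]}: #{deal[:price]}" }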
Compared with the default look and feel of our quintessential ‘tell-a-friend’ experience…
… the following examples depict some of the numerous variations we experimented with (testimonials, site activity feeds, markup positioning, copy, timers, interstitials, refreshed creatives, etc.) to encourage users to spread the word.
These represent just a handful of the variations I’ve implemented.
Results
Realized a bump of 10-15% in the number of friends told (depending on the variation)