Uniquely positioned by my previous experience with the UX, when Product and Design tasked me with reskinning the Listing, I took the opportunity to upgrade it as well.
Since no one besides me really knew how RequireJS worked in the application, and since it had been falling out of favor in the wider community as a module-loading solution, it was time to upgrade to Webpack.
Here’s what the Task Listing looked like before
Here’s the mock from Design
In chronological order, here’s what I did
Upgraded DataTables from 1.9 to 1.10
Refactored JavaScript towards more of an OO paradigm
Applied new skin
Ported JavaScript from RequireJS to CoffeeScript for Webpack
Deployed
…and here’s what I delivered
with modal
Results
Successfully advocated for adoption of Webpack to replace RequireJS and then ported the most-trafficked page, while achieving a near-pixel-perfect re-skinning.
After focusing on other priorities, I was pulled back in to revise an interface (which I had also created), originally delivered for the Rich Data Summit, that allows customers to send any model predictions under a certain threshold to the crowd.
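Under the hood, the Ruby side asks the ML service for predictions and routes low-confidence rows to the crowd. The sketch below illustrates the shape of that integration; the endpoint, payload, and response fields are assumptions for illustration, not the actual service API.

```ruby
require "net/http"
require "json"

# Hypothetical client for the ML web service (URL and params invented).
def fetch_predictions(model_id, rows)
  uri = URI("http://ml-service.internal/models/#{model_id}/predictions")
  response = Net::HTTP.post(uri, { rows: rows }.to_json,
                            "Content-Type" => "application/json")
  JSON.parse(response.body) # e.g. [{ "row_id" => 1, "confidence" => 0.42 }, ...]
end

# Anything scored below the customer's chosen threshold goes to the crowd.
def rows_for_the_crowd(model_id, rows, threshold)
  fetch_predictions(model_id, rows).select { |p| p["confidence"] < threshold }
end
```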
I took the following (revised UX) static mock from Design
and iterated to deliver the following
Results
Crafted SPA as linchpin connecting platform’s Machine Learning (Python) and human-in-the-loop (Ruby) systems after convincing the team an SPA was the optimal approach.
In late August 2015, given previous successes in the year, I was tapped to lead the engineering team in delivering a visual identity refresh (in conjunction with a conference-ready AI deliverable) by early October.
Week 1
took Bootstrap 3/Flat UI/custom styling from the Designer and created a static page as a ‘gold standard’ for other engineers to reference
identified priority routes on which the new design would need to be rolled out
reference page
Week 2
created a new layout for the Rails app and began rolling out the new design on it
drafted a plan for updating the Merb app seamlessly
began to onboard other engineers
Weeks 3-5
prototyped and tested the idea of precompiling assets in the Rails app and replacing the base assets of the Merb app (see the sketch after this list)
continued polishing
guided other engineers on implementation
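The precompile-and-replace idea, in sketch form (the task name and paths are hypothetical): let the Rails asset pipeline precompile the shared styleguide bundle, then copy the output over the Merb app's base assets so both apps serve identical styling.

```ruby
# lib/tasks/styleguide.rake (illustrative only)
require "fileutils"

namespace :styleguide do
  desc "Precompile shared assets and replace the Merb app's base assets"
  task export: ["environment", "assets:precompile"] do
    compiled = Rails.root.join("public", "assets")
    merb_css = File.join(ENV.fetch("MERB_ROOT"), "public", "stylesheets")

    FileUtils.cp(Dir.glob(compiled.join("styleguide-*.css").to_s), merb_css)
  end
end
```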
Week 6
coordinated a bug-free deploy in conjunction with Marketing (who was working on a similar refresh of the third-party-hosted home page)
Results
Organized work of four engineers (two local incl. CTO, two remote) as Tech Lead while planning (and tracking against) engineering sprints and deliverables over two months.
In late August 2015, given previous successes in the year, I was tapped to lead the engineering team in delivering a conference-ready AI deliverable by early October.
In the months leading up to that, the CTO had been prototyping an initial version of the app in Rails which, for the conference, was supposed to be integrated with other legacy apps (Rails 3.2 and Merb) and have its UI overhauled to comply with the newly-created company Styleguide.
Week 1
Given Balsamiq wireframes, put together a few layouts
Put basic routes in place (sketched below)
Began architecting common styling solution between AI app and legacy apps
basics coming together
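Those routes were of roughly this shape. This is a sketch; the resource and action names are guesses based on the screenshots that follow, not the production routing.

```ruby
# config/routes.rb (illustrative only)
Rails.application.routes.draw do
  resources :models do
    member do
      get :export    # download a model's predictions
      get :annotate  # human-in-the-loop review of a model
    end
    resources :data_rows, only: [:index, :create] # adding data to a model
  end

  root to: "models#index"
end
```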
Week 2
Continued work on common styling
Made choices about JS libs and prototyped interactions given the wireframes; got buy-in from CTO, Product, and Design
Began work integrating with ML Python web service
first index page of models
Week 3
Given higher-resolution mockups by Designer, started to polish look-and-feel
With architecture in place, began to parcel work out to other engineers
first version of export
Week 4
As the conference neared, knew we weren’t going to be able to deliver everything; worked with Product to focus on an MVP
Oversaw work of other engineers
adding data to the model
Week 5
Continued to lead other engineers and refine interactions
annotating a model
Week 6
Applied final polish
Delivered for the conference! Following are a few screenshots demonstrating some of the deliverables
Results
Led team in coordination with CTO to deliver AI application (Rails) for company-sponsored conference on Machine Learning, Artificial Intelligence, and Data Science.
After two years of usage, the company could tell that the previously introduced paradigm had improved the overall usability of the product, but users were still confused about the workflow.
Having tackled the welcome, we launched into rolling out a complete overhaul of the UX, moving from a left-nav Master-Detail paradigm to a top-nav Subway-Map approach.
The main challenge in rolling out a new UX was that it had to happen in both a Rails and a Merb application. The two don't share a layout paradigm, so the approach had to be adapted to each, yet be general enough not to incur (even more than was already present) tech debt.
I served as Tech Lead and architected a solution, led other junior engineers in implementation, and managed interactions and expectations with Product and Design.
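Since Merb and Rails are both Ruby, one piece of that solution was keeping the shared chrome in a plain Ruby module that either framework can mix into its view-helper layer. A minimal sketch of the idea (the module name and subway steps are hypothetical, not the production code):

```ruby
# Framework-agnostic: the Rails app and the Merb app each include this
# module in their own helper layer.
module SubwayNavHelper
  STEPS = %w[Design Calibrate Launch Monitor Results].freeze # invented names

  # Renders the top-nav subway map with the current step highlighted;
  # each app marks the returned string as HTML-safe per its own conventions.
  def subway_nav(current_step)
    stations = STEPS.map do |step|
      css = step == current_step ? "station active" : "station"
      %(<li class="#{css}"><span>#{step}</span></li>)
    end
    %(<ul class="subway-map">#{stations.join}</ul>)
  end
end
```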
Following is a general impression of where we began and where we wound up
Results
Led engineering efforts around a complete UX overhaul of the company’s most important customer interaction.
Contributors, as they are called, are the 5M+ people around the world who do work on CrowdFlower’s platform. The application that enables them to do work is one of the company’s most heavily trafficked as well as most complicated – blending a Rails backend with MooTools, jQuery, and RequireJS on the frontend.
The application’s UX
…had largely stayed the same for the last five years. In Q1/2014, we decided to enhance it by making it more interactive, engaging our users more, and conveying just how much work there is in our system.
Working with the Product Manager and an external Designer, we came up with the following high-resolution mock
Because the application is so heavily used, we knew we couldn’t merely throw the switch on a new design overnight, both from a community-management standpoint and from an application-performance one. Instead, we chose a strategy that was a first at the company: using A/B Testing to determine a design that would perform as well as, if not better than, the original.
Our key metric in that regard had to do with contributors’ performance after being exposed to the new UX, particularly the messaging around our forthcoming gamification and introduction of Levels. In the beginning, we did not have the infrastructure to determine the value of that metric, so we settled on ‘clicks’ as a (conversion) proxy to understand whether the new design was having an impact.
Infrastructure
Without an A/B Testing framework in place, I needed to choose one. As requirements were not yet concrete, I did some due diligence vetting several options, coming up with a review of A/B testing frameworks for Rails.
It became obvious that Vanity was best suited to our needs. (Since it doesn’t yet have the ability to throttle a percentage of the traffic receiving experiments, I augmented it with Flipper.)
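Wired together, the two look roughly like this. The sketch uses Vanity's experiment DSL and Flipper's actor-based gating; the experiment, metric, and feature names are invented for illustration.

```ruby
# experiments/task_listing_redesign.rb: Vanity experiment definition
# (assumes a "Clicks" metric defined in experiments/metrics/clicks.rb)
ab_test "Task listing redesign" do
  description "Control (original listing) vs. the new design"
  alternatives :original, :redesign
  metrics :clicks
end
```

```ruby
# In the controller, Flipper throttles who enters the experiment at all
# (its actor gate expects current_contributor to respond to #flipper_id);
# Vanity then splits that slice between the alternatives.
class TasksController < ApplicationController
  use_vanity :current_contributor

  def index
    @design =
      if Flipper.enabled?(:listing_experiment, current_contributor)
        ab_test(:task_listing_redesign) # => :original or :redesign
      else
        :original
      end
  end

  def click
    track!(:clicks) # the conversion Vanity measures
    redirect_to task_path(params[:id])
  end
end
```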
Once that was in place, we could begin iterating on the design, knowing with confidence how we were impacting the user experience.
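The "confidence" figures you'll see below come down to a standard two-proportion z-test, roughly what Vanity computes under the hood. With $x$ conversions out of $n$ visitors in each arm:

$$
z = \frac{\hat{p}_B - \hat{p}_A}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{n_A} + \frac{1}{n_B}\right)}},
\qquad
\hat{p}_A = \frac{x_A}{n_A},\quad
\hat{p}_B = \frac{x_B}{n_B},\quad
\hat{p} = \frac{x_A + x_B}{n_A + n_B}
$$

A $|z|$ of at least 1.96 corresponds to 95% confidence (two-sided); 2.58 corresponds to 99%.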
Server-side
We knew we wanted the experience to be snappy, but completely replacing it with a Rich Internet Application was far out of scope for the first month, particularly as there were infrastructure changes to be made to retrofit the stack with A/B Testing. We decided to make progress iteratively over several sprints.
In our first test, we pitted the control (original) against a bare-bones implementation of the high-resolution mock as the new design.
original
The new version out-performed control (in terms of clicks) 21.3% vs 20.3% (at 95% confidence) so I continued to iterate on the implementation, coming up with the following
Calculating the overall satisfaction by other contributors for a task (denoted by the stars) proved too inefficient in this iteration; it wound up losing.
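One standard escape from that kind of per-request aggregation, had we pursued it, is to denormalize the average onto the task whenever a rating is written. A sketch, with hypothetical model and column names:

```ruby
# Pay the aggregation cost once per rating write instead of on every
# page view; `satisfaction` is a hypothetical cached column on tasks.
class Rating < ActiveRecord::Base
  belongs_to :task

  after_commit :refresh_task_satisfaction

  private

  def refresh_task_satisfaction
    task.update_column(:satisfaction, task.ratings.average(:stars))
  end
end
```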
Client-side
On the assumption that we needed to make the experience snappier in order to drive engagement, it was obvious that we would need more (and faster) interaction and, therefore, an interactive client-side implementation.
As it was essentially a completely parallel product, leveraging only some of the infrastructure that the server-side rendition was utilizing, I began to flesh out the following
Further refinement (and actual data) was necessary to get it looking more like the high-res mock (and like its server-side-rendered peer)
At this point, we implemented and integrated with our own homemade badging solution, beginning to display badges in the following iteration
Testing the impact of particular messaging was also of interest, so we added a Guiders variation as well. At this time we also leveraged Google Analytics Events on the Guider buttons to track how far the user got in our messaging.
Letting the experiments run a few days with sufficient traffic, we found that the client-side-rendered version performed no worse than the server-side-rendered version (23.9% vs 22.9%) and that having guiders was not significantly worse (23.1% vs 23.7%), so we decided to keep both.
By that time, the new version was out-performing control (the original design) 22.2% vs 20.7% (at 99% confidence), so a decision was made to move forward with rolling out the new experience to 100% of contributors, doing some polishing (copy/styling) work before finally settling on the following
Results
Used A/B testing to upgrade company’s most highly-trafficked page (5M+ views/month), increasing user engagement by 5% and saving $2K/month (in Bunchball costs) by rolling our own simple badging solution.
Iterated with Product to scope, implement (Rails w/ MySQL), and A/B-test pixel-perfect sign-up/refer-a-friend/search/browse/opt-out/profile experiences
Drove conversions in the form of signups, virality, and clicks for not only our flagship web and email products (used by 4M+ users) but also eight new product launches
Quickly integrated into a small, fast-moving, startup engineering team
Became proficient in all things Rails
Ensured quality through the use of code reviews, TDD, unit, functional, integration, and regression tests under continuous integration, testing plans, and mentoring/pairing to deliver functionality, fix bugs, refactor legacy code, and transfer knowledge
Assumed lead (primarily frontend) responsibilities while reporting directly to the CTO
Established F2E guidelines and best practices
Architected the company’s newest product, an Ember.js-based Single Page Application
Leveraged Facebook, Twitter, and Pinterest APIs to increase our social reach (including the use of Facebook Connect and the Like Button during signup and tell-a-friend experiences)
Prototyped iPhone app for user to navigate item stream during in-house Hackathon
Contributed improvements to our Nokogiri-based data-harvesting framework.
To leverage my skills and experience from developing web apps for monitoring at the enterprise level, I joined a peer team which had been providing Yahoo’s service engineers with a white-box solution paired with Nagios.
In a bid to move away from the costly distributed model of federated service engineering, our team was tasked with providing a centralized enterprise solution. I contributed as a front-end engineer and implemented features in a custom Perl MVC framework.
Results
Added RIA functionality to an enterprise monitoring-as-a-service replacement for Nagios
Yahoo invests a lot of resources into making sure that each of its properties is available around the clock. To assist in that task, a centralized, black-box service was created as part of dev tools to help everyone from senior management to service engineers monitor and understand the health of properties.
On the backend, the service consists of the data store, a metrics collector, aggregation tools, and the configuration store (database-driven). On the front end, there’s dashboarding, custom reports, and a self-service configuration tool.
Results
Built and maintained web tools for a Nagios-based experience-management solution checking 10,000+ URLs worldwide daily and generating 63M measurements per month
Reduced the workload of system engineers by creating (from scratch) a web-based, MySQL-driven, MVC-architected, self-service configuration tool for creating and managing Nagios checks
Led Scrum-influenced development and improved the quality of the team’s SE process by championing and standardizing on Catalyst (an MVC framework in Perl). Improvements included shortened dev cycles, the introduction of TDD, improved performance, and better documentation
Created snappy, responsive interfaces using custom JavaScript along with YUI, in conjunction with JSON-serving REST web services (Perl). Also achieved performance gains through page-weight optimization
Reduced development costs through the use of VMware virtual machines for testing, building, and deploying as part of continuous integration. Implemented a packaged solution for automated regression testing using Firefox, Selenium, X, and WWW::Mechanize