A DISTRIBUTED TEAM TRIES AGILE

Agile development, when a team is distributed across time as well as distance, is not easy.

As UX architect on such a team for almost three years, I believe that if your team focuses on the methodology rather than the methods, then agile is both possible and beneficial.

In this story, I’ll share, from a UX perspective, what worked, what didn’t, and key takeaways you can try with your own distributed team.

Like many organizations, the one my team and I worked for used some agile rituals, but still developed software in a waterfall manner. When my team formed, we wanted to be agile, but we didn’t know exactly how to start.

We worked together for a few waterfall-y releases before our first leap into agile. We’ll call this one ‘bare-metal agile’, since we tried to follow the leanest, meanest practice we could. We’ll call the second attempt ‘little waterfalls’, since we broke the project down into parts and used waterfall for each part. The final, and most successful, attempt was a hybrid of both.

THE SPECIAL PROBLEMS OF DISTRIBUTED TEAMS

Ultimately, I believe that distributed teams should never be any organization’s goal. They are a reality of modern software development, but always be aware that working together in one place is better for team building, better for communication, better for celebrating victories and dealing with failures, and better for customers, because a healthy team makes fewer bad decisions, recovers faster and responds better to feedback.

TEAM DISTRIBUTION

GMT -8: Product manager, UX, IxD, technical writing

GMT  0: Lead engineers, front end devs, QA

GMT +1: Middleware, backend

teamDistance.png

WHAT THIS MEANT

In practice, a simple question on UI component behavior might wait 7 hours for a response.

If the question were not simple - a middleware behavior, for example - we would either open a Google Doc and work through the problem asynchronously via comments, or we would use a Jira ticket.

Tickets seemed like the more straightforward solution, but they also meant lots of feedback from folks interested in, but not responsible for, the solution. Again, a cost in time. Even with a virtual whiteboard (as of this writing, I know of no digital product that offers the fluidity of the real thing), we still had gaps between question and response large enough to cause delays, each of which grew with the complexity of the problem, since we had to repeatedly re-orient ourselves in the lengthening conversation.

As a result, our stand-ups often turned into ad-hoc work sessions, which meant more time lost for the folks in the room not immediately involved in answering the question. We rarely held a quick, here’s-my-update-and-blockers stand-up.

To help keep the team in sync with design changes, I sent a nightly UX update, in which I’d describe the current state of the feature we were building, any changes to it, and the rationale for those changes, along with questions for individual engineers to consider. The nightly update was hugely helpful in providing visibility into what was happening on the user-facing side of things, but it also meant an extra 30-40 minutes at the end of each night for me, since the updates had to communicate a fair amount of information as concisely as possible.

We conducted agile retrospectives, planned sprints and groomed backlogs together, but it felt more like, as our lead engineer called it, “praying to the Agile gods” than using agile methods.

After a few releases, we attempted a different approach.

FIRST ATTEMPT: BARE-METAL AGILE

The engineering team and product manager tried to rapidly iterate on a prototype intended to fill a gap in our reporting tools that customers had reported, hoping to see if they could create a build-learn-repeat cycle using nothing but an evolving prototype and customer feedback.

This meant skipping UX research and architecture altogether. The prototype design came from pulling all the data that might be relevant and reporting it using criteria that seemed reasonable, given the data and how customers might want to use it. Might.

The prototype did not succeed as a solution for the customer problem, but it did help the team understand that UX research and tested wireframes, while not really part of a true agile methodology, saved time and effort in the end.

SECOND ATTEMPT: LITTLE WATERFALLS

We started again, this time with a plan: conduct research, design workflows and test them with users, then try to use agile methods, starting sprint 0 with wireframes.

Was it possible to define minimum viable research? With a team of one UX researcher, plus me on workflow and UI design, we conducted one round of discovery research with users to understand how they were dealing with the reporting gap, to understand and rank their pain points, and then to break out a set of user stories, organized along all the workflows we learned about in discovery.

article-01-trelloBoard.jpg

With our product manager and engineering lead, we were then able to rank stories by implementation effort and user value to define an MVP workflow. We also created an aspirational workflow in the form of a story - sort of a magical feature set that would solve the user problem perfectly. We used this as a kind of north star.

We worked through wireframe iterations with one round of usability testing. I used InVision, both for paper-prototyping tests and as our source-of-truth artifact for development. I added one new ritual to our process: pre-ticketing. It worked like this: each InVision screen was labeled with a letter, and each discrete bit of functionality on any given screen was labeled with a number (we called this mark-up a “Coffey cutout” after the former team member who suggested it). I met with the full team to walk through the wireframes together and gather all their questions. That’s all we did for the entire meeting - just logged every question any of us had, from QA testing questions to copy and docs to backend or middleware issues that would prevent or complicate delivering the data as per the design. After that, I revised the wireframes, commented directly to individuals in InVision and rapidly arrived at what we all believed to be a workable solution.

article-01-wireframe.png

The final step was to open a spreadsheet and enter each screen item (A1, A2, B1, B2), the user stories it addressed, acceptance criteria, and a ticket title that included the screen item label. The reason for this was to make sure that we could all reference a single InVision link and quickly pull the right screen and right item for the ticket. It might sound silly, but the extra hour I spent on this spreadsheet was time well spent.

article-01-sheet.png
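To give a sense of the format, here’s a sketch of two rows. The stories, criteria and titles below are invented for illustration, but the columns match what we tracked:

Item | User story | Acceptance criteria | Ticket title
A1 | Filter a report by date range | Setting a date range shows only matching rows | [A1] Report date-range filter
B2 | Export a filtered report | Export respects the active filters | [B2] Filtered report export

Because every ticket title carried its screen-item label, anyone could open the single InVision link and jump straight from a ticket to the right screen and item.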

We met a final time and went through the spreadsheet line by line. QA reviewed and approved acceptance criteria, and our product owner and I consulted on which stories could drop off while still letting us enable the base workflows our users needed. This final review also revealed any remaining gaps: we color-coded rows with at-risk items red, rows with remaining UX questions or missing acceptance criteria yellow, and then created tickets from the rest. For each ticket created, we added its link to the item row.

A completely waterfall process? Y-es - but although the description makes it sound painstaking, it was the opposite: the wireframes showed functional bits, not styling and button states. We knew that we would answer those questions as development got underway. Our goal was to make sure that the entire team understood the user stories, understood the larger problem they served, and understood this narrative well enough that data requirements, performance requirements and testing plans all made sense to us as a group. Once the wireframes went into InVision, it was critical to turn the process from UX design into team collaboration.

In the early stages of workflows and wireframes, our engineering lead, product manager and I returned again and again to the UX research to help us all find the story. In this later stage, when almost all our discussion was tactical, we found that the user stories in the spreadsheet described the purpose of each item well enough to discuss it thoughtfully.

Each of these primary story tickets was blocked by many engineering, UI and technical writing tickets. All the primary story tickets from the spreadsheet were housed in epics that represented complete workflows. Anyone in our organization - from engineers to sales to marketing - needed only the epic tickets to quickly understand the workflows we intended to enable and their current state.
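To make the structure concrete, here’s a rough sketch (the ticket names are invented, but the shape is accurate):

Epic: a complete workflow, e.g. ‘Filter and export reports’
    Primary story ticket [A1]: report date-range filter
        Blocked by: middleware ticket to serve date-bounded data
        Blocked by: UI ticket for filter styling and behavior
        Blocked by: technical writing ticket for the docs update
    Primary story ticket [B2]: filtered report export
        Blocked by: backend ticket to pull and surface the export data

Reading the epic and its primary stories gave you the intended workflow; drilling into the blockers gave you its current state.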

The smaller tickets - styling, UI behaviors, tasks related to pulling and surfacing data - we created as we needed them. We had given in to a waterfall, but we tried to make it as light as possible.

The process worked for us, and I think it’s a sound one for organizations where product releases are not super frequent. Our organization released software once per quarter. The upside is a little more time; the downside is the strong desire not to miss a release and have to wait another quarter to deliver a solution and start learning from it. We were able to alternate iterations on different solutions, so that while we were waiting on feedback from release 1 of solution B, we could iterate on what we had learned from release 1 of solution A, and so on.

Still, we longed to be agile.

So we tried again.

 

THIRD ATTEMPT: THE HYBRID

We looked at everything we’d tried before: we loved the rapid prototyping of the bare-metal approach, but we needed a foundation built on research, and that had to precede development. We liked the detailed speccing of wireframes, but we had neither the time nor the desire to invest in details that we knew we’d want to change as the project neared completion. So we tried to combine the best of both approaches, and arrived at a kind of hybrid.

I still conducted UX discovery research first, but this time our engineering lead joined me in the sessions. We debriefed together after each session, and the questions we asked each other after talking to one user, we could incorporate into the next user’s interview. We found this level of engineering participation to be both low-cost and deeply useful in helping us understand the users’ problems and mental models together.

I still created an experience map from the research, as well as an aspirational workflow, and shared these artifacts with the full team. But this time our engineering lead and I already shared an understanding of the problems we could solve, so we saved the time we would otherwise have spent debating why the user needed access to this or that data when pulling data was expensive.

Then I joined all the offshore folks in one of our UK offices and spent the next several weeks prototyping very quickly, against increasingly complete wireframes. It was, frankly, fantastic. No nightly emails, no laboriously written Jira comments, no long delays for answers to small questions. I realized that I’d forgotten the pure joy of in-person collaboration: being able to walk over to someone’s desk to work through a problem, and the sheer breadth of exploration that’s possible when you and your team are working so fast that you’re finishing each other’s sentences, laughing at the crazy stuff you sketch - stuff you end up drawing from later for a solution you would never have seen from thousands of miles and a full working day away.

We worked this way for almost four sprints before I had to return to the home office.

And collaboration slowed down. Time and distance began to create communication gaps again. We tried to adhere to rapid iteration as much as possible, but ultimately it was slow going. By this time, our company’s product organization had grown large enough to afford a dedicated IxD team, but they were all located in the home office. UI development slowed to a crawl. We lost time we could have spent testing interactions. In an agile world, we would have chosen to miss a release and just keep working until we felt ready to release. Unfortunately, our product was complex, required significant effort to release, and our customers were unwilling to upgrade to new versions very often. Missing a release was not an option.

In the end, folks worked extra hours and long nights. We took our best guesses on interaction usability, and we released a solution in which we had confidence, but which we all wished we could have tested more.

 

TAKEAWAYS

I believe that a distributed team can best employ an agile methodology by following these principles:

  • Accept that a time gap large enough to prevent you and your team from working together for 4 hours a day is a significant challenge that you must plan around.

  • Kill the rituals that don’t feel useful as soon as they don’t feel useful.

  • Accept that UX research, workflow design and story-mapping provide the foundation for all UI design and implementation of any solution you end up building, and must take place before sprint 0.

  • If you can’t get the full team onsite to work through the first few cycles, then at least get UX there: at this stage, you’re looking at data and workflows, the two areas where UX, engineering and product co-create a solution. Your product manager can review progress daily if they can’t join.

  • Co-locate by role: product manager with UX, development with IxD. Co-locating these pairs will remove a good part of the pain that distance causes.

  • Don’t throw out the baby with the bathwater: not all waterfall processes are evil. Our pre-ticketing exercise seemed heavy, but it paid off in group confidence that we wouldn’t accidentally miss key functionality as we built.

For all the challenges my team and I faced, ultimately, as a team, we benefitted. We have the kind of camaraderie you only get from overcoming challenges together, we’ve all had a chance to travel and work in each other’s countries, and we’ve co-created not only solutions to our customers’ problems but also improvements to the team’s productivity. And we’re agile - as much as we can be.