Language is successful because it allows people to share concepts and find a common understanding. Creating a shared set of definitions and processes allows teams to arrive at common understanding faster and with greater confidence. People feel stronger ownership over their work when they have a part in choosing how it's done.
I've worked at many companies with wildly different process and responsibility structures, but a common theme I've found is that people sometimes get stuck in the cycle of "that's how my predecessor did it, so we do it that way, but I can't tell you why." Every person works differently. Business needs change. Team sizes grow. The process that worked for your four-person team should perhaps be reconsidered for your eight-person team, so it's important to discover what methods work best for everyone and create a process around them.
Below, I'm sharing a collection of definitions and processes that my co-workers and I have created together. Perhaps your team will find some of them helpful.

If you want to discuss any of these with me, or want to show me another way that we can be solving problems, let's chat! 

Just remember, no process lives in a vacuum. While some of these might be improved, they probably evolved under many other conditions that aren't captured here.

The Problem: Sizing & Feature Creep

When features appear on the roadmap, my team is asked for a time estimate. My response is: well, it can take as much or as little time as you'd like; it just depends on how good you want it to be. Does this new experience get tacked onto an existing page and just open in a modal? Or do we take a look at our IA and consider re-combining other pages in the navigation or menu to give it a better position in the app hierarchy? What kind of testing do you want to do before design to establish confidence in the feature we're building?


The Process: The Star System

Assigning a star level to a project gives us a definition of the effort we expect to put into it. It keeps us from overthinking a small project so we can ship quickly. It also gives structure to medium and larger projects so we can better estimate the time needed for usability testing. Lastly, it gives definition to the status of an in-flight project, since being "halfway" through a one-star project vs. a three-star project is comparing apples to oranges.


The Problem: Fidelity & Test Results

When a test doesn't go as well as you'd hoped, can you be sure what caused it? Was the experience confusing? Was the feature not needed? Was the CTA just not catching the user's eye? We want to move quickly, so we keep our tests as lightweight as possible... but are we sacrificing some other part of the experience by doing so?

At WorldWinner, we struggled with testing quickly. Legacy processes led nearly everything to be tested through the A/B process instead of other UX research methods. This meant features had to be fully built by engineering before we ever got user feedback.


The Conversation: How Do We Go Forward?

The image below isn't a solution; it was a tool that helped us arrive at a common language. We needed to look at our processes and find the strengths and weaknesses in how we were exploring new features.

The high-level answer we arrived at was this:

  1. If we are looking for qualitative data or to discover player comprehension, we should run a user interview or usability test.
  2. If we are looking for quantitative data or to observe player behavior, we should run an A/B test.
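If it helps to see the two rules side by side, here's a playful sketch of that decision rule as code. The function name and goal labels are mine, purely illustrative, and not part of any real tooling we used:

```python
def pick_research_method(goal: str) -> str:
    """Map a research goal to a method, following the two rules above.

    The 'goal' labels are illustrative, not a formal taxonomy.
    """
    methods = {
        # Qualitative data / player comprehension -> talk to players directly
        "qualitative": "user interview or usability test",
        # Quantitative data / player behavior -> measure in production
        "quantitative": "A/B test",
    }
    return methods.get(goal, "clarify the research goal first")
```

The point of writing it this way is that the method follows from the question, not the other way around: decide what kind of data you need, and the test format falls out.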


Skills UX Process – AB Comparison