Amplitude Interaction Language

Building an interaction language that encourages collaboration using multi-state chart controls

  • #web
  • #design language
  • #querying controls

Once upon a redesign

I was tasked with leading the complete redesign of our core analytics application at Amplitude. This was a major effort that involved all of the engineering, product, and design teams at the company. We set ambitious goals for ourselves, including building new advanced functionality and making significant improvements to our overall usability.

Well-designed chart controls are essential

Amplitude aims to bring sophisticated data analysis to all members of a product team, regardless of their familiarity with SQL or other data tools. Since all data analysis within Amplitude happens through charts, we realized that the chart viewing and creation experience was key to get right. Furthermore, a chart's definition and controls determine what data is queried and how, so it was essential for us to design them well in order to support our broad spectrum of users.

Simply put, if a user couldn't understand a chart, they weren't going to get value from Amplitude.


User interviews, customer calls, and more

We started off our research by scheduling user interviews with prospective customers, current customers, and users of competing analytics products so we could better understand how people expected to create and understand charts.

Areas of focus during our user interviews

A key part of our interviews was not only asking participants specific questions about how they created and understood queries, but also watching them use their analytics software live in front of us. This allowed us to see firsthand how their mouse travelled, how they managed their windows and tabs, and the workarounds they used to accomplish common and repeated tasks. We also made sure we understood how analytics fit into their daily and weekly workflows. Not all of our users needed analytics every day, so we got a clear picture of the triggers and expected outcomes that brought them into their analytics software.

Key learnings and areas for improvement

Lack of Control Consistency

We learned that many of our existing customers were confused by the lack of consistency between the controls for different chart types. For example, our segmentation and funnel charts shared some common functionality, but the buttons and relationships of their controls were laid out entirely differently.

Different workflows for Creators vs Consumers

We learned that the triggers and workflows for users who wanted to create insights versus consume insights were significantly different. They had similar end goals, but how and why each workflow got started differed based on the various needs of our users.

Confusing Query Semantics

We learned that understanding exactly how a query is structured and how the data is processed was integral for our users to build trust in their analytics software, regardless of the size of the company that created the system. Our chart controls in particular were especially confusing in certain cases and often had our users questioning the credibility of what they were seeing.

The Process

Brainstorming, Sketching, Wireframing

We had many meetings, discussions and brainstorms to explore the solution space. I tried many wild ideas to get conversations started. Over time we discovered the identity of our product by discussing my various iterations and realizing what was important to us.

A small sample of the many wireframes and iterations we explored.

It was challenging not only because of the information design problems at hand but also because of how many test cases needed to be scoped to determine if our solutions were viable.

Live experiments in the wild

As we started to build more conviction, we wanted to begin testing our ideas in the wild and do more than just user interviews. Since these are live interactive controls used in very specific contexts, it was important to test them in a live environment.

Rather than wait for our alpha testing period to begin, which was still a few months away, we decided to ship a set of small ideas around our controls in our existing product. We found an unrelated beta feature that could house an experimental version of our chart controls and give us real feedback from the wild.

A live experiment we ran in production that used natural language instead of property names.

We decided to test an extreme version of our ideas around natural language by replacing traditional property controls with natural sentences. We hoped this would get people talking and, boy, did it ever. We learned a lot about what people like and didn’t like around the natural language controls and realized that the natural sentence needed to live around the traditional form controls rather than become them.

Iterate and then iterate some more

We kept iterating solution after solution. This was one of the most exhaustive explorations I’ve done in my career, and I found every iteration helpful in learning something new. Every version improved based on feedback was meaningfully better, and we continued refining until it met our bar for usability.

I even created a few interactive prototypes in Framer to get people playing with the controls, so that I could watch in real time how they discovered and learned aspects of their behavior.

The Solution

The final solution thoughtfully addressed a number of issues we discovered during the research phase, including the three major learnings around consistency, query semantics, and workflows for creation and consumption.

Modularizing the query to establish consistency

Our design team discovered that all queries could be modularized into three components: the events, the users, and the query metric.

A mock-up showing how we could modularize a query into its events, users, and metric.

Although this seems like a simple conclusion, this was groundbreaking for our interaction language. By learning how people think of queries and the major components they use within their mental model, it allowed us to build a design that reflected real world use. Queries became much easier to parse as they were broken down into how people already thought of them.
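To make the three-module idea concrete, here is a minimal sketch of what such a query model could look like in TypeScript. All names here (QueryDefinition, EventClause, and so on) are hypothetical illustrations, not Amplitude's actual data model:

```typescript
// Hypothetical sketch: every query decomposes into three independent modules.
type Metric = "uniques" | "totals" | "conversion";

interface EventClause {
  name: string;                      // e.g. "Song Played"
  filters?: Record<string, string>;  // optional property filters on the event
}

interface UserSegment {
  description: string;               // e.g. "new users"
}

interface QueryDefinition {
  events: EventClause[];   // what happened
  users: UserSegment[];    // who did it
  metric: Metric;          // how to measure it
}

// Because the modules are independent, the same controls can back every chart type.
const example: QueryDefinition = {
  events: [{ name: "Song Played" }],
  users: [{ description: "new users" }],
  metric: "uniques",
};
```

The value of a shape like this is that each module can be edited, displayed, and validated on its own, which is what makes consistent controls across chart types possible.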

Using Natural Language to explain query semantics

Through our live experiments we learned that users appreciated aspects of natural language being baked right into our controls, but didn’t want common terminology or the actual select controls to be in natural sentences.


This led us to a number of iterations that, along with our events, users, and metric modules, helped us craft a natural sentence that brought the query together in an understandable way. User feedback showed that queries were now far easier to understand, and people knew how to parse the modules in relation to the sentence.
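One way to picture the approach is a sentence that wraps around the structured modules rather than replacing them. This is a hypothetical sketch, not the actual implementation; the function name and sentence template are my own invention:

```typescript
// Hypothetical helper: assemble a readable sentence from the three modules,
// while the modules themselves remain structured form controls.
function describeQuery(q: { events: string[]; users: string; metric: string }): string {
  const events = q.events.join(" and ");
  return `Showing ${q.metric} of ${q.users} who performed ${events}`;
}

describeQuery({ events: ["Song Played"], users: "new users", metric: "uniques" });
// → "Showing uniques of new users who performed Song Played"
```

In the real UI, the interpolated values would be the select controls themselves, so the sentence explains the semantics without the controls becoming free-form text.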

Multi-state controls to support creation and consumption

The final and perhaps most critical aspect of our interaction language came about when we started talking about the need for an edit mode.

It was common practice for many analytics tools to hide the actual query (often written in SQL) behind an edit mode, away from the presentation of the chart. Users then needed to press edit to actually change their chart query. This made the chart cleaner to read, but it didn’t show the user how the chart was actually made and therefore didn’t encourage them to make any edits.

We wanted to do better. I collected our thoughts around our challenges and put together a specifications document that outlined our interaction language in detail.

A sample page from our lengthy interaction language and principles. Entire document available on request.

In the end, we decided not to hide the definition of a chart behind an edit mode, and instead built a multi-state solution activated on hover. When the user wasn’t hovering over the panel, the controls would be in their most readable form and act as a definition. When they moved into a panel, as most users eventually would, the entire panel’s controls would show affordances to indicate interactivity. Finally, when a user moved onto a specific clause in the query, additional controls would be presented.

These states in actual use felt intuitive. Using subtle animations, we were able to make them not feel jarring and the overall effect was understated but clear. With no explicit edit mode, users were still encouraged to edit charts while being able to also comfortably read their definitions.
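The three states described above can be sketched as a tiny state function keyed off hover position. This is an illustrative model only; the state names and function are assumptions, not the shipped code:

```typescript
// Hypothetical three-state model for the panel controls:
// readable by default, affordances on panel hover, full controls on clause hover.
type PanelState = "definition" | "affordances" | "editing";

function panelState(hoveringPanel: boolean, hoveringClause: boolean): PanelState {
  if (hoveringClause) return "editing";      // extra controls for the hovered clause
  if (hoveringPanel) return "affordances";   // whole panel signals interactivity
  return "definition";                       // most readable, acts as the chart definition
}
```

A pure function like this keeps the visual transitions (the subtle animations mentioned above) decoupled from the logic deciding which state the panel is in.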


Often the best kind of feedback for features like these is little to no direct feedback. No one wrote into our support channels praising the controls, but no one complained about them either. They felt natural to use. When we asked users about them directly, none of them had much to say. The controls were no longer a usability issue but instead quickly became a standard the rest of our app had to live up to. Users from a broad spectrum of backgrounds easily understood the various modules and the natural language that connected them. The hover controls weren’t something people even noticed, and they used them for editing and viewing when needed.

Overall, this project was a massive success and something I’m incredibly proud of. We would never have arrived at this final solution if we weren’t obsessed with collecting feedback, testing iterations in the wild, and really getting to the heart of the problems we needed to solve.