Responsive Analytics Dashboard UX/UI Design Case Study

"Really great UX/UI consultant. He did outstanding work for us. We will definitely use Arvand again and can recommend him highly." -Kristian, Technical Lead of Build

Defining the Minimum Viable Product

The client approached me wanting a brand-new B2B analytics application designed. Through extensive questioning, I discovered that the business goal of the designs was to communicate and sell the product idea, both to onboard potential users and to pitch investors through presentations. The client had written several documents specifying the product features they wanted. I asked hard-hitting questions to uncover the application's true underlying purpose, the problems it was meant to solve, and the user acquisition and retention strategies, so that I could break the feature list down into must-haves and nice-to-haves. Doing so allowed me to define the Minimum Viable Product, while keeping in mind that some additional features needed to be designed purely for presentation purposes... since the ultimate goal was to sell the idea to investors and potential users. I worked from the client's product spec documents, but questioned and pushed back on the features that did not make sense from a usability and product strategy perspective. Through this process, the client and I reached a final list of features we agreed should be included in the designs, which revealed 3 user types.


Designing Mobile First

Each user type had a separate goal and use case. I determined which device each user type would primarily use and designed each user type's UI for that device, taking any technical limitations into account. I started with the user type whose UI elements would mostly carry over to the other 2 user types, and I designed it mobile first: when the time comes to adapt a design to other screen sizes, it's relatively simple to add elements on bigger devices, but extremely difficult to subtract them.


Brainstorming UX/UI Designs for Displaying Data and Analytics

When designing mobile first, I always sketch pen to paper first, since that way I can run through many variations in a short period of time before committing designs to a computer program, and this case was no different. I tackled what I perceived to be the most complicated UI problem in this app... how to present analytics data. To do this, I had to build deep empathy with the user by immersing myself in their headspace: learning what problems this data would solve, what specific questions they would try to answer with it, and what tools they currently use to address those problems. Some of this was achieved through research, some by talking to the client, and some by speaking with individuals who fit the user persona.
One key tactic I used to break this complicated data down into analytics graphs that were intuitive to the user was to express the data as it would be read in plain English, for example: "Department X overwhelmingly responded 'always', Department Y mostly responded 'never'." Some cases got more complicated, since some required datasets to be compared to each other. For these, I used the same tactic: break down the problem the user is aiming to solve, the question they are aiming to answer, and how they would read the data in plain English.



Designing the App's User Flow Wireframes

Once I was happy with the various methods of laying out the analytics, I proceeded to design the mobile application's information architecture and overall user flow. I did this quickly on paper, since by this point I had a very good idea of how it would look.

At this stage I had all the core UI I needed to move onto the computer. My tool of choice is Sketch (by Bohemian Coding). I converted the analytics UI, as well as the basic user flow, into wireframes for the mobile device. This went through several iterations with the client, as expected. Discussing the wireframes surfaced new use cases, as well as edge cases that needed to be addressed; these edge cases and some of the new use cases were integrated into the wireframes. I also designed the additional features that the 2nd user type would see and experience on mobile.



Converting Mobile UI to Desktop UI

Once we were happy with the mobile designs, I moved on to the desktop use cases and desktop user types. This was essentially a matter of converting the mobile designs to desktop, including additional information and a couple of additional use cases. It was an iterative process of playing with grids and layouts until the result made sense for desktop and for the user's goals. Special attention had to be paid to the hierarchy of information, since in this use case the user has a specific goal in mind and a lot of data to look at on a big screen. I went through several iterations with the client before we had covered all the edge cases and were both happy with the designs, then repeated the same process for the 3rd user type on desktop.



Prototyping the Mobile Analytics Interaction Design

I created a mobile prototype in InVision to make sure the interaction design of the most complicated UI elements was intuitive. The prototyping exercise uncovered a few interaction sticking points that might not have been obvious to the user. These issues were addressed and re-prototyped. You can see the final mobile InVision prototype here.

Analytics Visual Designs

The last step was making it all look beautiful. I asked the client about the emotions and perceptions they aimed to evoke in users, as well as applications they found inspiring. Unfortunately they had none in mind, which pushed me to experiment with a variety of hues, saturations, and color combinations. This iterative process resulted in the final designs seen below.


Get the Most Out of a Usability Test

I will outline the best way to get the most out of usability tests. This is what we have found to be most effective when we run our user tests for clients.


Data Analysis

Using MixPanel, or any event-based analytics tool, we analyze your data to identify usability sticking points (i.e. "what" is wrong). If you don't have any existing data, we run a UX audit. We can then run user tests to determine "why" it's wrong.

UX Research Methods

We design the usability study around the hypotheses that were formed from UX audits, data analysis and just gut feel. The goal is to validate those hypotheses and uncover the "why" behind them. We also identify key performance indicators (usually based on business goals), and test the usability of the features that drive them.

UX Net Promoter Score

A key part of our user testing process is to incorporate Net Promoter Scores, which is a fancy term for the question "On a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?".

We do this to establish a baseline of user satisfaction. We also do it in follow-up tests to quantify usability improvements that are otherwise qualitative in nature. Sometimes this reveals polarizing results that correlate with users' demographics, psychographics, and perceptions.
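The NPS arithmetic itself is simple: the percentage of promoters (scores of 9 to 10) minus the percentage of detractors (scores of 0 to 6). A minimal sketch:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to 100).
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# One promoter, one passive, two detractors out of four responses
print(nps([10, 8, 4, 6]))  # -> -25
```

Tracking the same score across test rounds is what turns a qualitative impression of "better" into a number you can compare.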

Actionable Usability Improvements

We take our findings from the user tests and prioritize the highest impact solutions that will address the findings. We then design new user flows in actionable wireframes, mockups and specs, so that your developers can implement them without any guesswork.

Snapchat's Abysmal Usability and What Led to It


I'm a Senior UX Designer and I've been passively using Snapchat and following its evolution since before it went mainstream (not even trying to sound cool). I first heard about it over the 2012 or 2013 holidays from my cousin, who told me about a new app that was popular in her high school. At the time, there was no chatting feature, no Discover section, no "Story"... it was purely sending ephemeral video or photos to contacts you'd added... then you would message them on iMessage to see if they laughed at your Snap (at least that's how I used it).

This is a screenshot I took back in February 2014, after they released the Story feature. See my original tweet here.



A few days ago, Snapchat CEO Evan Spiegel was quoted as saying "One thing that we have heard over the years is that Snapchat is difficult to understand or hard to use, and our team has been working on responding to this feedback."  So, what led here, and why hasn't it been a real issue until now?


Bad UX Worked in Snapchat's Favor (Until Now)

Today, Snapchat has horrendous usability, but as a UX Designer I'm willing to admit that it has potentially worked in their favor up until now, as a "coolness"/exclusivity factor. If you couldn't figure out how to use Snapchat, you were too old, you just didn't "get it". It was your fault, not the app's. But now the horrendous usability is catching up with them, as new features have been added with little foresight, resulting in a Where's-Waldo style of feature discovery mixed with secretly Googling how to use various features while feeling shame.


Google search data shows that average monthly search queries for instructions on how to use Snapchat are 10x those for Instagram, and 100x those for YouTube, a platform I consider to have great usability, partly measured by the fact that I could teach it to my 87-year-old grandma, who has never used a computer or tablet.

Why Did This Happen?

The big mistake that led here is something I've noticed Instagram be extremely careful to avoid. For example, when Instagram was still just a photo feed and added direct messaging, they did not hide the messaging ability behind a swipe gesture the way Snapchat did. Instagram is not an example of perfect execution, but given the number of features they've added over recent years and their ability to keep the navigational structure and discoverability simple, they deserve credit. When these concerns are considered in design, they shape the company's design culture, and Snapchat's design culture started out as stacking features on top of each other, and it snowballed.

Snapchat never had what I will term a "scalable information architecture", or "scalable navigational structure": a navigation structure that can be added onto while keeping features discoverable and without disrupting existing feature flows. The litmus test for this is:

  1. If New-Feature-X is added, would it in any way interrupt or confuse existing users and their use of existing features or current user flow?
  2. Is the New-Feature-X discoverable?

The way Snapchat added features over the years met only the first criterion, and only for its core feature... for example, the first criterion was no longer met for the Second-Feature once the Third-Feature was added, and this snowballed. It is very similar to regression testing software for bugs; think of it as regression user testing.

As for the second criterion, one big reason for the lack of discoverability is that many of the features are gesture-based, and therefore hidden behind other features. I believe that if user tests had been done, much of this could have been prevented, and in the long term that would have led to higher user adoption among the masses.


600% Increase in KPI from User Test Findings (Case Study)

4.5 star rating, >7 million downloads

Outcome: Increased KPIs by 600%

Note: These designs are from 2013, but the principles guiding the user test, the results, and the actions taken are still extremely relevant today.



UX Key Performance Indicators

Using a combination of data analysis, prototyping, and weekly user testing, I increased Key Performance Indicators by 600% and improved Best App Market's overall usability.

Through data analysis, it was hypothesized that the more searches (e.g. Action-Adventure) a user follows, the more likely they are to become a returning user. As a result, the challenge was to increase the number of "follows" to test this hypothesis in production.
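A hypothesis like this can be checked with a simple segmentation of analytics data: bucket users by how many searches they follow and compare return rates across buckets. A minimal sketch, assuming a hypothetical per-user export (the field names are illustrative, not Mixpanel's actual schema):

```python
from collections import defaultdict

def return_rate_by_follow_count(users):
    """Group users by how many searches they follow and compute the
    percentage of returning users in each bucket.

    `users` is a list of dicts like {"follows": 3, "returned": True}.
    """
    buckets = defaultdict(lambda: [0, 0])  # follows -> [returned, total]
    for u in users:
        bucket = buckets[u["follows"]]
        bucket[0] += u["returned"]  # True counts as 1
        bucket[1] += 1
    return {k: round(100 * r / t) for k, (r, t) in sorted(buckets.items())}

sample = [
    {"follows": 0, "returned": False},
    {"follows": 0, "returned": True},
    {"follows": 3, "returned": True},
    {"follows": 3, "returned": True},
]
print(return_rate_by_follow_count(sample))  # -> {0: 50, 3: 100}
```

If return rate climbs with follow count, the "increase follows" challenge is worth pursuing; it is still correlation, which is why the hypothesis was then tested in production.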


I started brainstorming ideas to increase user exposure to the "follow" button. One of the options I came up with was to place "popular playlists" (playlists were later renamed to "searches") at the top of the user's feed, as well as in the search dialog, with a follow button that toggles on and off next to each playlist.



Usability Test Findings

I wanted to learn how users interact with the existing "follow" functionality of the app, so I tweaked my user tests by adding the following task: "You want to stay updated with the latest 3D Action Adventure games; what do you do?" I observed each user's actions and noticed a big difference depending on whether the user had already interacted with the follow button in some way.

This produced key insights. Users would first tap the follow button while exploring the app, stumbling upon it out of curiosity, not knowing what to expect from it; some users even expected it to act as a search tool due to the vagueness of the icon. The large majority of users, upon tapping it, would instantly exit the resulting popup. The very small number of users who did not close the popup would then type what they wanted to follow into the field prompting them to name their follow, not realizing it did not conduct a new search... because users don't read!

Another observation: even the very tiny percentage of users who did use the functionality as hoped did not understand where to go to view the apps/games they had just followed.

Actions Taken
Using the usability test findings, I brainstormed user interfaces and created the following mocks: they did not require the user to read, did not take the user out of their existing flow, and did not require the user to type or enter any information. One additional touch was a subtle animation, shown only to users who had not previously followed anything, to draw new users to the feature. We launched this and saw a 600% increase in follows.


Adding a Feature or Launching an App with the Scientific Process

In this post I’ll give an overview of how to best utilize data, analytics, and experiments such as A/B testing when launching a new feature. This is what I most often see neglected when designers and stakeholders get caught up in wireframes, animations, and coming up with the next big feature.


Using the Scientific Method

Lean Startup, MVP, Design Thinking, UX Process: at their core, these are arguably all rebranded versions of the scientific method, which too often becomes the last step of the design process or gets neglected entirely. The scientific method is a continuous process involving the definition and testing of a hypothesis through an experiment, resulting in measurable evidence. The evidence is then compared to the predicted outcome, often producing a new hypothesis and a new experiment to test. So, how does this apply to designing a new feature?


Defining your Hypothesis

A feature should always be broken down into its smallest core part, so that its success is easier to measure. On one project I was tasked with designing the UX for a new comments section with the ability to up-vote comments, reply to comments, and tag users in the comments section. The core hypothesis here is that “users will write comments”. If users don’t leave comments in the first place, there will be no comments to read, reply to, tag users in, or up-vote, so it is critical to optimize the commenting experience before adding the rest of the features.

It’s often the case that breaking a feature down to this bare-bones level receives a lot of push-back from stakeholders such as the client, so it is important to weigh the trade-offs, as each feature and each situation is unique. For example, will adding the ‘up-vote’ feature hinder your ability to optimize the core hypothesis of leaving comments? Probably not, if it is designed as a secondary call-to-action and doesn’t conflict with the core hypothesis’s visual hierarchy. Will adding the ability to reply to comments hinder this optimization? Same answer, but now we must also consider that it will slow down development and design, time that could have been spent gathering user analytics data through an earlier launch of a smaller feature, data that could have been used for optimizing. This means your engineering and design resources, and your access to market, are factors too.


Measuring Evidence

Once the core hypothesis of the feature is defined, it should always be paired with at least one quantifiable way to measure its success, through an analytics tool that does event tracking (my preferred tool is MixPanel). These Key Performance Indicators should be defined before wireframes are made; this keeps the designs oriented around the correct business goals and ensures the wireframes aren’t done for the sake of making wireframes. Here are examples of what should be measured:

- What % of users tap the comment button
- What % of users start entering a comment
- What % of users post a comment
- What % of users get to the ‘see all comments’ page

These data points, as with most user analytics data points, should be analyzed separately for different segments, such as new users and returning users, as they can have drastically different experiences. The results will reveal gaping holes and opportunities in the feature’s design and optimization, which should ideally reach a comfortable level in a private beta before the feature is launched at a mass level.
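The per-segment funnel percentages above can be computed from a raw event log. A minimal sketch, assuming a hypothetical (user, segment, event) log; the event names mirror the examples above and are not any particular tool's schema:

```python
def funnel_by_segment(events, funnel):
    """For each segment, report what % of users who entered the funnel
    (fired the first event) reached each subsequent step.

    `events` is an iterable of (user_id, segment, event_name) tuples.
    """
    seen = {}  # (segment, event) -> set of distinct user ids
    for user, segment, event in events:
        seen.setdefault((segment, event), set()).add(user)
    report = {}
    for segment in sorted({s for _, s, _ in events}):
        entered = len(seen.get((segment, funnel[0]), set()))
        if not entered:
            report[segment] = {}
            continue
        report[segment] = {
            step: round(100 * len(seen.get((segment, step), set())) / entered)
            for step in funnel
        }
    return report

EVENTS = [
    ("u1", "new", "tap_comment"),
    ("u1", "new", "start_comment"),
    ("u1", "new", "post_comment"),
    ("u2", "new", "tap_comment"),
    ("u3", "returning", "tap_comment"),
    ("u3", "returning", "start_comment"),
]
FUNNEL = ["tap_comment", "start_comment", "post_comment"]
print(funnel_by_segment(EVENTS, FUNNEL))
```

Running the same report for new and returning users side by side is what surfaces the "drastically different experiences" mentioned above.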

My favorite thing about this process is that you and your coworkers can be on opposite sides of a hypothesis, and the data will determine a real winner. I have been in situations where the stakeholders did not respect the process and the data, and each tried to interpret it their own way, but that’s a bigger cultural issue.

In future posts I will cover how A/B tests and usability tests can be utilized as a part of this process.

iOS White Label App Store Ban: The Easiest Solution, with User Flow


Apple has released new rules at this year's WWDC affecting white-label apps, such as those for events, schools, universities, B2B, and so forth. In short, as part of the 'App Store cleanup', they are no longer allowing companies to have white-label apps in the App Store. Rule 4.2.6 states: “Apps created from a commercialized template or app generation service will be rejected."

In this post I'll lay out the simplest design and implementation fix, which I helped a client with who has hundreds of white-label apps in the App Store for various universities. The idea can be adapted to other types of white-label apps as well.

Simple User On-boarding Change

The fix is simply a user on-boarding change: you include an additional step upon launching the app that asks the user to select their university, event, school, etc. If the selection is location-specific, or if the user's location can be used to guess their selection, use that for a better user experience, but still allow the user to make their own selection.

It is important for the selection to be cached; upon relaunching the app, the user should fall into the same flow rather than having to make the selection on every launch. The user should also have the option to change their selection from the app's home page.
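The launch flow above can be sketched as follows, with Python standing in for the native implementation (on iOS the cache would typically live in UserDefaults); the class and method names are illustrative:

```python
class AppLaunch:
    """Sketch of the white-label on-boarding flow with a cached selection."""

    def __init__(self, cache):
        self.cache = cache  # any dict-like persistent store

    def launch(self, prompt_user):
        """Return the user's university/school selection, asking only
        when nothing is cached yet."""
        selection = self.cache.get("selection")
        if selection is None:
            selection = prompt_user()        # show the on-boarding picker
            self.cache["selection"] = selection
        return selection                     # proceed into the normal flow

    def change_selection(self, prompt_user):
        """Invoked from the home page when the user wants to switch."""
        self.cache["selection"] = prompt_user()
        return self.cache["selection"]

cache = {}
app = AppLaunch(cache)
print(app.launch(lambda: "State University"))  # first launch: prompts
print(app.launch(lambda: "unused"))            # relaunch: cached value wins
```

The key design point is that the picker appears exactly once, while the home-page entry point keeps the choice reversible.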


Case Study: User Testing Script for an iPhone App

Identifying the Usability Sticking Points

This client came to me with an app already in the app store, with poor retention. My first step was to understand the goal of the application by asking the appropriate questions, such as: What problem is your app solving? 

The next step was to identify the key use cases and features a user needs to go through before becoming a sticky user. In this step I also used MixPanel, their go-to analytics tool, to help identify some of these sticky users and their behaviors.

This was followed by a heuristic evaluation to form hypotheses about what the sticking points might be, cross-correlated with any existing data points available through a tool like MixPanel.

Usability Test Methodology

The final step before running the actual user tests was to create the user test methodology and script based on the information from the previous steps. This involved creating tasks for users to perform (e.g. "What would you do if you wanted to replay your song?") without giving any hints or clues as to how to do it. Some of these tasks were just there to throw the user off before the next task; some asked for one thing while I was observing for something else. Here is a subsection of the script, tasks, and questions:

[Briefing: The purpose of this user test is to find out how to make Whispa more user friendly. I’ll be giving you several tasks, and questions. Please think out loud, the more you talk and express your thought process the better. There is no right or wrong.]

[Question: In your own words, how would you describe what Whispa does, to a friend?]

[Task: Create a song with drums and piano]

[Task: Add a guitar to the stage and make its volume 40%]

[Task: Listen to a song someone else created on Whispa]


The goal of the methodology was to cover the major use cases, as well as the major UX sticking points that were discovered or hypothesized in the previously described steps.

It is also important to gather relevant demographic information. For this particular app, one hypothesis was that users who DJ understand the app better than those who do not. Other demographic information gathered covered what apps the users currently use, to determine whether there is a correlation between tech-savviness or social-media-savviness and the Net Promoter Score.

Post-Test Screening Questions

“Are you able to play instruments or DJ?”

“Have you ever used any DJ or music production software in the past? If yes, when and for how long?”

“Which social media tool do you use?”

“How old are you?”

“What smartphone do you own?”

“What are your favorite apps?”


The following is an example of the key discoveries from one user test.



Usability Test Outcome

Based on the first round of user testing, I listed out all the problems, ranked them in terms of impact, and brainstormed solutions.

The final solution was implemented by the client's development team. This was followed by another round of user testing, which determined that the Net Promoter Score increased from 2/10 to 8.5/10 between the two iterations.


5 Things That Could Go Wrong with Your App Development and Launch

So, you have your new wireframes and squeaky-clean UX/UI designs ready to hand to your developer(s), now what?


5 Things That Could Go Horribly Wrong

  1. Engineers not delivering what you expect
  2. Frustrating and seemingly endless back and forth with engineers requiring technical specifications
  3. Engineers requiring additional wireframes and designs due to a lack of understanding or an expanded scope
  4. Post launch: not understanding what users liked or disliked about your product
  5. Post launch: not understanding how users use your product


2 Things You Can Do Right Now

  1. Write out error messages and the corresponding triggers that may lead to these errors (i.e. if a user, who has an existing account, tries to sign up: “Did you mean to login?”, hyperlinking ‘login’ to the login page)
  2. Set up MixPanel to track data analytics in order to monitor and track user behavior
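The first item, writing out error messages with their triggers, can be handed to developers as a simple trigger-to-message table so nothing is invented on the fly. A minimal sketch, with hypothetical trigger names (the "Did you mean to login?" copy comes from the example above):

```python
# Each trigger maps to the copy and the recovery action the designer intends.
ERROR_SPEC = {
    "signup_with_existing_account": {
        "message": "Did you mean to login?",
        "link": {"text": "login", "target": "/login"},
    },
    "network_unavailable": {
        "message": "No connection. Please try again when you're back online.",
        "link": None,
    },
}

def error_for(trigger):
    """Look up the designed error response for a trigger, falling back
    to a generic message so developers never have to write copy themselves."""
    return ERROR_SPEC.get(trigger, {"message": "Something went wrong.", "link": None})

print(error_for("signup_with_existing_account")["message"])  # -> Did you mean to login?
```

Keeping the spec in one table also makes it easy to review every error message for tone and consistency in one place.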