Triangulating Data for Better User Research

Ivan Čiš

Is there still unexplored territory in your product’s core user experiences that you and your colleagues are not yet aware of? Can complex questions about how those experiences perform actually be answered simply and credibly? Let’s look at how your upcoming user research can gain greater validity by drawing conclusions from several types of user data.

Product Growth as a Strategy

People across your organization often focus on different outcomes, such as OKRs, KPIs, or health metrics. Caught up in the pursuit of team goals and agile delivery roadmaps, they can neglect to look for new opportunities within core features. Yet those core features are the very reason your existing users keep coming back to your product.

At Njuškalo, we found ourselves in a similar situation. Over the past few years, the number of new ads submitted to Njuškalo has remained stable and competitive with our rivals. Over that same period, however, the Ad Submission feature had flown under the radar in terms of thorough analysis, ever since the launch of a major overhaul of the user flow, accompanied by two additional features added exclusively to the mobile apps.

Firstly, sellers using the mobile apps gained the ability to use their cameras for automatic category and price recognition of their items. Secondly, when users were prompted to enter an Ad Title, a data science model would suggest an ad category based on the entered title.

UX as a Journey, Not a Destination

On the one hand, our friends and colleagues viewed the photo recognition functionality as an impressive and technically advanced feature, even though they seldom or never actually utilized it in real life. On the other hand, our Customer Support team occasionally shared anecdotes about users struggling to locate the Ad Submission feature within the mobile apps.

In the previous year, the mobile app stakeholders questioned whether those new features were delivering any real value. As a result, during our product growth workshop, which focused on increasing the retention of new mobile app users, we evaluated the proposed ad submission user research project. It was judged highly important but relatively low in urgency, so it was scheduled as a planned activity for the upcoming months. This suited us well, as it allowed us to start the project with desk research. And so we did.

Triangulating Insights

Since ad submission is one of the most significant parts of Njuškalo, it was imperative to employ a diverse set of methods to triangulate various insights and distill them into clear and concise findings:

  1. Our initial desk research involved a comparison of our own heuristics, user flows, and website traffic with those of our competitors.
  2. We conducted a comprehensive analysis of quantitative data on a larger scale, which revealed a variety of emerging patterns.
  3. To gain a deeper understanding of user behavior and context, we engaged with a select group of our actual users through in-depth user interviews and usability tests.

Quantitative Data (the Unknown Knowns)

When deciding from which personas we could learn the most, we opted to focus our research efforts on two distinct groups: automotive and marketplace classified ad sellers. To enhance the precision of our study, we further categorized these personas based on criteria such as their recent ad submission frequency (first-time or frequent) and the platforms they predominantly utilize (mobile websites or mobile apps).

Upon reviewing the data retrieved by our data warehouse team, a notable trend emerged. First-time sellers were relatively evenly split in their choice of platform for posting their first ads. However, by the time sellers reached their fifth or tenth ad within the past year, the numbers shifted decisively toward the mobile apps. In short, frequent sellers overwhelmingly prefer the mobile apps as their platform of choice.
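As a rough illustration of this kind of segmentation (the seller IDs, submission log, and the threshold for “frequent” below are invented for the example, not Njuškalo data), a cross-tabulation of sellers by submission frequency and platform can be sketched in a few lines of Python:

```python
from collections import Counter

# Toy ad-submission log: one (seller_id, platform) pair per submitted ad.
# Purely illustrative values, not real data.
submissions = [
    (1, "mobile_app"), (1, "mobile_app"), (1, "mobile_app"),
    (2, "mobile_web"),
    (3, "mobile_app"), (3, "mobile_app"),
    (4, "mobile_web"),
]

# Total ads per seller in the period
ads_per_seller = Counter(seller for seller, _ in submissions)

def segment(seller_id: int) -> str:
    # "Frequent" here means two or more ads in the period (assumed threshold)
    return "frequent" if ads_per_seller[seller_id] >= 2 else "first_time"

# Cross-tabulate segment x platform
crosstab = Counter((segment(s), platform) for s, platform in submissions)
```

With real warehouse data the same cross-tab would make the platform shift among frequent sellers directly visible.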

The user flow on the mobile apps differs from the website’s in its first step: a photo recognition feature that suggests the category and the price of the product. When we analyzed the distribution of clicks on that initial screen, the majority of them (90%) were actions to close or skip this step.

Upon visualizing the funnel data in a graph, our analysis revealed another pattern: the majority of drop-offs occurred within the initial steps, where the previously mentioned photo recognition feature was introduced, as well as on the subsequent screen within the mobile apps. A similar drop-off was observed on the mobile website.
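A step-by-step funnel like this can be summarized by computing the drop-off rate between each pair of consecutive steps. The step names and counts below are made up for illustration; only the shape of the calculation matters:

```python
# Hypothetical per-step completion counts for an ad-submission funnel
# (invented numbers, not Njuškalo data)
funnel = [
    ("photo_recognition", 10_000),
    ("category_selection", 6_200),
    ("ad_details", 5_900),
    ("photo_upload", 5_500),
    ("publish", 5_300),
]

# Drop-off rate between each consecutive pair of steps
drop_offs = {
    f"{a} -> {b}": 1 - n_b / n_a
    for (a, n_a), (b, n_b) in zip(funnel, funnel[1:])
}

for transition, rate in drop_offs.items():
    print(f"{transition}: {rate:.0%} drop-off")
```

Plotting or printing these per-step rates makes it obvious where in the flow users give up, which is what surfaced the photo recognition screen in our case.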

What about User Needs?

How could we understand – or to be more precise, “triangulate” – user pain points and identify areas for improvement without interacting with real users in person? To assess the user flow performance in reality, we needed to arrange usability tests and interviews with the previously mentioned personas.

Qualitative Data

After defining the interview discussion guide and the usability test plan, we conducted 7 sessions. We then compiled the insights obtained from these sessions in Miro and organized them using an affinity mapping approach, assigning color-coded severity levels to each insight.

As for the choice to conduct sessions with only 7 users, this decision was influenced by UX research industry standards, which are rooted in scientific studies stating that testing with only 5 participants is typically sufficient to uncover 85% of the usability problems that impact 1 in 3 users.
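That rule of thumb comes from the classic problem-discovery model, in which the chance of observing a problem at least once across n sessions is 1 − (1 − p)^n for a problem affecting a fraction p of users. A quick sanity check of the figures cited above:

```python
def discovery_rate(p: float, n: int) -> float:
    """Probability that a usability problem affecting a fraction p of
    users is observed at least once across n independent test sessions."""
    return 1 - (1 - p) ** n

# A problem hitting 1 in 3 users, tested with 5 participants:
print(f"{discovery_rate(1/3, 5):.0%}")  # prints "87%"
```

With p = 1/3 the model gives roughly 87%, in line with the commonly cited ~85% figure (Nielsen’s original estimate used p ≈ 0.31).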

High Severities

Based on the findings of our sessions (highlighted in red), it became evident that sellers using mobile apps encountered difficulties in locating the Ad Submission feature. Furthermore, when they did manage to access it, typically through the “+” icon, they often remained unaware that they had selected the Ad Submission feature.

During Step #2, which involves selecting the ad category, the experiences varied among our users. For two of our users, the category suggestion feature, which is based on the ad title, worked seamlessly and without any issues. However, for two other users, this feature did not work as effectively. They had to manually navigate through a three-level selection, which proved frustrating for them because they couldn’t determine to which category their item should be added.

Additionally, after each usability test, sellers consistently told us that they weren’t taking advantage of photo recognition: rather than using it to capture images, they chose to upload pictures directly from their photo gallery apps during Step #3.

These insights provided us with a reliable understanding of our users’ needs, empowering our cross-functional team to take the next steps in order to enhance this section of Njuškalo for our users’ benefit.

Keep an eye out for future releases.


Triangulation doesn’t necessarily require the elaborate process outlined here; instead, let this case study serve as a reminder to seek verification from multiple angles and bolster your effectiveness as a UX detective. By collecting and interpreting data with more than two methods, your research can produce results that are more reliable and valid, enabling your organization to keep a sharp eye on user needs and to continually improve your users’ experiences through more informed and effective decisions.