AI Magic ✨
Sentiment Analysis Dashboard
Our research revealed that while competitors offered sentiment analysis features, their approach—using AI to categorize emotions—lacked transparency, leaving users uncertain about the accuracy of the results. Recognizing this gap, the product manager, engineering team, and I developed a more reliable method for classifying emotions, enhancing clarity by providing users with a confidence rating on the sentiment data. This not only improved trust in the analysis but also empowered users with clearer, more actionable insights.
Problem statement
Unily customers wanted a way to track and understand employee sentiment based on their social interactions, such as comments and reactions to content. They needed to measure whether the sentiment was positive, negative, or neutral, and have the ability to export reports for further analysis. This would help companies better understand how their employees feel and make more informed decisions.
Vision
- A dashboard that provides AI-powered sentiment data, filterable over time.
- Useful, configurable charts showing the sentiment breakdown, with delightful interactions to trigger the user's reward response.
- The ability to download each chart as a PNG or SVG.
AI analysis of conversations, comments, and social reactions.
Searchable map of sentiment distribution
Method
01
What is Sentiment exactly?
In the early stages of creating the sentiment dashboard, we dove deep into research—a phase that was as much about discovery as it was about decision-making. We knew from the start that we needed to go beyond surface-level understanding to create something truly meaningful. Our users weren’t just looking for another analytics tool. They wanted a way to cut through the noise and capture genuine, nuanced insights about how people were reacting to their content. This meant that every design choice had to be grounded in a real understanding of both the user’s needs and the complexity of human emotions.
02
The workshops for the sentiment dashboard were conducted in stages, each designed to bring in critical perspectives. First, we worked with the internal team to get everyone aligned and engaged, then with the Customer Success (CS) team to validate and challenge the design, and finally with external customers, including British Airways, EY, and CVS, to ensure the product met real-world needs. Each workshop provided key insights and shaped the evolution of the dashboard in a unique way.
03
The journey from concept to completion of the sentiment dashboard was a dynamic, iterative process—a blend of designing and refining that evolved in tandem with feature development. This was not a linear path but rather a collaborative dance between design and development, each step informing and enhancing the other.
04
As we approached the final stretch of development, we faced a significant challenge. The long-awaited sentiment dashboard was nearly ready, but we weren’t yet confident that the data being displayed was fully reliable.
It was clear that while the dashboard wasn’t ready for a full rollout, we couldn’t afford to delay any longer. That’s when I proposed a solution: launch the dashboard in Beta.
01
Research: AI, Sentiment, Trust

User Interviews: Finding the Core Need
We began by speaking directly with the people who would be using the dashboard. Social media managers, content creators, and digital marketers—they each had their own perspectives, but a common theme quickly emerged. What they were after wasn’t just a snapshot of positive or negative sentiment. They wanted depth. They wanted to know why people were reacting the way they were, and how to trust the data enough to make decisions based on it.
From these interviews, one thing became clear: raw sentiment scores wouldn’t cut it. We needed to design a dashboard that would give context to the emotions behind the numbers, something that could distinguish between a sarcastic “love this” and a genuine expression of joy. This insight drove our decision to delve deeper into emotion classification, moving away from simple "positive" and "negative" labels toward more sophisticated emotional categories.
Conversations with AI and ML Experts
Unlocking the Power of Sentiment Analysis
The conversations opened our eyes to the intricacies of sentiment weighting. The team walked us through how their models analyze language patterns and tone, and how they assign weights to different words and phrases. But there was a challenge: emotions are messy, inconsistent. We needed the model to not just classify sentiment, but to understand confidence in its predictions. This led us to one key decision—the inclusion of a trust score.
Working closely with the AI team, we designed this trust score to give users insight into how confident the AI was about its sentiment classification. If the model was 95% sure a comment was expressing anger, that would be reflected differently than if it were only 60% sure. This gave users a sense of how much weight they could place on a given sentiment score, helping them avoid making decisions based on uncertain data.
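To make the idea concrete, here is a minimal, purely illustrative sketch of how a trust score could accompany a sentiment classification. The function name, labels, and probabilities are hypothetical, not the production model's actual output.

```python
# Hypothetical sketch: a model returns probabilities over emotion labels,
# and the trust score is simply the probability of the top label.
def classify_with_trust(scores: dict[str, float]) -> tuple[str, float]:
    """Return the most likely emotion and the model's confidence in it."""
    label = max(scores, key=scores.get)  # emotion with the highest probability
    return label, scores[label]

# Illustrative model output for a single comment.
prediction = {"anger": 0.95, "joy": 0.03, "surprise": 0.02}
emotion, trust = classify_with_trust(prediction)
print(f"{emotion} (trust: {trust:.0%})")  # → anger (trust: 95%)
```

A 95% "anger" result can then be rendered differently in the UI than a 60% one, so users can see at a glance how much weight to give each data point.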
Psychological Research
Decoding Emotions
Understanding the technical side was only half the battle. Emotions, by their nature, are deeply human and complex. To classify them effectively, we needed more than just algorithmic insight. So, we turned to psychology.
We consulted research on emotional categorization, looking into established models like Plutchik’s wheel of emotions and Ekman’s six basic emotions. But, as we dug deeper, we realized that while these models offered a good starting point, they didn’t fully capture the range of reactions people might have to content online. Online interactions often express mixed emotions—people can be both amused and frustrated, or inspired and critical at the same time.
This insight led to another pivotal decision: our dashboard wouldn’t just assign one emotion per comment. It would show a blend, a more fluid emotional landscape. Comments could reflect 40% happiness, 30% anger, and 30% surprise, for example. This decision allowed us to capture the emotional complexity of online interactions in a way that felt authentic.
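The blended-emotion idea can be sketched in a few lines: normalise a comment's raw emotion scores into percentages that sum to 100. The scores and labels below are invented for illustration only.

```python
# Illustrative sketch: represent a comment's sentiment as a blend of
# emotions rather than a single label. All numbers are hypothetical.
def blend(raw_scores: dict[str, float]) -> dict[str, int]:
    """Normalise raw emotion scores into whole percentages."""
    total = sum(raw_scores.values())
    return {emotion: round(100 * s / total) for emotion, s in raw_scores.items()}

comment_scores = {"happiness": 4.0, "anger": 3.0, "surprise": 3.0}
print(blend(comment_scores))  # → {'happiness': 40, 'anger': 30, 'surprise': 30}
```

The dashboard can then chart this distribution directly, rather than collapsing a mixed reaction into a single, misleading label.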
Bridging the Gap
The Road Ahead
As we wrapped up the research phase, it became clear that this wasn’t just about building a sentiment dashboard. It was about building trust—between the user and the data. Every decision we made, from the inclusion of a trust score to the use of blended emotions, was aimed at giving users confidence in what they were seeing. And as we moved forward, we were excited to continue refining the experience, knowing that our foundation was grounded in both rigorous research and a deep empathy for the users who would rely on it.
This phase of discovery set the stage for everything that came after. It wasn’t just about gathering data—it was about listening, understanding, and making decisions that put the complexity of human emotion at the forefront. And that, in the end, was what would make all the difference.
02
Work, Workshop, Work

The Team Workshop
Engaging Developers in Early Design
This workshop was particularly significant as it marked the first time the newly restructured company brought developers into the design process so early. For many devs, this was uncharted territory—getting involved at the conceptual stage rather than waiting for requirements to be handed down.
We kicked things off with ice breakers to set a collaborative tone and then split the team into groups, encouraging them to brainstorm, ideate, and learn through hands-on activities. They explored what made a dashboard effective, identifying key differences between good and bad design. This wasn’t just a technical discussion; it was about understanding the user experience and how AI would complicate and enhance the data presentation.
The developers' varied expertise brought fresh perspectives, sparking innovative ideas about handling the complexity of using AI to interpret sentiment. The workshop was so engaging that it won over key stakeholders and drastically increased team-wide engagement. It was such a success that it became a talking point across the company, and I was soon asked to conduct similar sessions for other teams.
Workshop with Customer Success
Bringing User Insights to the Forefront
After the team had a chance to flesh out the dashboard design during several sprints, we organized a workshop with the Customer Success (CS) team. The CS team, being closest to the end users, was essential in validating the designs we had developed so far. Their intimate understanding of users’ pain points and habits allowed them to challenge assumptions that had taken root in the design and dev team.
The session revealed a critical issue: because we had been working so closely with the developers, we had drifted into over-refinement. The dev team’s deep involvement had resulted in a dashboard that was sleek, but sometimes overly complex. CS quickly pointed out where we were at risk of building something that suited developers more than the actual users.
This feedback helped us recalibrate, moving away from a developer-led process to refocus on user needs. It served as a vital reminder that the best designs balance technical innovation with real-world practicality.
Workshops with British Airways, EY, and CVS
Ensuring Market Relevance
The final workshop brought together data analysts and content managers from British Airways, CVS, and EY, providing us with invaluable, hands-on feedback from industry professionals who would be the real-world users of our dashboard. I facilitated this session with the goal of testing our refined designs in a live setting and gathering their input on several key aspects: feature set, layout, user flows, and any gaps we hadn’t yet addressed.
Each client brought a different perspective based on their industry and challenges. British Airways focused on real-time analysis and how sentiment data could inform quick decision-making during crises. EY, with a more analytical approach, wanted to understand how they could deepen insights by layering sentiment data with their existing metrics. CVS emphasized ease of use, pointing out that the dashboard needed to cater to a range of technical abilities within their teams.
These external sessions validated much of the work we had done but also highlighted areas for improvement—particularly around customization and scalability. Their feedback gave us clear direction for the final stages of refinement, ensuring that the dashboard would not only be effective but widely adaptable to different industries and user needs.
Each workshop moved us closer to a product that was grounded in solid design, informed by real user behavior, and equipped to handle the complexities of sentiment analysis across different industries.
03
Iterative Design x Development
Iterative Design
Designing with Flexibility
I began by laying the foundation for the dashboard. This involved creating wireframes and high-fidelity prototypes that mapped out the user interface, the flow of information, and interactions with sentiment data. My focus was on crafting a clean, intuitive design that balanced functionality with ease of use. I wanted the dashboard to be both visually appealing and highly functional.
Refining Through Iteration
Sprints for Improvement
Each iteration involved a cycle of designing, building, and testing, with user feedback and performance data guiding the refinements.
I also conducted usability tests and A/B experiments to gauge how different design changes impacted user experience. These insights were invaluable for making data-driven decisions about which features to enhance, which to simplify, and which to discard.
Development
Bringing Designs to Life
During development, I faced challenges such as integrating real-time data processing and implementing sentiment analysis algorithms. Continuous communication with the developers was crucial to address technical constraints and adapt designs to fit the evolving technical landscape.
As features were built and tested, I held regular check-ins to review progress. This allowed me to ensure that the implementation aligned with my design vision and to make necessary adjustments.
A Collaborative Effort
The synergy between my design efforts and the development team’s technical expertise ensured that both aesthetics and functionality were seamlessly integrated. Regular workshops, feedback sessions, and joint problem-solving kept everyone aligned and focused on delivering a product that truly met user needs.
The design and development phases were not isolated stages but intertwined processes that informed and enriched each other. This ongoing dialogue allowed me to create a sentiment dashboard that balanced user needs with technical capabilities. It was a testament to the power of collaborative, iterative design—a product shaped by both creative vision and practical expertise.
04
Testing and the Beta Solution

Navigating the Final Hurdles
As we approached the final stretch of development, we faced a significant challenge. The long-awaited sentiment dashboard was nearly ready, but we weren’t yet confident that the data being displayed was fully reliable. There were still questions about the accuracy of the information, whether the data we were trying to collect was consistently available, and how to navigate certain privacy constraints that arose during testing. On top of that, further testing was necessary to ensure the product could handle real-world scenarios without compromising on quality or compliance.
Lightbulb moment
Yet, our customers were growing impatient. They had been anticipating this feature for months, and the pressure to launch was mounting. It was clear that while the dashboard wasn’t ready for a full rollout, we couldn’t afford to delay any longer. That’s when I proposed a solution: launch the dashboard in Beta.
Win-Win
This approach would allow us to meet customer demands by offering them early access to the product, while also giving us the opportunity to collect valuable data on usage and performance. It was a win-win. Customers would get a sneak peek of the dashboard they had been eagerly awaiting, and we could continue testing, refining, and addressing any issues that arose based on real user feedback. This compromise kept our customers engaged and allowed us to move forward with confidence, knowing we could still perfect the product before its official launch.
Summary
The goal was to build the sentiment dashboard feature from scratch, navigating a dynamic process of iterative design, development, and testing. After engaging the development team early on, running workshops with Customer Success, and validating the design with key clients like British Airways, EY, and CVS, we refined the dashboard to meet both technical and user needs. Despite challenges around data reliability, availability, and privacy constraints, I proposed a Beta launch to address customer impatience, allowing us to gather real-time feedback while continuously iterating on the product. The dashboard is now live, being closely monitored and improved as we gather insights to shape the MVP.
Gallery






