Redesigning Data Interactions for Lens



I worked on a project to improve chart interactions in an energy data analytics platform used by financial professionals, including portfolio managers, brokers, traders, risk managers, strategic planners, and C-suite stakeholders.

Industry

Energy and natural resources analytics

Role

Product designer

Team setup

1 Product designer + 1 Product owner

Timeline

Ongoing, 1+ year


Background

Wood Mackenzie is a global research and consultancy firm that provides data, analysis, and insights across the energy, chemicals, metals, and mining sectors. Trusted by governments, financial institutions, and industry leaders, Wood Mackenzie helps decision-makers navigate complex markets, assess risk, and identify opportunities in a rapidly changing energy landscape. Its platforms combine deep domain expertise with large, complex datasets, making clarity, accuracy, and usability critical to how users explore and interpret information.


Challenge

One key feature, Lens charts, allows users to edit visualizations by changing the X- and Y-axis values to display different datasets. This functionality was accessed via a cog icon, but internal testing revealed that new users struggled to find and use it.

As a result:

  • Users often assumed the charts showed all available data, underestimating the platform’s capabilities
  • Interaction patterns were inconsistent, causing frustration and inefficiency
  • Overall confidence in the platform’s analytical depth was reduced

Goal

Reduce complexity in financial data through thoughtful interaction design and increase customer retention.

18+

Months of collaboration

12+

Testing sessions

+17%

User retention increase

Process

To understand and address the challenges users face on the platform, I followed the Double Diamond framework and Lean UX principles, moving through the key phases of Discovery, Definition, Ideation, and Implementation throughout the project.


1/2 Research & Insights

Research Approach

To investigate the problem, I combined quantitative and qualitative methods:


1. Assumptions smash workshop.

As part of the exploration phase, I facilitated an assumptions smash focused specifically on how users interact with the chart settings of the platform. The session helped surface and challenge internal assumptions around discoverability, control, and user intent when configuring charts.

By mapping assumptions against their potential impact and uncertainty, we were able to identify the highest-risk areas in the current experience and frame them as testable hypotheses. This created a shared understanding across the team and a clear direction for what to validate next through behavioural data and user evidence.


2. Analytics (Amplitude) to validate assumptions, track usage and identify interaction gaps.

Following the assumptions smash, Amplitude was used to ground those hypotheses in real user behaviour at scale. By analysing how users engage with chart settings across different flows, we were able to identify drop-offs, repeated interactions, and underused controls that indicated friction or uncertainty.

This data-led analysis helped validate or challenge our initial assumptions, prioritise the most impactful problems to address, and ensure that design decisions were informed by evidence rather than anecdote.
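As a rough illustration of what this instrumentation can look like, the sketch below uses Amplitude’s Browser SDK; the event name and property shape are hypothetical, not the production tracking plan.

    import * as amplitude from '@amplitude/analytics-browser';

    // Initialise the SDK (the key here is a placeholder).
    amplitude.init('AMPLITUDE_API_KEY');

    // Hypothetical event fired for every chart-settings interaction.
    // Comparing how often settings are opened against how often a
    // configuration is actually applied is what surfaces drop-offs
    // and abandoned configuration attempts.
    export function trackChartSettings(
      action: 'settings_opened' | 'axis_changed' | 'config_applied' | 'settings_abandoned',
      chartId: string,
    ): void {
      amplitude.track('Chart Settings Interaction', { action, chartId });
    }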


3. Session recordings (Hotjar) to observe behavior in real workflows.

After identifying key friction points through Amplitude, Hotjar was used to add qualitative context to the data. Session recordings and interaction heatmaps helped reveal how users navigated the chart settings, where they hesitated, and how confusion manifested in real interactions.

This qualitative layer allowed us to better understand the why behind the behavioural patterns observed in analytics and informed more empathetic, user-centred design decisions.

Evidence

  • The user repeatedly moves the cursor toward the top-right (collapse / expand / download area).
  • There’s hesitation and back-and-forth movement before interacting with settings.
  • User changes “Color by” but then moves away, scrolls, and looks elsewhere.
  • The gear icon and chart/table toggle don’t seem to immediately communicate “this is where you configure the chart.”

Pain points

  • Basic and frequent actions are hidden and hard to find.
  • Low affordance: controls look secondary or decorative rather than actionable.
  • Users are searching for how to change the chart instead of immediately recognizing it.
  • No clear hierarchy: all options feel equally important.
  • Switching modes feels like a hard context switch, not a smooth alternative view.

4. Heuristic analysis.

In parallel, I conducted a heuristic evaluation of the chart settings experience to assess its usability against established UX principles. This review helped identify issues related to clarity, feedback, consistency, and cognitive load, highlighting where the interface diverged from expected patterns and user mental models. The heuristic findings complemented the behavioural and qualitative insights, providing a structured way to pinpoint usability gaps and inform focused design improvements.

Key findings from the heuristic evaluation

  1. Low discoverability of chart configuration controls.
  2. Insufficient feedback after configuration changes.
  3. High cognitive load within the settings panel.
  4. Ambiguity in control labels and terminology.
  5. Inconsistent interaction patterns between chart and table views.
  6. Limited error prevention and recovery.

5. Internal user testing and client interviews to gather direct insights.

Internal user testing & user quotes

  • “I expected to be able to change things directly on the chart, but I wasn’t sure where to go.”
  • “There are a lot of options here… It's not clear which options are interactive, it is a process of trial and error.”
  • “I think this changes the colours, but I’m not totally sure what it’s grouping by.”
  • “Full screen charts are not following the global filters.”
  • “I find it time-consuming completing basic tasks such as changing the X and Y axis.”

2/2 Research & Insights

Key Insights

What we learned about chart configuration


1. Chart configuration controls are not discoverable or self-evident.

Users do not immediately recognize where or how to configure charts. Critical actions are hidden behind low-affordance icons and secondary UI elements, causing users to search rather than act with confidence.

Evidence

  • Cursor gravitation toward the top-right area without decisive action.
  • Hesitation and back-and-forth movements before opening settings.
  • User quotes expressing uncertainty about where to make changes.

Insight

Users expect configuration to be closer to the chart itself and more visibly actionable.

2. Users lack confidence that their actions produce the intended result.

Even when users interact with controls, they are often unsure what has changed or whether the system behaved as expected.

Evidence

  • “I think this changes the colours, but I’m not totally sure what it’s grouping by.”
  • Insufficient feedback after configuration changes (heuristic).
  • Users abandoning settings mid-interaction.

Insight

The interface does not adequately confirm cause-and-effect, leading to trial-and-error behavior.

3. The settings panel creates unnecessary cognitive load for frequent tasks.

Common actions (e.g. changing axes, grouping, or colours) require navigating a dense settings panel where all options appear equally important.

Evidence

  • “There are a lot of options here… it is a process of trial and error.”
  • High cognitive load identified in heuristic evaluation.
  • Repeated interactions and drop-offs in Amplitude.

Insight

The system optimizes for configurability over usability, making basic tasks feel heavy and time-consuming.

4. Users’ mental model favors direct manipulation, not indirect configuration.

Users expect to interact with the chart, not configure it from a separate control surface.

Evidence

  • “I expected to be able to change things directly on the chart.”
  • Users clicking around the chart area before opening settings.
  • Gear icon not perceived as the primary entry point for chart changes.

Insight

There is a mismatch between the product’s configuration model and users’ mental model of “editing the chart.”

5. Chart ↔ table switching feels like a disruptive context change.

Switching views breaks continuity instead of feeling like an alternative representation of the same data.

Evidence

  • Too many clicks required to toggle between a chart and a grid.
  • Inconsistent interaction patterns between chart and table views.

Insight

Users expect chart and table views to share configuration, filters, and mental context.

6. Lack of visual hierarchy obscures what matters most.

Because all controls appear visually similar, users struggle to identify interactive elements and the most common or critical actions are hidden.

Evidence

  • Underused but important controls (Amplitude).
  • Low affordance of key actions (heuristics).
  • Users overlooking primary configuration options.

Insight

Without a clear hierarchy, users must read and interpret instead of recognize and act.

7. Friction compounds across the workflow, not just in isolated moments.

What appears as “minor” discoverability issues accumulate, making routine analytical tasks feel slow and effortful.

Evidence

  • “I find it time-consuming completing basic tasks such as changing the X and Y axis.”
  • Repeated hesitation and backtracking in session recordings.
  • Multiple methods pointing to the same breakdowns.

Insight

The experience doesn’t fail catastrophically; it fails gradually, eroding efficiency and trust over time.

Synthesized insight


The chart settings experience is powerful but opaque.

Users struggle not because the system lacks capability, but because controls are hidden, feedback is unclear, and interaction patterns don’t align with users’ expectations of direct, visual manipulation. This leads to hesitation, trial-and-error behavior, and a sense that even simple tasks require unnecessary effort.


Design Goals

To address these insights, I defined four design goals:


1. Make chart editing discoverable.

Users should clearly understand that charts are editable, without prior knowledge or prompting.

2. Align interactions with users’ mental models.

Chart interactions should allow direct manipulation, consistent with analyst expectations.

3. Preserve clarity as data changes.

Chart titles, subtitles, and data indicators should dynamically reflect the dataset being displayed.

4. Reduce friction and improve task efficiency.

Exploring, filtering, and exporting data should be seamless, minimizing unnecessary context switching.


Design Process

The redesign focused on surfacing key interactions, clarifying affordances, and streamlining workflows:

The design decisions were informed by Hick’s Law, focusing on reducing decision complexity by prioritising common actions, grouping related controls, and revealing advanced options only when needed. This helped minimise cognitive load and improve speed and confidence when configuring charts.
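For reference, Hick’s Law models average decision time as growing logarithmically with the number of equally likely choices:

    T = a + b \log_2(n + 1)

where n is the number of choices, a is a base reaction time, and b is an empirically fitted constant. The practical takeaway for the settings panel: reducing the number of options a user must scan at once yields an outsized reduction in decision time, which is why common actions were promoted and advanced ones progressively disclosed.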


The solutions were guided by usability metrics across effectiveness, efficiency, and satisfaction, ensuring that design decisions addressed task success, reduced effort, and improved user confidence. This approach helped prioritise changes that made core chart interactions easier to complete, quicker to perform, and more satisfying to use.


1. Making chart editing discoverable.

  • Moved the cog icon to the chart header alongside other actions.
  • Promoted X/Y axis changes, dataset tabs, the chart/grid toggle, and historical/forecast options to first-level interactions (modelled in the sketch after this list).
  • Removed the need for users to search hidden menus.
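A minimal sketch of how the promoted, first-level controls can be modelled; the names and fields are illustrative, not the production component API.

    // Illustrative model of the redesigned chart header: actions that
    // previously hid behind the cog icon are now first-level controls.
    interface ChartHeaderConfig {
      xAxis: string;                          // e.g. 'Year'
      yAxis: string;                          // e.g. 'Production'
      datasetTabs: string[];                  // datasets switchable in one click
      view: 'chart' | 'grid';                 // top-level chart/grid toggle
      range: 'historical' | 'forecast' | 'both';
    }

    // Only advanced, rarely used options remain in the settings popover.
    interface AdvancedChartSettings {
      colorBy?: string;
      groupBy?: string;
    }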

2. Aligning interactions with user mental models.

  • Introduced hover interactions between chart lines and the legend.
  • Hovering a dataset highlights the corresponding legend item, and vice versa (see the sketch after this list).
  • Each legend item’s menu button stays hidden until hover, keeping the UI clean.
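A minimal sketch of the two-way hover linking, assuming chart series and legend items share a series id in a data attribute; the selectors and class names are illustrative.

    // Two-way hover linking: hovering a series highlights its legend
    // item, and hovering a legend item highlights the series.
    function linkChartAndLegendHover(container: HTMLElement): void {
      const setHighlight = (seriesId: string | null): void => {
        container.querySelectorAll<HTMLElement>('[data-series-id]').forEach((el) => {
          // Dim everything that does not belong to the hovered series.
          el.classList.toggle(
            'is-dimmed',
            seriesId !== null && el.dataset.seriesId !== seriesId,
          );
        });
      };

      container.addEventListener('mouseover', (event) => {
        const hit = (event.target as HTMLElement).closest<HTMLElement>('[data-series-id]');
        setHighlight(hit?.dataset.seriesId ?? null);
      });

      container.addEventListener('mouseout', () => setHighlight(null));
    }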

3. Preserving clarity as data changes.

  • Introduced visual cues and tooltips for missing data.
  • Dynamic chart titles and subtitles now update based on the current dataset (see the sketch below).
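A sketch of the dynamic-title idea: the title and subtitle are derived from the active configuration rather than hard-coded, so they can never drift from the data on screen. The config shape is assumed for illustration.

    // Derive title and subtitle from the current chart configuration.
    interface ChartConfig {
      metric: string;                          // e.g. 'Liquids production'
      dimension: string;                       // e.g. 'Country'
      range: 'historical' | 'forecast' | 'both';
    }

    function chartHeading({ metric, dimension, range }: ChartConfig) {
      const rangeLabel = range === 'both' ? 'historical and forecast' : range;
      return {
        title: `${metric} by ${dimension}`,
        subtitle: `Showing ${rangeLabel} data`,
      };
    }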

4. Reducing friction and improving efficiency.

  • Consolidated time-related controls (buttons and slider).
  • Redesigned the chart settings popover with clear differentiation between interactive and static elements.

Chart redesign


Impact & Validation

Measuring Success

  • Amplitude metrics confirmed a 12% increase in chart interaction usage and 34% higher task completion rates.
  • Users were able to discover and manipulate datasets more efficiently.
  • Overall platform engagement improved by 17%.

Validation Methods

  • A/B testing compared pre- and post-redesign interaction patterns.
  • User interviews gathered feedback on discoverability and clarity.

Reflection & Learnings

This project reinforced how crucial discoverability and affordances are in complex data tools. Even powerful features remain underused if users can’t immediately see or understand them.

I also learned the value of combining qualitative insights with quantitative metrics to validate design decisions and measure impact. Small changes in interaction placement and visual cues can dramatically improve usability, and continuous testing is essential to ensure the interface meets user expectations.

Finally, collaboration with analysts, PMs, and engineers was vital to ensure solutions were technically feasible and aligned with real user workflows.

