Series A Startup • SHIPPED 2024
Note: To comply with a Non-Disclosure Agreement (NDA), the company name, specific product names, and all financial/operational data in this case study have been anonymized. The UI structures, UX process, and design decisions accurately reflect my original work.
Timeline
3 Months (Sept 2024 – Nov 2024)
Role & Team
Company
A Series A startup building the world's fastest commercial EV chargers.
Responsibility
As Lead Designer, I directed the product strategy and UX overhaul from legacy dashboards to Pulse, a natural-language AI agent. I owned the end-to-end conversational UX, dynamic visualization routing, and the 'Trust Layer' that drove adoption across executive teams.
Skills
Product Strategy
Systems Thinking
AI/LLM UX
Interaction Design
Prompt Engineering
AI Tools Used
Cursor (for prototyping semantic SQL layers), ChatGPT (for generating synthetic test data and transcript synthesis)
Impact

~75%
drop in weekly ad-hoc engineering data requests.
< 30 sec
avg. time from question to answer (down from 2–3 days).
5 teams
onboarded and actively using Pulse within 3 months of launch.
Overview
At Exponent Energy, we build the world's fastest commercial EV chargers. But as we scaled our network across three states, our operational speed hit a wall. Five different teams needed data, but it was locked across four disconnected systems (BMS, EVSE, Field Ops, CRM). The only people who could connect them were our data engineers.
Problem
We initially built standard dashboards using Redash. They solved the predictable, recurring questions and cut engineering requests by 40%. Then the harder questions came back: whenever a question required cross-referencing two systems (e.g., "Did the v2.3 firmware update cause cell overvoltage?"), the dashboards were useless.
The Bottleneck: Teams had to wait 2–4 days for a data engineer to write complex SQL joins.
The Literacy Gap: We tried teaching teams SQL, which resulted in an 8.3% success rate. If a tool requires expertise to use, it’s the wrong tool.


"I'm basically a human cron job. That's not what I should be spending my time on."
- Akash, Data Engineering Lead
The Request Log - June 2024 (six weeks after Redash dashboards launched)

The Engineering MVP: A Technical Success, a UX Failure
To automate these requests, our data engineers mapped our database to a semantic layer and plugged in an LLM. It was brilliant backend work, but they launched it with a basic ChatGPT-style interface.
The Comprehension Gap: The LLM output raw text and markdown tables. Business teams found it hard to read and couldn't interact with the data.
The Trust Gap: Hallucinations or missing context made executives distrust the numbers immediately.
The Blank Canvas: Users stared at an empty chat box and didn't know what to ask.

Solution
I led the design of Pulse, an omnipresent AI-powered data agent. By combining a passive, multi-turn investigation canvas with an LLM backend, Pulse turns natural language into cross-system SQL. Accessible instantly via a global ingress, Pulse proactively suggests role-based queries and returns media-rich visualizations without requiring users to write a single line of code.
RESEARCH & PROTOTYPING
Standard UX research wasn't enough. I ran 14 contextual sessions across 5 teams, sitting with people during their actual work. But I also had to bridge design, engineering, and AI logic.
Domain Interviews
Mapped each team's daily 'wake-up questions' vs. event-triggered deep-dives. 14 sessions across 5 teams.
Guardrails + Evals
Worked with the ML engineer to define what Pulse should NOT answer and established accuracy eval criteria.
AI Pipeline Review
Sat with data engineers to analyze 47 failed prototype queries. Built the semantic layer from their schema knowledge.
Prototyping with AI: I used Claude to synthesize transcripts, and built interactive React prototypes using Cursor to test real API latency, JSON parsing and loading states with users, rather than relying on static Figma screens.
DESIGN
If they can't trust the number, the product is dead.
The Problem: During testing, a business lead saw a revenue number that was slightly off from his spreadsheet. He immediately stopped trusting the tool. The AI wasn't wrong; it was just omitting a specific financial segment. But the raw MVP UI didn't explain that.
The Solution: I designed a strict 4-zone UX hierarchy that forces transparency. The AI's interpretation acts as a "gate" before the result, and a source footer explains the data provenance.

Users didn't know how to talk to the database.
Pure conversational AI failed in early testing. Faced with the MVP's empty search bar, 6 out of 8 users froze.
The Solution: Role-Based Prompt Chips
I replaced the generic search bar with contextual, frequency-ranked prompt chips based on the user's role. It didn't limit what they could ask; it taught them what was possible to ask.
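A minimal sketch of how frequency-ranked, role-based chips could be modeled. The roles, prompts, and counts below are illustrative placeholders, not the actual Pulse data:

```typescript
// Hypothetical sketch: prompt chips ranked by how often each role
// actually asked a question, so the empty state teaches by example.
type Role = "ops" | "sales" | "support";

interface Chip {
  prompt: string;
  frequency: number; // times this question was asked by this role
}

// Illustrative data only — real chips would come from the request log.
const CHIPS_BY_ROLE: Record<Role, Chip[]> = {
  ops: [
    { prompt: "Which chargers faulted overnight?", frequency: 42 },
    { prompt: "Uptime by city this week", frequency: 31 },
  ],
  sales: [{ prompt: "Top accounts by sessions this month", frequency: 18 }],
  support: [{ prompt: "Look up a vehicle ID", frequency: 55 }],
};

// Surface the top N chips for the signed-in user's role.
function chipsFor(role: Role, n = 3): string[] {
  return [...CHIPS_BY_ROLE[role]]
    .sort((a, b) => b.frequency - a.frequency)
    .slice(0, n)
    .map((c) => c.prompt);
}
```

Because the chips are ranked by observed frequency rather than hand-picked, the empty state stays honest about what the tool is actually used for.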
Bridging the gap between human language and database schemas.
People ask for "bad chargers." Databases require fault_rate > 5%.
The Solution: Query Reframe & Graceful Failures
Instead of returning a dead-end error or dumping raw data, I designed a disambiguation flow. If a user types vague terms, Pulse pauses, exposes its mapping, and asks for confirmation. This feature dropped queries needing engineer follow-up from 40% down to 12%.
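The reframe step can be sketched as a term-to-filter mapping that is surfaced for confirmation before any SQL runs. All names here (`TERM_MAP`, `reframeQuery`) are illustrative assumptions, not the production implementation:

```typescript
// Hypothetical sketch of the disambiguation flow: vague business terms
// map to concrete schema filters, and the interpretation is exposed
// to the user for confirmation instead of silently executing.
type Reframe = {
  matched: string;      // the vague term Pulse recognized
  filter: string;       // the concrete schema condition it maps to
  confirmation: string; // what the user sees before the query runs
};

const TERM_MAP: Record<string, Reframe> = {
  "bad chargers": {
    matched: "bad chargers",
    filter: "fault_rate > 0.05",
    confirmation:
      'Interpreting "bad chargers" as chargers with a fault rate above 5%. Proceed?',
  },
};

// Returns the exposed mapping, or null if no vague term was detected.
function reframeQuery(input: string): Reframe | null {
  const term = Object.keys(TERM_MAP).find((t) =>
    input.toLowerCase().includes(t)
  );
  return term ? TERM_MAP[term] : null;
}
```

The key design choice is that a `null` result is not a dead end: it just means the query passes through unreframed, while a match pauses execution until the user confirms.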
Real analysts never ask just one question.
Our V1 used a full-page "Insight Canvas" that replaced the previous result every time a user asked a follow-up. Users lost their investigative thread entirely.
"I found the answer to my first question and it opened three more. But every time I clicked a suggestion, the first result disappeared. I lost track of what I'd found." - Ops Lead
The Solution: Thread Memory
I pivoted the core architecture to a chat-thread model. Previous queries persist and scroll up. Users can say "compare that to Hyderabad," and Pulse understands the context.
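One way to sketch the thread-memory model: each turn is appended to a persistent history, and the full history is handed to the query generator so follow-ups like "compare that to Hyderabad" can resolve. This is an assumed structure for illustration, not the actual Pulse architecture:

```typescript
// Hypothetical sketch: a thread keeps every prior turn so context
// carries forward and earlier results scroll up instead of vanishing.
type Turn = { question: string; sql: string };

class Thread {
  private turns: Turn[] = [];

  // sqlFor stands in for the LLM call; it receives the full history
  // so referential follow-ups ("that", "those chargers") can resolve.
  ask(question: string, sqlFor: (q: string, history: Turn[]) => string): Turn {
    const turn = { question, sql: sqlFor(question, this.turns) };
    this.turns.push(turn); // persist — V1's mistake was replacing this
    return turn;
  }

  get history(): readonly Turn[] {
    return this.turns;
  }
}
```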
Dynamic UI for distinct operational roles. Closing the "Direct ChatGPT" gap.
Standard LLMs return Markdown text. But data isn't a conversation; it requires visualization. I collaborated with engineering to alter the system prompt. Instead of text, the AI now returns structured JSON with an "intent tag."
My UI reads this JSON and automatically routes it to the optimal React component (Ranked Table, Line Chart, Profile Card, etc.).
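The routing step can be sketched as a lookup from intent tag to component name. The tag values and component names below are assumptions based on the examples in this case study, not the real schema:

```typescript
// Hypothetical sketch of intent-tag routing: the LLM returns structured
// JSON instead of markdown, and the UI picks a React component by tag.
type Intent = "ranked_table" | "line_chart" | "profile_card";

interface PulseResponse {
  intent: Intent;
  data: unknown; // payload shape varies per intent
}

// Map each intent tag to the component that should render it.
const COMPONENT_FOR: Record<Intent, string> = {
  ranked_table: "RankedTable",
  line_chart: "LineChart",
  profile_card: "ProfileCard",
};

function routeResponse(raw: string): string {
  const res = JSON.parse(raw) as PulseResponse;
  // Degrade gracefully: an unknown tag falls back to a plain table
  // rather than breaking the render.
  return COMPONENT_FOR[res.intent] ?? "RankedTable";
}
```

The fallback matters as much as the routing: a model that emits an unrecognized tag should still produce something readable, not a blank canvas.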

Context-Aware Exports
When Sales asked for data "for a deck," the semantic layer detected it, appended industry benchmarks, and surfaced export buttons.

Profile Cards
When Customer Support typed a vehicle ID, the UI didn't return a table; it routed to a comprehensive 2x2 profile card with quick-action buttons.

Reflection
Define the semantic layer before the interface
"Session" meant a charge event to Ops but a monitoring cycle to Hardware. A business glossary is deliverable zero.
Skeptics are your best test group
The business lead burned by the billing discrepancy gave us the feedback that actually built the trust layer.
Design for failure states in parallel
Two ops members who hit cold errors in week one didn't come back for three weeks. Internal tools have no acquisition funnel; first impressions matter.
Modes need hard walls
A monitoring home screen should have zero investigation affordances. A conversation thread should have zero ambient monitoring data.