Kirubha Sankari Kittusamy

AI Experience Architect

I started in mechanical engineering, working close to how things are actually built.

I later moved into information architecture, then product design, and then into AI systems before they were mainstream.


What that taught me:

The challenge isn’t building intelligence.

It’s designing how that intelligence is expressed, trusted, and acted on.

AI STACK

How I co-build with AI.

AI-native design is not about using AI tools — it is about knowing when your judgment is irreplaceable and when AI should replace your effort.

AI & Prototyping Stack

6. Claude (Anthropic): Primary prototyping & reasoning analysis
5. GPT-4 / OpenAI: Comparative model evaluation
4. Gemini / AI Studio: Multimodal workflow testing
3. Replit: Live AI-integrated prototypes
2. n8n: Automation and agents
1. Figma + AI plugins: Design system & spatial comparison

Gulf of Evaluation — How I Judge AI Outputs
Prototype with real models

I don't mock AI behavior in Figma — I test against Claude, Gemini, and GPT-4 directly. Every interaction pattern in the CapitalOne diagnostic tool was tested with live LLM outputs before finalizing the design.

Evaluate outputs with intent

When designing the reasoning trace format, I tested 4 variants: progressive disclosure, flat evidence cards, tree view, and inline confidence. The tree view won because it preserved the analyst's mental model while surfacing uncertainty without overwhelming the UI — a judgment call, not a preference.

Design for failure modes first

AI misbehaves. I design for it. Confidence thresholds, graceful degradation states, human escalation paths, and 'explain this differently' affordances — these aren't edge cases, they're the core of trustworthy AI design.
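A minimal sketch of that failure-mode pattern: a routing function that decides whether to show an AI answer directly, show it with a caveat (graceful degradation), or escalate to a human. This is an illustration, not code from any project mentioned here; the type names, thresholds (0.8 and 0.4), and disposition labels are all assumptions.

```typescript
// Hypothetical sketch: route an AI response by its confidence score.
// Thresholds and names are illustrative, not from a specific system.

type AiResponse = { answer: string; confidence: number }; // confidence in [0, 1]

type Disposition =
  | { kind: "show"; answer: string }                // confident: show directly
  | { kind: "hedge"; answer: string; note: string } // uncertain: show with caveat
  | { kind: "escalate"; reason: string };           // low confidence: hand off to a human

function route(res: AiResponse, showAt = 0.8, escalateBelow = 0.4): Disposition {
  if (res.confidence >= showAt) {
    return { kind: "show", answer: res.answer };
  }
  if (res.confidence >= escalateBelow) {
    // Graceful degradation: still useful, but labeled as uncertain.
    return {
      kind: "hedge",
      answer: res.answer,
      note: "Low confidence: verify before acting.",
    };
  }
  // Human escalation path: don't show an answer the system can't stand behind.
  return { kind: "escalate", reason: "Confidence below actionable threshold." };
}

console.log(route({ answer: "Flag transaction", confidence: 0.92 }).kind); // "show"
console.log(route({ answer: "Flag transaction", confidence: 0.55 }).kind); // "hedge"
console.log(route({ answer: "Flag transaction", confidence: 0.2 }).kind);  // "escalate"
```

The point of the pattern is that the low-confidence branch returns no answer at all, which is what makes the escalation path a first-class design state rather than an error.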

Extract reusable patterns

One-off UI solutions don't scale. Every project produces a pattern — a confidence indicator component, a reasoning trace template, a handoff signal standard — that gets documented and reused across the platform.

From Making to Deciding.

AI makes execution cheap. That shifts the designer's value from producing outputs to making the right call on what should exist, what direction is correct, and what to discard. I don't just design screens. I build working systems, prototype with real models, and operate closer to product and engineering than most designers are comfortable with.

Deciding, not doing

When AI can generate a starting point in seconds, your value isn't in the making — it's in knowing what's worth building, which direction is right, and what to refine vs. discard. I design at the level of system intent, not screen production. That judgment is what I bring that no model replaces.

Builder, not just designer

I prototype in React and TypeScript, test against live LLM APIs, and deploy working systems — not static mocks. At CapitalOne, the reasoning trace format I chose was tested against real Claude outputs before a single Figma frame was finalized. I know what a context window limit looks like in practice, not in theory.

Systems, not screens

AI design is not just about the interface: it spans model behavior, context handling, confidence thresholds, memory, reasoning patterns, and failure states. I design how the system works: what it knows, when it escalates, how it explains itself, and what happens when it's wrong. The result is reusable patterns that scale across the platform, not one-off solutions.

What it means to work AI-native

Not faster work, but work that goes further.

AI-native means designing with the assumption that intelligent systems are collaborators, not tools.

I prototype with AI early, testing reasoning, behavior, and edge cases before designing the interface.

I design how systems communicate confidence, surface evidence, and return decisions to humans.

And it’s not just about capability; it’s about taste.
What to show, what to hide, and how intelligence should feel in context.

The result isn’t speed.

It’s clarity.

  • Prototypes engineers can feel.
  • Artifacts PMs can sell.
  • Systems experts trust.

Design Philosophy

AI should provide evidence, not just answers.

Good AI design isn’t about making systems feel magical.
It’s about making them legible.

Users should understand:

  • what the system recommends
  • why
  • with what confidence
  • when to override it

Trust comes from transparency, not accuracy alone.

I design for reasoning visibility, confidence signaling, and multiple representations.
Not as features, but as the contract between the system and the human.

Trust is the outcome of clarity.

What People Say

Kirubha is a brilliant designer who joined us during a major product shift. She led several critical experiments and rapid prototyping. She brought energy and enthusiasm for solving hard design challenges that are deeply technical. She is fast, incredibly positive and just a wonderful person to collaborate with. Thumbs up!

Lin Zhang

Director / Senior Manager

Kirubha was a UX Designer for one of the subsurface visualization and interpretation product teams I managed.
She is highly knowledgeable, was always eager to tackle experience design challenges, and seamlessly partnered with the development team and stakeholders. I was impressed with her design expertise, leadership skills, reliability, and her ability to communicate well with a diverse team. Kirubha was a great asset for our team and would be a valuable member of any team.

Seth Brazell

Geologist Product Manager

Kirubha helped me understand what designing for AI really means. Not just prompts or interfaces, but how systems reason, communicate, and build trust with users. That perspective has completely changed how I approach design.

Amy

Junior Designer

Giving Back to the Community

Giving back to the community has become part of who I am since I moved to the US. I have worked with many nonprofit organizations, contributing my design experience and skills, and I continue to do so proudly, knowing that even small contributions benefit them. Education and children are my preferred areas of contribution, which I believe is the best way to support our future generations.

See my Contributions