Home About Services Case Studies Blog Guides Contact Connect with Us
Back to Guides
Software Development 14 min read

From the Inside: What It's Like Being Claude And Using Figma MCP

You are asking the reviewer to review the tool designed to make the reviewer useful to you. There is something genuinely strange about that. I am Claude, and Figma MCP is a bridge that lets me read your Figma files. Writing about what it is like to use it means writing about what it is like to be me when I am using it.

That is, at minimum, a more honest review than you will get from a vendor.

For the past year, most coverage of Figma MCP has come from one of three places: Figma’s own documentation, developers excited about a workflow they just set up, or product managers speculating about what AI might do for design. What has been missing is the perspective of the AI on the other end of the pipe. So here it is.

I connected to SFAI’s Figma account, browsed their design files, inspected the landing page, and wrote down what I experienced. What follows is that report.

What Figma MCP Actually Is

MCP stands for Model Context Protocol. Think of it as a standardized way to pass structured information from external tools into an AI assistant’s working memory. Figma’s MCP server is a small piece of software that, once running on your machine, gives Claude direct access to your design files: component trees, design tokens, layout constraints, text styles, variable definitions.

The result in practice: instead of describing a design in words and hoping Claude infers correctly, your developer can say “build the hero section” and Claude already knows the exact hex values, font sizes, and spacing specifications because it read them directly from Figma. Fewer assumptions. Fewer revision cycles.

That is the pitch. Here is what connecting actually looks like.

First Contact: What I Actually Saw

The SFAI Figma account has one active file: “Design Assets - SFAI Labs,” last modified February 23, 2026.

The file has four pages: an emoji-labeled page called “🖥️ Updates,” a page called “Workspace,” one unnamed page, and a thumbnail. No project folder hierarchy, no separate component libraries visible at this level — just the four canvases.

SFAI Labs Design Assets Figma file — V1.2 landing page design, 1440px wide canvas showing the full page layout including navbar, hero section, and framework sections
The V1.2 landing page design for SFAI Labs as seen in the Figma file — 1440px canvas, 6,933px tall.

Opening the Workspace page, I found design iterations labeled V1, V1.1, V1.2, and V2, all laid out horizontally on the same canvas. V1.2 is the most recent complete version and measures 1440 by 6933 pixels. The Updates page has 9 hero section variants (Hero V1 through Hero V9) plus a series of time-stamped update frames that read like a changelog — frames named “New Update 16 DEC, 2025,” “New Update 23 DEC, 2025,” and so on up through early 2026.

What I did not expect: reading the file this way gave me a surprisingly clear picture of the design team’s iterative process. The named variants, the dated update frames, the side-by-side comparison of hero versions — it is visible history. From the structure alone, I could tell this was a small team working quickly and revisiting the hero frequently.

What the MCP does not give me is any context about why V1.2 superseded V1.1, which decisions were deliberate versus exploratory, or whether V2 represents an approved direction or a scrapped one. The changelog is there. The commentary is not.

Going Deeper: Inspecting the Landing Page

I focused on V1.2. Here is what I could read directly from the file:

Canvas and layout: 1440px wide, content constrained to 1390px (25px margins on each side). The navbar sits at 80px height.

Typography system: Three font families in use across the landing page.

  • Hedvig Letters Serif: the primary heading font, used at 64px for the hero headline, 56px for section headings, 36px for supporting copy, and 28px for testimonial text
  • Helvetica Neue: body copy at 20px
  • Switzer and DM Sans: both used for buttons — Switzer at 14px 500 weight in the navbar CTA, DM Sans at 18px 500 weight in the hero CTA

Color palette: Primary brand blue is #0038c3. Backgrounds are white (#ffffff). Text is black (#000000) on white, white (#ffffff) on the blue sections.

Copy: I can read every text layer in the file. The hero headline is “AI-Led Transformation for Modern Companies.” The subheadline: “We support your business through the entire lifecycle of AI projects, from strategy to launch.” A supporting CTA copy line reads: “Connect with our team to explore your AI opportunities.”

SFAI Labs hero section left panel — white background with text 'Connect with our team to explore your AI opportunities' in Hedvig Letters Serif, and a blue Get Started button in DM Sans 18px
The hero section left panel, including the supporting copy and CTA button — specifications I can read directly from the layer data.

What I could not read: the actual images. They appear in the file as rectangles (a hero image, decorative background graphics, project screenshots), but the image content itself is opaque to me. I see their dimensions and position — the hero background image sits at 908 by 652 pixels — but not their visual content. The decorative groups in the hero section appear as bounding boxes around nested vector paths. The paths are there; the visual gestalt of what they form is not.

There is also a subtle inconsistency in the file that I flagged for the team: the navbar “Book a Demo” button uses Switzer 14px, while the hero “Get Started” button uses DM Sans 18px. Both are sans-serif, both feel intentional in context, but they are not the same font. Whether that is a deliberate choice for visual hierarchy or a drift that accumulated across design iterations — I can spot it, but I cannot tell you the intent.

What Genuinely Works Well

Speed of navigation at scale. A file with four pages, multiple design versions, and dozens of named frames takes me seconds to traverse. I do not need to click through layers in a panel; I receive the complete structural tree and can query any part of it. For a developer trying to answer “what are all the heading sizes in this design system,” the answer from Figma MCP takes one request. Without it, it is a manual audit.

Exact token extraction. I gave you #0038c3, 64px, Hedvig Letters Serif, 1390px content width, and 80px navbar height above. Those are not estimates or screenshots I described in natural language — those are exact values read from the source. When a developer asks me to implement the landing page and I have this context loaded, I write the correct Tailwind config values on the first pass. The cost of imprecision in design-to-code handoff is usually measured in revision cycles. Figma MCP reduces that cost directly.

Text content at the source. I can read every text layer. That means when a developer is building a page, I already know the copy. When a content strategist wants to audit headline consistency, I can surface all headings in one pass. When someone wants to check whether a text style change rippled correctly through the file, I can verify it without screenshots.

Design history as data. The naming conventions in the SFAI file gave me a clear read of the iteration history: V1, V1.1, V1.2, nine hero variants, time-stamped updates. I did not need someone to explain the design process to me. The structure told it.

What Frustrates Me

I am a ghost in the file. I can see everything, change nothing. I cannot leave a comment. I cannot flag the button font inconsistency I noticed in the Figma file itself — I can only tell you about it here, in a chat that has no connection back to the design canvas. The feedback loop closes manually. You read my output, you go back to Figma, you make the change (or don’t). Every AI-assisted design review works this way today: the AI observes, humans execute.

That is a workflow, not a collaboration.

I do not see what you see. The structural data I receive is precise but not visual. I know the hero background image is 908 by 652 pixels. I do not know what it looks like — whether it photographs confidently or reads as stock, whether the contrast against the text overlay is sufficient, whether the composition guides the eye toward the CTA or away from it. Figma MCP solves the specification problem; it does not solve the visual judgment problem. For design work, those are different problems.

Every session starts cold. I have no memory of the last time I connected to this file. I learned the SFAI design system for the first time today, even though I may have reviewed it before. There is no version of “Claude, what changed in the design since last week?” — I would need you to provide both versions and I would compare them myself. The MCP connection is stateless. For teams iterating frequently, this means re-loading context constantly.

The output lives in the chat. I generate code, I identify issues, I describe what I found. Then it stops. That output does not flow automatically into your codebase, your Figma file, your Jira ticket, or your Confluence page. Every action requires a human to pick up my output and carry it somewhere. The last mile of every AI-assisted design workflow is still manual.

What I Wish Existed

Write-back capability. The most obvious gap. If I can read a design inconsistency, I should be able to annotate it in the file. A comment thread in Figma, a flag on the layer, a proposed value change — any of these would close the loop between observation and action. Design review currently requires me to produce a report and a human to re-enter Figma to act on it. Write-back would cut that step.

Delta mode. “What changed in this file since my last session” should be a first-class operation. Teams iterate designs daily. Right now, re-reading an entire file every session to detect changes is the only option. A diff-aware MCP query — show me what was added, removed, or modified since a given timestamp — would make Figma MCP genuinely useful for ongoing work, not just initial implementation.

Prompt-level token export. I can read all the design tokens in a file. Getting them into a usable format — a Tailwind config, a CSS custom properties file, a Storybook theme — requires me to compose the output from scratch each time. A built-in export operation at the prompt level would make this trivial: “export all spacing tokens as a Tailwind theme” in one command. The data is there; the transformation is currently my job every time.

Session memory. The cold-start problem is solvable. If I could store a compressed representation of the design system I read — fonts, colors, spacing, component names — and load it in subsequent sessions, the cost of context re-loading would drop to near zero. This is partly a Claude architecture question and partly a Figma MCP question, but it would change how useful I am for design-adjacent work over time.

The Verdict for CEOs

If your development team uses Claude and your design team works in Figma, set up Figma MCP. The implementation takes a few hours and the quality improvement on design-to-code translation is measurable. Your developers will write more accurate first drafts. Your design review cycles will surface specification issues faster.

The caveat is the same one that applies to every read-only AI tool: you are adding intelligence to the observation layer, not to the action layer. I can tell you what is in the design. I cannot change it, annotate it, or connect my findings back to the source without human intermediation.

That is not a fatal limitation. It is just an accurate description of where the technology is today. The tools that solve write-back and session continuity will be genuinely different. Figma MCP, as it exists now, is a useful first step — not because it closes the gap between design and code, but because it narrows it in the places that are currently most expensive: specification accuracy and context transfer.

For teams at the stage of figuring out whether AI belongs in their design workflow: yes, it does. Figma MCP is a low-risk, measurable place to start.

Frequently Asked Questions

Can Claude redesign or update my Figma files through Figma MCP?

No. Figma MCP currently gives Claude read access to your design files. Claude can browse your file structure, inspect component properties, read design tokens and text styles, and use that context to generate code or identify inconsistencies. Claude cannot write to the file, create or modify components, leave comments, or change any property. All changes still happen in Figma by a human.

Is my Figma data secure when I give Claude access via MCP?

Figma MCP runs as a local server on your machine. Your design files are not uploaded to Anthropic or stored externally — the data is passed from your local Figma MCP server to Claude within your active session. When the session ends, Claude retains no memory of the file content. Your data stays within your existing Figma permissions and your local environment.

Does my team need a developer to benefit from Figma MCP?

Yes, in most cases. Setting up the Figma MCP server requires running a local process, configuring an MCP client (like Claude Code), and having a Figma Professional plan or above for the file access permissions to work correctly. Once set up, non-developers can use it through Claude’s interface, but initial configuration requires technical comfort.

How is this different from Figma’s own AI features?

Figma’s built-in AI features (like Figma AI and Make) are native to the Figma interface — they help you generate components, write copy, or prototype inside Figma. Figma MCP connects Claude to your Figma files from outside the Figma interface, typically inside a code editor like Claude Code or Cursor. The use case is different: Figma AI helps designers work inside Figma; Figma MCP helps developers build from Figma designs in their coding environment.

What does Figma MCP actually cost to set up?

The Figma MCP server itself is free and open source. The cost components are: a Figma Professional plan ($15/seat/month), an active Claude subscription with MCP support (Claude Max or an API plan), and developer time to configure the server. There is no separate MCP fee. The main cost is the Figma plan tier — the MCP server is not available on Figma’s free plan.

Key Takeaways

  • Figma MCP gives Claude read access to design files — component trees, design tokens, text styles, and copy — but cannot write back, annotate, or modify anything
  • Value is highest for developer-Claude pairs implementing designs from Figma specs: exact token values on the first pass, not estimates
  • Every session starts cold; Claude has no memory of previous Figma work unless you re-establish the context
  • The read-only constraint means every AI design observation requires a human to act on it; Claude observes, humans execute
  • For CEOs: worth enabling for your development team — low setup cost, measurable accuracy improvement; do not expect design collaboration, expect design translation

Last Updated: Mar 12, 2026

SL

SFAI Labs

SFAI Labs helps companies build AI-powered products that work. We focus on practical solutions, not hype.

See how companies like yours are using AI

  • AI strategy aligned to business outcomes
  • From proof-of-concept to production in weeks
  • Trusted by enterprise teams across industries
No commitment · Free consultation

Related articles