PromptGPU


Natural language prompts are becoming a universal interface. We ask AI to write code, generate images, even design interfaces. But what about GPU shaders? Shaders are notoriously difficult to write—they require deep graphics knowledge, WGSL or GLSL fluency, and iterative debugging on unfamiliar APIs. What if you could describe a visual effect in plain English and watch it render in real-time on your GPU, no shader background required?

That's PromptGPU.

The Problem

Shader programming has a high barrier to entry. A developer wanting to create real-time visual effects faces:

  1. Language complexity — WGSL, GLSL, HLSL—each has its own syntax, quirks, and performance considerations
  2. Graphics API learning curve — WebGL, WebGPU, Metal—understanding device initialization, buffer management, pipeline states
  3. Mathematical abstraction — Trigonometry, vector math, noise functions aren't second nature to most developers
  4. Slow iteration — Traditional shader development means compile, link, render, debug, repeat
  5. Browser fragmentation — Not all browsers support the latest GPU APIs

For creative developers and generative AI enthusiasts, this friction is a blocker. You have an idea. You can't quickly test it. So you don't build it.

The Solution: Prompt → GPU → 60fps

PromptGPU removes the friction. It's a browser-based shader playground where:

  1. You describe what you want visually ("swirling pink noise with blue stripes")
  2. Claude or GPT-4o writes a production-ready WGSL shader
  3. Your GPU renders it at 60fps in the browser
  4. You iterate through dialogue—refine colors, adjust speed, add interactivity

No shader knowledge required. No command-line tools. Just a prompt and a canvas.

Architecture: Minimal, Modular, Fast

The tech stack reflects a design principle: move heavy lifting to the GPU, keep the browser lightweight.

The Stack

  • Frontend: Next.js 16 with App Router and Turbopack for fast rebuilds
  • UI State: Zustand for global store (model selection, chat history, current shader)
  • Styling: Tailwind CSS 4 + IBM Plex Mono for that terminal-hacker aesthetic
  • GPU: WebGPU API (Chrome/Edge 113+) + WGSL shading language
  • AI: Vercel AI SDK powering both Claude and GPT-4o
  • Validation: Zod for schema validation of AI outputs

The Engine Layer

The GPU engine lives in engine/ and is deliberately minimal, with zero external dependencies. That keeps the playground fast to load and simple to maintain.

Device Management: Initialize WebGPU adapter and device, with graceful fallback for unsupported browsers.
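The fallback path can be sketched roughly like this (a hypothetical `initDevice` helper; the engine's actual function names may differ):

```typescript
// Hypothetical sketch of WebGPU initialization with graceful fallback.
// `gpu` is `navigator.gpu`, which is undefined on browsers without WebGPU.
async function initDevice(gpu: any): Promise<any | null> {
  if (!gpu) return null;                       // no WebGPU at all: caller shows fallback UI
  const adapter = await gpu.requestAdapter();  // may resolve to null (e.g. blocklisted GPU)
  if (!adapter) return null;
  return adapter.requestDevice();              // the GPUDevice the renderer uses
}
```

Checking both `navigator.gpu` and the adapter matters: a browser can expose the API yet still refuse to hand out an adapter for a given GPU.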

Renderer: Renders a fullscreen quad (just 6 vertices, no vertex buffer overhead). Manages the uniform buffer with time, resolution, mouse position, and deltaTime. Handles RAF loop and shader compilation error extraction.
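The bufferless quad works because the vertex stage can derive clip-space positions from the vertex index alone, so no vertex buffer is ever bound. Here is an illustrative TypeScript mirror of that WGSL `@builtin(vertex_index)` trick:

```typescript
// Two triangles whose six vertices cover the full [-1, 1] clip-space canvas.
// In the real shader these coordinates live in a WGSL constant array indexed
// by @builtin(vertex_index); this sketch just shows the geometry.
const QUAD: Array<[number, number]> = [
  [-1, -1], [1, -1], [-1, 1],   // triangle 1 (bottom-left half)
  [-1, 1],  [1, -1], [1, 1],    // triangle 2 (top-right half)
];

function quadVertex(i: number): [number, number] {
  return QUAD[i % 6];
}
```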

Shader Runner: Tracks mouse position in canvas coordinates and converts to normalized [-1, 1] space. The injectParamsPreamble() function auto-adds boilerplate if the LLM omits it, keeping prompts focused on visual logic rather than syntax.
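The normalization step might look like this sketch (the engine's exact axis convention is an assumption here; Y is flipped so "up" is positive, matching clip space):

```typescript
// Map canvas-pixel mouse coordinates into the [-1, 1] space shaders consume.
// Assumed convention: Y flipped so the top of the canvas is +1.
function normalizeMouse(x: number, y: number, width: number, height: number): [number, number] {
  return [
    (x / width) * 2 - 1,        // 0..width  -> -1..1
    -((y / height) * 2 - 1),    // 0..height ->  1..-1 (flip to GPU convention)
  ];
}
```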

The entire rendering pipeline fits in ~300 lines of TypeScript. No bloat.

Uniform Buffer Layout

Every shader gets access to 32 bytes of built-in parameters:

  • offset 0: time (f32)
  • offset 8: resolution (vec2f)
  • offset 16: mouse (vec2f)
  • offset 24: deltaTime (f32)
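The gaps in those offsets come from WGSL alignment rules: vec2f must sit on an 8-byte boundary, so 4 bytes of padding follow time, and the struct is padded out to 32 bytes total. A sketch of packing this block on the CPU side:

```typescript
// Pack the 32-byte uniform block described above into a Float32Array
// ready for device.queue.writeBuffer(). Padding slots are left at zero.
function packUniforms(time: number, resW: number, resH: number,
                      mouseX: number, mouseY: number, deltaTime: number): Float32Array {
  const buf = new Float32Array(8);   // 8 floats x 4 bytes = 32 bytes
  buf[0] = time;                     // offset 0
  // buf[1] is padding (offset 4): vec2f below needs 8-byte alignment
  buf[2] = resW;  buf[3] = resH;     // resolution at offset 8
  buf[4] = mouseX; buf[5] = mouseY;  // mouse at offset 16
  buf[6] = deltaTime;                // deltaTime at offset 24
  // buf[7] is tail padding (offset 28), rounding the struct to 32 bytes
  return buf;
}
```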

This minimal set is enough for most generative effects: animation, interactivity, and resolution-aware rendering.

AI Integration

When you hit send, the flow is:

  1. Client sends POST /api/generate with prompt, chat history, and model choice
  2. Server invokes Claude or GPT-4o with a carefully crafted SHADER_SYSTEM_PROMPT
  3. LLM generates WGSL code using generateObject() with Zod validation
  4. Response returns structured data: fragment shader + description
  5. Engine compiles the shader and hot-swaps it into the render loop

The Zod schema guarantees the response has the expected shape: a fragment string plus a description. Malformed responses never reach the engine; actual WGSL validity is checked at shader compilation, with errors surfaced in the UI rather than crashing the app.
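The real app expresses this contract as a Zod schema passed to generateObject(); this plain-TypeScript check is a hypothetical stand-in showing the shape that schema enforces:

```typescript
// Hypothetical stand-in for the Zod schema: the structured object the LLM
// must return before the engine will attempt compilation.
interface ShaderResponse {
  fragment: string;     // WGSL fragment shader source
  description: string;  // human-readable summary shown in the chat
}

function parseShaderResponse(data: unknown): ShaderResponse | null {
  if (typeof data !== "object" || data === null) return null;
  const d = data as Record<string, unknown>;
  if (typeof d.fragment !== "string" || typeof d.description !== "string") return null;
  return { fragment: d.fragment, description: d.description };
}
```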

The User Experience

The studio layout splits 60/40: canvas on the left, chat on the right. You see changes in real-time as you describe them—instant visual feedback.

Model switching is instant. Toggle Claude ↔ GPT-4o without losing conversation history. This is valuable for comparative iteration: sometimes one model generates cleaner code, sometimes the other captures your intent better.

Play/pause controls let you freeze animations for inspection or run them at 60fps for full effect.

Error handling is graceful. If the LLM generates invalid WGSL, the error bar displays the compilation error without crashing the app.

Browser Support & Limitations

WebGPU is the future of GPU graphics on the web, but it's still new. Chrome and Edge have shipped it since version 113; Firefox and Safari support is newer and still rolling out across platforms.

The landing page shows a CSS fallback (animated gradient) on unsupported browsers. The studio displays a clear message. As WebGPU support expands, PromptGPU becomes accessible to more creators.
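The support check that picks between the two experiences reduces to a one-liner; passing `navigator` in as a parameter (an assumption about the code's structure) keeps it testable outside a browser:

```typescript
// Decide whether to mount the WebGPU canvas or the CSS gradient fallback.
// `nav` is the browser's `navigator` object, or undefined outside a browser.
function hasWebGPU(nav: { gpu?: unknown } | undefined): boolean {
  return !!nav && nav.gpu !== undefined;
}
```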

Performance Insights

  • Shader compilation happens off the main thread—zero jank
  • 60fps baseline on most hardware (tested on M1 MacBook, RTX 3080, integrated Intel)
  • Mouse latency <16ms (end-to-end from input to screen)
  • AI response varies (~2-3s for shader generation)

The tight feedback loop makes iteration feel snappy.

What Makes This Different

Shadertoy and similar editors are programmer-first: you write the shader, the tool renders it.

PromptGPU is AI-first: the tool writes the shader, you guide it.

This opens shader art to:

  • Generative artists exploring algorithm space quickly
  • Educators teaching GPU concepts without syntax friction
  • Hobbyists who always wondered "what if I could shader?"
  • Designers prototyping interactive visual effects

What's Next

Currently building:

  • Expanded shader library (fractals, particles, fluid simulation)
  • Multi-GPU rendering
  • Community shader gallery
  • Mobile support (once WebGPU lands on mobile browsers)

Future possibilities:

  • Shader composition (chain effects)
  • Video/GIF export
  • Real-time collaborative editing
  • Learning from user prompts to improve generations

Why This Matters

Barriers fall when tools democratize access. Photography became ubiquitous when cameras became phones. Music production exploded when DAWs went free.

Shader art was locked behind graphics expertise—until now. PromptGPU won't replace hand-crafted mastery, but it removes the gatekeeping. Anyone curious enough to ask "what would this look like as a shader?" can find out in seconds.

That's powerful.