Abdul Q
Private Preview

Subtext

Prompt Intelligence for Power Users

2024 – 2025 (ongoing) Chrome Extension • React • FastAPI • OpenAI

Context & Role

PRODUCT

A Chrome extension + backend service for analyzing and improving AI prompts

MY ROLE

Solo founder / designer / full-stack developer

TIMELINE

2024–2025 (ongoing)

STACK

Chrome Extension, React, FastAPI, Firebase/Firestore, Redis, OpenAI API

Problem

As AI tools like ChatGPT became part of daily workflows, I noticed three recurring issues:

1. People don't know what makes a "good" prompt

Prompts are written by trial-and-error, with no systematic feedback.

2. High-value prompts get lost

Great prompts live in random chats, docs, or sticky notes. There's no memory, analytics, or versioning.

3. Teams can't standardize

Everyone rewrites the same instructions instead of sharing battle-tested prompt patterns and guidelines.

Core Question

How can we turn prompt-writing from a messy, individual habit into something trackable, improvable, and shareable?

Hypothesis

If we build a lightweight layer on top of existing AI chats that:

  • Captures prompts in context
  • Analyzes them with clear criteria
  • Turns those into reusable, team-ready patterns

Then people will write better prompts faster, get more consistent outputs, and start building a shared library of "prompt playbooks."

Users & Use Cases

Primary Users

  • 🎓 Students & Solo Builders: using ChatGPT all day
  • 👥 Product Teams: experimenting with AI workflows
  • 🚀 Founders: writing prompts for support, content, or coding agents

Example Scenarios

  • A founder wants to refine a system prompt for their support chatbot
  • A student wants to make sure their prompts sound less "AI-ish" and more human
  • A small team wants to capture and share the 10 prompts that work best for their product

Solution Overview

Subtext is a GitHub-style "lint + analytics" layer for prompts, embedded directly into ChatGPT via a Chrome extension.

UI Screenshot Placeholder

How it Works

1. User clicks a floating action button (FAB) injected into the ChatGPT UI
2. Subtext sends the prompt to the backend for analysis
3. The backend returns:
  • A structured scorecard (clarity, constraints, persona, context, tone, risks)
  • Rewrite suggestions and alternative phrasings
  • A quick guide explaining why the prompt is strong or weak
4. User can preview and apply improvements directly in the chat box

Over time, prompts and their "performance metadata" can be stored (via Firestore) and reused as templates.
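To make step 3 concrete, here is a minimal sketch of what the analysis response could look like as Pydantic models on the FastAPI side. The field names and 1–5 scale are illustrative assumptions, not the actual schema:

```python
# Hypothetical response schema for the analysis endpoint (illustrative names).
from pydantic import BaseModel

class Scorecard(BaseModel):
    # Each dimension is rated 1-5, mirroring the rubric described below.
    clarity: int
    context: int
    constraints: int
    persona: int
    tone: int
    risks: int

class AnalysisResponse(BaseModel):
    scorecard: Scorecard
    rewrites: list[str]  # alternative phrasings the user can apply in one click
    guide: str           # short explanation of why the prompt is strong or weak
```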

Architecture & Implementation

Frontend (Chrome Extension + UI)

  • Injected into ChatGPT using a Shadow DOM host to avoid style conflicts
  • Floating action button opens panels for: "Analyze Prompt", "Preview Rewrites", "Guides & Patterns"
  • Uses XMLHttpRequest to communicate with the backend API
  • Toast notifications for success/error states
  • Accessible labels & keyboard focus management
Extension UI Screenshot Placeholder

Backend (FastAPI)

Endpoints

  • Prompt analysis (single call)
  • Full analysis with OpenAI
  • Prompt clarification
  • Prompt guides

Firestore Collections

  • Users
  • Prompts
  • Events
  • Templates
  • Vectors

Infrastructure & Safety

  • Redis: caches repeated analyses to reduce cost and latency
  • OpenAI + embeddings: evaluates prompts and stores vector embeddings for similarity search
  • Safety & robustness: circuit breaker for OpenAI failures, with fallback responses and retries (see the sketch below)
  • Security: input validation & sanitization for all user data
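To make the caching and fallback behavior concrete, here is a minimal sketch of how a cached analysis endpoint with a simple circuit breaker might look. The endpoint path, key names, TTL, and thresholds are assumptions, not the production code:

```python
# Hypothetical sketch of a cached analysis endpoint with a circuit breaker.
# Endpoint path, key names, TTL, and thresholds are illustrative assumptions.
import hashlib
import json
import time

import redis
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

FAILURE_THRESHOLD = 3    # trip the breaker after 3 consecutive OpenAI failures
COOLDOWN_SECONDS = 60    # serve fallback responses this long before retrying
_failures = 0
_opened_at = 0.0

FALLBACK = {"scorecard": None, "guide": "Analysis temporarily unavailable."}

class AnalyzeRequest(BaseModel):
    prompt: str

def run_openai_analysis(prompt: str) -> dict:
    # Placeholder for the real OpenAI call that produces the scorecard.
    raise NotImplementedError

def breaker_open() -> bool:
    return _failures >= FAILURE_THRESHOLD and time.time() - _opened_at < COOLDOWN_SECONDS

@app.post("/analyze")
def analyze(req: AnalyzeRequest) -> dict:
    global _failures, _opened_at
    # Identical prompts hit Redis instead of OpenAI, cutting cost and latency.
    key = "analysis:" + hashlib.sha256(req.prompt.encode()).hexdigest()
    if (cached := cache.get(key)) is not None:
        return json.loads(cached)

    if breaker_open():
        return FALLBACK  # degraded mode while OpenAI is considered down

    try:
        result = run_openai_analysis(req.prompt)
        _failures = 0
        cache.setex(key, 3600, json.dumps(result))  # cache the result for an hour
        return result
    except Exception:
        _failures += 1
        _opened_at = time.time()
        return FALLBACK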
Architecture Diagram Placeholder
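Similarly, a rough sketch of how embeddings could back the similarity search over stored prompts. The embedding model and the in-memory storage shape are assumptions (the real vectors live in the Firestore Vectors collection):

```python
# Hypothetical sketch of embedding-based similarity search over stored prompts.
import math

from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    # One embedding vector per prompt, stored alongside it.
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def most_similar(query: str, stored: dict[str, list[float]]) -> str:
    # Returns the id of the stored prompt whose embedding is closest to the query.
    q = embed(query)
    return max(stored, key=lambda prompt_id: cosine(q, stored[prompt_id]))
```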

Design Process

Discovery

I shadowed my own usage and friends' workflows:

  • Screenshotted messy prompts
  • Marked when a prompt "worked" vs "failed"
  • Identified patterns in strong prompts: clear role/persona, specific constraints, context and examples, explicit success criteria
Discovery Research Placeholder

Defining the "Scorecard"

I turned those patterns into a simple rubric. Each analysis returns a rating on each of these dimensions, plus a short explanation:

  • Clarity
  • Context
  • Constraints
  • Persona & Audience
  • Examples
  • Risks / Ambiguity
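One way to apply such a rubric is to ask the model for structured JSON directly. A hedged sketch using the OpenAI Python SDK; the model choice and instruction wording are assumptions, not the shipped code:

```python
# Hypothetical sketch: asking the model to score a prompt against the rubric.
import json

from openai import OpenAI

client = OpenAI()

RUBRIC = ["clarity", "context", "constraints", "persona_audience", "examples", "risks_ambiguity"]

def score_prompt(prompt: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # force a JSON object back
        messages=[
            {
                "role": "system",
                "content": (
                    "Rate the user's prompt 1-5 on each of these dimensions and "
                    "briefly explain each rating: " + ", ".join(RUBRIC) + ". "
                    "Respond as a JSON object keyed by dimension."
                ),
            },
            {"role": "user", "content": prompt},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```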

UX Constraints

  • The extension must feel lightweight, not like a second app
  • Never block the user from sending their prompt
  • Show value in under 3 seconds

Iteration

Early versions were too verbose: walls of text about how to improve.

I iterated to:

  • Short bullet improvements
  • A single "Try this rewrite" button
  • Optional "deep dive" for users who want the why

Current Status & Impact

Status: Working MVP in development

  • Chrome extension injects UI into ChatGPT
  • FastAPI backend handles analysis reliably
  • Firestore + Redis integration confirmed
  • Circuit breaker logic in place

Impact (Qualitative)

Users become more aware of:

  • 💡 How vague their prompts are
  • 💡 When they forget to specify tone or format
  • 💡 When they're asking for too many things at once

Rewrites often read as more natural and human, improving the odds of passing AI-detection checks in writing-heavy contexts.

Key Learnings

Prompt quality is teachable

Users quickly internalize patterns after a few analyses.

Inline > separate tool

People won't go to another tab just to "fix" a prompt. Embedding into ChatGPT was crucial.

Explain the "why," not just the "what"

The educational layer (small guides) turned out to be as important as the raw score.

Local-first and privacy matter

Next steps include moving more analysis on-device or via local LLMs for users who care about privacy.

What's Next

Short-term Roadmap

1. Template Library: save successful prompts and organize them by use case (coding, research, content, teaching).

2. Team Features: shared prompt libraries, usage analytics, and tags for org-wide "prompt standards."

3. Local Models: optional integration with local LLMs (e.g., via Ollama) for offline / private analysis; see the sketch after this list.

4. Multi-tool Expansion: extend beyond ChatGPT to other AI tools (Notion AI, Gemini, etc.), making Subtext a general "prompt intelligence layer" for any AI interface.
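For the local-model path, one possible shape, assuming Ollama's standard local HTTP generate endpoint; the model name and instruction wording are placeholders:

```python
# Hypothetical sketch of local prompt analysis via Ollama's HTTP API.
import requests

def analyze_locally(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={
            "model": "llama3",  # any locally pulled model would work
            "prompt": "Critique this prompt and suggest a clearer rewrite:\n\n" + prompt,
            "stream": False,
        },
        timeout=60,
    )
    return resp.json()["response"]
```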