Provider Setup
Connect PromptLab to any AI provider in under two minutes. API keys are stored locally in extension storage and used only to send requests to your selected provider.
How provider configuration works
PromptLab routes all API calls through its background service worker. Your API key never touches the panel UI — it lives in extension storage, sandboxed away from the page context. When you run an enhance or A/B test, the extension reads your stored key and sends the request directly to your chosen provider.
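Under the hood, each request is an ordinary HTTPS call from the service worker to the provider's API. As a rough sketch of what that traffic looks like (assuming Anthropic is the selected provider; `ANTHROPIC_API_KEY` stands in for the key read from extension storage, and the request body here is illustrative, not the exact payload PromptLab builds):

```shell
# Illustrative equivalent of the request the background worker sends.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Improve this prompt: ..."}]
  }'
```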
Opening the Options page
Click the PromptLab icon in your browser toolbar to open the side panel.
Open the Settings modal inside the panel and click Manage Provider Keys. This is the fastest path to the Options page.
Click the provider chip at the top of the Options page. The form below updates to show that provider's configuration fields.
Enter your credentials, click Save, then click Test Connection to verify. A success message confirms the connection.
Configure a provider
Go to console.anthropic.com, open the API keys settings, and create a new key. Copy it right away; it's only shown once.
Select the Anthropic chip in Options. Paste your key into the API Key field.
API Key: sk-ant-api03-••••••••••••••••••••
Select a model from the dropdown. Recommended starting point: claude-sonnet-4-20250514, which offers strong capability at moderate cost.
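To sanity-check the key outside the extension, you can ask Anthropic's API which models it can access (a sketch, assuming the key is exported as `ANTHROPIC_API_KEY`):

```shell
# A JSON model list means the key is valid; a 401 means it is wrong.
curl https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01"
```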
Go to platform.openai.com/api-keys, click Create new secret key, and copy it.
Select the OpenAI chip. Paste your key into the API Key field.
API Key: sk-proj-••••••••••••••••••••••••••
Use the default model recommended in PromptLab's dropdown, or check OpenAI's model docs for the latest options. Recommendations change frequently as new versions ship.
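An OpenAI key can also be verified from the command line before you test it in the extension (a sketch, assuming the key is exported as `OPENAI_API_KEY`):

```shell
# A JSON model list confirms the key works; a 401 means it does not.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```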
Go to aistudio.google.com/app/apikey and click Create API key. A Google account is required.
Select the Gemini chip. Paste your key into the API Key field.
API Key: AIzaSy••••••••••••••••••••••••••••
Current recommended models: gemini-2.5-pro for complex prompts, gemini-2.5-flash for faster lighter tasks. Check Google's model docs for the latest.
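The same quick check works for Gemini; listing the models available to your key confirms the key is valid (a sketch, assuming the key is exported as `GEMINI_API_KEY`):

```shell
# Returns the model catalog on success; an error body means a bad key.
curl "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"
```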
Go to openrouter.ai/keys, sign in, and generate an API key.
OpenRouter uses a prepaid credit system. Add a small amount of credit at openrouter.ai/credits. Many models also have free tiers.
Select the OpenRouter chip in Options. Paste your key. In the Model field, enter a model slug from openrouter.ai/models.
anthropic/claude-sonnet-4-20250514
openai/gpt-4o
meta-llama/llama-3.1-70b-instruct
mistralai/mistral-7b-instruct:free
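To confirm a key and slug work together, you can send one request through OpenRouter's OpenAI-compatible API (a sketch, assuming the key is exported as `OPENROUTER_API_KEY`; swap in whichever slug you plan to use):

```shell
# A chat completion response means both the key and the slug are valid.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistralai/mistral-7b-instruct:free",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```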
Download and install Ollama for your OS from ollama.com/download.
Open a terminal and pull any model from the Ollama library. Examples:
ollama pull llama3.2          # 3B, fast, ~2GB
ollama pull llama3.1:8b       # 8B, balanced, ~5GB
ollama pull mistral           # Mistral 7B, ~4GB
ollama pull qwen2.5-coder:7b  # code-focused, ~5GB
Ollama starts automatically after install. Confirm it's up:
curl http://localhost:11434/api/tags # Returns a JSON list of your installed models
Select the Ollama chip. The Base URL defaults to http://localhost:11434 — leave this unless you're running Ollama on a different port or remote host. Enter the model name exactly as it appears in your pull command.
Base URL: http://localhost:11434
Model: llama3.2
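Before testing inside the extension, you can run a one-off generation against the local server to confirm the model name resolves (assumes you pulled llama3.2 as in the earlier example):

```shell
# A JSON response with generated text means the model name is correct;
# a "model not found" error means the name doesn't match your pull.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'
```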
If the service isn't running, start it manually with ollama serve.
Troubleshooting
Double-check that your API key was copied completely with no trailing spaces. Confirm your account has available credits or quota. For Ollama, confirm the service is running with ollama serve.
Save your settings in Options, then reload the extension from chrome://extensions and try again.
By default Ollama only accepts connections from localhost. You may need to set OLLAMA_ORIGINS before starting Ollama to allow the extension origin:
OLLAMA_ORIGINS=chrome-extension://*,moz-extension://* ollama serve
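On Linux installs that run Ollama as a systemd service, setting the variable in your shell has no effect on the background service; a sketch of one way to set it on the service instead (assumes the installer's default service name, ollama):

```shell
# Open an override file for the service and add the origin allowlist:
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_ORIGINS=chrome-extension://*,moz-extension://*"
# Then restart so the new environment takes effect:
sudo systemctl restart ollama
```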
Model slugs are case-sensitive and must exactly match the identifier shown on openrouter.ai/models. A typo will produce a runtime error rather than a helpful message.
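One way to check a slug before saving it is to grep OpenRouter's public model catalog (no API key needed for this endpoint; replace the example slug with yours):

```shell
# Prints the slug if OpenRouter recognizes it; empty output means a typo.
curl -s https://openrouter.ai/api/v1/models | grep -o '"anthropic/claude-sonnet-4-20250514"'
```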