
Integrate Morpheus AI with OpenCode

Learn how to set up Morpheus AI as a custom provider in OpenCode, giving you access to powerful, decentralized AI models for coding assistance. This guide walks you through credential setup, provider configuration, and model selection.

What is OpenCode?

OpenCode is an open-source AI coding agent that helps you write code in your terminal, IDE, or desktop application. It’s a powerful alternative to proprietary coding assistants like Claude Code, offering complete flexibility and privacy for developers.

Open Source & Free

Over 41,000 GitHub stars and 450 contributors building the future of AI-powered coding.

Privacy First

OpenCode doesn’t store any of your code or context data, making it ideal for sensitive projects.

Any Model, Any Provider

Connect to 75+ LLM providers including Claude, GPT, Gemini, local models, and now Morpheus AI.

Multi-Platform

Available as terminal interface, desktop app (macOS, Windows, Linux), and IDE extensions.

Key Features

  • LSP Enabled: Automatically loads the right language servers (LSP) for enhanced LLM understanding
  • Multi-Session: Run multiple AI agents in parallel on the same project
  • Share Links: Share session links for collaboration and debugging
  • Claude Pro Support: Use your existing Claude Pro or Max subscription
  • 400,000 Monthly Users: Trusted by developers worldwide for production use
By integrating Morpheus AI with OpenCode, you combine the power of decentralized, free AI inference with OpenCode’s privacy-first, open-source architecture.

Overview

OpenCode is an open-source AI coding assistant that supports multiple AI providers. By integrating Morpheus AI, you gain access to free, decentralized AI inference through the Morpheus marketplace with models optimized for code generation, reasoning, and development tasks.
The Morpheus API Gateway is currently in Open Beta, providing free access to AI inference without requiring wallet connections or staking MOR tokens.

Prerequisites

Before you begin, ensure you have:
  • OpenCode installed on your system (see opencode.ai/docs)
  • A Morpheus AI account at app.mor.org
  • Basic familiarity with JSON configuration files
  • Access to your system’s terminal or command line
Step 1: Get Your Morpheus AI API Key

Visit app.mor.org and create your API key.
  1. On the API Keys page, click Create New Key
  2. Provide a descriptive name for the key
  3. Copy the generated API key (starts with sk-)
Store your API key securely. You won’t be able to view it again after the initial creation. Never commit API keys to version control.
Step 2: Install OpenCode

If you haven’t already, install OpenCode on your system:
brew install opencode
Verify installation by running opencode --version in your terminal.
Step 3: Launch OpenCode

Start OpenCode for the first time:
opencode
You’re now ready to configure Morpheus AI as a provider.

Configuring the Provider

Create or update your OpenCode configuration to define the Morpheus AI provider and available models.

Configuration Location

OpenCode reads a global configuration from ~/.config/opencode/opencode.json, or a project-specific ./opencode.json in your project root. Project settings override the global file.

Full Provider Configuration

Add the following configuration to your opencode.json file:
opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "morpheus-ai": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Morpheus AI",
      "options": {
        "baseURL": "https://api.mor.org/api/v1"
      },
      "models": {
        "glm-4.6": {
          "name": "GLM 4.6",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        },
        "glm-4.6-web": {
          "name": "GLM 4.6 (Web)",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        },
        "kimi-k2-thinking": {
          "name": "Kimi K2 Thinking",
          "limit": {
            "context": 256000,
            "output": 16384
          }
        },
        "kimi-k2-thinking-web": {
          "name": "Kimi K2 Thinking (Web)",
          "limit": {
            "context": 256000,
            "output": 16384
          }
        },
        "qwen3-coder-480b-a35b-instruct": {
          "name": "Qwen3 Coder 480B",
          "limit": {
            "context": 262144,
            "output": 16384
          },
          "options": {
            "timeout": 600000
          }
        },
        "qwen3-coder-480b-a35b-instruct-web": {
          "name": "Qwen3 Coder 480B (Web)",
          "limit": {
            "context": 262144,
            "output": 16384
          },
          "options": {
            "timeout": 600000
          }
        },
        "hermes-3-llama-3.1-405b": {
          "name": "Hermes 3 Llama 3.1 405B",
          "limit": {
            "context": 128000,
            "output": 8192
          },
          "options": {
            "timeout": 600000
          }
        },
        "hermes-3-llama-3.1-405b-web": {
          "name": "Hermes 3 Llama 3.1 405B (Web)",
          "limit": {
            "context": 128000,
            "output": 8192
          },
          "options": {
            "timeout": 600000
          }
        },
        "qwen3-235b": {
          "name": "Qwen3 235B",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "qwen3-235b-web": {
          "name": "Qwen3 235B (Web)",
          "limit": {
            "context": 131072,
            "output": 8192
          }
        },
        "qwen3-next-80b": {
          "name": "Qwen3 Next 80B",
          "limit": {
            "context": 131072,
            "output": 4096
          }
        },
        "qwen3-next-80b-web": {
          "name": "Qwen3 Next 80B (Web)",
          "limit": {
            "context": 131072,
            "output": 4096
          }
        },
        "llama-3.3-70b": {
          "name": "Llama 3.3 70B",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        },
        "llama-3.3-70b-web": {
          "name": "Llama 3.3 70B (Web)",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        },
        "mistral-31-24b": {
          "name": "Mistral 31 24B",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        },
        "mistral-31-24b-web": {
          "name": "Mistral 31 24B (Web)",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        },
        "venice-uncensored": {
          "name": "Venice Uncensored",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        },
        "venice-uncensored-web": {
          "name": "Venice Uncensored (Web)",
          "limit": {
            "context": 128000,
            "output": 8192
          }
        },
        "qwen3-4b": {
          "name": "Qwen3 4B",
          "limit": {
            "context": 131072,
            "output": 4096
          }
        },
        "qwen3-4b-web": {
          "name": "Qwen3 4B (Web)",
          "limit": {
            "context": 131072,
            "output": 4096
          }
        },
        "llama-3.2-3b": {
          "name": "Llama 3.2 3B",
          "limit": {
            "context": 32000,
            "output": 4096
          }
        },
        "llama-3.2-3b-web": {
          "name": "Llama 3.2 3B (Web)",
          "limit": {
            "context": 32000,
            "output": 4096
          }
        },
        "hermes-4-14b": {
          "name": "Hermes 4 14B",
          "limit": {
            "context": 128000,
            "output": 4096
          }
        }
      }
    }
  }
}
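
The full configuration is repetitive by design: every model appears twice, once plain and once with a -web suffix. If you maintain a long model list, a short script can generate the provider block from a table. This is an illustrative helper, not part of OpenCode; the model IDs and limits come from the configuration above.

```python
import json

# Illustrative helper (not part of OpenCode): generate the repetitive
# provider block from a short table. Each model gets a "-web" twin,
# mirroring the pairs in the configuration above.
MODELS = [
    # (model id, display name, context tokens, output tokens)
    ("glm-4.6", "GLM 4.6", 200000, 65536),
    ("kimi-k2-thinking", "Kimi K2 Thinking", 256000, 16384),
    ("llama-3.3-70b", "Llama 3.3 70B", 128000, 8192),
]

def build_provider(models):
    entries = {}
    for model_id, name, context, output in models:
        limit = {"context": context, "output": output}
        entries[model_id] = {"name": name, "limit": limit}
        entries[model_id + "-web"] = {"name": name + " (Web)", "limit": limit}
    return {
        "$schema": "https://opencode.ai/config.json",
        "provider": {
            "morpheus-ai": {
                "npm": "@ai-sdk/openai-compatible",
                "name": "Morpheus AI",
                "options": {"baseURL": "https://api.mor.org/api/v1"},
                "models": entries,
            }
        },
    }

# Emits a valid opencode.json body for the three models listed above
print(json.dumps(build_provider(MODELS), indent=2))
```

Redirect the output into your opencode.json (or paste the provider block into an existing file) after extending MODELS with the entries you want.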

Understanding the Configuration

  • npm: The AI SDK package used (@ai-sdk/openai-compatible for OpenAI-compatible APIs)
  • name: Display name shown in the OpenCode UI
  • options.baseURL: The Morpheus AI API endpoint (https://api.mor.org/api/v1)
Each model includes:
  • name: Human-readable model name displayed in the UI
  • limit.context: Maximum input tokens the model accepts
  • limit.output: Maximum tokens the model can generate
  • options.timeout: Optional timeout in milliseconds (used for large models)
Models with the -web suffix have enhanced capabilities:
  • Web search integration
  • Current information access
  • Browser-optimized responses
Standard models without -web are optimized for pure reasoning and code generation tasks.
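
Because the provider uses the OpenAI-compatible SDK, requests go to the baseURL above in the standard chat-completions shape. The sketch below only builds the request (it doesn't send it), so you can see what OpenCode transmits on your behalf; the exact headers OpenCode adds are an assumption here.

```python
import json

BASE_URL = "https://api.mor.org/api/v1"

def chat_request(model_id, prompt, api_key):
    """Build (but don't send) an OpenAI-style chat-completions request."""
    return {
        "url": BASE_URL + "/chat/completions",
        "headers": {
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model_id,  # e.g. "glm-4.6" or "glm-4.6-web"
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = chat_request("glm-4.6", "Write a hello-world in Python", "sk-...")
print(req["url"])  # → https://api.mor.org/api/v1/chat/completions
```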

Minimal Configuration

If you prefer a simpler setup with just essential models:
opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "morpheus-ai": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Morpheus AI",
      "options": {
        "baseURL": "https://api.mor.org/api/v1"
      },
      "models": {
        "glm-4.6": {
          "name": "GLM 4.6"
        },
        "kimi-k2-thinking": {
          "name": "Kimi K2 Thinking"
        },
        "qwen3-coder-480b-a35b-instruct": {
          "name": "Qwen3 Coder 480B"
        }
      }
    }
  }
}
Start with a minimal configuration and add more models as needed. This keeps your model selection clean and focused.
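
Before restarting OpenCode, it's worth confirming that the file parses and the provider entry has the expected shape; a missing comma is the most common failure mode. The checker below is an illustrative sketch (the assertions reflect the values used in this guide), run here against the minimal configuration written to a temporary file:

```python
import json
import pathlib
import tempfile

def check_opencode_config(path):
    """Parse an opencode.json and sanity-check the morpheus-ai provider."""
    cfg = json.loads(pathlib.Path(path).read_text())
    provider = cfg["provider"]["morpheus-ai"]
    assert provider["npm"] == "@ai-sdk/openai-compatible", "unexpected SDK package"
    assert provider["options"]["baseURL"] == "https://api.mor.org/api/v1", "wrong baseURL"
    return sorted(provider["models"])

# Write the minimal configuration to a temp file and check it:
minimal = {
    "$schema": "https://opencode.ai/config.json",
    "provider": {
        "morpheus-ai": {
            "npm": "@ai-sdk/openai-compatible",
            "name": "Morpheus AI",
            "options": {"baseURL": "https://api.mor.org/api/v1"},
            "models": {"glm-4.6": {"name": "GLM 4.6"}},
        }
    },
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(minimal, f)
print(check_opencode_config(f.name))  # → ['glm-4.6']
```

Point it at ~/.config/opencode/opencode.json (or your project's opencode.json) to check a real configuration.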

Adding Morpheus AI Credentials

OpenCode stores API credentials securely in ~/.local/share/opencode/auth.json. Use the /connect command to add your Morpheus AI API key.

Using the /connect Command

Step 1: Execute the connect command

In the OpenCode terminal, run:
/connect
Step 2: Select 'Other' as the provider

When prompted to select a provider, choose Other:
┌  Add credential

│ ◆  Select provider
│  ...
│  ● Other

Step 3: Enter the provider ID

Type morpheus-ai as the provider identifier:
┌  Add credential

│ ◇  Enter provider id
│  morpheus-ai

The provider ID must exactly match morpheus-ai for the configuration to work correctly.
Step 4: Paste your API key

Enter your Morpheus AI API key when prompted:
┌  Add credential

│ ▲  This only stores a credential for morpheus-ai - you will need to configure it in opencode.json

│ ◇  Enter your API key
│  sk-xxxxxxxxxxxxx

Your Morpheus AI credentials are now securely stored and ready to use with your configured provider!

Setting Default Models

Configure your preferred models for different types of tasks:
opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "morpheus-ai/glm-4.6",
  "small_model": "morpheus-ai/llama-3.3-70b",
  "provider": {
    "morpheus-ai": {
      ...
    }
  }
}
model
string
Default model used for main coding tasks, complex reasoning, and detailed responses.
small_model
string
Faster, smaller model used for quick tasks like syntax checking, simple completions, and rapid iterations.

Using Morpheus AI in OpenCode

Once configured, restart OpenCode to load the new provider settings.

Restart OpenCode

# Press Ctrl+C to exit the current session
# Then restart
opencode

Select a Model

Use the /models command to view and select from available Morpheus AI models:
/models
You’ll see all configured Morpheus AI models listed. Select one to start using it for your coding tasks.
Switch models at any time using the /models command. Different models excel at different tasks, so experiment to find the best fit for your workflow.

Start Coding

With Morpheus AI configured, you can now:
  • Generate code - Ask for implementations, algorithms, or functions
  • Debug issues - Get help troubleshooting errors and bugs
  • Refactor code - Request improvements and optimizations
  • Explain code - Understand complex codebases
  • Write tests - Generate test cases and test suites
> Create a Python function that validates email addresses using regex

> Debug this TypeScript error: Type 'number' is not assignable to type 'string'

> Refactor this code to use async/await instead of callbacks

> Explain how this binary search algorithm works

Available Models

The Morpheus marketplace offers a diverse range of models optimized for different tasks:

Premium Models (Best Performance)

Model                  Context   Output   Best For
Qwen3 Coder 480B       262K      16K      Complex code generation, architecture design
Hermes 3 Llama 405B    128K      8K       Advanced reasoning, system design
Kimi K2 Thinking       256K      16K      Complex problem solving, multi-step reasoning
GLM 4.6                200K      65K      Long-form code generation, documentation

Balanced Models (Great All-Rounders)

Model             Context   Output   Best For
Qwen3 235B        131K      8K       General coding tasks
Llama 3.3 70B     128K      8K       Code completion, debugging
Qwen3 Next 80B    131K      4K       Fast responses, quick iterations
Mistral 31 24B    128K      8K       Balanced performance, good reasoning

Lightweight Models (Fast & Efficient)

Model           Context   Output   Best For
Hermes 4 14B    128K      4K       Quick completions, syntax help
Qwen3 4B        131K      4K       Simple tasks, rapid iteration
Llama 3.2 3B    32K       4K       Lightweight tasks, low-latency responses

Specialized Models

Model                Context   Output   Best For
Venice Uncensored    128K      8K       Unrestricted content generation
Model availability depends on provider availability in the Morpheus marketplace. The API automatically routes to the highest-rated provider for your selected model. Check the API documentation for the latest model availability.
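
When choosing between tiers, a rough rule of thumb helps: the prompt, plus room for the reply, must fit within the model's context window. The helper below is purely illustrative; its chars/4 token estimate is a crude heuristic, not how the API actually counts tokens.

```python
# Context limits (in tokens) for a few models from the tables above.
CONTEXT_LIMITS = {
    "qwen3-coder-480b-a35b-instruct": 262144,
    "glm-4.6": 200000,
    "llama-3.2-3b": 32000,
}

def fits(model_id, text, reserve_output=8192):
    """Rough check: does `text` plausibly fit, leaving room for the reply?

    Uses a ~4 characters-per-token estimate, which is only a heuristic.
    """
    est_tokens = len(text) // 4
    return est_tokens + reserve_output <= CONTEXT_LIMITS[model_id]

big_prompt = "x" * 200_000               # ~50K estimated tokens
print(fits("llama-3.2-3b", big_prompt))  # → False (exceeds the 32K window)
print(fits("glm-4.6", big_prompt))       # → True
```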

Verifying Your Setup

Check Authentication Status

Verify that your Morpheus AI credentials are stored correctly:
opencode auth list
You should see morpheus-ai in the list of configured credentials.
If morpheus-ai appears in the list, your credentials are configured correctly.

Test the Connection

Start a conversation in OpenCode and ask a simple question:
> What is Python?
If you receive a response from the selected Morpheus AI model, your integration is working correctly!

Troubleshooting

Provider not appearing in OpenCode

Cause: Provider ID mismatch, configuration syntax error, or OpenCode not restarted.

Solution:
  1. Verify the provider ID in both /connect and opencode.json exactly matches morpheus-ai
  2. Check that your JSON configuration is valid (no missing commas, brackets)
  3. Restart OpenCode completely
  4. Verify credentials with opencode auth list
# Validate JSON syntax
cat ~/.config/opencode/opencode.json | python -m json.tool

# Check credentials
opencode auth list

Authentication failures

Cause: Invalid or expired API key, or incorrect provider configuration.

Solution:
  1. Ensure your API key is valid and active at app.mor.org
  2. Verify the provider ID matches what you used in /connect
  3. Try regenerating your API key if needed
  4. Reconfigure credentials using /connect command
# Remove old credentials and reconfigure
opencode auth remove morpheus-ai
opencode
/connect

Connection errors or timeouts

Cause: Network connectivity issues, firewall blocking, or service unavailability.

Solution:
  1. Check your internet connection
  2. Verify the baseURL is correct: https://api.mor.org/api/v1
  3. Ensure your firewall allows HTTPS connections
  4. Check the Morpheus AI service status
For large models like Qwen3 Coder 480B and Hermes 3 Llama 405B, timeouts are pre-configured to 600000ms (10 minutes). If you still experience timeouts, consider using a smaller model or checking your network connection.

Slow responses

Cause: Large model selected, high marketplace demand, or network latency.

Solution:
  • Use smaller models for quick tasks (e.g., llama-3.3-70b, hermes-4-14b)
  • Configure a small_model in your opencode.json for rapid iterations
  • Try different models to find the best balance of performance and speed
  • Check your network latency to the Morpheus API
{
  "model": "morpheus-ai/qwen3-coder-480b-a35b-instruct",
  "small_model": "morpheus-ai/hermes-4-14b"
}

Configuration not loading

Cause: Wrong file location or OpenCode not looking in the right directory.

Solution:
  1. For global configuration: ~/.config/opencode/opencode.json
  2. For project-specific: ./opencode.json in your project root
  3. Ensure the file has correct permissions
  4. Verify JSON syntax is valid
# Create config directory if missing
mkdir -p ~/.config/opencode

# Set proper permissions
chmod 644 ~/.config/opencode/opencode.json

Best Practices

Choose the right model

Select models based on your task complexity. Use large models for complex reasoning and smaller models for quick completions.

Configure small_model

Set a lightweight model as your small_model for faster responses on simple tasks, improving your coding workflow.

Secure your API key

Never commit your API key to version control. Keep it in the secure OpenCode auth storage or environment variables.

Switch models freely

Use the /models command to try different models. Each excels at different tasks; find what works best for you.

Use Web variants strategically

Leverage -web suffix models when you need current information or web-enhanced responses. Use standard models for pure coding tasks.

Monitor performance

Pay attention to response times and quality. Adjust your model selection based on your workflow needs.

Advanced Configuration

Project-Specific Models

Override global settings for specific projects:
project-root/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "morpheus-ai/qwen3-coder-480b-a35b-instruct",
  "small_model": "morpheus-ai/qwen3-4b"
}
Use project-specific configurations when working on specialized projects that benefit from particular models.

Custom Model Limits

Adjust context and output limits based on your needs:
opencode.json
{
  "provider": {
    "morpheus-ai": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Morpheus AI",
      "options": {
        "baseURL": "https://api.mor.org/api/v1"
      },
      "models": {
        "custom-model": {
          "name": "Custom Model",
          "limit": {
            "context": 100000,
            "output": 4096
          },
          "options": {
            "timeout": 300000,
            "temperature": 0.7
          }
        }
      }
    }
  }
}

Additional Resources

Security Notes

Follow these security best practices to protect your API credentials:
  • Local storage: Your API key is stored locally in ~/.local/share/opencode/auth.json
  • Never commit keys: Don’t commit API keys or configuration files with credentials to public repositories
  • Add to .gitignore: If a project-local opencode.json contains sensitive values, add it to the project’s .gitignore before sharing the repository
  • Rotate compromised keys: If your API key is compromised, immediately rotate it via the Morpheus AI dashboard
  • Use project configs carefully: Ensure project-specific opencode.json files don’t contain sensitive information

Next Steps

Once configured, you can:
1. Switch between models

Use the /models command to select different models for various tasks and compare their performance.
2. Create project configurations

Set up project-specific opencode.json files with models optimized for each project’s needs.
3. Optimize your workflow

Adjust context and output limits based on your typical use cases to balance performance and capability.
4. Monitor usage

Track your usage and billing in the Morpheus AI dashboard at app.mor.org.

Summary

You’ve successfully integrated Morpheus AI with OpenCode! Here’s what you’ve accomplished:
  • Credentials configured: Added your Morpheus AI API key securely to OpenCode
  • Provider setup: Configured the Morpheus AI provider with access to powerful coding models
  • Model selection: Learned how to choose and switch between models for different tasks
  • Best practices: Covered security, performance optimization, and troubleshooting
  • Free inference: Enabled access to decentralized AI models during the Open Beta
Happy coding with Morpheus AI! 🚀