Deploy AnythingLLM

Before we get to the one-click template, here is a minimal Docker Compose file for running AnythingLLM locally or on your own server.

If you're new to Docker Compose, check out our guide on how to self-host a Docker Compose app. You can also browse more examples in our Docker Compose library.

Self-host AnythingLLM with Docker Compose (minimal)

services:
  anythingllm:
    image: mintplexlabs/anythingllm:latest
    ports:
      - "3001:3001"
    environment:
      # Optional – configure providers via env or UI
      OPENAI_API_KEY: ""
    volumes:
      - anythingllm-data:/app/server/storage

volumes:
  anythingllm-data:

This setup stores all documents and vector embeddings in the persistent volume.
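
To try it, save the file as docker-compose.yml and start the stack. The commands below are a minimal sketch and assume the service and volume names from the Compose file above:

# Start AnythingLLM in the background
docker compose up -d

# Check that the container is running and the UI answers on port 3001
docker compose ps
curl -I http://localhost:3001

# Inspect the volume that holds documents and embeddings
# (Compose prefixes the volume name with the project/directory name)
docker volume ls | grep anythingllm-data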


Deploy AnythingLLM on Hostim.dev (One-Click)

AnythingLLM is a self-hosted tool that lets you chat with documents using your favorite large language model – including GPT-4, Claude, and more. Host your own local vector store and keep your data private.

📚 Chat with files, wikis, or knowledge bases – fully self-hosted, no subscriptions.

Try it Yourself

Guest projects run for 1 hour. Log in to save your project and extend the runtime to 5 days.

Why Host AnythingLLM on Hostim.dev?

  • One-click Docker deployment
  • Supports OpenAI, Claude, LocalAI, and others
  • Persistent volume included
  • Auto domain + HTTPS
  • Built-in metrics and logs

What's Included

Resource   Details
App        mintplex-labs/anything-llm image
Volume     /app/server/storage
Domain     Free *.hostim.dev subdomain
SSL        Let's Encrypt (auto-enabled)
Port       3001

How to Deploy

  1. Go to your Hostim.dev dashboard.
  2. Click Create Project → Use a Template.
  3. Select the AnythingLLM template.
  4. Choose a resource plan.
  5. Hit Deploy.

Post-Deploy Notes

  • Set your password at /settings/security
  • Configure LLM provider keys under Environment Variables (see the sketch after this list)
  • Upload documents through the web UI; they are stored under /app/server/storage
  • Add a custom domain under Networking
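
As a sketch, provider keys can go into the environment block of the Compose file above or into the dashboard's Environment Variables panel. The names below are illustrative; check the AnythingLLM docs for the exact variable names your provider expects:

# Illustrative variable names – verify against the AnythingLLM docs
OPENAI_API_KEY=sk-...          # OpenAI (matches the Compose file above)
ANTHROPIC_API_KEY=sk-ant-...   # Anthropic / Claude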

FAQ

Where does AnythingLLM store uploaded documents?

All files and generated embeddings are stored in /app/server/storage, backed by a persistent volume.

Does AnythingLLM require a GPU?

No. It works with OpenAI, Claude, and other cloud LLMs. You only need a GPU if running a local model via LocalAI or Ollama.
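
If you do want a fully local model, one hedged sketch is to run Ollama next to AnythingLLM and point the provider settings at it:

# Run Ollama as a sibling container (CPU works; a GPU only speeds it up)
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

# Pull a small model to test with
docker exec ollama ollama pull llama3

# Then, in Settings → LLM Providers, choose Ollama and use
# http://<host-ip>:11434 as the base URL (localhost won't resolve from
# inside the AnythingLLM container)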

How do I connect OpenAI, Claude, or other providers?

Add your API keys under Environment Variables or inside the web UI under Settings → LLM Providers.

How do I reset the admin password?

Remove the users.json file in the storage folder and restart the container.
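
With the Compose setup above, that amounts to something like the following (a sketch; the file path follows the storage layout described in this guide):

# Remove the stored credentials file, then restart the service
docker compose exec anythingllm rm /app/server/storage/users.json
docker compose restart anythingllm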

How do I back up my knowledge base?

Back up the entire volume: documents, embeddings, and workspace metadata are all inside /app/server/storage.
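
With plain Docker, one way is to tar the named volume from a throwaway container (a sketch assuming the anythingllm-data volume from the Compose file; on Hostim.dev, use the platform's volume snapshots instead):

# Snapshot the volume into a tarball in the current directory
# (Compose may prefix the volume name with the project name – check `docker volume ls`)
docker run --rm \
  -v anythingllm-data:/data:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/anythingllm-backup.tar.gz -C /data .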

My uploads are not indexing. What should I check?

Verify the selected vector DB setting and ensure the API key (if using a remote embedding provider) is valid.
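
The container logs usually say why an embedding job failed; with the Compose setup above:

# Tail the logs while re-uploading a document
docker compose logs -f anythingllm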

Can I run AnythingLLM behind a reverse proxy?

Yes. Forward HTTPS traffic to port 3001 and preserve WebSocket headers.
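
As a sketch, an nginx server block covering both points might look like this (llm.example.com and the certificate setup are placeholders for your own environment):

server {
    listen 443 ssl;
    server_name llm.example.com;   # placeholder domain
    # ssl_certificate / ssl_certificate_key directives go here

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        # Preserve WebSocket upgrade headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}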

How do I update to the latest version?

  • Docker: docker compose pull && docker compose up -d
  • Hostim.dev: redeploy the app.


Alternatives

  • OpenWebUI — ChatGPT-style interface for local LLMs
  • AnythingLLM Desktop — Local-only version
  • PrivateGPT — Document Q&A with local models

Source + Docs

  • Source code: https://github.com/Mintplex-Labs/anything-llm
  • Documentation: https://docs.anythingllm.com

Looking for something else? Browse all templates →


Try it now

Deploy AnythingLLM Now – in less than 60 seconds