khoj/.devcontainer/Dockerfile
Debanjum b1f2737c9a Drop native offline chat support with llama-cpp-python
It is recommended to chat with open-source models by running an
open-source server like Ollama or Llama.cpp on your GPU-powered
machine, or by using a commercial provider of open-source models like
DeepInfra or OpenRouter.

These chat model serving options provide a mature OpenAI-compatible
API that already works with Khoj.
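
For example, any such server can be queried with the standard OpenAI
Python client. A minimal sketch, assuming a local Ollama instance
serving its OpenAI-compatible endpoint on the default port and a
pulled model named llama3.1 (both are assumptions for illustration,
not part of this commit):

from openai import OpenAI

# Point the OpenAI client at Ollama's OpenAI-compatible endpoint.
# Ollama ignores the API key, but the client requires a non-empty one.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3.1",  # assumed model name; use any model you have pulled
    messages=[{"role": "user", "content": "Hello from Khoj!"}],
)
print(response.choices[0].message.content)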

Directly running offline chat models only worked reasonably with a
pip install on a machine with a GPU. The Docker setup of Khoj had
trouble accessing the GPU, and without GPU access offline chat is too
slow.

Deprecating support for an offline chat provider directly within Khoj
will reduce code complexity and increase development velocity.
Offline models are subsumed under the existing OpenAI-compatible
model provider.
2025-07-31 18:25:32 -07:00


ARG PYTHON_VERSION=3.10
FROM mcr.microsoft.com/devcontainers/python:${PYTHON_VERSION}
# Install Node.js and Yarn
RUN curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - && \
    apt-get install -y nodejs
# Setup working directory
WORKDIR /workspace

# --- Python Server App Dependencies ---
# Create Python virtual environment
RUN python3 -m venv /opt/venv
# Add venv to PATH for subsequent RUN commands and for the container environment
ENV PATH="/opt/venv/bin:${PATH}"
# Copy files required for Python dependency installation.
COPY pyproject.toml README.md ./
# Setup python environment
# Use the pre-built torch cpu wheel
ENV PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu" \
    # Avoid downloading unused CUDA-specific python packages
    CUDA_VISIBLE_DEVICES="" \
    # Use static version to build app without git dependency
    VERSION=0.0.0
# Install Python dependencies from pyproject.toml in editable mode
RUN sed -i "s/dynamic = \\[\"version\"\\]/version = \"$VERSION\"/" pyproject.toml && \
    pip install --no-cache-dir ".[dev]"

# --- Web App Dependencies ---
# Copy web app manifest files
COPY src/interface/web/package.json src/interface/web/yarn.lock /tmp/web/
# Install web app dependencies
# note: yarn will be available from the "features" in devcontainer.json
RUN yarn install --cwd /tmp/web --cache-folder /opt/yarn-cache
# The virtual environment (/opt/venv) and web dependencies (/tmp/web/node_modules)
# are now populated in the image. The rest of the source code will be mounted by
# VS Code from your local checkout, overlaying any files copied here if they are
# part of the workspace mount.
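
Since CUDA_VISIBLE_DEVICES is cleared and pip resolves wheels from the
CPU index above, torch inside this container should be the CPU-only
build. A quick sanity-check sketch, assuming torch is pulled in by
Khoj's [dev] dependencies (an assumption; it is not listed in this
Dockerfile itself):

import torch

# Expect a "+cpu" wheel and no visible CUDA device in this devcontainer.
print(torch.__version__)          # e.g. 2.3.0+cpu
print(torch.cuda.is_available())  # expected: False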