Merged

37 commits
035aa8d
Java language support
CyanideByte Jun 25, 2024
663ef33
Fixed removing newlines
CyanideByte Jun 25, 2024
58d5727
Local III `0.3.4`
KillianLucas Jun 30, 2024
6735370
Speedup, increased max_output default, server improvements
KillianLucas Jul 2, 2024
48983e6
Async server improvements
KillianLucas Jul 2, 2024
da057d4
Async server improvements
KillianLucas Jul 2, 2024
d0a59e2
Server improvements, common hallucination fix
KillianLucas Jul 4, 2024
b26054c
Common hallucination fix
KillianLucas Jul 4, 2024
f3833b7
Better async server
KillianLucas Jul 4, 2024
45b1b27
Better async server
KillianLucas Jul 4, 2024
b9a50f5
Server documentation
KillianLucas Jul 4, 2024
d8a76af
add support for linux
PiyushDuggal-source Jul 4, 2024
6c29e47
Use platform.system() for OS detection
PiyushDuggal-source Jul 4, 2024
f7f0ba3
`wtf` 🇺🇸
KillianLucas Jul 4, 2024
a11a2fb
🇺🇸
KillianLucas Jul 4, 2024
9792749
🇺🇸
KillianLucas Jul 5, 2024
32083b1
`wtf` respects profiles 🇺🇸
KillianLucas Jul 5, 2024
91ad4e1
`i love you` 🇺🇸
KillianLucas Jul 5, 2024
9663aa1
`i love you` 🇺🇸
KillianLucas Jul 5, 2024
da91a72
🇺🇸 Context
KillianLucas Jul 5, 2024
3fab198
Merge pull request #1326 from CyanideByte/java-language
KillianLucas Jul 5, 2024
3a3ee3e
🇺🇸 Context
KillianLucas Jul 5, 2024
96632d2
Merge pull request #1332 from PiyushDuggal-source/linux-support
KillianLucas Jul 5, 2024
b27befd
Edit code blocks with `e` after it asks for confirmation
KillianLucas Jul 9, 2024
66165aa
The return of the confirmation chunk
KillianLucas Jul 9, 2024
6bdd6be
Hallucination protection
KillianLucas Jul 9, 2024
1570059
More GeneratorExit Protection
KillianLucas Jul 9, 2024
4df6b5b
More GeneratorExit Protection
KillianLucas Jul 9, 2024
d8fb8ab
Run command takes port
KillianLucas Jul 9, 2024
2b52072
Server retries
KillianLucas Jul 9, 2024
9a2d085
Server fixes
KillianLucas Jul 10, 2024
8a553ed
Server fixes
KillianLucas Jul 10, 2024
2831f43
Server fixes
KillianLucas Jul 10, 2024
0120f22
Added server test
KillianLucas Jul 11, 2024
f170c35
Added server retries
KillianLucas Jul 11, 2024
722a58f
Added server retries
KillianLucas Jul 11, 2024
7557524
Get context window and max tokens from `litellm` if not set
KillianLucas Jul 11, 2024
8 changes: 6 additions & 2 deletions Dockerfile
@@ -8,16 +8,20 @@ FROM python:3.11.8
# Set environment variables
# ENV OPENAI_API_KEY ...

+ENV HOST 0.0.0.0
+# ^ Sets the server host to 0.0.0.0, Required for the server to be accessible outside the container
+
# Copy required files into container
-RUN mkdir -p interpreter
+RUN mkdir -p interpreter scripts
COPY interpreter/ interpreter/
+COPY scripts/ scripts/
COPY poetry.lock pyproject.toml README.md ./

# Expose port 8000
EXPOSE 8000

# Install server dependencies
-RUN pip install -e ".[server]"
+RUN pip install ".[server]"

# Start the server
ENTRYPOINT ["interpreter", "--server"]
18 changes: 0 additions & 18 deletions benchmarks/simple.py

This file was deleted.

171 changes: 171 additions & 0 deletions docs/server/usage.mdx
@@ -0,0 +1,171 @@
---
title: Server Usage
---

## Starting the Server

### From Command Line
To start the server from the command line, use:

```bash
interpreter --server
```

### From Python
To start the server from within a Python script:

```python
from interpreter import AsyncInterpreter

async_interpreter = AsyncInterpreter()
async_interpreter.server.run(port=8000) # Default port is 8000, but you can customize it
```

## WebSocket API

### Establishing a Connection
Connect to the WebSocket server at `ws://localhost:8000/`.
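
For example, a bare-bones connection using the `websockets` package might look like this (a minimal sketch, assuming the server is already running locally on the default port):

```python
import asyncio
import websockets

async def connect():
    # Assumes the server was started locally on the default port (8000).
    async with websockets.connect("ws://localhost:8000/") as websocket:
        print("Connected to the interpreter server")

asyncio.run(connect())
```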

### Message Format
Messages must follow the LMC format with start and end flags. For detailed specifications, see the [LMC messages documentation](https://docs.openinterpreter.com/protocols/lmc-messages).

Basic message structure:
```json
{"role": "user", "type": "message", "start": true}
{"role": "user", "type": "message", "content": "Your message here"}
{"role": "user", "type": "message", "end": true}
```

### Control Commands
To control the server's behavior, send the following commands:

1. Stop execution:
```json
{"role": "user", "type": "command", "content": "stop"}
```
This stops all execution and message processing.

2. Execute code block:
```json
{"role": "user", "type": "command", "content": "go"}
```
This executes a generated code block and allows the agent to proceed.

**Important**: If `auto_run` is set to `False`, the agent will pause after generating code blocks. You must send the "go" command to continue execution.
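
As a sketch of that approval step (assuming `auto_run` is `False` and `websocket` is an already-open connection; the helper name is hypothetical):

```python
import json

async def approve_code_block(websocket):
    # Hypothetical helper: send the documented "go" command so the paused
    # agent runs its generated code block and continues.
    await websocket.send(json.dumps(
        {"role": "user", "type": "command", "content": "go"}
    ))
```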

### Completion Status
The server indicates completion with the following message:
```json
{"role": "server", "type": "status", "content": "complete"}
```
Ensure your client watches for this message to determine when the interaction is finished.
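
For instance, a client might drain incoming messages until that exact payload arrives (a minimal sketch, assuming `websocket` is an open connection):

```python
import json

async def wait_for_completion(websocket):
    # Hypothetical helper: block until the documented completion status arrives.
    while True:
        data = json.loads(await websocket.recv())
        if data == {"role": "server", "type": "status", "content": "complete"}:
            break
```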

### Error Handling
If an error occurs, the server will send an error message in the following format:
```json
{"role": "server", "type": "error", "content": "Error traceback information"}
```
Your client should be prepared to handle these error messages appropriately.

### Example WebSocket Interaction
Here's a simple example demonstrating the WebSocket interaction:

```python
import websockets
import json
import asyncio

async def websocket_interaction():
    async with websockets.connect("ws://localhost:8000/") as websocket:
        # Send a message
        await websocket.send(json.dumps({"role": "user", "type": "message", "start": True}))
        await websocket.send(json.dumps({"role": "user", "type": "message", "content": "What's 2 + 2?"}))
        await websocket.send(json.dumps({"role": "user", "type": "message", "end": True}))

        # Receive and process messages
        while True:
            message = await websocket.recv()
            data = json.loads(message)

            if data.get("type") == "message":
                print(data.get("content", ""), end="", flush=True)
            elif data.get("type") == "error":
                print(f"Error: {data.get('content')}")
            elif data == {"role": "server", "type": "status", "content": "complete"}:
                break

asyncio.run(websocket_interaction())
```

## HTTP API

### Modifying Settings
To change server settings, send a POST request to `http://localhost:8000/settings`. The payload should conform to [the interpreter object's settings](https://docs.openinterpreter.com/settings/all-settings).

Example:
```python
import requests

settings = {
    "llm": {"model": "gpt-4"},
    "custom_instructions": "You only write Python code.",
    "auto_run": True,
}
response = requests.post("http://localhost:8000/settings", json=settings)
print(response.status_code)
```

### Retrieving Settings
To get current settings, send a GET request to `http://localhost:8000/settings/{property}`.

Example:
```python
response = requests.get("http://localhost:8000/settings/custom_instructions")
print(response.json())
# Output: {"custom_instructions": "You only write Python code."}
```

## Advanced Usage: Accessing the FastAPI App Directly

The FastAPI app is exposed at `async_interpreter.server.app`. This allows you to add custom routes or host the app using Uvicorn directly.

Example of adding a custom route and hosting with Uvicorn:

```python
from interpreter import AsyncInterpreter
from fastapi import FastAPI
import uvicorn

async_interpreter = AsyncInterpreter()
app = async_interpreter.server.app

@app.get("/custom")
async def custom_route():
return {"message": "This is a custom route"}

if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
```
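
Once the server is running, the custom route can be called like any other HTTP endpoint (the URL below assumes the default host and port used above):

```python
import requests

response = requests.get("http://localhost:8000/custom")
print(response.json())  # Expected: {"message": "This is a custom route"}
```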

## Using Docker

You can also run the server using Docker. First, build the Docker image from the root of the repository:

```bash
docker build -t open-interpreter .
```

Then, run the container:

```bash
docker run -p 8000:8000 open-interpreter
```

This will expose the server on port 8000 of your host machine.
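
If your configured model needs credentials, one option is to pass them into the container as environment variables at run time (the variable name below assumes an OpenAI-backed setup, as hinted at in the Dockerfile):

```bash
docker run -p 8000:8000 -e OPENAI_API_KEY="your-api-key" open-interpreter
```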

## Best Practices
1. Always handle the "complete" status message to ensure your client knows when the server has finished processing.
2. If `auto_run` is set to `False`, remember to send the "go" command to execute code blocks and continue the interaction.
3. Implement proper error handling in your client to manage potential connection issues, unexpected server responses, or server-sent error messages.
4. Use the `AsyncInterpreter` class when working with the server in Python to ensure compatibility with asynchronous operations.
5. When deploying in production, consider using the Docker container for easier setup and a consistent environment across machines.