Merged
2 changes: 2 additions & 0 deletions docs/guides/profiles.mdx
@@ -8,6 +8,8 @@ Profiles are Python files that configure Open Interpreter. A wide range of field

You can access your Profiles by running `interpreter --profiles`. This will open the directory where all of your Profiles are stored.

If you want to make your own profile, start with the [Template Profile](https://github.com/OpenInterpreter/open-interpreter/blob/main/interpreter/terminal_interface/profiles/defaults/template_profile.py).

To apply a Profile to an Open Interpreter session, run `interpreter --profile <name>`.

# Example Profile
interpreter/terminal_interface/profiles/defaults/template_profile.py
@@ -0,0 +1,44 @@
"""
This is the template Open Interpreter profile.

A starting point for creating a new profile.

Learn about all the available settings - https://docs.openinterpreter.com/settings/all-settings

"""

# Import the interpreter
from interpreter import interpreter

# You can import other libraries too
from datetime import date

# You can set variables
today = date.today()

# LLM Settings

Review comment:

Turned all-settings into a list in this format in case we want to provide them all here and then recommend removing irrelevant ones. It could definitely be overwhelming, though, so I'm not sure it's needed:

# LLM Settings - https://docs.openinterpreter.com/settings/all-settings#language-model

interpreter.llm.model = "groq/llama-3.1-70b-versatile" # Specifies which language model to use
interpreter.llm.temperature = 0.7 # Sets the randomness level of the model's output
interpreter.llm.context_window = 110000 # Manually set the context window size in tokens
interpreter.llm.max_tokens = 4096 # Sets the maximum number of tokens for model generation
interpreter.llm.max_output = 1000 # Set the maximum number of characters for code outputs
interpreter.llm.api_base = "https://api.example.com" # Specify custom API base URL
interpreter.llm.api_key = "your_api_key_here" # Set your API key for authentication
interpreter.llm.api_version = "2.0.2" # Optionally set the API version to use
interpreter.llm.supports_functions = False # Inform if the model supports function calling
interpreter.llm.supports_vision = False # Inform if the model supports vision

# Interpreter Settings - https://docs.openinterpreter.com/settings/all-settings#interpreter

interpreter.loop = True # Runs Open Interpreter in a loop for task completion
interpreter.verbose = True # Run the interpreter in verbose mode for debugging
interpreter.safe_mode = 'ask' # Enable or disable experimental safety mechanisms
interpreter.auto_run = True # Automatically run without user confirmation
interpreter.max_budget = 0.01 # Sets the maximum budget limit for the session in USD
interpreter.offline = True # Run the model locally
interpreter.system_message = "You are Open Interpreter..." # Modify the system message (not recommended)
interpreter.anonymized_telemetry = False # Opt out of telemetry
interpreter.user_message_template = "{content} Please send me some code..." # Template applied to the User's message
interpreter.always_apply_user_message_template = False # Whether to apply User Message Template to every message
interpreter.code_output_template = "Code output: {content}\nWhat does this output mean?" # Template applied to code output
interpreter.empty_code_output_template = "The code above was executed..." # Message sent when code produces no output
interpreter.code_output_sender = "user" # Determines who sends code output messages

# Computer Settings - https://docs.openinterpreter.com/settings/all-settings#computer

interpreter.computer.offline = True # Run the computer in offline mode
interpreter.computer.verbose = True # Used for debugging interpreter.computer
interpreter.computer.emit_images = True # Controls whether the computer should emit images

# Import Computer API - https://docs.openinterpreter.com/code-execution/computer-api

interpreter.computer.import_computer_api = True

# Toggle OS Mode - https://docs.openinterpreter.com/guides/os-mode

interpreter.os = False

# Set Custom Instructions to improve your Interpreter's performance at a given task

interpreter.custom_instructions = f"""
Today's date is {today}.
"""

Contributor Author:

I don't think we should include all settings, because then we'd have to maintain the same documentation in multiple places. A clear link to the docs will let people easily see up-to-date info. Having the highest-priority settings should give people enough help to get started.

Any settings that you'd like to see included?

interpreter.llm.model = "groq/llama-3.1-70b-versatile"
interpreter.llm.context_window = 110000
interpreter.llm.max_tokens = 4096
interpreter.llm.api_base = "https://api.example.com"
interpreter.llm.api_key = "your_api_key_here"
interpreter.llm.supports_functions = False
interpreter.llm.supports_vision = False


# Interpreter Settings
interpreter.offline = False
interpreter.loop = True
interpreter.auto_run = False

# Toggle OS Mode - https://docs.openinterpreter.com/guides/os-mode
interpreter.os = False

# Import Computer API - https://docs.openinterpreter.com/code-execution/computer-api
interpreter.computer.import_computer_api = True


# Set Custom Instructions to improve your Interpreter's performance at a given task
interpreter.custom_instructions = f"""
Today's date is {today}.
"""
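A note on the last setting: `custom_instructions` is assigned an f-string, so `{today}` is interpolated at the moment the profile is loaded, not when the interpreter later uses the message. A minimal standalone sketch of that pattern (it does not import the `interpreter` package, just demonstrates the string behavior the template relies on):

```python
from datetime import date

# Compute a value at profile-load time, as the template does
today = date.today()

# An f-string interpolates {today} immediately
custom_instructions = f"""
Today's date is {today}.
"""

# A plain string would leave the placeholder literal
plain = """
Today's date is {today}.
"""

print(str(today) in custom_instructions)  # True: value was interpolated
print("{today}" in plain)                 # True: placeholder left literal
```

This is why the suggested all-settings listing above needs the `f` prefix on its `custom_instructions` string as well; without it, the model would be told the literal text `{today}` instead of the date.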