Merge branch 'master' into retriver_asher
Wendong-Fan authored Oct 5, 2024
2 parents b3d165d + 8a2f0b7 commit 6f4208f
Showing 5 changed files with 136 additions and 149 deletions.
25 changes: 19 additions & 6 deletions camel/models/openai_compatibility_model.py
@@ -12,6 +12,7 @@
# limitations under the License.
# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. ===========

import os
from typing import Any, Dict, List, Optional, Union

from openai import OpenAI, Stream
@@ -25,14 +26,14 @@


class OpenAICompatibilityModel:
r"""Constructor for model backend supporting OpenAI compatibility."""
r"""LLM API served by OpenAI-compatible providers."""

def __init__(
self,
model_type: str,
model_config_dict: Dict[str, Any],
api_key: str,
url: str,
api_key: Optional[str] = None,
url: Optional[str] = None,
token_counter: Optional[BaseTokenCounter] = None,
) -> None:
r"""Constructor for model backend.
@@ -51,13 +52,25 @@ def __init__(
"""
self.model_type = model_type
self.model_config_dict = model_config_dict
self._token_counter = token_counter
self._url = url or os.environ.get("OPENAI_COMPATIBILIY_API_BASE_URL")
self._api_key = api_key or os.environ.get(
"OPENAI_COMPATIBILIY_API_KEY"
)
if self._url is None:
raise ValueError(
"For OpenAI-compatible models, you must provide the `url`."
)
if self._api_key is None:
raise ValueError(
"For OpenAI-compatible models, you must provide the `api_key`."
)
self._client = OpenAI(
timeout=60,
max_retries=3,
api_key=api_key,
base_url=url,
base_url=self._url,
api_key=self._api_key,
)
self._token_counter = token_counter

def run(
self,
73 changes: 73 additions & 0 deletions docs/get_started/installation.md
@@ -0,0 +1,73 @@
# Installation

## [Option 1] Install from PyPI
To install the base CAMEL library:
```bash
pip install camel-ai
```
Some features require extra dependencies:
- To install with all dependencies:
```bash
pip install 'camel-ai[all]'
```
- To use the HuggingFace agents:
```bash
pip install 'camel-ai[huggingface-agent]'
```
- To enable RAG or use agent memory:
```bash
pip install 'camel-ai[tools]'
```

## [Option 2] Install from Source
### Install from Source with Poetry
```bash
# Make sure your Python version is 3.10 or later
# You can use pyenv to manage multiple Python versions on your system
# Clone github repo
git clone https://github.com/camel-ai/camel.git
# Change directory into project directory
cd camel
# If you haven't installed Poetry yet
pip install poetry # (Optional)
# We suggest using Python 3.10
poetry env use python3.10 # (Optional)
# Activate CAMEL virtual environment
poetry shell
# Install the base CAMEL library
# It takes about 90 seconds
poetry install
# Install CAMEL with all dependencies
poetry install -E all # (Optional)
# Exit the virtual environment
exit
```

### Install from Source with Conda and Pip
```bash
# Create a conda virtual environment
conda create --name camel python=3.10
# Activate CAMEL conda environment
conda activate camel
# Clone github repo
git clone -b v0.2.1a https://github.com/camel-ai/camel.git
# Change directory into project directory
cd camel
# Install CAMEL from source
pip install -e .
# Or if you want to use all other extra packages
pip install -e '.[all]' # (Optional)
```
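The comments above state that CAMEL needs Python 3.10 or later. A quick check of that requirement can be sketched as follows (the helper function is illustrative, not part of CAMEL):

```python
import sys

def meets_requirement(version_info, minimum=(3, 10)):
    """True if the interpreter satisfies CAMEL's stated minimum Python version."""
    return tuple(version_info[:2]) >= minimum

print(meets_requirement((3, 10, 4)))  # True
print(meets_requirement((3, 9, 18)))  # False
print("current interpreter OK:", meets_requirement(sys.version_info))
```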
162 changes: 24 additions & 138 deletions docs/get_started/setup.md
@@ -1,105 +1,29 @@
# Installation and Setup
## 🕹 Installation

### [Option 1] Install from PyPI
To install the base CAMEL library:
```bash
pip install camel-ai
```
Some features require extra dependencies:
- To install with all dependencies:
```bash
pip install 'camel-ai[all]'
```
- To use the HuggingFace agents:
```bash
pip install 'camel-ai[huggingface-agent]'
```
- To enable RAG or use agent memory:
```bash
pip install 'camel-ai[tools]'
```

### [Option 2] Install from Source
#### Install from Source with Poetry
```bash
# Make sure your python version is later than 3.10
# You can use pyenv to manage multiple python verisons in your sytstem
# Clone github repo
git clone https://github.com/camel-ai/camel.git
# Change directory into project directory
cd camel
# If you didn't install peotry before
pip install poetry # (Optional)
# We suggest using python 3.10
poetry env use python3.10 # (Optional)
# Activate CAMEL virtual environment
poetry shell
# Install the base CAMEL library
# It takes about 90 seconds
poetry install
# Install CAMEL with all dependencies
poetry install -E all # (Optional)
# Exit the virtual environment
exit
```

#### Install from Source with Conda and Pip
```bash
# Create a conda virtual environment
conda create --name camel python=3.10
# Activate CAMEL conda environment
conda activate camel
# Clone github repo
git clone -b v0.2.1a https://github.com/camel-ai/camel.git
# Change directory into project directory
cd camel
# Install CAMEL from source
pip install -e .
# Or if you want to use all other extra packages
pip install -e '.[all]' # (Optional)
```


## 🕹 API Setup
# API Setup
Our agents can be deployed with either the OpenAI API or your local models.

### [Option 1] Using OpenAI API
Assessing the OpenAI API requires the API key, which you may obtained from [here](https://platform.openai.com/account/api-keys). We here provide instructions for different OS.
## [Option 1] Using OpenAI API
Accessing the OpenAI API requires an API key, which you can get from [here](https://platform.openai.com/account/api-keys). Below we provide instructions for different operating systems.

#### Unix-like System (Linux / MacOS)
### Unix-like System (Linux / MacOS)
```bash
echo 'export OPENAI_API_KEY="your_api_key"' >> ~/.zshrc

# If you are using other proxy services like Azure
echo 'export OPENAI_API_BASE_URL="your_base_url"' >> ~/.zshrc # (Optional)
# # If you are using other proxy services like Azure [TODO]
# echo 'export OPENAI_API_BASE_URL="your_base_url"' >> ~/.zshrc # (Optional)

# Let the change take place
source ~/.zshrc
```

Replace `~/.zshrc` with `~/.bashrc` if you are using bash.

#### Windows
### Windows
If you are using Command Prompt:
```bash
set OPENAI_API_KEY="your_api_key"

# If you are using other proxy services like Azure
set OPENAI_API_BASE_URL="your_base_url" # (Optional)
# If you are using other proxy services like Azure [TODO]
# set OPENAI_API_BASE_URL="your_base_url" # (Optional)
```
Or if you are using PowerShell:
```powershell
@@ -110,68 +34,30 @@ $env:OPENAI_API_BASE_URL="your_base_url" # (Optional)
```
On Windows, these commands set the environment variable only for the current Command Prompt or PowerShell session. Use `setx` or the System Properties dialog to make the change persist across new sessions.

### General method

### [Option 2] Using Local Models
In the current landscape, for those seeking highly stable content generation, OpenAI's GPT-3.5 turbo, GPT-4o are often recommended. However, the field is rich with many other outstanding open-source models that also yield commendable results. CAMEL can support developers to delve into integrating these open-source large language models (LLMs) to achieve project outputs based on unique input ideas.
#### Example: Using Ollama to set Llama 3 locally
Create a file named `.env` in your project directory, with the following setting.

- Download [Ollama](https://ollama.com/download).
- After setting up Ollama, pull the Llama3 model by typing the following command into the terminal:
```bash
ollama pull llama3
OPENAI_API_KEY=<your-openai-api-key>
```
- Create a ModelFile similar the one below in your project directory.
```bash
FROM llama3

# Set parameters
PARAMETER temperature 0.8
PARAMETER stop Result
Then, load the environment variables in your python script:

# Sets a custom system message to specify the behavior of the chat assistant
```python
from dotenv import load_dotenv
import os

# Leaving it blank for now.
load_dotenv()

SYSTEM """ """
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
```
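If you prefer not to depend on `python-dotenv`, a minimal parser for the simple `KEY=VALUE` format shown above can be sketched with the standard library only (error handling and quoting rules omitted; `parse_env_file` is an illustrative helper, not a CAMEL API):

```python
import pathlib
import tempfile

def parse_env_file(path):
    """Minimal .env parser: KEY=VALUE lines; blanks and '#' comments are ignored."""
    values = {}
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

# Write a throwaway .env file and parse it back.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# CAMEL credentials\nOPENAI_API_KEY=sk-test-123\n")

print(parse_env_file(f.name))  # {'OPENAI_API_KEY': 'sk-test-123'}
```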
- Create a script to get the base model (llama3) and create a custom model using the ModelFile above. Save this as a .sh file:
```bash
#!/bin/zsh

# variables
model_name="llama3"
custom_model_name="camel-llama3"

#get the base model
ollama pull $model_name
## [Option 2] Using other APIs

If you are using APIs not provided by OpenAI, refer to [Models/Using Models by API calling](../key_modules/models.md#using-models-by-api-calling).

## [Option 3] Using Local Models
If you are using local models, refer to [Models/Using Local Models](../key_modules/models.md#using-on-device-open-source-models).

#create the model file
ollama create $custom_model_name -f ./Llama3ModelFile
```
- Navigate to the directory where the script and ModelFile are located and run the script. Enjoy your Llama3 model, enhanced by CAMEL's excellent agents.
```python
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.types import ModelPlatformType
ollama_model = ModelFactory.create(
model_platform=ModelPlatformType.OLLAMA,
model_type="llama3",
url="http://localhost:11434/v1",
model_config_dict={"temperature": 0.4},
)
assistant_sys_msg = BaseMessage.make_assistant_message(
role_name="Assistant",
content="You are a helpful assistant.",
)
agent = ChatAgent(assistant_sys_msg, model=ollama_model, token_limit=4096)
user_msg = BaseMessage.make_user_message(
role_name="User", content="Say hi to CAMEL"
)
assistant_response = agent.step(user_msg)
print(assistant_response.msg.content)
```
1 change: 1 addition & 0 deletions docs/index.rst
@@ -16,6 +16,7 @@ Main Documentation
:caption: Get Started
:name: getting_started

get_started/installation.md
get_started/setup.md

.. toctree::
24 changes: 19 additions & 5 deletions docs/key_modules/models.md
@@ -21,6 +21,7 @@ The following table lists currently supported model platforms by CAMEL.
| Azure OpenAI | gpt-4-turbo | Y |
| Azure OpenAI | gpt-4 | Y |
| Azure OpenAI | gpt-3.5-turbo | Y |
| OpenAI Compatible | Depends on the provider | ----- |
| Mistral AI | mistral-large-2 | N |
| Mistral AI | open-mistral-nemo | N |
| Mistral AI | codestral | N |
@@ -49,9 +50,9 @@ The following table lists currently supported model platforms by CAMEL.
| Together AI | https://docs.together.ai/docs/chat-models | ----- |
| LiteLLM | https://docs.litellm.ai/docs/providers | ----- |

## 3. Model Calling Template
## 3. Using Models by API calling

Here is the example code to use a chosen model. To utilize a different model, you can simply change three parameters the define your model to be used: `model_platform`, `model_type`, `model_config_dict` .
Here is example code using a specific model (gpt-4o-mini). To use another model, simply change these three parameters: `model_platform`, `model_type`, and `model_config_dict`.

```python
from camel.models import ModelFactory
@@ -69,15 +70,28 @@ model = ModelFactory.create(

# Define an assistant message
system_msg = BaseMessage.make_assistant_message(
role_name="Assistant",
content="You are a helpful assistant.",
role_name="Assistant",
content="You are a helpful assistant.",
)

# Initialize the agent
ChatAgent(system_msg, model=model)
```

## 4. Open Source LLMs
If you want to use an OpenAI-compatible API, you can replace the `model` with the following code:

```python
import os

from camel.models.openai_compatibility_model import OpenAICompatibilityModel

model = OpenAICompatibilityModel(
model_type="a-string-representing-the-model-type",
model_config_dict={"max_tokens": 4096}, # and other parameters you want
url=os.environ.get("OPENAI_COMPATIBILIY_API_BASE_URL"),
api_key=os.environ.get("OPENAI_COMPATIBILIY_API_KEY"),
)
```
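"OpenAI-compatible" here means the provider accepts the same `/chat/completions` request shape as OpenAI, so only the base URL, API key, and model name change between providers. A sketch of that request body (field values are illustrative):

```python
import json

# The JSON shape an OpenAI-compatible endpoint expects at POST {base_url}/chat/completions.
request = {
    "model": "a-string-representing-the-model-type",
    "messages": [{"role": "user", "content": "Say hi to CAMEL"}],
    "max_tokens": 4096,
}

# Serialize as it would go over the wire, then read a field back.
body = json.dumps(request)
print(json.loads(body)["messages"][0]["content"])  # Say hi to CAMEL
```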

## 4. Using On-Device Open Source Models
In the current landscape, for those seeking highly stable content generation, OpenAI's gpt-4o-mini and gpt-4o are often recommended. However, the field is rich with many other outstanding open-source models that also yield commendable results. CAMEL supports developers in integrating these open-source large language models (LLMs) into their own projects.

While proprietary models like gpt-4o-mini and gpt-4o have set high standards for content generation, open-source alternatives offer viable solutions for experimentation and practical use. These models, supported by active communities and continuous improvements, provide flexibility and cost-effectiveness for developers and researchers.
