Benjamin Consolvo committed
Commit · 537fcab
Parent(s): 79b5725
readme updates

README.md CHANGED
@@ -13,13 +13,13 @@ short_description: Let AI agents plan your next vacation!
-VacAIgent leverages the CrewAI agentic framework to automate and enhance the trip planning experience, integrating a user-friendly Streamlit interface. This project demonstrates how autonomous AI agents can collaborate and execute complex tasks efficiently. It takes advantage of the inference endpoint called [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference) with an OpenAI-compatible API key
-**Check out the video below for code walkthrough** 👇
@@ -30,17 +30,17 @@ _Forked and enhanced from the_ [_crewAI examples repository_](https://github.com
-1. Get the API key from
-2. Get the API from
-3. Bring your OpenAI
-git clone https://
-cd
@@ -56,7 +56,7 @@ MODEL_ID="meta-llama/Llama-3.3-70B-Instruct"
-Here we are using the model [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) by default, and the model endpoint is
@@ -80,10 +80,12 @@ For enhanced privacy and customization, you could easily substitute cloud-hosted
-Connect to LLMs on Intel
-Chat with 6K+ fellow developers on the Intel DevHub Discord
@@ -13,13 +13,13 @@ short_description: Let AI agents plan your next vacation!

# 🏖️ VacAIgent: Let AI agents plan your next vacation!

VacAIgent leverages the CrewAI agentic framework to automate and enhance trip planning behind a user-friendly Streamlit interface. The project demonstrates how autonomous AI agents can collaborate and efficiently execute the complex tasks involved in planning a vacation. It uses the inference endpoint [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference) with an OpenAI-compatible API key.

_Forked and enhanced from the_ [_crewAI examples repository_](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner). You can find the application hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/Intel/vacaigent):

[](https://huggingface.co/spaces/Intel/vacaigent)

**Check out the video below for a code walkthrough; the steps are also written out below** 👇

<a href="https://youtu.be/nKG_kbQUDDE">
  <img src="https://img.youtube.com/vi/nKG_kbQUDDE/hqdefault.jpg" alt="Watch the video" width="100%">
@@ -30,17 +30,17 @@ _Forked and enhanced from the_ [_crewAI examples repository_](https://github.com

## Installing and Using the Application

### Pre-Requisites

1. Get an API key from [ScrapingAnt](https://scrapingant.com/) for HTML web scraping.
2. Get an API key from [Serper](https://serper.dev/) for the Google Search API.
3. Bring your OpenAI-compatible API key.
4. Bring your model endpoint URL and LLM model ID.
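The four prerequisites above end up as settings the app reads at startup. As a quick sanity check before launching, a minimal sketch: `MODEL_ID` and `MODEL_BASE_URL` appear in this README's `.env` example, while `SCRAPINGANT_API_KEY`, `SERPER_API_KEY`, and `OPENAI_API_KEY` are assumed names for illustration — check the repository's code for the exact ones.

```python
import os

# MODEL_ID and MODEL_BASE_URL come from this README's .env example;
# the three API-key names below are assumptions for illustration.
REQUIRED_VARS = [
    "SCRAPINGANT_API_KEY",  # ScrapingAnt HTML web scraping
    "SERPER_API_KEY",       # Serper Google Search API
    "OPENAI_API_KEY",       # OpenAI-compatible API key
    "MODEL_ID",             # e.g. meta-llama/Llama-3.3-70B-Instruct
    "MODEL_BASE_URL",       # e.g. https://api.inference.denvrdata.com/v1/
]

def missing_vars(env=None):
    """Return the names of required settings that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: report anything that still needs to be configured.
for name in missing_vars():
    print(f"Missing setting: {name}")
```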
### Installation steps

To host the interface locally, first clone the repository:

```sh
git clone https://huggingface.co/spaces/Intel/vacaigent
cd vacaigent
```

Then, install the necessary libraries:
@@ -56,7 +56,7 @@ MODEL_ID="meta-llama/Llama-3.3-70B-Instruct"

```sh
MODEL_ID="meta-llama/Llama-3.3-70B-Instruct"
MODEL_BASE_URL="https://api.inference.denvrdata.com/v1/"
```

Here we are using the model [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) by default, and the model endpoint is hosted on Denvr Dataworks; you can, however, bring your own OpenAI-compatible API key, model ID, and model endpoint URL.

**Note**: You can alternatively add these secrets directly to Hugging Face Spaces Secrets, under the Settings tab, if deploying the Streamlit application directly on Hugging Face.
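Because the endpoint is OpenAI-compatible, any standard client can call it with these settings. A stdlib-only sketch of the kind of call the agents ultimately make — the function names and prompt are illustrative; the request and response shapes follow the standard OpenAI chat-completions format:

```python
import json
import urllib.request

def build_chat_request(base_url, model_id, prompt):
    """Build (url, payload) for an OpenAI-compatible /chat/completions call."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

def ask(base_url, api_key, model_id, prompt):
    """Send the request and return the assistant's reply text."""
    url, payload = build_chat_request(base_url, model_id, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

For example, `ask(MODEL_BASE_URL, OPENAI_API_KEY, MODEL_ID, "Suggest three June destinations.")` would return the model's reply as a string.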
@@ -80,10 +80,12 @@ For enhanced privacy and customization, you could easily substitute cloud-hosted

VacAIgent is open-sourced under the MIT license.

## Follow Up

Connect to LLMs on Intel Gaudi AI accelerators with just an endpoint and an OpenAI-compatible API key, using the inference endpoint [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference), powered by OPEA. At the time of writing, the endpoint is available from the cloud provider [Denvr Dataworks](https://www.denvrdata.com/intel).
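With just the endpoint URL and key, you can check which model IDs an endpoint serves before wiring it into the app. A stdlib-only sketch, assuming the endpoint exposes the standard OpenAI-compatible `/models` route (function names are illustrative):

```python
import json
import urllib.request

def models_request(base_url, api_key):
    """Build a GET request for the OpenAI-compatible /models listing."""
    url = base_url.rstrip("/") + "/models"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )

def list_model_ids(base_url, api_key):
    """Return the model IDs the endpoint currently serves."""
    with urllib.request.urlopen(models_request(base_url, api_key)) as resp:
        return [entry["id"] for entry in json.load(resp)["data"]]
```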
Chat with 6K+ fellow developers on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t).

Follow [Intel Software on LinkedIn](https://www.linkedin.com/showcase/intel-software/).

For more Intel AI developer resources, see [developer.intel.com/ai](https://developer.intel.com/ai).