Update README.md
README.md CHANGED
@@ -8,9 +8,17 @@ tags:
 
 <img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/12mfdeAJzW1NdKrN_J--L.png" alt="firefunction" width="400"/>
 
-FireFunction is a state-of-the-art function calling model with a commercially viable license.
+FireFunction is a state-of-the-art function calling model with a commercially viable license. Key info and highlights:
 
-💡 The model is also hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v1) platform.
+💡 The model is also hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v1) platform, where it is offered for free during a limited beta period.
+
+⭐️ Near GPT-4 level quality for real-world use cases of structured information generation and routing decision-making.
+
+💨 Blazing fast speed: inference is roughly 4x faster than GPT-4 when using FireFunction hosted on the Fireworks platform.
+
+🔄 Support for the "any" option in tool_choice. FireFunction is the only model we are aware of that can be forced to always choose a function, which is particularly helpful for routing use cases.
+
+✅ The model is also API compatible with [OpenAI function calling](https://platform.openai.com/docs/guides/function-calling).
 ```sh
 OPENAI_API_BASE=https://api.fireworks.ai/inference/v1
 OPENAI_API_KEY=<YOUR_FIREWORKS_API_KEY>
@@ -28,14 +36,6 @@ MODEL=accounts/fireworks/models/firefunction-v1
 
 # Intended Use and Limitations
 
-### Key Highlights
-⭐️ Near GPT-4 level quality for real-world use cases of structured information generation and routing decision-making
-
-💨 Blazing fast speed. Inference speeds are roughly 4x that of GPT-4 when using FireFunction hosted on the Fireworks platform
-
-🔄 Support for "any" paramter in tool_choice. Firefunction is the only model that we're aware that supports an option for the model to always choose a function - particularly helpful for routing use cases
-
-
 ### Primary Use
 Although the model was trained on a variety of tasks, it performs best on:
 * single-turn request routing to a function picked from a pool of up to 20 function specs.
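The new README copy points the standard OpenAI client variables at the Fireworks endpoint and links the OpenAI function-calling guide. As a minimal sketch of how that compatibility is typically used from the openai Python client (v1+): only the base URL and model name below come from the README; the get_weather tool is a hypothetical example following the linked OpenAI schema, not anything Fireworks-specific shown in this diff.

```python
from openai import OpenAI

# Point the OpenAI-compatible client at the Fireworks endpoint from the README.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<YOUR_FIREWORKS_API_KEY>",
)

# Hypothetical example tool; the schema follows the OpenAI function-calling guide.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v1",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# Any function call the model makes comes back in the OpenAI-compatible
# tool_calls field of the assistant message.
print(response.choices[0].message.tool_calls)
```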
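The 🔄 highlight describes an "any" option for tool_choice that forces the model to always pick a function, aimed at routing. The diff does not show the request shape, so the sketch below assumes tool_choice is passed as the plain string "any" on an otherwise standard OpenAI-style request; the routing functions are hypothetical.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<YOUR_FIREWORKS_API_KEY>",
)

# Hypothetical routing pool; the README says the model performs best when
# picking from up to 20 function specs in a single turn.
tools = [
    {
        "type": "function",
        "function": {
            "name": name,
            "description": desc,
            "parameters": {"type": "object", "properties": {}},
        },
    }
    for name, desc in [
        ("cancel_order", "Cancel an existing order"),
        ("track_order", "Look up the shipping status of an order"),
        ("contact_support", "Escalate the request to a human agent"),
    ]
]

response = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v1",
    messages=[{"role": "user", "content": "Where is my package?"}],
    tools=tools,
    # Assumption: "any" is passed as a plain string, per the README highlight;
    # the upstream OpenAI spec only defines "none", "auto", or a specific
    # {"type": "function", ...} object.
    tool_choice="any",
)

call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
```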