---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Bigcode's StarcoderPlus GGML

These files are GGML format model files for [Bigcode's StarcoderPlus](https://huggingface.co/bigcode/starcoderplus).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers) (see the example after this list)
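
For a quick start from Python, here is a minimal sketch using ctransformers (listed above), which can pull a GGML file straight from this repo; the chosen quant file and generation settings are illustrative, not a recommendation:

```python
# A minimal sketch: load one of this repo's GGML files with ctransformers.
# The model_file below is one choice from the Provided Files table.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/starcoderplus-GGML",
    model_file="starcoderplus.ggmlv3.q4_0.bin",
    model_type="starcoder",  # StarCoder-family model architecture
)

print(llm("def fibonacci(n):", max_new_tokens=64))
```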

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starcoderplus-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starcoderplus-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigcode/starcoderplus)

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I quantised these 'original' method files using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.

They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.

## Explanation of the new k-quant methods

The new methods available are (a worked check of the bpw figures follows the list):
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
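
To make the bpw arithmetic above concrete, here is a minimal sketch that reproduces the quoted figures, assuming each super-block covers 256 weights and stores an fp16 super-block scale, plus an fp16 super-block min for the "type-1" (scale + min) variants:

```python
# Reproduce the bits-per-weight figures from the block layouts above.
# Assumptions: 256 weights per super-block; one fp16 super-block scale,
# plus an fp16 super-block min for "type-1" variants.

def bpw(quant_bits, n_blocks, scale_bits, type1):
    weights = 256
    data = quant_bits * weights                               # the quantized weights
    block_meta = scale_bits * (2 if type1 else 1) * n_blocks  # per-block scales (+ mins)
    super_meta = 16 * (2 if type1 else 1)                     # fp16 scale (+ fp16 min)
    return (data + block_meta + super_meta) / weights

print(bpw(3, 16, 6, type1=False))  # Q3_K -> 3.4375
print(bpw(4, 8, 6, type1=True))    # Q4_K -> 4.5
print(bpw(5, 8, 6, type1=True))    # Q5_K -> 5.5
print(bpw(6, 16, 8, type1=False))  # Q6_K -> 6.5625
```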

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| starcoderplus.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| starcoderplus.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| starcoderplus.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| starcoderplus.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| starcoderplus.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
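
To fetch one of these files from Python rather than through the browser, the huggingface_hub client works; a minimal sketch (swap in whichever filename from the table you want):

```python
# A minimal sketch: download one file from this repo programmatically.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/starcoderplus-GGML",
    filename="starcoderplus.ggmlv3.q4_0.bin",  # any name from the table
)
print(path)  # local path of the downloaded model file
```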

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
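
As a rough rule of thumb, the table's "Max RAM required" figures are about the file size plus ~2.5 GB of overhead, and offloading layers moves a roughly proportional share of the file into VRAM. A sketch under stated assumptions (the layer count is a placeholder, and real usage also depends on context size and backend):

```python
# Rough, assumption-laden RAM estimate with partial GPU offload.
# The ~2.5 GB overhead is read off the table above; n_layers is a
# placeholder value, not a confirmed count for this model.
def est_ram_gb(file_gb, n_offload, n_layers=40, overhead_gb=2.5):
    on_gpu_gb = file_gb * min(n_offload, n_layers) / n_layers
    return file_gb - on_gpu_gb + overhead_gb

print(est_ram_gb(13.09, 0))   # q5_0, no offload -> 15.59 GB, as in the table
print(est_ram_gb(13.09, 20))  # half the layers offloaded -> ~9.0 GB
```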

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m starcoderplus.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
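
The same files can also be driven from Python via llama-cpp-python (listed at the top of this README); a minimal sketch mirroring the command line above, where the model path is whichever local file you downloaded:

```python
# A minimal sketch using llama-cpp-python, mirroring the CLI flags above.
from llama_cpp import Llama

llm = Llama(
    model_path="starcoderplus.ggmlv3.q5_0.bin",  # local path, as with -m
    n_ctx=2048,       # as -c 2048
    n_threads=10,     # as -t 10
    n_gpu_layers=32,  # as -ngl 32; use 0 without GPU acceleration
)

out = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,     # as --temp 0.7
    repeat_penalty=1.1,  # as --repeat_penalty 1.1
)
print(out["choices"][0]["text"])
```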

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Bigcode's StarcoderPlus

No original model card was provided.