Training procedure

The following bitsandbytes quantization config was used during training (see the code sketch after this list):

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
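
For reference, here is a minimal sketch of the same settings expressed with the transformers BitsAndBytesConfig API (the variable name bnb_config is illustrative):

from transformers import BitsAndBytesConfig
import torch

# Mirror of the settings listed above; load_in_8bit defaults to False.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)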

Framework versions

  • PEFT 0.5.0.dev0
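
As a minimal sketch, the adapter can be loaded on top of the base model with peft (the 8-bit load matches the generate.py invocation below; LlamaTokenizer is used because the decapoda-research repo predates the current auto-tokenizer naming):

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the quantized base model, then attach the LoRA adapter on top.
base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Andyrasika/lora-bitcoin-tweets-sentiment")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")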

Prompt format

### Instruction:
<prompt>

### Input:
<input>

### Output:
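
A small helper that assembles this template (build_prompt is a hypothetical name, not part of the repository):

def build_prompt(instruction: str, input_text: str = "") -> str:
    # Follow the template above; the Input block is optional.
    prompt = f"### Instruction:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    prompt += "### Output:\n"
    return prompt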

Usage

We begin by cloning the repository, then use the generate.py script to test the model:

!git clone https://github.com/tloen/alpaca-lora.git
%cd alpaca-lora
!git checkout a48d947

The script launches a Gradio app that serves the base model with our LoRA weights applied:

!python generate.py \
    --load_8bit \
    --base_model 'decapoda-research/llama-7b-hf' \
    --lora_weights 'Andyrasika/lora-bitcoin-tweets-sentiment' \
    --share_gradio
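
Alternatively, inference can be run directly in Python. Below is a sketch reusing the model, tokenizer, and build_prompt helper defined above; the instruction and tweet are illustrative, not the exact training prompt:

import torch

prompt = build_prompt(
    "Detect the sentiment of the tweet.",
    "Bitcoin is pumping again, feeling bullish!",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))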