Text Generation
Transformers
PyTorch
llama
uncensored
text-generation-inference

Change use_cache to True, which significantly speeds up inference
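For context, this change flips the `use_cache` flag in the model's `config.json`, so generation reuses the key/value cache instead of recomputing attention over the full prefix for every new token. A minimal sketch of the edit (the config dict here is a hypothetical stand-in for the repo's actual `config.json`):

```python
import json

# Hypothetical excerpt of the model's config.json before the PR:
config = {"model_type": "llama", "use_cache": False}

# The PR's change: enable the KV cache for generation.
config["use_cache"] = True

print(json.dumps(config, indent=2))
```

With `use_cache: true`, libraries such as Transformers cache past key/value states during autoregressive decoding, which is what makes token-by-token generation noticeably faster.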

#2 opened by TheBloke
No description provided.
ehartford changed pull request status to merged
Quixi AI org

Thank you!
