Update README.md

README.md CHANGED
@@ -11,4 +11,37 @@ license: apache-2.0

short_description: A replica of the basic smoltalk chatbot I run locally
---
# Basic SmolLM2 chatbot

This is a very basic chatbot using HuggingFaceTB/SmolLM2-[x]-Instruct, hosted on the same host as the app. It is essentially a replica of the chatbot I run locally.
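For reference, a single chat turn against a SmolLM2 Instruct model can be sketched with transformers roughly like this. This is a minimal sketch, not the actual app.py; the generation settings simply mirror the defaults from the configuration section of this README.

```python
# Minimal sketch of one chat turn with transformers; this is NOT the actual
# app.py, and the generation settings just mirror the README defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "HuggingFaceTB/SmolLM2-135M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

messages = [{"role": "user", "content": "Hello, who are you?"}]
# apply_chat_template formats the conversation the way the model was trained on
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(
    input_ids,
    max_new_tokens=250,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.2,
)
# Decode only the newly generated tokens, not the prompt
reply = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(reply)
```

Swapping `MODEL` for one of the larger checkpoints is the only change needed to scale this sketch up.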
## Hardware Scale Up

- I recommend running this on somewhat better hardware than this Space is set up with.
- I'm using the free-tier Space, but it needs a few more CPUs to generate text fast enough to be useful.
- It is rather slow on this setup, but when I run it on my laptop it works very well on CPU, without a GPU.
## To run locally

- Download the files or clone the repo.
- Make sure you have a supported version of transformers and torch installed (or run `pip3 install -r requirements.txt` from the root folder of this repo).
- Run `python3 app.py` from the root folder of this repo.
- Point your browser to http://0.0.0.0:7860
## Configuration options

- In app.py, there is a block of settings.
    - If you run this locally on a laptop with at least 5 CPU cores, I would recommend saving all your local work and then setting `MODEL` to HuggingFaceTB/SmolLM2-360M-Instruct.
    - If this works without signs of resource saturation, try setting `MODEL` to HuggingFaceTB/SmolLM2-1.7B-Instruct. This writes well and works fine on my laptop, which is about two years old.
```
MAX_NEW_TOKENS = 250
MODEL="HuggingFaceTB/SmolLM2-135M-Instruct"
# MODEL="HuggingFaceTB/SmolLM2-360M-Instruct"
# MODEL="HuggingFaceTB/SmolLM2-1.7B-Instruct"
TEMPERATURE = 0.6
TOP_P = 0.95
REPETITION_PENALTY = 1.2
```
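As a side note on what `TEMPERATURE` and `TOP_P` actually do, here is a small self-contained sketch. It is illustrative only, not code from app.py, and the toy distribution is made up; it just shows how these two knobs reshape a next-token distribution before sampling.

```python
# Illustrative only (not from app.py): how TEMPERATURE and TOP_P reshape a toy
# next-token distribution before sampling. The input distribution is made up.
import math

def temperature_top_p(probs, temperature=0.6, top_p=0.95):
    # Temperature: divide the log-probabilities; values < 1 sharpen the
    # distribution toward the most likely tokens.
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)
    weights = [math.exp(x - m) for x in logits]
    total = sum(weights)
    scaled = [w / total for w in weights]
    # Top-p (nucleus) filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalize over that set.
    order = sorted(range(len(scaled)), key=lambda i: -scaled[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += scaled[i]
        if mass >= top_p:
            break
    kept_total = sum(scaled[i] for i in kept)
    return {i: scaled[i] / kept_total for i in kept}

dist = temperature_top_p([0.5, 0.3, 0.15, 0.05])
# At temperature 0.6 the most likely token gains probability mass, and
# top_p=0.95 drops the long tail before sampling.
```

With these defaults, sampling stays fairly focused while still allowing some variety; raising `TEMPERATURE` or `TOP_P` makes the chatbot's replies more varied but less predictable.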