runtime error
Exit code: 139. Reason:
load_backend: loaded CPU backend from /app/libggml-cpu-icelake.so
build: 5332 (7c28a74e) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
system info: n_threads = 2, n_threads_batch = 2, total_threads = 16
system_info: n_threads = 2 (n_threads_batch = 2) / 16 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |
main: binding port with default address family
main: HTTP server is listening, hostname: 0.0.0.0, port: 7860, http threads: 15
main: loading model
srv    load_model: loading model '/sahabatai.gguf'
gguf_init_from_file_impl: invalid magic characters: '<!do', expected 'GGUF'
llama_model_load: error loading model: llama_model_loader: failed to load model from /sahabatai.gguf
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model '/sahabatai.gguf'
srv    load_model: failed to load model, '/sahabatai.gguf'
srv    operator(): operator(): cleaning up before exit...
main: exiting due to model loading error
double free or corruption (out)
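The decisive line is `gguf_init_from_file_impl: invalid magic characters: '<!do', expected 'GGUF'`: the file at /sahabatai.gguf does not begin with the GGUF magic bytes, and '<!do' looks like the start of an HTML document (e.g. `<!doctype html`), which usually means an HTML error page was saved in place of the actual model weights. The `double free or corruption (out)` and exit code 139 appear to be a secondary crash while the server cleans up after the failed load, not the root cause. A minimal sketch of a check, assuming the model path from the log above:

```python
# Sanity check for the downloaded model file (path taken from the log above).
# A valid GGUF file begins with the 4-byte magic b"GGUF"; the loader reported
# b"<!do" instead, which is consistent with an HTML page having been written
# where the model file was expected.
MODEL_PATH = "/sahabatai.gguf"

with open(MODEL_PATH, "rb") as f:
    magic = f.read(4)

if magic == b"GGUF":
    print("OK: file starts with the GGUF magic.")
else:
    print(f"Not a GGUF file: first bytes are {magic!r} "
          "(an HTML error page is a common culprit).")
```

If the first bytes are HTML, re-download the GGUF file (for example with `huggingface-cli download` or a direct file URL) and verify that the on-disk size matches the published file before restarting the server.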