---
library_name: transformers
tags:
- safetensors
- pruna-ai
---

# Model Card for pruna-test/test-save-tiny-random-llama4-smashed

This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

## Usage

First things first, you need to install the pruna library:

```bash
pip install pruna
```

You can [use the transformers library to load the model](https://huggingface.co/pruna-test/test-save-tiny-random-llama4-smashed?library=transformers), but this might not include all optimizations by default.

To ensure that all optimizations are applied, load the model with the pruna library:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "pruna-test/test-save-tiny-random-llama4-smashed"
)
# we can then run inference using the methods supported by the base model
```


For inference, you can use the inference methods of the original model, as shown in [the original model card](https://huggingface.co/hf-internal-testing/tiny-random-llama4?library=transformers); a minimal sketch follows below. Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
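
As an illustrative sketch (not guaranteed by this card), the following shows what inference could look like, assuming the wrapped model passes through the standard transformers generation API and that the base repository ships a compatible tokenizer:

```python
from pruna import PrunaModel
from transformers import AutoTokenizer

# Assumption: the base repository provides a tokenizer compatible
# with the smashed model.
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-llama4")

loaded_model = PrunaModel.from_pretrained(
    "pruna-test/test-save-tiny-random-llama4-smashed"
)

# The smashed model exposes the base model's inference methods,
# so standard text generation should work as usual.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = loaded_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```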

## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.

```json
{
    "awq": false,
    "c_generate": false,
    "c_translate": false,
    "c_whisper": false,
    "deepcache": false,
    "diffusers_int8": false,
    "fastercache": false,
    "flash_attn3": false,
    "fora": false,
    "gptq": false,
    "half": false,
    "hqq": false,
    "hqq_diffusers": false,
    "ifw": false,
    "llm_int8": false,
    "pab": false,
    "qkv_diffusers": false,
    "quanto": false,
    "stable_fast": false,
    "torch_compile": false,
    "torch_dynamic": false,
    "torch_structured": false,
    "torch_unstructured": false,
    "torchao": false,
    "whisper_s2t": false,
    "batch_size": 1,
    "device": "cpu",
    "device_map": null,
    "save_fns": [],
    "load_fns": [
        "transformers"
    ],
    "reapply_after_load": {}
}
```
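
To check programmatically which optimizations are active, a small sketch like the following can inspect the configuration. It assumes the model files have been downloaded locally and that `smash_config.json` sits in the current directory:

```python
import json
from pathlib import Path

# Assumption: smash_config.json has been downloaded alongside the
# model files and is readable from the current directory.
config = json.loads(Path("smash_config.json").read_text())

# Collect the boolean optimization flags that are enabled.
# For this model, all flags are false, so the list is empty.
enabled = [name for name, value in config.items() if value is True]
print("enabled optimizations:", enabled or "none")
```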

## 🌍 Join the Pruna AI community!

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/JFQmtFKCjd)
[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)