Update parquet files
- .gitattributes +1 -0
- README.md +0 -156
- data/train-00000-of-00001.parquet → default/train/0000.parquet +0 -0
- model_scores.png +0 -3

.gitattributes ADDED

@@ -0,0 +1 @@
+default/train/0000.parquet filter=lfs diff=lfs merge=lfs -text
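The attribute added above routes the relocated parquet shard through Git LFS. Once the shard has been pulled, it can also be read directly, outside of the `datasets` library; a minimal sketch, assuming `git lfs pull` has run and pandas with pyarrow is installed (the expected shape follows the metadata in the README shown below):

```python
# Minimal sketch: load the relocated parquet shard directly.
# Assumes the LFS file has been materialized and pandas + pyarrow
# are installed; this is not part of the repository code.
import pandas as pd

df = pd.read_parquet("default/train/0000.parquet")
print(df.shape)             # expected (161, 7) per the dataset card metadata
print(df.columns.tolist())  # task_id, prompt, entry_point, test, ...
```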
    	
README.md DELETED

@@ -1,156 +0,0 @@
---
license: apache-2.0
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: prompt
    dtype: string
  - name: entry_point
    dtype: string
  - name: test
    dtype: string
  - name: description
    dtype: string
  - name: language
    dtype: string
  - name: canonical_solution
    sequence: string
  splits:
  - name: train
    num_bytes: 505355
    num_examples: 161
  download_size: 174830
  dataset_size: 505355
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Benchmark summary

We introduce HumanEval for Kotlin, created from scratch by human experts.
Solutions and tests for all 161 HumanEval tasks are written by an expert olympiad programmer with 6 years of experience in Kotlin, and independently checked by a programmer with 4 years of experience in Kotlin.
The tests we implement are equivalent to the original HumanEval tests for Python.

# How to use

The benchmark is prepared in a format suitable for MXEval and can be easily integrated into the MXEval pipeline.
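
Each record follows the schema declared in the dataset card's metadata. As a quick orientation before the full pipeline, here is a sketch that loads the split and inspects one task (illustrative only; the field names come from the metadata above):

```python
# Peek at a single task record; field names follow the dataset card.
from datasets import load_dataset

dataset = load_dataset("jetbrains/Kotlin_HumanEval")["train"]
print(len(dataset))           # 161 tasks
task = dataset[0]
print(sorted(task.keys()))    # canonical_solution, description, entry_point, ...
print(task["prompt"][:300])   # the Kotlin prompt the model must complete
```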

When testing models on this benchmark, during the code generation step we use early stopping on the `}\n}` sequence to expedite the process. We also perform some code post-processing before evaluation: specifically, we remove all comments and signatures.

The code for running an example model on the benchmark using the early stopping and post-processing is available below.
```python
import json
import re

from datasets import load_dataset
import jsonlines
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    StoppingCriteria,
    StoppingCriteriaList,
)
from tqdm import tqdm
from mxeval.evaluation import evaluate_functional_correctness


class StoppingCriteriaSub(StoppingCriteria):
    """Stops generation as soon as the decoded tail matches a stop pattern."""

    def __init__(self, stops, tokenizer):
        super().__init__()
        self.stops = rf"{stops}"
        self.tokenizer = tokenizer

    def __call__(
        self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
    ) -> bool:
        # Decoding only the last three tokens is enough to detect
        # the short stop sequence.
        last_three_tokens = [int(x) for x in input_ids.data[0][-3:]]
        decoded_last_three_tokens = self.tokenizer.decode(last_three_tokens)

        return bool(re.search(self.stops, decoded_last_three_tokens))


def generate(problem):
    # Stop once the model emits a closing brace on its own line,
    # i.e. the end of the top-level Kotlin function.
    criterion = StoppingCriteriaSub(stops="\n}\n", tokenizer=tokenizer)
    stopping_criteria = StoppingCriteriaList([criterion])

    problem = tokenizer.encode(problem, return_tensors="pt").to("cuda")
    sample = model.generate(
        problem,
        max_new_tokens=256,
        min_new_tokens=128,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=False,
        num_beams=1,
        stopping_criteria=stopping_criteria,
    )

    answer = tokenizer.decode(sample[0], skip_special_tokens=True)
    return answer


def clean_answer(code):
    # Clean comments
    code_without_line_comments = re.sub(r"//.*", "", code)
    code_without_all_comments = re.sub(
        r"/\*.*?\*/", "", code_without_line_comments, flags=re.DOTALL
    )
    # Clean signatures: drop everything up to and including the first
    # `fun ...` line, keeping only the generated function body.
    lines = code_without_all_comments.split("\n")
    for i, line in enumerate(lines):
        if line.startswith("fun "):
            return "\n".join(lines[i + 1:])

    return code_without_all_comments


model_name = "JetBrains/CodeLlama-7B-Kexer"
dataset = load_dataset("jetbrains/Kotlin_HumanEval")["train"]
problem_dict = {problem["task_id"]: problem for problem in dataset}

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)

output = []
for key in tqdm(list(problem_dict.keys()), leave=False):
    problem = problem_dict[key]["prompt"]
    answer = generate(problem)
    answer = clean_answer(answer)
    output.append({"task_id": key, "completion": answer, "language": "kotlin"})

output_file = "answers"
with jsonlines.open(output_file, mode="w") as writer:
    for line in output:
        writer.write(line)

evaluate_functional_correctness(
    sample_file=output_file,
    k=[1],
    n_workers=16,
    timeout=15,
    problem_file=problem_dict,
)

# mxeval writes per-sample results next to the answers file.
with open(output_file + "_results.jsonl") as fp:
    total = 0
    correct = 0
    for line in fp:
        sample_res = json.loads(line)
        print(sample_res)
        total += 1
        correct += sample_res["passed"]

print(f"Pass rate: {correct/total}")
```
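
As a quick sanity check of the post-processing step, here is what `clean_answer` (defined above) does to a toy completion; the Kotlin snippet is made up for illustration:

```python
# Toy demonstration of the post-processing above (hypothetical input).
toy = "fun add(a: Int, b: Int): Int {\n    return a + b  // sum\n}"
print(clean_answer(toy))
# Comments are stripped and everything up to the `fun` signature line
# is dropped, leaving only the generated body:
#     return a + b
# }
```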

# Results

We evaluated multiple coding models using this benchmark, and the results are presented in the figure below:

[figure: model_scores.png]
    	
data/train-00000-of-00001.parquet → default/train/0000.parquet RENAMED

File without changes
    	
model_scores.png DELETED

Binary file removed (stored via Git LFS).