update lvwerra namespace
README.md CHANGED
@@ -9,8 +9,8 @@ You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead

-tokenizer = AutoTokenizer.from_pretrained("
-model = AutoModelWithLMHead.from_pretrained("
+tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
+model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small")

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)

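A side note on the snippet in the hunk above (not part of the diff): `AutoModelWithLMHead` is deprecated in current `transformers` releases, and the equivalent loader for a causal language model such as CodeParrot is `AutoModelForCausalLM`. A minimal sketch of the same load with the non-deprecated class:

```Python
# Sketch only: same checkpoint as the README, loaded with the
# non-deprecated causal-LM auto class instead of AutoModelWithLMHead.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small")
```
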
@@ -21,13 +21,13 @@ or with a `pipeline`:
```Python
from transformers import pipeline

-pipe = pipeline("text-generation", model="
+pipe = pipeline("text-generation", model="codeparrot/codeparrot-small")
outputs = pipe("def hello_world():")
```

## Training

-The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/
+The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/codeparrot/codeparrot-clean) with the following settings:

|Config|Value|
|-------|-----|

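The Training hunk above repoints the dataset link at `codeparrot/codeparrot-clean`. As a minimal sketch (not part of the diff), assuming the train-split repo `codeparrot/codeparrot-clean-train` linked in the Resources section and a `content` column holding the raw source files, the data can be streamed without downloading the full dump:

```Python
# Sketch only: stream the cleaned CodeParrot training split.
# Assumes the raw code lives in a `content` column.
from datasets import load_dataset

ds = load_dataset("codeparrot/codeparrot-clean-train", split="train", streaming=True)
for example in ds:
    print(example["content"][:200])  # first 200 characters of one source file
    break
```
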
@@ -57,6 +57,6 @@ The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probabil

## Resources

-- Dataset: [full](https://huggingface.co/datasets/
+- Dataset: [full](https://huggingface.co/datasets/codeparrot/codeparrot-clean), [train](https://huggingface.co/datasets/codeparrot/codeparrot-clean-train), [valid](https://huggingface.co/datasets/codeparrot/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](), [highlighting]()
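
The hunk header above references the `code_eval` metric used for the pass@k evaluation. A minimal sketch of computing pass@k with it (not part of the diff; uses the `evaluate` library and toy candidate solutions rather than real model generations):

```Python
# Sketch only: pass@k via the code_eval metric with toy candidates.
import os
import evaluate

# code_eval executes untrusted model-generated code, so it must be enabled explicitly.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

code_eval = evaluate.load("code_eval")
test_cases = ["assert add(2, 3) == 5"]
candidates = [["def add(a, b):\n    return a * b", "def add(a, b):\n    return a + b"]]

pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```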