Update README.md

README.md CHANGED

@@ -466,7 +466,7 @@ Try it out by running the following snippet.
 a FP8 triton kernel for fast accelerated matmuls
 (`w8a8_block_fp8_matmul_triton`) will be used
 without any degradation in accuracy. However, if you want to
-run your model in BF16 see (#transformers-bf16)
+run your model in BF16 see ([here](#transformers-bf16))
 
 <details>
 <summary>Python snippet</summary>
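
For context on the BF16 path the corrected link points to: running the checkpoint in BF16 with transformers typically just means passing `torch_dtype=torch.bfloat16` to `from_pretrained`. Below is a minimal sketch of that usage; the model ID is a placeholder, since the diff does not show which checkpoint the README documents.

```python
# Minimal BF16 loading sketch for the #transformers-bf16 section referenced above.
# NOTE: the model ID is a placeholder (not taken from the diff).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # run in BF16 instead of the FP8 triton kernel path
    device_map="auto",
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```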