Commit 6b05c3d
Parent(s):
0b0eb36
fix: Update auto-selection to match MCP server defaults
Changed auto-selection in the New Evaluation screen to match the MCP server:
- API models (litellm/inference): cpu-basic (was: cpu)
- Local models (transformers): a10g-small (was: gpu_a100)
This ensures consistent defaults between UI and MCP cost estimator.
app.py CHANGED
@@ -2757,9 +2757,9 @@ No historical data available for **{model}**.
     # litellm and inference are for API models → CPU
     # transformers is for local models → GPU
     if provider in ["litellm", "inference"]:
-        return gr.update(value="cpu")
+        return gr.update(value="cpu-basic")
     elif provider == "transformers":
-        return gr.update(value="gpu_a100")
+        return gr.update(value="a10g-small")
     else:
         return gr.update(value="auto")
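
The provider-to-hardware mapping in the diff can be sketched as a standalone helper. This is a minimal illustration, not the app's actual code: the function name `auto_select_hardware` is hypothetical, and a plain string is returned in place of the `gr.update(...)` call so the sketch runs without Gradio installed.

```python
def auto_select_hardware(provider: str) -> str:
    """Return the default Spaces hardware tier for a model provider.

    Mirrors the post-fix defaults: API-backed providers get a CPU tier,
    local transformers models get a small GPU tier.
    """
    if provider in ["litellm", "inference"]:
        # API models run remotely, so no local GPU is needed
        return "cpu-basic"
    elif provider == "transformers":
        # Local models load weights on-device and need a GPU
        return "a10g-small"
    else:
        # Unknown providers fall back to automatic selection
        return "auto"


print(auto_select_hardware("litellm"))       # cpu-basic
print(auto_select_hardware("transformers"))  # a10g-small
```

In the app itself the returned value feeds a `gr.update(value=...)` call so the hardware dropdown changes whenever the provider selection changes.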