Dataset schema:

| field | type |
|---|---|
| repo | string (5–51 chars) |
| instance_id | string (11–56 chars) |
| base_commit | string (40 chars) |
| patch | string (400–333k chars) |
| test_patch | string (0–895k chars) |
| problem_statement | string (27–55.6k chars) |
| hints_text | string (0–72k chars) |
| created_at | int64 (Unix epoch, milliseconds) |
| labels | sequence (0–7 items) |
| category | string (4 classes) |
| edit_functions | sequence (1–10 items) |
| added_functions | sequence (0–20 items) |
| edit_functions_length | int64 (1–10) |
# LOC-BENCH: A Benchmark for Code Localization
LOC-BENCH is a dataset specifically designed for evaluating code localization methods in software repositories. LOC-BENCH provides a diverse set of issues, including bug reports, feature requests, security vulnerabilities, and performance optimizations.
Please refer to the official version, Loc-Bench_V1, when evaluating code localization methods and comparing against our approach.
Code: https://github.com/gersteinlab/LocAgent
## 📊 Details
Compared to V0, this version filters out examples that do not involve function-level code modifications and then supplements the dataset to restore it to the original size of 660 examples.
The table below shows the distribution of categories in the dataset.
| category | count |
|---|---|
| Bug Report | 275 |
| Feature Request | 216 |
| Performance Issue | 140 |
| Security Vulnerability | 29 |
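Since each record carries a `category` field, the distribution above can be recomputed with a few lines of Python. The records below are hypothetical stand-ins; a real run would iterate over the loaded dataset instead:

```python
from collections import Counter

# Hypothetical stand-in rows; in practice, iterate over the loaded dataset.
records = [
    {"instance_id": "example__repo-1", "category": "Bug Report"},
    {"instance_id": "example__repo-2", "category": "Feature Request"},
    {"instance_id": "example__repo-3", "category": "Bug Report"},
]

counts = Counter(r["category"] for r in records)
for category, count in counts.most_common():
    print(f"{category}: {count}")
```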
## 🔧 How to Use

You can easily load LOC-BENCH using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("czlll/Loc-Bench_V0.2", split="test")
```
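The `edit_functions` and `added_functions` fields use a `path/to/file.py:Qualified.name` format, so localization targets can be split into file and function parts for evaluation. The helper below is a minimal sketch (the function name and sample targets are ours, not part of the dataset API):

```python
def split_target(target: str) -> tuple[str, str]:
    """Split an entry like 'src/mesh/hex_mesh.py:HexMesh.update'
    into its file path and qualified function name."""
    path, _, qualname = target.partition(":")
    return path, qualname

# Sample targets in the dataset's format.
targets = [
    "src/hexagons/plant_states.py:PlantStateManager.update",
    "src/mesh/hex_mesh.py:HexMesh.update",
]
files = {split_target(t)[0] for t in targets}
print(files)  # the distinct files a localization method must identify
```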
## 📚 Citation

If you use LOC-BENCH in your research, please cite our paper:

```bibtex
@article{chen2025locagent,
  title={LocAgent: Graph-Guided LLM Agents for Code Localization},
  author={Chen, Zhaoling and Tang, Xiangru and Deng, Gangda and Wu, Fang and Wu, Jialong and Jiang, Zhiwei and Prasanna, Viktor and Cohan, Arman and Wang, Xingyao},
  journal={arXiv preprint arXiv:2503.09089},
  year={2025}
}
```