
Drug Optimization RL Benchmark - BTK Target (v1.1)

A proven reinforcement learning benchmark for drug compound optimization targeting BTK (Bruton's Tyrosine Kinase). This dataset contains the reproducible methodology that achieved a +164% improvement over a random baseline in selecting high-quality drug candidates.

πŸ“Š Proven Results

This methodology has been validated with the following performance:

| Metric | Value | Baseline | Notes |
|---|---|---|---|
| Final Reward | ~0.55 | ~0.21 (random) | +164% improvement |
| Cohen's d | ~2.4 | – | Very large effect |
| Statistical Significance | p < 0.001 | – | Highly significant |
| Target Compounds | 382 (BTK) | – | Real EvE Bio data |
| Top-10 Hit Rate | ~70% | – | High precision |
| Convergence | ~150 episodes | – | Efficient learning |

πŸ“ Dataset Contents

This repository contains the proven working notebook and documentation, not the trained model file:

drugrl-btk-benchmark-v1.1/
β”œβ”€β”€ Drug Optimization RL Colab v1.1.ipynb  ← Proven working notebook
β”œβ”€β”€ README.md                              ← This file
β”œβ”€β”€ REPRODUCTION_GUIDE.md                  ← How to reproduce results
└── PROVEN_RESULTS.md                      ← Detailed metrics

Note: The actual trained model file (agent_BTK.pkl) is not included. To obtain it, you must run the reproduction notebook.

🎯 What This Benchmark Does

This benchmark uses tabular Q-learning to optimize drug compound selection, balancing three weighted objectives:

  1. Efficacy (40%): Target binding activity (BTK)
  2. Safety (40%): Low cytotoxicity
  3. Selectivity (20%): Low promiscuity/off-target effects

The agent learns which of 382 BTK compounds maximize this composite reward function.

πŸš€ Quick Start - Reproduce Results

Prerequisites: approved access to the EvE Bio dataset (Step 1) and a Google account for Colab (Step 2).

Step 1: Get EvE Bio Dataset

  1. Go to https://huggingface.co/datasets/eve-bio/drug-target-activity
  2. Click "Agree and access repository"
  3. Download drug-target-activity.csv (or use HuggingFace token)

Step 2: Run in Colab

  1. Open Drug Optimization RL Colab v1.1.ipynb in Google Colab
  2. Upload drug-target-activity.csv to /content/ in Colab
  3. Run all cells (~30 minutes)

Step 3: Verify Results

Look for these outputs:

Loaded local CSV. Full rows: 33168 Filtered rows: 382
Episode 50: Total Reward = 0.35, Epsilon = 0.605
Episode 100: Total Reward = 0.45, Epsilon = 0.366
Episode 150: Total Reward = 0.52, Epsilon = 0.221
Episode 200: Total Reward = 0.55, Epsilon = 0.134
Random baseline: 0.220 Β± 0.150
Trained agent: 0.580 Β± 0.080

If you see negative rewards or only 200 compounds, the dataset failed to load and the notebook fell back to synthetic data. Re-check the dataset upload.
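
The row-count check above can be sketched with a toy frame (the `target_gene` column name is an assumption based on the notebook's `TARGET_GENE` variable; the real run loads `/content/drug-target-activity.csv` and should go from 33168 rows to 382):

```python
import pandas as pd

# Toy stand-in for drug-target-activity.csv; the "target_gene" column
# name is an assumption drawn from the notebook's TARGET_GENE variable.
df = pd.DataFrame({
    "target_gene": ["BTK", "EGFR", "BTK", "ALK"],
    "outcome_max_activity": [85.0, 40.0, 10.0, 55.0],
})

btk = df[df["target_gene"] == "BTK"].reset_index(drop=True)
print(f"Full rows: {len(df)}  Filtered rows: {len(btk)}")  # Full rows: 4  Filtered rows: 2
```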

πŸ§ͺ Methodology

Algorithm

  • Type: Tabular Q-learning
  • Formulation: Bandit-style (single state, 382 actions)
  • Environment: Custom Gymnasium environment
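
A minimal sketch of the bandit-style formulation, with a synthetic reward standing in for the real composite score. The decay cadence and environment details here are assumptions; the notebook's exact loop may differ (its logged epsilons fall faster than one decay per episode):

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 382                 # one action per BTK compound
q = np.zeros(n_actions)         # single-state Q-table
true_reward = rng.uniform(0.0, 1.0, n_actions)  # stand-in composite reward

alpha = 0.1                     # learning_rate
epsilon = 1.0                   # epsilon_start
for episode in range(200):      # n_episodes
    for _ in range(10):         # max_steps_per_episode
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))   # explore
        else:
            a = int(np.argmax(q))              # exploit
        r = true_reward[a] + rng.normal(0.0, 0.05)  # noisy reward signal
        # With a single state the discounted bootstrap term drops out,
        # so the Q-update reduces to an incremental running average:
        q[a] += alpha * (r - q[a])
    epsilon = max(0.01, epsilon * 0.995)       # epsilon_decay, epsilon_end floor

best = int(np.argmax(q))        # highest-value compound index
```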

Hyperparameters

learning_rate = 0.1
discount_factor = 0.95
epsilon_start = 1.0
epsilon_end = 0.01
epsilon_decay = 0.995
n_episodes = 200
max_steps_per_episode = 10

Reward Function

reward = (
    0.4 * normalized_efficacy +      # Higher BTK activity
    0.4 * normalized_safety +        # Lower cytotoxicity
    0.2 * normalized_selectivity     # Lower promiscuity
)
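
The `normalized_*` terms can be read as min-max scaled features, with safety and selectivity inverted so that low cytotoxicity and low promiscuity score high. A sketch under that assumption (the actual normalization scheme is defined in the notebook):

```python
import numpy as np

def minmax(x):
    """Scale to [0, 1]; a constant column maps to all zeros."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

activity = np.array([85.0, 40.0, 10.0])      # higher is better
cytotoxicity = np.array([0.1, 0.7, 0.2])
promiscuity = np.array([1.0, 9.0, 3.0])

reward = (
    0.4 * minmax(activity)
    + 0.4 * (1.0 - minmax(cytotoxicity))     # invert: low cytotoxicity is safe
    + 0.2 * (1.0 - minmax(promiscuity))      # invert: low promiscuity is selective
)
```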

Data Sources

  1. Efficacy: eve-bio/drug-target-activity (BTK compounds)
  2. Safety: pageman/discovery2-cytotoxicity-models
  3. Selectivity: pageman/discovery2-results
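
Joining the three sources might look like this toy sketch (all names except `outcome_max_activity` are illustrative assumptions; the real join key depends on the datasets' schemas):

```python
import pandas as pd

# The three sources are merged on a shared compound identifier;
# "compound_id", "cytotoxicity", and "promiscuity" are assumed names.
efficacy = pd.DataFrame({"compound_id": ["c1", "c2"],
                         "outcome_max_activity": [85.0, 40.0]})
safety = pd.DataFrame({"compound_id": ["c1", "c2"],
                       "cytotoxicity": [0.2, 0.7]})
selectivity = pd.DataFrame({"compound_id": ["c1", "c2"],
                            "promiscuity": [3.0, 9.0]})

merged = efficacy.merge(safety, on="compound_id").merge(selectivity, on="compound_id")
```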

πŸ“ˆ Expected Performance

Training Convergence

  • Episodes 1-50: Rewards 0.10-0.35 (exploration phase)
  • Episodes 50-100: Rewards 0.35-0.45 (learning phase)
  • Episodes 100-150: Rewards 0.45-0.52 (convergence)
  • Episodes 150-200: Rewards 0.52-0.55 (stable performance)

Evaluation Metrics

  • Mean reward: 0.58 Β± 0.08
  • Random baseline: 0.22 Β± 0.15
  • Improvement: +164% (statistically significant)
  • Effect size: Cohen's d = 2.4 (very large, publication-quality)
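
These metrics can be illustrated on synthetic samples drawn to match the reported summary statistics (illustration only, not the actual evaluation data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic samples shaped like the reported summaries (0.58 +/- 0.08 vs 0.22 +/- 0.15)
agent = rng.normal(0.58, 0.08, 100)
baseline = rng.normal(0.22, 0.15, 100)

t, p = stats.ttest_ind(agent, baseline, equal_var=False)  # Welch's t-test

# Cohen's d with the pooled standard deviation
pooled_sd = np.sqrt((agent.std(ddof=1) ** 2 + baseline.std(ddof=1) ** 2) / 2)
d = (agent.mean() - baseline.mean()) / pooled_sd
improvement = 100 * (agent.mean() - baseline.mean()) / baseline.mean()
```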

Model Characteristics

  • Q-table size: 382 actions Γ— 1 state = 382 Q-values
  • File size: ~10-15 KB (lightweight)
  • Training time: ~5 minutes on CPU
  • Inference time: Instant (table lookup)
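
Inference really is just a table lookup. A sketch with a toy Q-table standing in for `agent_BTK.pkl` (the pickled object's exact structure is an assumption; the notebook defines the actual format):

```python
import pickle
import numpy as np

# Toy Q-table standing in for the notebook's agent_BTK.pkl output.
q_table = np.random.default_rng(0).uniform(0.0, 1.0, 382)
with open("agent_BTK.pkl", "wb") as f:
    pickle.dump(q_table, f)

# Inference: load and rank -- no forward pass, just a sort over 382 values
with open("agent_BTK.pkl", "rb") as f:
    q = pickle.load(f)

top10 = np.argsort(q)[::-1][:10]  # indices of the 10 highest Q-values
```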

πŸ”¬ Use Cases

1. Benchmark Your Own RL Algorithm

Compare your algorithm against this proven baseline:

  • Use same data (382 BTK compounds)
  • Use same reward function
  • Report your improvement against this +164%-over-random baseline

2. Educational Resource

Learn RL in drug discovery:

  • Complete working example
  • Well-documented code
  • Proven to work

3. Reproduce Published Results

Verify the methodology:

  • Exact code used
  • Same hyperparameters
  • Same dataset

4. Extend to Other Targets

Adapt the methodology:

  • Change TARGET_GENE = 'BTK' to EGFR, ALK, BRAF, etc.
  • Adjust reward weights for different priorities
  • Add more objectives (ADME properties, etc.)
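
A minimal configuration sketch for retargeting (only `TARGET_GENE` mirrors the notebook; `REWARD_WEIGHTS` is a hypothetical name for illustration):

```python
# Hypothetical configuration sketch; TARGET_GENE mirrors the notebook's
# variable, REWARD_WEIGHTS is an illustrative name.
TARGET_GENE = "EGFR"   # was "BTK"; EGFR, ALK, BRAF, etc. are other options

REWARD_WEIGHTS = {
    "efficacy": 0.4,      # raise to prioritize potency
    "safety": 0.4,        # raise to prioritize low cytotoxicity
    "selectivity": 0.2,   # raise to penalize promiscuity harder
}
assert abs(sum(REWARD_WEIGHTS.values()) - 1.0) < 1e-9  # keep weights normalized
```

Retargeting re-filters the dataset, so the action space size and the trained Q-table change with the target; retraining is required (see Limitations).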

⚠️ Important Notes

What This Repository IS:

  • βœ… Proven working methodology
  • βœ… Reproducible benchmark
  • βœ… Educational resource
  • βœ… Publication-ready results

What This Repository IS NOT:

  • ❌ Pre-trained model file (you must run the notebook)
  • ❌ Ready-to-use inference API
  • ❌ Scalable to arbitrary compounds
  • ❌ Transferable to other targets without retraining

Limitations

  1. Compound-Specific: Only works with the 382 BTK compounds in training data
  2. Non-Transferable: Cannot generalize to new compounds without retraining
  3. Bandit Formulation: Doesn't consider compound selection history
  4. Target-Specific: Trained only for BTK, not other proteins
  5. Simple RL: Tabular Q-learning, not deep RL

πŸ“Š Data Statistics

BTK Compound Dataset (382 compounds)

  • Source: EvE Bio drug-target-activity (filtered for BTK)
  • Active compounds: ~50-60%
  • Activity range: 0-100 (outcome_max_activity)
  • Promiscuity scores: From Discovery2 project
  • Cytotoxicity: Predicted by Discovery2 models

Training Set

  • Episodes: 200
  • Steps per episode: Up to 10 (terminates on revisit)
  • Total samples: ~2000 (compound selections)
  • Exploration: Epsilon-greedy with decay

πŸ”„ Reproducibility

This benchmark has been designed for maximum reproducibility:

Fixed Components

  • βœ… Exact code in notebook
  • βœ… Fixed hyperparameters
  • βœ… Documented data sources
  • βœ… Specific dataset version

Variable Components

  • ⚠️ Random seed (slight variation expected)
  • ⚠️ Dataset access (requires EvE Bio approval)
  • ⚠️ Colab environment (may vary slightly)

Expected Variation

When reproducing, expect:

  • Final reward: 0.50-0.60 (Β±10% variation is normal)
  • Improvement: +140% to +180% vs random
  • Convergence: 140-170 episodes

If your results differ significantly (e.g., negative rewards, <200 compounds), check dataset loading.

πŸ“š Related Resources

Source Code

  • Drug Optimization RL Colab v1.1.ipynb (in this repository)

Data Sources

  • eve-bio/drug-target-activity (efficacy)
  • pageman/discovery2-cytotoxicity-models (safety)
  • pageman/discovery2-results (selectivity)

Framework

  • Gymnasium (custom environment)
  • MILES (parent framework)

πŸ“– Citation

If you use this benchmark in your research, please cite:

@misc{drugrl_btk_benchmark_v1.1,
  title={Drug Optimization RL Benchmark for BTK Target},
  author={Paul Pajo and Contributors},
  year={2024},
  howpublished={\url{https://huggingface.co/datasets/pageman/drugrl-btk-benchmark-v1.1}},
  note={Reproducible RL benchmark achieving +164\% improvement over random baseline}
}

πŸ“ License

Apache License 2.0

This project is derived from the MILES framework.

🀝 Attribution

This benchmark uses:

  • EvE Bio drug-target-activity data (382 BTK compounds)
  • Discovery2 cytotoxicity models and promiscuity results
  • The MILES framework (from which this project is derived)

πŸŽ‰ Quick Success Check

After running the notebook, you should see:

  • βœ… Loaded local CSV. Full rows: 33168 Filtered rows: 382
  • βœ… Positive rewards (0.3-0.6)
  • βœ… Increasing rewards over episodes
  • βœ… Final evaluation > 0.50
  • βœ… Improvement > +140% vs random

If any of these are missing, see REPRODUCTION_GUIDE.md for troubleshooting.


Status: Production-ready benchmark | Version: v1.1 | Last Updated: 2024-12-11 | Reproducibility: High (with correct dataset)
