# LumiTrace: Temporal Low-Light Video Enhancement

LumiTrace is a state-of-the-art temporal video enhancement model designed to brighten low-light videos while maintaining temporal consistency and preserving fine details.
## Model Description

LumiTrace combines RetinexFormer (a Retinex-based image enhancement architecture) with custom temporal modules to process video sequences. It uses a two-stage training strategy to achieve strong performance on challenging low-light scenarios.
## Key Features

- **Temporal Consistency**: Processes 3-frame sequences to eliminate flickering
- **High Quality**: Achieves 22+ dB PSNR and 0.83+ SSIM on the LOL benchmarks
- **Memory Efficient**: Supports high-resolution inference via tiled processing
- **Production Ready**: Includes a video processing pipeline with automatic resolution standardization
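Tiled inference splits a large frame into overlapping patches, enhances each patch independently, and averages the overlaps back together, bounding peak memory. A rough NumPy sketch, assuming an `enhance_tile` callback standing in for the model forward pass (the function name and the tile/overlap sizes here are illustrative, not the actual LumiTrace API):

```python
import numpy as np

def enhance_tiled(frame, enhance_tile, tile=256, overlap=32):
    """Enhance a large HxWx3 frame tile by tile to bound peak memory.

    `enhance_tile` is any function mapping an HxWx3 array to an
    enhanced array of the same shape (e.g. a model forward pass).
    """
    h, w, _ = frame.shape
    out = np.zeros_like(frame, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)  # overlap counts
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            out[y:y1, x:x1] += enhance_tile(frame[y:y1, x:x1])
            weight[y:y1, x:x1] += 1.0
    return out / weight  # average where tiles overlapped
```

Averaging over the overlap region is a simple way to suppress visible seams at tile borders; a production pipeline might use feathered (weighted) blending instead.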
## Architecture

- **Base**: RetinexFormer (2.2M parameters)
- **Temporal Modules**: Custom 3D convolution + attention (0.8M parameters)
- **Total Parameters**: ~3M
- **Input**: 3-frame sequences (RGB, normalized to [0, 1])
- **Output**: Enhanced center frame
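Since the model consumes 3-frame windows and emits only the enhanced center frame, a whole video can be processed by edge-padding the sequence and sliding a window over it, so every frame appears once as a center. A minimal sketch (the helper name `make_triplets` is illustrative, not part of the LumiTrace API):

```python
import numpy as np

def make_triplets(frames):
    """Group a list of HxWx3 frames into overlapping 3-frame windows.

    The first and last frames are edge-padded so that every frame of
    the video appears exactly once as the center frame (the one the
    model enhances).
    """
    padded = [frames[0]] + list(frames) + [frames[-1]]
    return [np.stack(padded[i:i + 3]) for i in range(len(frames))]
```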
## Training Data

The model was trained on:

- **LOL-v1**: 485 training pairs, 15 test pairs
- **LOL-v2-Real**: 689 training pairs, 100 test pairs

Training used synthetic temporal sequences generated from static image pairs via brightness and spatial augmentation.
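A synthetic 3-frame sequence can be derived from a single static low-light image by applying a small per-frame brightness scale and spatial shift, mimicking exposure flicker and camera motion. A sketch of the idea (the jitter and shift magnitudes are illustrative, not the actual training configuration):

```python
import numpy as np

def synth_sequence(low, rng, jitter=0.05, shift=2):
    """Build a pseudo 3-frame low-light sequence from one static image.

    Each frame gets a random brightness scale in [1-jitter, 1+jitter]
    and an integer spatial shift of up to `shift` pixels per axis.
    """
    frames = []
    for _ in range(3):
        scale = 1.0 + rng.uniform(-jitter, jitter)       # brightness jitter
        dy, dx = rng.integers(-shift, shift + 1, size=2)  # spatial shift
        f = np.roll(low * scale, (dy, dx), axis=(0, 1))
        frames.append(np.clip(f, 0.0, 1.0))
    return np.stack(frames)
```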
## Training Procedure

### Two-Stage Training Strategy

**Stage 1 (50 epochs):**

- Freeze the RetinexFormer backbone
- Train only the temporal modules
- Learning rate: 1e-4
- Loss: L2 reconstruction + temporal consistency

**Stage 2 (60 epochs):**

- Unfreeze all parameters
- Discriminative learning rates:
  - RetinexFormer backbone: 1e-6
  - Temporal modules: 1e-4
- Loss: L2 + temporal consistency + perceptual (VGG)
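The freeze/unfreeze and discriminative-learning-rate setup above can be sketched in PyTorch roughly as follows; the module definitions are placeholders standing in for the real backbone and temporal modules, not the actual LumiTrace architecture:

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real networks
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 3, 3, padding=1))
temporal = nn.Conv3d(3, 3, (3, 1, 1))

# Stage 1: freeze the backbone, optimize only the temporal modules
for p in backbone.parameters():
    p.requires_grad = False
opt_stage1 = torch.optim.Adam(temporal.parameters(), lr=1e-4)

# Stage 2: unfreeze everything, then use per-group learning rates so
# the pretrained backbone moves far more slowly than the new modules
for p in backbone.parameters():
    p.requires_grad = True
opt_stage2 = torch.optim.Adam([
    {'params': backbone.parameters(), 'lr': 1e-6},
    {'params': temporal.parameters(), 'lr': 1e-4},
])
```

Keeping the backbone at a tiny learning rate in Stage 2 lets the whole network co-adapt without destroying the pretrained enhancement weights.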
## Performance

| Dataset | PSNR | SSIM |
|---|---|---|
| LOL-v1 | 22.70 dB | 0.8389 |
| LOL-v2-Real | 21.72 dB | 0.8199 |
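For reference, PSNR for images normalized to [0, 1] follows the standard definition 10·log10(MAX²/MSE); a minimal NumPy version (the exact evaluation script used for the table above may differ, e.g. in color space or border cropping):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two [0, 1] images."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```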
## Visual Results

### Sample Comparisons

Side-by-side comparisons show the low-light input on the left and the LumiTrace-enhanced output on the right.

### Video Demonstration

**Input (Low-Light):**

**Enhanced Output:**

The GIFs demonstrate temporal consistency and reduced flickering. Full videos are available in the `videos` folder.
## Usage

### Installation

```bash
git clone https://github.com/yourusername/LumiTrace
cd LumiTrace
pip install -r requirements.txt
```
### Inference (Python)

```python
import yaml

from lumitrace.inference import VideoEnhancer

# Load config
with open('configs/lol_v1_temporal.yml', 'r') as f:
    config = yaml.safe_load(f)

# Initialize enhancer
enhancer = VideoEnhancer(
    model_path='checkpoints/lol_v1/stage2/best.pth',
    config=config,
)

# Process video
enhancer.enhance_video(
    input_path='input.mp4',
    output_path='enhanced.mp4',
)
```
### Inference (CLI)

```bash
./scripts/enhance_video.sh input.mp4 output.mp4
```
## Limitations

- Trained primarily on indoor/static scenes (LOL datasets)
- May struggle with extreme motion or outdoor dynamic lighting
- Best performance on videos with resolution ≤720p
- Requires a GPU for real-time processing
## Citation

If you use this model, please cite:

```bibtex
@software{lumitrace2024,
  title={LumiTrace: Temporal Low-Light Video Enhancement},
  author={Your Name},
  year={2024},
  url={https://github.com/yourusername/LumiTrace}
}
```
## Acknowledgments

- Based on RetinexFormer
- Trained on the LOL datasets

## License

Apache 2.0

