LoRA Merger
Merge multiple SDXL LoRA .safetensors files with custom weights. Detects shape conflicts and rank mismatches and recommends the right merge strategy. Runs entirely in your browser.
How LoRA merging works
Load & parse
Upload your .safetensors files. The tool parses each file's tensor header, detects the rank and dtype (F16/BF16/F32), and reads all tensor data into memory.
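The safetensors layout makes this parse step simple: the file begins with an 8-byte little-endian length, followed by that many bytes of JSON describing every tensor. A minimal sketch (the function name is hypothetical, not the tool's actual code):

```javascript
// Parse the safetensors header from an ArrayBuffer.
// Layout: [8-byte LE u64 header length][JSON metadata][raw tensor data]
function parseSafetensorsHeader(buffer) {
  const view = new DataView(buffer);
  const headerLen = Number(view.getBigUint64(0, true)); // little-endian u64
  const jsonBytes = new Uint8Array(buffer, 8, headerLen);
  // Maps each tensor name to { dtype, shape, data_offsets }
  return JSON.parse(new TextDecoder().decode(jsonBytes));
}
```

The `data_offsets` in each entry point into the raw byte region after the header, which is how the tool can read tensor data without loading a full ML framework.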
Analyze compatibility
The tool checks every tensor key across all files. If the same key exists in two files with different shapes (e.g., different ranks), it's flagged as a conflict and skipped during merge.
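The check above can be sketched as a single pass over every file's header, comparing shapes per key (`findShapeConflicts` is an assumed helper name):

```javascript
// Flag tensor keys whose shapes disagree across files.
// `headers` is an array of parsed safetensors headers (name -> { shape, ... }).
function findShapeConflicts(headers) {
  const seen = new Map();      // key -> shape signature from the first file
  const conflicts = new Set(); // keys skipped during merge
  for (const header of headers) {
    for (const [key, info] of Object.entries(header)) {
      const shape = info.shape.join("x");
      if (seen.has(key) && seen.get(key) !== shape) conflicts.add(key);
      else seen.set(key, shape);
    }
  }
  return conflicts;
}
```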
Choose a strategy
Pick a preset based on your use case — same character, character + enhancers, or two-character blend. Each preset sets the right weights and merge method automatically.
Merge & download
Tensors are merged in-memory using float32 precision, then written back to F16. The resulting .safetensors file includes combined trigger word metadata.
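A minimal sketch of the merge step, assuming each tensor has already been decoded to a `Float32Array` and leaving out the final F16 downcast (`mergeTensors` is an assumed helper name):

```javascript
// Accumulate each file's tensor into a float32 buffer, scaled by its weight.
// With normalize=true this is the Normalized Average; without, the Weighted Sum.
function mergeTensors(tensors, weights, normalize) {
  const out = new Float32Array(tensors[0].length); // float32 accumulator
  for (let i = 0; i < tensors.length; i++) {
    for (let j = 0; j < out.length; j++) out[j] += tensors[i][j] * weights[i];
  }
  if (normalize) {
    const total = weights.reduce((a, b) => a + b, 0);
    for (let j = 0; j < out.length; j++) out[j] /= total;
  }
  return out; // downcast to F16 happens when the output file is written
}
```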
Merge strategy guide
Same character, different dataset
Scenario: You trained the same character twice on different image collections and want to combine them.
Equal weights (1.0 / 1.0), Normalized Average, Union mode.
Both datasets contribute equally. Normalized average prevents one from dominating. Use the merged LoRA at 0.7–0.9 strength.
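As a toy illustration of why this preset is balanced: with equal 1.0 / 1.0 weights, normalized averaging reduces to the element-wise mean. The tensor values below are made up:

```javascript
// Same tensor key from two files trained on different datasets (toy values).
const a = [0.2, -0.4];
const b = [0.6, 0.0];
const w = [1.0, 1.0];
// Normalized average: divide by the sum of weights.
const merged = a.map((v, i) => (v * w[0] + b[i] * w[1]) / (w[0] + w[1]));
// merged is approximately [0.4, -0.2], the element-wise mean
```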
Character LoRA + Enhancers
Scenario: You have a solid character LoRA and want to blend in eye detail, skin texture, or lighting enhancement LoRAs.
Character: 1.0, Enhancers: 0.3–0.45 each. Normalized Average.
The character LoRA stays dominant. Enhancers add detail without overriding the character's core identity. Reduce enhancer weights further if the character drifts.
Two different characters
Scenario: Merging two separate character LoRAs so both are accessible from one file using their respective trigger words.
Both at 0.7–0.8, Weighted Sum method, Union mode.
Sum keeps each character's tensors intact rather than averaging them away. Some cross-character bleed is unavoidable — use at 0.85 strength to reduce it.
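A toy comparison with made-up values shows the difference: weighted sum preserves each character's full chosen weight, while normalized average would dilute both:

```javascript
// Toy values: each character's LoRA contributes to different elements.
const charA = [1.0, 0.0];
const charB = [0.0, 1.0];
const w = [0.75, 0.75];
// Weighted sum: each character keeps its full 0.75 contribution.
const sum = charA.map((v, i) => v * w[0] + charB[i] * w[1]);
// Normalized average: both get scaled down to 0.5.
const avg = charA.map((v, i) => (v * w[0] + charB[i] * w[1]) / (w[0] + w[1]));
```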
Rank mismatch situation
Scenario: You want to merge LoRAs trained at different ranks (e.g., rank 4 and rank 32).
Not directly possible — use Intersection to skip conflicting layers.
Different ranks produce tensors with different shapes for the same keys. The tool will flag all mismatched layers as conflicts. You can still merge the layers that ARE compatible (usually alpha scalars and some shared projections).
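Intersection mode can be sketched as keeping only the keys that appear in every file with identical shapes (`intersectKeys` is an assumed helper name):

```javascript
// Keep only tensor keys present in all headers with matching shapes.
function intersectKeys(headers) {
  let keys = new Set(Object.keys(headers[0]));
  for (const h of headers.slice(1)) {
    keys = new Set([...keys].filter(
      k => k in h && h[k].shape.join() === headers[0][k].shape.join()
    ));
  }
  return keys; // only these keys are merged; the rest are skipped
}
```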
FAQ
What does 'shape conflict' mean?
A shape conflict means the same tensor key exists in two files but with different dimensions. This usually happens when LoRAs are trained at different ranks (e.g., rank 4 vs rank 32). The lora_down tensor for rank 4 has shape [4, 768] and for rank 32 it's [32, 768] — they can't be added together. The tool skips those tensors rather than crashing.
Normalized average vs weighted sum — which should I use?
Normalized average divides by the sum of weights so the total influence stays at 100%. Use it when blending similar LoRAs (same character, same concept). Weighted sum adds tensors directly without normalizing — use it when merging distinct characters or styles that should coexist rather than blend.
Will the merged LoRA work in Forge / A1111 / ComfyUI?
Yes. The output is a standard .safetensors file with valid SDXL LoRA architecture. Any WebUI that supports SDXL LoRAs will load it. The tool always outputs F16 regardless of input dtype.
What happens to trigger words?
The output file's metadata contains the combined trigger words from all input files. Each character or concept still needs its own trigger word in the prompt to activate.
Are my files uploaded anywhere?
No. Everything runs in your browser using JavaScript. Your .safetensors files are read into memory locally and the merged output is generated locally. No data ever leaves your device.