Get Started

Run the pretrained model or train your own geometric adapter.

Option A: Load the Pretrained Model

The fastest path. Pulls the model from Hugging Face and runs inference.

pip install transformers torch accelerate

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "jesusvilela/igbundle-qwen2.5-7b-riemannian",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "jesusvilela/igbundle-qwen2.5-7b-riemannian"
)

inputs = tokenizer("Explain the geometry of attention.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Requirements: ~8 GB of VRAM for NF4 quantization, or ~16 GB for BFloat16. CPU inference works but is slow.
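The snippet above loads the model in its default precision. To hit the ~8 GB NF4 figure, you can request 4-bit quantization explicitly; this is a sketch using transformers' BitsAndBytesConfig (requires the bitsandbytes package, and has not been verified against this specific checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 4-bit quantization config; compute in bfloat16 to match the BF16 variant.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "jesusvilela/igbundle-qwen2.5-7b-riemannian",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```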

Option B: Clone and Train

Full source with geometric adapter, training pipeline, and evaluation.

1. Clone

git clone https://github.com/jesusvilela/IGBundle-LLM.git
cd IGBundle-LLM
pip install -r requirements.txt

2. Train

python train.py --config configs/igbundle_standard.yaml

Training uses Unsloth for 4-bit quantization, combining LoRA with a GeometricAdapter. The adapter is injected at Layer 12 and adds ~0.9% additional trainable parameters.
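As a rough sanity check on that overhead figure (assuming ~7.6B base parameters for Qwen2.5-7B; the exact count depends on the checkpoint):

```python
# Back-of-the-envelope: ~0.9% of Qwen2.5-7B's ~7.6B parameters.
base_params = 7.6e9          # approximate parameter count of Qwen2.5-7B
adapter_fraction = 0.009     # ~0.9% overhead from the GeometricAdapter
adapter_params = base_params * adapter_fraction
print(f"{adapter_params / 1e6:.0f}M additional parameters")  # ≈ 68M
```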

Training combines four loss terms: causal language modeling, curvature (driving κ → -1), sheaf consistency (Jensen-Shannon divergence), and bundle structure. A natural-gradient optimizer based on the Fisher Information Metric gives 30% faster convergence.
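The sheaf-consistency term is built on the Jensen-Shannon divergence between distributions. A minimal, dependency-free sketch of JSD itself (not the actual loss code in src/igbundle):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits; assumes p and q are normalized."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric, and bounded in [0, 1] bit."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(jsd(p, p))  # 0.0 for identical distributions
print(jsd(p, q))  # positive for divergent ones
```

Unlike KL, JSD is symmetric and always finite, which makes it a well-behaved consistency penalty.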

3. Evaluate

# Manifold Faithfulness Rate + ARC benchmarks
python eval_arc.py --checkpoint <path> --limit 100 --mfr

# Standard benchmarks via llama.cpp server
python export_gguf.py --checkpoint <path>
# Then point lm-evaluation-harness at localhost:8080

Option C: Neural Glass (Interactive UI)

Gradio-based chat interface with real-time geometric telemetry: curvature heatmap, entropy gauge, fiber distributions, thought trace.

python app_neural_glass.py

Opens at http://localhost:7860. Requires a CUDA GPU with ≥5 GB VRAM.
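The entropy gauge presumably tracks the Shannon entropy of the model's next-token distribution. A minimal, hypothetical sketch of that computation (not the app's actual telemetry code):

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy_bits(probs):
    """Shannon entropy in bits; high = uncertain, low = confident."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A peaked distribution has low entropy; a uniform one is maximal
# (log2 of the number of outcomes).
print(entropy_bits(softmax([10.0, 0.0, 0.0, 0.0])))  # near 0
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))        # 2.0
```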

Project Structure

src/igbundle/
  geometry/      — Riemannian, hyperbolic, Poincaré, and KAN manifolds
  modules/       — Geometric adapter, losses, attention, vision
  dynamics/      — Hamiltonian, FitzHugh-Nagumo, equilibrium propagation
  fibers/        — Fiber state, constraints, swarm executor
  steering/      — GSP controller (inference-time feedback)
  optimization/  — Symplectic optimizer, SPIDER variance reduction
  training/      — Geometric trainer, GRPO, losses
  quantum/       — Gibbs sampling, scrambling
  nn/            — KAN (Kolmogorov-Arnold Networks)

thesis/          — Academic thesis (PDF + sources)
tests/           — Geometry, pipeline, integration tests
configs/         — Training and ablation configs