Built for Code Intelligence
Fine-tuned models trained on real engineering workflows, combined with a model-agnostic architecture that lets you bring your own keys, your own providers, and your own rules.
Fine-tuned for engineering
Not generic LLMs. Models trained specifically on engineering workflows, code conventions, and security patterns.
contox-deep
Code Analysis
What changed: Dataset doubled with real GitHub code reviews and PR discussions.
Next generation. 2x the training data with real GitHub code reviews, enriched task diversity, and improved architecture understanding.
Training Dataset
Built from real GitHub code reviews and PR discussions, combined with the original v2 brain workflow data.
Training
QLoRA (4-bit quantized base + LoRA adapters)
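A "QLoRA (4-bit quantized base + LoRA adapters)" setup like the one above is commonly assembled with Hugging Face transformers, peft, and bitsandbytes. The sketch below uses that stack with a placeholder model id and illustrative hyperparameters; it is an assumption about a typical setup, not Contox's actual training config:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization for the frozen base weights (the "Q" in QLoRA)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" is a placeholder, not the real base model
model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb)

# Trainable low-rank adapters on top of the quantized base;
# r, alpha, and target_modules here are illustrative values
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
```

With this arrangement only the adapter weights receive gradients; the quantized base stays frozen, which is what keeps the trainable-parameter fraction small.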
contox-security
Security Analysis
What changed: Balanced dataset (50% vulnerable / 50% clean) to reduce the false-positive rate.
Balanced 50/50 dataset (vulnerable + clean code) to reduce false positives. Same base model as v1, better precision on real-world codebases.
Training Dataset
Balanced dataset with equal vulnerable and clean code samples to teach the model when code is safe, not just when it is vulnerable.
Training
LoRA (bf16 full-precision base + LoRA adapters)
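A balanced 50/50 training set like the one described above can be assembled with a simple equal-sampling step. This is a minimal sketch; the pool names and record layout are illustrative, not the actual Contox data pipeline:

```python
import random

def build_balanced_dataset(vulnerable_pool, clean_pool, size, seed=42):
    """Draw an equal number of vulnerable and clean samples (50/50 split)."""
    rng = random.Random(seed)
    half = size // 2
    samples = (
        [{"code": c, "label": "vulnerable"} for c in rng.sample(vulnerable_pool, half)]
        + [{"code": c, "label": "clean"} for c in rng.sample(clean_pool, half)]
    )
    rng.shuffle(samples)  # avoid label-ordered batches during training
    return samples
```

Feeding the model an equal share of clean code is what teaches it when code is safe, which is the stated mechanism for cutting false positives.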
contox-index
Intent Classification
Not every task needs an LLM. Contox Index uses a DeBERTa encoder to classify developer intent and rank context relevance. Zero token cost, no external API calls, runs on CPU.
Training Dataset
Curated dataset of developer messages classified into 13 engineering categories. Balanced with class-weighted training.
Training
Full fine-tune (fp32, class-weighted CrossEntropy)
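Class-weighted CrossEntropy, as mentioned above, compensates for uneven category counts in the training data. A common recipe is inverse-frequency weighting (the same "balanced" heuristic scikit-learn uses: n_samples / (n_classes * count_c)); this sketch shows that standard recipe, not necessarily the exact weights used here:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: weight_c = n_samples / (n_classes * count_c).

    Rare classes get proportionally larger weight, so the loss does not
    get dominated by the most frequent intent categories.
    """
    counts = Counter(labels)
    n_classes, n_samples = len(counts), len(labels)
    return {c: n_samples / (n_classes * k) for c, k in counts.items()}
```

The resulting per-class weights are what would be passed to the CrossEntropy loss during the full fine-tune.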
How our models work
Your code flows through a layered intelligence pipeline. Each model handles what it does best.
Your Code
Repo, PRs, commits
Contox Intelligence
Contox Index
Intent classification + context ranking
contox-deep-v2
Deep code analysis + brain enrichment
contox-security-v1
Vulnerability detection + security scoring
Intelligence Output
Enriched Brain
Security Reports
Health Scores
Convention Validation
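The layered flow above can be sketched as a simple dispatcher: the local index model classifies intent first, then routes to the specialist model. The classify and analyze steps below are stubs standing in for contox-index, contox-deep-v2, and contox-security-v1, so this is a shape-of-the-pipeline sketch, not the real implementation:

```python
def classify_intent(message: str) -> str:
    """Stub for contox-index: local, zero-token intent classification."""
    if "vulnerab" in message.lower() or "security" in message.lower():
        return "security_review"
    return "code_analysis"

def run_pipeline(message: str, code: str) -> dict:
    # Layer 1: contox-index runs on CPU with no external API call
    intent = classify_intent(message)
    if intent == "security_review":
        # Layer 2a: stub for contox-security-v1 (vulnerability detection + scoring)
        return {"intent": intent, "report": f"security scan of {len(code)} chars"}
    # Layer 2b: stub for contox-deep-v2 (deep analysis + brain enrichment)
    return {"intent": intent, "report": f"deep analysis of {len(code)} chars"}
```

The point of the first layer is cost: requests that a classifier can route never spend LLM tokens on routing.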
Your provider, your rules
Contox works with any major AI provider. Bring your own API keys, configure fallback chains, switch providers without changing a line of code.
Google Gemini: gemini-2.0-flash (1.1M tokens)
OpenAI: gpt-4.1-mini (1M tokens)
Anthropic Claude: claude-sonnet-4 (200K tokens)
Mistral AI: mistral-small-latest (131K tokens)
OpenRouter: 200+ models (variable context)
Bring Your Own Keys
Use your own API keys. Your tokens, your billing, your choice of provider.
Automatic Fallback
Configure a fallback chain. If one provider fails, the next takes over seamlessly.
Team Configuration
Each team sets their own provider, model, and fallback order. No lock-in.
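A fallback chain like the one described above boils down to trying providers in the configured order until one succeeds. This is a minimal sketch; the provider names and call functions are placeholders, not Contox's actual client code:

```python
def call_with_fallback(prompt, providers):
    """Try each (name, call_fn) pair in order; return the first success.

    call_fn is expected to raise on failure. A real chain would catch
    provider-specific error types rather than bare Exception.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, repr(exc)))  # record and move to the next provider
    raise RuntimeError(f"all providers failed: {errors}")
```

Because the chain is just ordered configuration, switching the primary provider is a config change, not a code change.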
Numbers, not marketing
Real metrics from our training pipeline, shown here for the contox-deep-v2 training run.
QLoRA Fine-tuning
Parameter-efficient training. Only 3.3% of total parameters are updated, keeping the base model capabilities intact while specializing for code analysis.
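The trainable fraction quoted above follows directly from how LoRA works: each adapted weight matrix of shape (d_out, d_in) gains two low-rank factors, A (rank x d_in) and B (d_out x rank), and only those factors are updated. The helper below computes that fraction from first principles; the shapes in the usage note are illustrative, not contox-deep-v2's actual dimensions:

```python
def lora_trainable_fraction(base_params, adapted_shapes, rank):
    """Fraction of parameters that are trainable when only LoRA adapters update.

    Each adapted matrix of shape (d_out, d_in) contributes
    rank * (d_in + d_out) trainable parameters; the base stays frozen.
    """
    lora_params = sum(rank * (d_in + d_out) for d_out, d_in in adapted_shapes)
    return lora_params / (base_params + lora_params)
```

For example, with a 1,000-parameter base, one adapted 10x10 matrix, and rank 2, the adapters add 40 parameters, giving a trainable fraction of 40 / 1040, or roughly 3.8%. Small ranks over a few attention projections are how a figure like 3.3% comes about while the base model's capabilities stay intact.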