Data Poisoning
Critical: State Corruption Under Intermittent Connectivity
Edge-cloud divergence when connectivity drops. Models continue inferring, state reconciliation silently fails, and nobody tests the sync layer for semantic correctness.
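One way to make silent reconciliation failures visible is to pair a version counter with a content hash, so the sync layer can detect when edge and cloud claim the same version but hold different state. A minimal sketch; the function names and state shape are illustrative, not from the report:

```python
import hashlib

def digest(state: dict) -> str:
    # Order-independent content hash of the state payload.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def reconcile(edge, cloud):
    """Return (merged_state, conflict_flag).

    edge and cloud are (version, state) pairs. Instead of silent
    last-writer-wins, a matching version with a differing digest is
    surfaced as a semantic conflict rather than overwritten.
    """
    (ev, es), (cv, cs) = edge, cloud
    if ev == cv:
        # Same version number: the content itself must agree.
        return (es, digest(es) != digest(cs))
    winner = es if ev > cv else cs
    return (winner, False)
```

The point of the sketch is that divergence becomes a flagged event the sync layer can test for, rather than a state nobody notices.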
Adversarial Evasion
High: Cascading Confidence Collapse
Multi-stage agentic pipelines where each handoff launders uncertainty. Low-confidence outputs wrapped in high-confidence formats cascade through the chain unchecked.
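One mitigation is to carry confidence through every handoff explicitly, so a downstream stage can never report more certainty than its weakest input. A hedged sketch; the `Output` type and stage values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Output:
    payload: str
    confidence: float  # in [0, 1]

def stage(inp: Output, transform, stage_confidence: float) -> Output:
    # Carry forward the weakest link: a handoff cannot raise
    # confidence above what the upstream stage actually had.
    return Output(transform(inp.payload),
                  min(inp.confidence, stage_confidence))

# Three-stage chain: the low first-stage confidence survives the
# high-confidence formatting instead of being laundered away.
out = Output("raw sensor read", 0.42)
out = stage(out, str.upper, 0.99)
out = stage(out, lambda s: f"REPORT: {s}", 0.95)
```

The polished `REPORT:` wrapper still carries 0.42, which is exactly the laundering the vignette describes going missing.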
Model Extraction
High: Adversarial Operational Context
Not adversarial inputs to the model -- adversarial conditions around it. EW-degraded sensor feeds, spoofed telemetry, operator fatigue altering interaction patterns.
Inversion & Privacy
Medium: Mode Flapping
Systems oscillating between AI and rules-based fallback 40 times a minute. Threshold calibrated in the lab falls apart in the field. Operator trust collapses in 90 seconds.
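The standard fix for this kind of oscillation is hysteresis plus a minimum dwell time: separate enter and exit thresholds, and a floor on how soon the system may switch again. A minimal sketch; the threshold and dwell values are illustrative, not field-calibrated:

```python
class ModeSelector:
    """Hysteresis plus minimum dwell time to damp AI/fallback flapping."""

    def __init__(self, enter_ai=0.80, exit_ai=0.60, min_dwell=10):
        self.enter_ai = enter_ai    # confidence needed to enter AI mode
        self.exit_ai = exit_ai      # confidence below which AI mode exits
        self.min_dwell = min_dwell  # ticks that must pass between switches
        self.mode, self.ticks = "rules", 0

    def update(self, model_confidence: float) -> str:
        self.ticks += 1
        if self.ticks < self.min_dwell:
            return self.mode        # too soon to switch again
        if self.mode == "rules" and model_confidence >= self.enter_ai:
            self.mode, self.ticks = "ai", 0
        elif self.mode == "ai" and model_confidence <= self.exit_ai:
            self.mode, self.ticks = "rules", 0
        return self.mode
```

Because entering and leaving AI mode use different thresholds, confidence hovering near a single cut-off can no longer toggle the mode on every tick.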
Reward Hacking
High: Discreet Model Drift
The AI equivalent of a slow gas leak. Accuracy degrades 12% over six months, outputs still look reasonable, validation suite isn't running in production. Nothing melts down.
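The missing production validation can be as simple as a rolling accuracy check against the lab baseline. A sketch under stated assumptions; window size and tolerance are illustrative values:

```python
from collections import deque

class DriftMonitor:
    """Rolling production accuracy check against a lab baseline."""

    def __init__(self, baseline: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.hits = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.hits.append(1 if correct else 0)

    def drifted(self) -> bool:
        if len(self.hits) < self.hits.maxlen:
            return False  # not enough evidence yet
        return sum(self.hits) / len(self.hits) < self.baseline - self.tolerance
```

A 12%-over-six-months leak never trips an outage alarm, but it crosses a threshold like this one long before the outputs stop looking reasonable.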
Distribution Shift
Critical: Tool Use Hallucination in Agentic Chains
Models calling tools that don't exist, fabricating parameters, inventing plausible responses. Downstream systems process hallucinated results as ground truth. Phantom side effects.
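A guardrail here is to validate every model-proposed tool call against a registry of tools that actually exist, binding the proposed parameters against the real signature before anything executes. A minimal sketch; the registry, decorator, and `get_weather` stub are hypothetical examples:

```python
import inspect

REGISTRY = {}  # name -> callable; the only tools that actually exist

def tool(fn):
    REGISTRY[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"sunny in {city}"  # stub implementation

def execute(call: dict):
    """Validate a model-proposed tool call before running it.

    Rejects unknown tool names and parameters the real signature
    cannot bind, instead of letting a fabricated call produce
    phantom side effects downstream.
    """
    fn = REGISTRY.get(call.get("name"))
    if fn is None:
        raise ValueError(f"unknown tool: {call.get('name')!r}")
    try:
        bound = inspect.signature(fn).bind(**call.get("args", {}))
    except TypeError as exc:
        raise ValueError(f"bad parameters: {exc}") from exc
    return fn(*bound.args, **bound.kwargs)
```

The rejection path matters as much as the happy path: a hallucinated call should fail loudly at the boundary, never reach a downstream consumer as ground truth.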
Algorithmic Bias
Medium: Feedback Loop Poisoning in Human-AI Teaming
Operators trust the model, only correct obvious errors. Subtly wrong outputs get approved. Human and AI co-create a degraded standard of correctness neither would reach alone.
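One way to break this loop is blind auditing: a fixed fraction of operator-approved outputs still goes to independent review, so approval alone never becomes the standard of correctness. A hedged sketch; the sampling rate and seeding scheme are illustrative assumptions:

```python
import random

def needs_blind_review(output_id: str, audit_rate: float = 0.05) -> bool:
    """Route a fixed fraction of approved outputs to independent review.

    Seeding by output id makes the decision reproducible, so the same
    item is consistently in or out of the audit sample.
    """
    rng = random.Random(output_id)
    return rng.random() < audit_rate
```

Because the audit sample is drawn regardless of operator approval, the co-created degraded standard gets checked against a reviewer outside the loop.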
Report Index
Complete Vector Reference
Cross-reference guide and vector index for the full CH(AI)OS THEORY technical report.