Quantization & Reasoning Performance
This section is being assembled. It collects public technical notes on post-training quantization (PTQ) variability, precision-tier ambiguity, and the way surface linguistic fluency can remain stable even as reasoning performance degrades across deployment regimes.
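As an illustration of the kind of PTQ variation the notes will cover, the sketch below shows symmetric per-tensor int8 weight quantization, the simplest PTQ scheme, and measures the round-trip reconstruction error it introduces. This is a generic toy example, not a method from any of the collected notes; the function names and the NumPy-based setup are assumptions for illustration.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 PTQ: map floats onto [-127, 127].

    Illustrative sketch only; real PTQ pipelines typically use
    per-channel scales, calibration data, and outlier handling.
    """
    scale = np.max(np.abs(w)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-to-nearest bounds the per-element error by half a quantization step.
err = np.max(np.abs(w - w_hat))
print(f"max abs reconstruction error: {err:.6f} (scale={scale:.6f})")
```

Even this toy case shows why per-weight error can stay small while downstream behavior varies: the error bound scales with the tensor's largest magnitude, so outlier-heavy layers quantize more coarsely than well-conditioned ones.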
Drafting in progress