Qwen 3.5 122B Uncensored (Aggressive) Released with New K_P Quantisations

r/LocalLLaMA · community · Qwen · model-developer

The community has been waiting for this release: the Qwen 3.5 122B Aggressive variant is now available in GGUF format with new K_P quantisation options. Unlike earlier uncensoring attempts that altered the model's personality, the Aggressive variant strips away refusals while preserving the base model's core characteristics and reasoning capabilities.

The GGUF release is significant for local deployment practitioners because it makes this large 122B model runnable on consumer GPUs, trading some output quality for lower memory use. The new K_P quantisation variants add flexibility across hardware constraints, letting users pick the point on the quality-versus-VRAM curve that fits their setup. This brings high-capability uncensored inference within reach of edge deployments that previously required cloud infrastructure.
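The quality-versus-VRAM trade-off can be approximated from bits-per-weight alone. Below is a minimal back-of-the-envelope sketch; the bits-per-weight figures are illustrative assumptions based on typical GGUF quantisation levels, since exact numbers for the new K_P variants are not given in the source:

```python
# Rough model-size estimate from parameter count and bits-per-weight (bpw).
# NOTE: the bpw values below are illustrative assumptions, not official
# figures for the K_P quantisation variants.

def est_gib(params: float, bpw: float) -> float:
    """Approximate weight storage in GiB: params * bpw bits, converted to GiB."""
    return params * bpw / 8 / 2**30

PARAMS = 122e9  # 122B parameters

for label, bpw in [("~8-bit", 8.5), ("~5-bit", 5.5), ("~4-bit", 4.5), ("~2-bit", 2.6)]:
    print(f"{label} (assumed {bpw} bpw): ~{est_gib(PARAMS, bpw):.0f} GiB")
```

Real GGUF files carry extra metadata and mixed-precision tensors, and inference needs additional VRAM for the KV cache and activations, so actual requirements run higher than these weight-only estimates.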

For developers building agentic systems, content generation tools, or unrestricted research applications, this release removes a major deployment bottleneck. The ability to run a 122B model locally, with a choice of quantisation levels, is a substantial step toward practical, privacy-preserving AI deployment at scale.

Source: r/LocalLLaMA · Relevance: 9/10