At the NAACL 2025 conference, held from April 29 to May 4 in Albuquerque, New Mexico, USA, representatives of Yandex Research, HSE University, MIT, KAUST, and ISTA presented HIGGS, a new method for quantizing neural networks. The method was tested on Llama 3, Llama 4, and Qwen2.5 models and showed the best quality-to-model-size ratio among existing data-free quantization methods; unlike calibration-based approaches such as GPTQ and AWQ, HIGGS requires no calibration data. HIGGS makes it possible to compress large language models such as DeepSeek R1, with its 671 billion parameters, without significant loss of quality. The method is available on Hugging Face and GitHub.
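
For readers who want to try the method, below is a minimal usage sketch based on the HIGGS integration in the Hugging Face Transformers quantization API. The `HiggsConfig` class, its `bits` argument, and the model name shown here are assumptions drawn from the public Transformers documentation and may differ between library versions; HIGGS inference is also reported to rely on the FLUTE kernel package.

```python
# Minimal sketch, assuming the HiggsConfig quantization integration in
# Hugging Face Transformers (and the FLUTE kernel installed for inference).
# Class name, arguments, and the chosen model are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, HiggsConfig

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # illustrative model choice

# Quantize the model weights to 4 bits with HIGGS while loading the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=HiggsConfig(bits=4),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Quick sanity check: generate a short continuation with the quantized model.
inputs = tokenizer("Quantization reduces model size by", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```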