**Goal**: To reduce the memory footprint of LLMs (like Llama) enough to fit on a single GPU (e.g., a 24GB RTX 3090) while maintaining full 16-bit fine-tuning performance.

🔬 Key Technical Details

**4-bit NormalFloat (NF4)**: Compresses 16-bit weights to 4 bits, reducing VRAM usage for the weights by ~75% (e.g., a 65B-parameter model can be loaded with ~35GB instead of ~130GB: 65B × 2 bytes ≈ 130GB at 16 bits vs. 65B × 0.5 bytes ≈ 32.5GB at 4 bits). A loading sketch follows below.

**Paged Optimizers**: A feature to handle memory spikes during training by offloading optimizer state to CPU RAM; see the optimizer sketch below.
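To make the NF4 point concrete, here is a minimal loading sketch, assuming the Hugging Face transformers + bitsandbytes stack (the notes above do not name a library); the model ID is a hypothetical placeholder:

```python
# Minimal sketch: load a causal LM with its weights quantized to NF4.
# Assumes transformers + bitsandbytes are installed; model ID is a placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat storage type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for each matmul
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder model ID
    quantization_config=bnb_config,
    device_map="auto",                      # place layers on available devices
)
```

The `bnb_4bit_use_double_quant` flag re-quantizes the per-block quantization constants themselves, saving roughly another 0.4 bits per parameter on average on top of the 4-bit weights.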
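For the paged-optimizer side, here is a sketch assuming bitsandbytes' paged AdamW variant (the notes do not name a specific implementation), with a small linear layer standing in for a real LLM:

```python
# Minimal sketch: a paged optimizer whose state lives in paged (unified)
# memory, so transient VRAM spikes page out to CPU RAM instead of OOM-ing.
# Assumes a CUDA GPU and bitsandbytes installed.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for a real LLM

optimizer = bnb.optim.PagedAdamW32bit(model.parameters(), lr=1e-4)

loss = model(torch.randn(8, 4096, device="cuda")).pow(2).mean()
loss.backward()
optimizer.step()       # optimizer state is allocated in paged memory
optimizer.zero_grad()
```

When training through the Hugging Face Trainer instead, the equivalent choice is `TrainingArguments(optim="paged_adamw_32bit")` (or the `paged_adamw_8bit` variant to shrink optimizer state further).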