Google Research recently revealed TurboQuant, a compression algorithm that reduces the memory footprint of large language models.
Google has published TurboQuant, a KV cache compression algorithm that cuts LLM memory usage by 6x with zero accuracy loss.
Google’s TurboQuant has the internet joking about Pied Piper from HBO's "Silicon Valley." The compression algorithm promises to cut LLM memory usage by at least six times with no loss in accuracy.
Intel and Nvidia showed off their respective AI-powered texture-compression technologies over the weekend, demonstrating how neural compression can reduce VRAM usage for game textures.
Google thinks it's found the answer, and it doesn't require more or better hardware. Originally detailed in an April 2025 research paper from Google Research, TurboQuant compresses the key-value cache that LLMs build up as they handle long contexts.
In its "Tuscan Wheels" demo, the company showed VRAM usage dropping from roughly 6.5GB with traditional BCN-compressed ...
Google's TurboQuant algorithm compresses LLM key-value caches to 3 bits with no accuracy loss. Memory stocks fell within ...
The biggest memory burden for LLMs is the key-value cache, which stores conversational context as users interact with AI models.
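For a rough sense of why the KV cache dominates, below is a minimal sizing sketch; the model dimensions (32 layers, 32 KV heads, head dimension 128, a 32k-token conversation) are illustrative assumptions rather than figures from Google's paper.

```python
# Back-of-the-envelope KV cache sizing (illustrative assumptions, not
# figures from Google's paper).
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch_size, bytes_per_value):
    """Total cache size: two tensors (K and V) per layer, each of shape
    [batch_size, num_kv_heads, seq_len, head_dim]."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_value

# A 7B-class model (assumed: 32 layers, 32 KV heads, head_dim 128) serving
# one 32k-token conversation in fp16 (2 bytes per value).
fp16 = kv_cache_bytes(32, 32, 128, 32_768, 1, 2.0)
print(f"fp16 KV cache:   {fp16 / 2**30:.1f} GiB")   # ~16 GiB

# The same cache at roughly 3 bits per value (3/8 of a byte), ignoring the
# small overhead of per-block scale factors.
q3 = kv_cache_bytes(32, 32, 128, 32_768, 1, 3 / 8)
print(f"~3-bit KV cache: {q3 / 2**30:.1f} GiB")      # ~3 GiB
```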
The compression algorithm works by shrinking the key-value data that large language models keep in memory, with Google’s research finding that it can reduce memory usage by at least six times “with zero accuracy loss.”
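To show where a reduction of that order can come from, here is a minimal block-wise round-to-nearest quantization sketch in NumPy. It is a generic low-bit quantization example, not the TurboQuant algorithm itself, and the tensor shape, bit width, and block size are assumed for illustration.

```python
import numpy as np

def quantize_blockwise(x, bits=3, block=64):
    """Uniform, symmetric round-to-nearest quantization in blocks of `block`
    values. Generic illustration only; NOT Google's TurboQuant method."""
    flat = x.reshape(-1, block).astype(np.float32)
    qmax = 2 ** (bits - 1) - 1                      # e.g. 3 for signed 3-bit codes
    scale = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)        # guard against all-zero blocks
    q = np.clip(np.round(flat / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_blockwise(q, scale, shape):
    return (q.astype(np.float32) * scale).reshape(shape)

# Round-trip a fake KV tensor and check the reconstruction error.
kv = np.random.randn(2, 8, 1024, 128).astype(np.float32)  # [K/V, heads, seq, dim] (assumed shape)
q, s = quantize_blockwise(kv, bits=3, block=64)
kv_hat = dequantize_blockwise(q, s, kv.shape)
rel_err = np.linalg.norm(kv - kv_hat) / np.linalg.norm(kv)
print(f"relative reconstruction error: {rel_err:.3f}")
```

For simplicity the sketch keeps each 3-bit code in an int8; a real system would bit-pack the codes and store only the per-block scales alongside them, which is where the actual memory savings are realized.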
Neural Texture Compression (NTC) optimizes memory usage for neural rendering as well as for high-resolution texture and game data.