news

Apr 06, 2026 New preprint: TriAttention achieves 2.5x higher throughput for long-context LLM reasoning via trigonometric KV cache compression, enabling deployment on a single consumer GPU.
Jan 23, 2025 Two papers accepted at ICLR 2025: BA-DDG (Spotlight) and ConvNova.
Jan 16, 2024 Our paper VFN on de novo protein design has been accepted as an ICLR 2024 Spotlight.