
AnyPcc

Compressing Any Point Cloud with a Single Universal Model

¹Peking University, ²PengCheng Laboratory

(a) AnyPcc Architecture. A single, unified model compresses point clouds from any source, with our Instance-Adaptive Fine-Tuning (IAFT) module boosting performance on out-of-distribution (OOD) data. (b) AnyPcc Benchmark. Our comprehensive benchmark features 15 diverse datasets, including both standard and extreme cases. Compared against five state-of-the-art methods, AnyPcc consistently achieves high compression efficiency across all types of point clouds.

TL;DR: AnyPcc compresses point clouds from any source with a single universal model.

Abstract

Generalization remains a critical challenge for deep learning-based point cloud geometry compression. We argue this stems from two key limitations: the lack of robust context models and the inefficient handling of out-of-distribution (OOD) data. To address both, we introduce AnyPcc, a universal point cloud compression framework. AnyPcc first employs a Universal Context Model that leverages priors from both spatial and channel-wise grouping to capture robust contextual dependencies. Second, our novel Instance-Adaptive Fine-Tuning (IAFT) strategy tackles OOD data by synergizing explicit and implicit compression paradigms. It fine-tunes a small subset of network weights for each instance and incorporates them into the bitstream, where the marginal bit cost of the weights is dwarfed by the resulting savings in geometry compression. Extensive experiments on a benchmark of 15 diverse datasets confirm that AnyPcc sets a new state-of-the-art in point cloud compression. Our code and datasets will be released to encourage reproducible research.
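To make the IAFT idea concrete, below is a minimal sketch (not the authors' implementation) of instance-adaptive fine-tuning: for a single out-of-distribution point cloud, only a small subset of a pretrained context model's weights is updated to minimize the estimated geometry rate, and the quantized weight deltas are sent alongside the bitstream. The names `iaft_encode`, `pretrained_model`, `adapter`, and the 8-bit delta cost are assumptions for illustration only.

```python
import copy
import torch


def iaft_encode(pretrained_model, occupancy, steps=100, lr=1e-3):
    """Sketch of per-instance fine-tuning: tune a small weight subset on one
    point cloud and account for the bit cost of the weight updates."""
    model = copy.deepcopy(pretrained_model)

    # Freeze the backbone; only a small subset of weights (here, parameters
    # whose names contain 'adapter') is fine-tuned for this instance.
    for name, p in model.named_parameters():
        p.requires_grad = "adapter" in name

    tuned = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(tuned, lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        probs = model(occupancy)                          # predicted occupancy probabilities
        rate = -torch.log2(probs.clamp_min(1e-9)).sum()   # estimated geometry bits for this instance
        rate.backward()
        opt.step()

    # The weight deltas are quantized and written to the bitstream; their
    # marginal cost should be dwarfed by the geometry bits saved on OOD content.
    base = dict(pretrained_model.named_parameters())
    deltas = {n: (p - base[n]).detach()
              for n, p in model.named_parameters() if p.requires_grad}
    weight_bits = sum(d.numel() for d in deltas.values()) * 8  # e.g. 8-bit quantized deltas
    return model, deltas, weight_bits
```

In this sketch the trade-off in the abstract shows up directly: `weight_bits` is the overhead added to the bitstream, which is worthwhile whenever the fine-tuned model lowers the geometry rate by more than that amount.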