
📄 English Summary

Refine Now, Query Fast: A Decoupled Refinement Paradigm for Implicit Neural Fields

Implicit Neural Representations (INRs) have gained traction as effective surrogates for large-scale 3D scientific simulations due to their capacity to continuously model spatial and conditional fields. However, they encounter a significant fidelity-speed dilemma: deep Multi-Layer Perceptrons (MLPs) incur high inference costs, while efficient embedding-based models often lack sufficient expressiveness. To address this challenge, the Decoupled Representation Refinement (DRR) architectural paradigm is proposed. DRR employs a deep refiner network in conjunction with non-parametric transformations to encode rich representations into a compact and efficient embedding structure in a one-time offline process. This approach effectively decouples slow neural networks with high representational capacity from the fast inference path, enhancing overall performance.
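To make the decoupling concrete, below is a minimal numpy sketch of the idea described above. It is an illustrative assumption, not the paper's actual design: the refiner depth, the 32-dimensional feature width, the grid resolution, and the single-matmul decoder are all hypothetical stand-ins, and "baking" here simply means evaluating the slow refiner once per grid vertex and storing its outputs. The key property it demonstrates is that the deep network runs only in the one-time offline phase, while online queries touch nothing slower than trilinear interpolation and one small matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Slow path: a deep "refiner" MLP (weights are random stand-ins here) ---
W = [rng.standard_normal((3 if i == 0 else 32, 32)) * 0.3 for i in range(4)]

def deep_refiner(pts):
    """High-capacity deep MLP: (N, 3) coordinates -> (N, 32) rich features."""
    h = pts
    for w in W:
        h = np.tanh(h @ w)
    return h

# --- Offline refinement: bake refined features into a compact grid ---
# The deep network is evaluated once per grid vertex; its outputs become a
# plain lookup structure. This is the one-time cost paid before querying.
R = 8                                           # grid resolution per axis
axes = np.linspace(0.0, 1.0, R)
gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
verts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
grid = deep_refiner(verts).reshape(R, R, R, 32)  # baked embedding grid

# --- Online query: trilinear lookup + tiny linear decoder (fast path) ---
W_dec = rng.standard_normal((32, 1)) * 0.1

def query(pts):
    """Fast inference: interpolate baked embeddings, then one matmul.
    The deep refiner is never called on this path."""
    s = np.clip(pts, 0.0, 1.0) * (R - 1)        # continuous grid coords
    i0 = np.clip(np.floor(s).astype(int), 0, R - 2)
    f = s - i0                                  # fractional offsets in [0, 1]
    feat = np.zeros((len(pts), 32))
    for dx in (0, 1):                           # blend the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                feat += w[:, None] * grid[i0[:, 0] + dx,
                                          i0[:, 1] + dy,
                                          i0[:, 2] + dz]
    return feat @ W_dec                         # shallow decode to a scalar field

pts = rng.uniform(0.0, 1.0, size=(5, 3))
vals = query(pts)                               # (5, 1) field values
```

At the grid vertices the fast path reproduces the refiner's baked features exactly; between vertices it interpolates them, which is where the compact structure trades some fidelity for speed. A real system would replace the random weights with a trained refiner and could use a richer non-parametric transform than plain storage when writing features into the grid.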


Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others