EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
Viktor Rudnev (Max Planck Institute, Saarland University), Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik (Max Planck Institute)
Project page: https://4dqv.mpi-inf.mpg.de/EventNeRF/
Asynchronously operating event cameras find many applications due to their high dynamic range, absence of motion blur, low latency and low data bandwidth. The field has seen remarkable progress during the last few years, and existing event-based 3D reconstruction approaches recover sparse point clouds of the scene. However, such sparsity is a limiting factor in many cases, especially in computer vision and graphics, and it has not been addressed satisfactorily so far. Accordingly, this paper proposes the first approach for 3D-consistent, dense and photorealistic novel view synthesis using just a single colour event stream as input. At the core of our method is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels. Next, our ray sampling strategy is tailored to events and allows for data-efficient training. At test time, our method produces results in the RGB space at unprecedented quality. We evaluate our method qualitatively and quantitatively on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings than the existing methods. We also demonstrate robustness in challenging scenarios with fast motion and under low lighting conditions. We will release our dataset and our source code.
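The self-supervision from events described above can be made concrete with the standard event-camera generation model: a pixel fires an event whenever its log brightness changes by a fixed contrast threshold. Below is a minimal PyTorch-style sketch of such a loss; the function name, tensor shapes and the fixed threshold value are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def event_supervision_loss(rgb_start, rgb_end, event_polarity_sum,
                           threshold=0.25, eps=1e-5):
    """Sketch of an event-based self-supervision loss for a radiance field.

    rgb_start, rgb_end: (N,) radiance rendered along the same N rays at the
        start and end of a time window (one colour channel per ray, since a
        colour event camera with a Bayer filter senses one channel per pixel).
    event_polarity_sum: (N,) signed count of events observed at each pixel
        within the window (+1 per positive, -1 per negative event).
    threshold: assumed per-event log-brightness contrast threshold.
    """
    # Log-brightness change predicted by the radiance field over the window.
    pred_log_diff = torch.log(rgb_end + eps) - torch.log(rgb_start + eps)
    # Log-brightness change implied by the accumulated event polarities.
    target_log_diff = threshold * event_polarity_sum
    # Penalise the mismatch between the two.
    return torch.mean((pred_log_diff - target_log_diff) ** 2)
```

In such a setup, rgb_start and rgb_end would come from volume rendering along each sampled ray, and the event-tailored ray sampling mentioned in the abstract would concentrate those rays at pixels where events actually fired, which is what makes the training data-efficient.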