1. Introduction

3D reconstruction based on structured light, including fringe patterns, infrared speckle, time of flight (TOF), and laser scanning, is widely used in industrial measurement, robot navigation, and virtual reality because of its measurement accuracy. Despite good performance in specific settings, structured light struggles to scan transparent objects. A transparent object, which has a nonspecular surface, cannot return correct depth because of light absorption, reflection, and refraction. Therefore, some 3D acquisition systems have been developed specifically for transparent objects [1–3]. On the other hand, the popularity of consumer-grade RGB-D sensors, such as the Kinect, makes it easier to combine depth and RGB information in a 3D scanning system. This suggests that we can recover a transparent surface by incorporating a passive reconstruction method, since transparent objects appear with a more stable shape in color images. Because transparent objects commonly have little texture, shape from silhouette (SFS) is better suited to the transparency problem. Moreover, the weakness of SFS, its inability to recover concave regions, can be remedied by structured light.

Some researchers have tried to fuse depth and silhouette information for 3D scanning. Yemez and Wetherilt [4] present a 3D scanning system that fuses laser scans and SFS to fill holes in the surface. Narayan et al. [5] fuse the visual hull and depth images in the 2D image domain; their approach obtains high-quality models of simple, concave, and transparent objects with interactive segmentation. However, both methods achieve good results only in a laboratory environment and do not apply to natural scenes with complex backgrounds. Lysenkov et al. [6] propose a practical method for handling transparent objects in the real world. Our idea is similar to theirs.
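To make the SFS idea concrete, here is a minimal sketch of visual-hull carving, the core of shape from silhouette: starting from a full voxel grid, every voxel whose projection falls outside any silhouette is removed. For simplicity the sketch uses orthographic projections along the three grid axes rather than the calibrated perspective cameras a real RGB-D rig would use; all function names here are illustrative, not from the paper.

```python
import numpy as np

def carve(grid_shape, silhouettes):
    """Carve a visual hull from orthographic silhouettes.

    silhouettes maps a viewing axis (0, 1, or 2) to a 2D boolean
    silhouette obtained by projecting along that axis. A voxel
    survives only if it projects inside every silhouette.
    """
    hull = np.ones(grid_shape, dtype=bool)
    for axis, sil in silhouettes.items():
        # Broadcast the 2D silhouette back along its viewing axis
        # and intersect it with the current hull estimate.
        hull &= np.expand_dims(sil, axis=axis)
    return hull

# Toy scene: a sphere observed from three orthographic views.
n = 32
idx = np.indices((n, n, n)) - n // 2
sphere = (idx ** 2).sum(axis=0) <= (n // 3) ** 2

# Silhouette for each axis = union of occupied voxels along that axis.
sils = {ax: sphere.any(axis=ax) for ax in range(3)}
hull = carve((n, n, n), sils)
```

The hull always contains the true shape but overestimates it (here it is the intersection of three cylinders, slightly larger than the sphere); this is exactly why concave regions, which never show up in any silhouette, need the structured-light depth to be recovered.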
We also look for the approximate region of a transparent object (and of other nonspecular objects), cued by noise from the depth sensor, before using GrabCut [7], a classical image matting method, to extract silhouettes on the color images. The main contributions of this paper are:

(i) a complete system tackling the problem of volumetric 3D reconstruction of transparent objects from multiple RGB-D images with known poses;
(ii) a novel pipeline that localizes a transparent object before recovering its model by SFS;
(iii) a robust transparent object localization algorithm cued by both zero depth (ZD) and wrong depth (WD);
(iv) a system that copes with real-world data and requires no interactive operations.
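As a rough illustration of how depth-sensor noise can seed segmentation, the sketch below turns zero-depth (ZD) pixels into a GrabCut initialization mask: ZD pixels are dilated into a band labeled probable foreground, everything else sure background, following OpenCV's GrabCut label convention (0 = sure background, 3 = probable foreground). This is an assumed seeding scheme for illustration, not the paper's exact localization algorithm, which also uses wrong-depth (WD) cues.

```python
import numpy as np

def dilate(mask, iters=1):
    """Pure-NumPy binary dilation with a 4-neighbourhood."""
    m = mask.copy()
    for _ in range(iters):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def grabcut_seed(depth, iters=3):
    """Build a GrabCut seed mask from zero-depth pixels.

    Label values follow OpenCV's convention:
    0 = sure background, 3 = probable foreground.
    """
    zd = depth == 0                       # zero-depth cue
    labels = np.zeros(depth.shape, np.uint8)
    labels[dilate(zd, iters)] = 3         # probable fg near ZD noise
    return labels

# Toy depth map: a zero-depth blob on an otherwise valid background.
depth = np.ones((20, 20), np.float32)
depth[8:12, 8:12] = 0
labels = grabcut_seed(depth, iters=2)
```

Such a mask could then be passed to `cv2.grabCut` with the `GC_INIT_WITH_MASK` flag to refine the silhouette on the color image, though the exact refinement parameters are an implementation choice.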