1 Introduction

The compressed sensing (CS) framework for image acquisition exploits the inherent properties of a signal to reduce the number of samples required for reconstruction. Most signals are sparse in some domain and can be fully recovered from fewer samples than the Nyquist criterion specifies. Unlike conventional Nyquist sampling, CS uses the sparsity of the signal and acquires measurements in the domain in which the signal is sparse. The sampling matrices, or projections, are carefully designed to extract maximum information from the signal. Random projections have been proven to recover the signal, with high probability, from the fewest samples above the minimum bounds. Some deterministic sampling matrices have also been investigated and shown to be efficient for full signal recovery.

Researchers have proposed many imaging architectures for CS that employ spatial light modulators (SLMs). These include the Rice single-pixel camera, which uses a digital micromirror device (DMD) [1], coded apertures, and CMOS SLMs to exploit the sparsity of the signal in the spatial domain. These architectures acquire compressive measurements in the spatial domain, but because the measurements are acquired sequentially, they are not efficient for video. Most CS architectures use a single detector, which creates a temporal bottleneck for applications requiring fast sampling rates. One way to reduce this bottleneck is to take multiple measurements at one instant by increasing the number of sensors; however, each sensor then requires its own DMD, and an array of DMDs is not a feasible solution in terms of cost and space. A more cost-efficient way is to exploit temporal redundancy, or its complement, temporal sparsity, in a video sequence. To exploit temporal sparsity in a video, we intuitively think of the changes between frames: in many video sequences, change is sparse along the time axis.
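As a concrete illustration of random projections, the sketch below measures a synthetic k-sparse signal with a Gaussian sensing matrix and recovers it with orthogonal matching pursuit (OMP). OMP is one standard sparse solver chosen here for brevity, not necessarily the recovery method of the works discussed, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3          # ambient dim, measurements, sparsity (illustrative)

# Synthetic k-sparse signal and a random Gaussian sensing matrix.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

y = Phi @ x                  # m << n compressive measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares re-fit on the
    chosen support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)       # exact recovery with high probability
```

With m = 32 Gaussian measurements of a 3-sparse signal in dimension 64, the greedy solver identifies the true support with overwhelming probability, which is the sense in which random projections need far fewer samples than Nyquist sampling.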
Many methods have been published for video sensing and reconstruction that exploit change sparsity; most acquire measurements for several frames and reconstruct them jointly using Fourier- or wavelet-domain sparsity. One direct method is to acquire measurements for each frame separately. To minimize motion blur, direct acquisition requires the scene to be static while the measurements for each frame are made, which is impractical in most cases. Another approach is three-dimensional wavelet reconstruction [1]: samples for a group of frames are acquired, and a wavelet basis is used to recover all frames in the group at once. This method cannot be used for real-time video streaming without incurring latency and delay that may significantly degrade performance in many situations. Frame differencing has also been used, in which the differences between consecutive frames are compressively measured, reconstructed, and added to the previous frame. This method not only accumulates residual error, but its mean square error (MSE) also increases significantly when the difference is not sparse, as in the case of large changes in the scene. Another approach models the evolution of specific video sequences as a linear dynamical system (LDS). This considerably reduces the number of samples required for reconstruction but is restricted to videos possessing an LDS representation, which holds only for a few specific sequences. Some works based on block-based compressive sensing, such as block-based CS with smoothed projected Landweber reconstruction (BCS-SPL), divide the frame into nonoverlapping blocks and process each block separately. The basic technique splits the processing into smaller blocks and combines the block reconstructions for the final result [6]. This method does not take the temporal redundancies in a video into account. More advanced techniques based on BCS-SPL use motion estimation parameters to aid the reconstruction process.
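The frame-differencing idea can be sketched on the measurement side alone. In the toy example below (the synthetic frames and Gaussian sensing matrix are assumptions for illustration), the encoder measures the first frame directly and thereafter only the consecutive-frame differences; by linearity of the projection, the running sum of difference measurements telescopes to the direct measurement of the current frame, which is also why any error the decoder makes on one difference propagates into every later frame.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 256, 64, 4         # pixels per flattened frame, measurements, frames
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Hypothetical sequence: a mostly static scene where one pixel moves,
# so each consecutive-frame difference is very sparse.
frames = [np.zeros(n) for _ in range(T)]
for t in range(T):
    frames[t][t] = 1.0

# Encoder: measure the first frame directly, then only the changes.
y0 = Phi @ frames[0]
diff_meas = [Phi @ (frames[t] - frames[t - 1]) for t in range(1, T)]

# Linearity: y0 plus the accumulated difference measurements equals the
# direct measurement of the latest frame. The decoder reconstructs each
# sparse difference and adds it to its previous frame estimate, so a
# residual error in one step is carried into all subsequent frames.
y_last = y0 + sum(diff_meas)
```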
Motion-estimation/motion-compensation BCS (ME/MC-BCS) selects a group of pictures (GOP), estimates motion vectors over it, and reconstructs the GOP using this information. This improves the subrate performance but incurs an undesirable time delay in addition to increasing reconstruction complexity [7-10]. One other approach is adaptive block-based compressive sensing, in which a frame is divided into a specific number of blocks and each block is assigned measurements based on its changes and texture. This approach accumulates residual error and produces block artifacts after recovery.
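A minimal sketch of the measurement-allocation step of adaptive block-based CS follows, assuming a simple hypothetical policy (not the one used in any cited work): every BxB block receives a fixed minimum budget, and the remaining measurements are distributed in proportion to how much each block changed since the previous frame.

```python
import numpy as np

def allocate_measurements(prev, curr, B, total_m, min_m=4):
    """Adaptive per-block measurement budget (hypothetical policy):
    every BxB block gets at least min_m measurements, and the spare
    budget is split in proportion to the block's change since the
    previous frame."""
    H, W = curr.shape
    change = []
    for i in range(0, H, B):
        for j in range(0, W, B):
            change.append(np.abs(curr[i:i + B, j:j + B]
                                 - prev[i:i + B, j:j + B]).sum())
    change = np.asarray(change, dtype=float)
    total = change.sum()
    weights = (change / total if total > 0
               else np.full(change.size, 1.0 / change.size))
    spare = total_m - min_m * change.size    # budget beyond the floor
    return min_m + np.floor(weights * spare).astype(int)

# A 32x32 frame in which only the top-left 8x8 block changed:
prev = np.zeros((32, 32))
curr = prev.copy()
curr[:8, :8] = 1.0
budget = allocate_measurements(prev, curr, B=8, total_m=256)
# The changed block absorbs the spare budget; static blocks keep min_m.
```

Because static blocks still consume their minimum budget and the per-block recovery is independent, this kind of scheme exhibits exactly the drawbacks noted above: residual error accumulates over time, and block boundaries appear as artifacts in the recovered frames.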