<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Plant Sci.</journal-id>
<journal-title>Frontiers in Plant Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Plant Sci.</abbrev-journal-title>
<issn pub-type="epub">1664-462X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpls.2022.849606</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Plant Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Green Visual Sensor of Plant: An Energy-Efficient Compressive Video Sensing in the Internet of Things</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Li</surname> <given-names>Ran</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1623598/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Yang</surname> <given-names>Yihao</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1679663/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Sun</surname> <given-names>Fengyuan</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1679700/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Computer and Information Technology, Xinyang Normal University</institution>, <addr-line>Xinyang</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Guangxi Key Laboratory of Wireless Wideband Communication and Signal Processing, Guilin University of Electronic Technology</institution>, <addr-line>Guilin</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Yu Xue, Nanjing University of Information Science and Technology, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Romany Mansour, The New Valley University, Egypt; Zijian Qiao, Ningbo University, China; Khan Muhammad, Sejong University, South Korea</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Ran Li <email>liran&#x00040;xynu.edu.cn</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Sustainable and Intelligent Phytoprotection, a section of the journal Frontiers in Plant Science</p></fn></author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>02</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>849606</elocation-id>
<history>
<date date-type="received">
<day>06</day>
<month>01</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>01</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Li, Yang and Sun.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Li, Yang and Sun</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>The Internet of Things (IoT) enables real-time video monitoring of plant propagation and growth in the wild. However, the monitoring time is severely limited by the battery capacity of the visual sensor, which poses a challenge to long-term plant monitoring. Since video coding is the most energy-consuming component in a visual sensor, it is important to design an energy-efficient video codec in order to extend the plant-monitoring time. This article presents an energy-efficient Compressive Video Sensing (CVS) system that makes the visual sensor green. We fuse a context-based allocation into CVS to improve the reconstruction quality with fewer computations. In particular, considering the practicality of CVS, we extract the contexts of video frames from compressive measurements rather than from original pixels. Adapting to these contexts, more measurements are allocated to capture complex structures and fewer to simple structures. This adaptive allocation enables a low-complexity recovery algorithm to produce high-quality reconstructed video sequences. Experimental results show that, by deploying the proposed context-based CVS system on the visual sensor, the rate-distortion performance is significantly improved compared with some state-of-the-art methods, and the computational complexity is also reduced, resulting in low energy consumption.</p></abstract>
<kwd-group>
<kwd>Internet of Things</kwd>
<kwd>visual sensor</kwd>
<kwd>Compressive Video Sensing</kwd>
<kwd>context extraction</kwd>
<kwd>linear recovery</kwd>
<kwd>plant monitoring</kwd>
</kwd-group>
<counts>
<fig-count count="12"/>
<table-count count="2"/>
<equation-count count="26"/>
<ref-count count="49"/>
<page-count count="18"/>
<word-count count="9254"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>In the Internet of Things (IoT), the plant propagation process or plant growth can be monitored by visual sensors. Benefiting from the IoT framework, a large amount of data on plants can be gathered in a central server, and valuable information can be obtained by analyzing the data in real time. However, given the limited processing capabilities and power/energy budget of visual sensors, it is a challenge for video monitoring of plants to compress large-scale video sequences with a traditional codec, e.g., H.264/AVC and HEVC (Sullivan et al., <xref ref-type="bibr" rid="B34">2012</xref>). Existing works have therefore developed low-complexity and energy-efficient video codecs, among which Distributed Video Coding (DVC) (Girod et al., <xref ref-type="bibr" rid="B16">2005</xref>) and Compressive Video Sensing (CVS) (Baraniuk et al., <xref ref-type="bibr" rid="B4">2017</xref>) have attracted the most attention in industry and academia. Different from DVC, CVS dispenses with the feedback and virtual channels (Unde and Pattathil, <xref ref-type="bibr" rid="B40">2020</xref>), which makes the codec framework simpler. Meanwhile, CVS provides a low-complexity encoder because its theoretical foundation, Compressive Sensing (CS) (Baraniuk, <xref ref-type="bibr" rid="B3">2007</xref>), realizes the capture of video frames at a rate significantly below the Nyquist rate. Currently, many researchers recognize that CVS is a promising scheme for compressing video sequences in the IoT framework; especially for wireless video monitoring of plants, CVS can help visual sensors efficiently reduce energy consumption. However, its rate-distortion performance is still far from satisfactory.</p>
<p>The objective of this article is to improve the rate-distortion performance of CVS, providing high-quality video monitoring of plants with low energy consumption. To achieve this objective, existing works focus on designing excellent recovery algorithms and are keen on mixing various advanced tools into the CVS framework, e.g., the popular Deep Neural Network (DNN) (Palangi et al., <xref ref-type="bibr" rid="B24">2016</xref>; Zhao et al., <xref ref-type="bibr" rid="B48">2020</xref>; Tran et al., <xref ref-type="bibr" rid="B38">2021</xref>). Though effective, these methods bear a heavy computational burden. Different from these works, we try to exploit the capability of CS to capture important structures, improving the reconstruction quality while armed with only simple recovery algorithms. It is well known that the context feature (Shechtman and Irani, <xref ref-type="bibr" rid="B32">2007</xref>; Romano and Elad, <xref ref-type="bibr" rid="B30">2016</xref>) is a good structure for visual quality; therefore, in this article, we focus on how to fuse contexts into CVS for an obvious improvement of reconstruction quality.</p>
<p>Compressive Video Sensing consists of three essential steps: CS measurement, measurement quantization, and reconstruction. CS measurement is a process of randomly sampling each video frame, in which a block-based (Gan, <xref ref-type="bibr" rid="B14">2007</xref>; Bigot et al., <xref ref-type="bibr" rid="B6">2016</xref>) or structural (Do et al., <xref ref-type="bibr" rid="B13">2012</xref>; Zhang et al., <xref ref-type="bibr" rid="B46">2015</xref>) random matrix is often used to keep the memory requirement small. All measurements output by CS measurement must be quantized into bits and then transmitted to the decoder. The straightforward solution to incorporating quantization into CVS is simply to apply Scalar Quantization (SQ), but it introduces a large error. For block-based sampling, Differential Pulse Code Modulation (DPCM) (Mun and Fowler, <xref ref-type="bibr" rid="B22">2012</xref>) can be used, which exploits the correlations between blocks to improve the rate-distortion performance. Based on DPCM, many works have also proposed efficient predictive schemes (Zhang et al., <xref ref-type="bibr" rid="B44">2013</xref>; Gao et al., <xref ref-type="bibr" rid="B15">2015</xref>) to quantize CS measurements. Reconstruction is deployed at the decoder, which uses the quantized measurements to reconstruct the video sequence by a CS recovery algorithm. At present, reconstruction can be implemented by one of three strategies: frame-by-frame (Chen Y. et al., <xref ref-type="bibr" rid="B10">2020</xref>; Trevisi et al., <xref ref-type="bibr" rid="B39">2020</xref>), three-dimensional (3D) (Qiu et al., <xref ref-type="bibr" rid="B28">2015</xref>; Tachella et al., <xref ref-type="bibr" rid="B35">2020</xref>), and distributed (Zhang et al., <xref ref-type="bibr" rid="B47">2020</xref>; Zhen et al., <xref ref-type="bibr" rid="B49">2020</xref>).
Frame-by-frame reconstruction performs a CS recovery algorithm to reconstruct each video frame independently; it has a poor rate-distortion performance because it neglects the correlations between frames. 3D reconstruction designs complex representation models to reconstruct a whole video sequence or a Group Of Pictures (GOP) at once, e.g., Li et al. (<xref ref-type="bibr" rid="B20">2020</xref>) proposed the Scalable Structured CVS (SS-CVS) framework, which learns a union-of-data-driven-subspaces model to reconstruct GOPs. However, 3D reconstruction has the defect that huge memory and high computational complexity are required at the decoder. Derived from the decoding strategy of DVC, distributed reconstruction divides the input video sequence into key frames and non-key frames and reconstructs each non-key frame by a CS recovery algorithm with the aid of its neighboring key frames. With a small memory footprint and a low computational complexity, distributed reconstruction improves the rate-distortion performance by exploiting the motions between frames, so many existing works build their CVS systems on it, e.g., the DIStributed video Coding Using Compressed Sampling (DISCUCS) system (Prades-Nebot et al., <xref ref-type="bibr" rid="B27">2009</xref>), the DIStributed COmpressed video Sensing (DISCOS) system (Do et al., <xref ref-type="bibr" rid="B12">2009</xref>), and the Multi-Hypothesis Block CS (MH-BCS) system (Chen et al., <xref ref-type="bibr" rid="B8">2011</xref>; Tramel and Fowler, <xref ref-type="bibr" rid="B37">2011</xref>; Azghani et al., <xref ref-type="bibr" rid="B2">2016</xref>). The core of distributed reconstruction is the Multi-Hypothesis (MH) predictive technique, which uses a linear combination of blocks in key frames to interpolate the blocks in non-key frames.
As one of the state-of-the-art techniques, MH prediction is widely applied to distributed reconstruction. Recently, some works have tried to modify its implementation, e.g., Chen C. et al. (<xref ref-type="bibr" rid="B9">2020</xref>) added an iterative Reweighted TIKhonov-regularized scheme into MH prediction (MH-RTIK), yielding a significant improvement of CVS performance. CS theory indicates that precise recovery requires enough CS measurements. With insufficient CS measurements, even an excellent CS recovery algorithm cannot prevent the degradation of reconstruction quality; however, by adaptively allocating CS measurements according to the local structures of the image, even a simple recovery algorithm can provide a good reconstruction quality (Yu et al., <xref ref-type="bibr" rid="B42">2010</xref>; Taimori and Marvasti, <xref ref-type="bibr" rid="B36">2018</xref>; Zammit and Wassell, <xref ref-type="bibr" rid="B43">2020</xref>). Judging from the above facts, adaptive allocation is a promising way to improve the rate-distortion performance of a CVS system with a light codec.</p>
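The MH predictive technique mentioned above admits a compact linear-algebra sketch: the weights combining the hypothesis blocks are found in the measurement domain. The Tikhonov-regularized form below follows the general idea of MH-RTIK-style prediction, but the function name, the fixed regularization factor, and the dense solve are illustrative assumptions, not the exact scheme of any cited work.

```python
import numpy as np

def mh_predict(y, Phi, H, lam=0.1):
    """Sketch of Tikhonov-regularized Multi-Hypothesis (MH) prediction.

    y   : CS measurements of a non-key-frame block (length M)
    Phi : measurement matrix (M x Nb)
    H   : MH matrix whose columns are hypothesis blocks from key frames (Nb x T)
    Returns the predicted block H @ w (length Nb).
    """
    A = Phi @ H  # project the hypotheses into the measurement domain
    # Closed-form Tikhonov solution: w = (A^T A + lam * I)^(-1) A^T y
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return H @ w  # linear combination of key-frame blocks
```

With a small regularization factor and a hypothesis set that actually contains the true block, the prediction reproduces that block almost exactly, which is why MH prediction works well for slow motion between key frames.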
<p>This article presents a context-based CVS system, whose core is the context-adaptive allocation of CS measurements at the encoder. Based on these adaptive measurements, and by combining linear estimation and MH prediction into distributed reconstruction, the decoder provides a satisfying reconstruction quality with low memory and computational cost. The contributions of the proposed context-based CVS are to solve the following issues:
<list list-type="order">
<list-item><p>How to extract the context structures from CS measurements? Traditional methods use pixels to compute the context features, but this costs many computations at the encoder, making it impractical for CVS. In particular, when the encoder is realized by Compressive Imaging (CI) devices (Liu et al., <xref ref-type="bibr" rid="B21">2019</xref>; Deng et al., <xref ref-type="bibr" rid="B11">2021</xref>), the original pixels are unavailable, so the traditional methods cannot be performed at all. Considering the low dimensionality and availability of CS measurements, it is practical in CVS to extract context structures from the CS measurements themselves.</p></list-item>
<list-item><p>How to adaptively allocate CS measurements by context structures? Contexts measure the correlations between pixels, and their distribution reveals meaningful structures, e.g., smoothness, edges, and textures. For the same recovery quality, simple structures require fewer measurements and complex structures require more. According to the distribution of contexts, an efficient allocation is designed to avoid insufficiency or redundancy of measurements.</p></list-item>
<list-item><p>How to quantize the adaptive measurements? Adaptive allocation gives blocks different numbers of CS measurements, so the traditional prediction schemes cannot be applied to quantization. Due to the insufficient capability of SQ, an appropriate prediction scheme is required to reduce the quantization error.</p></list-item>
</list></p>
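To make the allocation issue above concrete, here is a minimal sketch of one possible context-adaptive budget split: blocks scored as more complex receive more measurements, subject to a per-block floor. The scoring input, the floor parameter M_min, and the function name are illustrative assumptions, not the allocation rule proposed in this article.

```python
import numpy as np

def allocate_measurements(block_scores, total_M, M_min=8):
    """Distribute a measurement budget total_M across blocks in proportion
    to a per-block context/complexity score, keeping a floor of M_min
    measurements so even smooth blocks remain recoverable."""
    s = np.asarray(block_scores, dtype=float)
    J = len(s)
    spare = total_M - M_min * J  # budget left after granting every block its floor
    weights = s / s.sum() if s.sum() > 0 else np.full(J, 1.0 / J)
    M = M_min + np.floor(spare * weights).astype(int)
    # Hand the rounding remainder to the most complex block
    M[np.argmax(weights)] += total_M - M.sum()
    return M
```

A block with a complexity score eight times larger than its neighbors ends up with several times their measurement count, which is exactly the behavior motivated above: no measurements wasted on smooth regions, none missing on edges and textures.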
<p>Experimental results show that the proposed context-based CVS system outputs high-quality reconstructed video sequences when monitoring plant growth or propagation and improves the rate-distortion performance compared with state-of-the-art CVS systems, which demonstrates the effectiveness of context-based allocation for CVS.</p>
<p>The rest of this article is organized as follows. Section 2 briefly reviews plant monitoring systems and CVS and describes the traditional method of extracting context features. Section 3 presents the proposed context-based CVS system. Experimental results are provided in Section 4, and we conclude this article in Section 5.</p></sec>
<sec id="s2">
<title>2. Related Works</title>
<sec>
<title>2.1. Plant Monitoring System</title>
<p>In modern agriculture, it is essential to monitor plant propagation or growth to guarantee productivity. Labor costs can be efficiently reduced by automatically capturing the architectural parameters of plants, so more and more attention has been paid to the design of plant monitoring systems (Somov et al., <xref ref-type="bibr" rid="B33">2018</xref>; Grimblatt et al., <xref ref-type="bibr" rid="B17">2021</xref>; Rayhana et al., <xref ref-type="bibr" rid="B29">2021</xref>). Early systems were designed to monitor various environmental parameters of plant growth, such as humidity, temperature, and solar illuminance, e.g., James and Maheshwar (<xref ref-type="bibr" rid="B19">2016</xref>) used multiple sensors to measure the soil data of plants and transmitted these data to a mobile phone via a Raspberry Pi; Okayasu et al. (<xref ref-type="bibr" rid="B23">2017</xref>) developed a self-powered wireless monitoring device equipped with several environmental sensors; and Guo et al. (<xref ref-type="bibr" rid="B18">2018</xref>) added big-data services to analyze the environmental data on plant growth. These environmental parameters only indirectly indicate the process of plant growth and cannot record the visual scenes of plant growth, so the physical structure parameters of plants remain unavailable. To realize the visual monitoring of plants, some works have started to integrate visual sensors into the plant monitoring system, e.g., Peng et al. (<xref ref-type="bibr" rid="B25">2022</xref>) used a binocular camera to capture video sequences of a plant and used the structure-from-motion method (Piermattei et al., <xref ref-type="bibr" rid="B26">2019</xref>) to extract the 3D information of the plant; Sajith et al. (<xref ref-type="bibr" rid="B31">2019</xref>) designed a complex network to derive plant growth parameters from monitoring images; and Akila et al. (<xref ref-type="bibr" rid="B1">2017</xref>) extracted plant color and texture with a visual monitoring system. From the above, it can be seen that a visual sensor or camera captures video sequences of plant growth, and these video sequences are compressed into a bitstream that is transmitted to the IoT cloud for further analysis. Video compression is the major energy consumer in a visual sensor, so a key challenge for the visual monitoring of plants is to design an energy-efficient video coding scheme that extends the working time of the visual sensor. In the framework of IoT, CVS is a promising coding scheme for reducing the energy consumption of visual sensors. The following subsection briefly reviews CVS systems.</p></sec>
<sec>
<title>2.2. CVS System</title>
<p>Compressive Video Sensing is the marriage of CS theory and DVC, which reduces the encoding costs and enhances the robustness to noise, thus becoming a potential video codec for wireless visual sensors. At the encoder, to achieve low complexity and fast computation, block-based CS sampling is performed on each video frame independently, i.e., the <italic>i</italic>th video frame <bold><italic>f</italic></bold><sub><italic>i</italic></sub> of size <italic>N</italic><sub>1</sub> &#x000D7; <italic>N</italic><sub>2</sub> is partitioned into non-overlapping blocks of size <italic>B</italic> &#x000D7; <italic>B</italic>, each block is vectorized as <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> of length <italic>N</italic><sub><italic>b</italic></sub>, and the CS measurements <bold><italic>y</italic></bold><sub><italic>i,j</italic></sub> of <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> are output by
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic></sub> is called the measurement matrix and can be constructed from some random matrices, e.g., Gaussian, Bernoulli, or structural random matrices. By setting the length of <bold><italic>y</italic></bold><sub><italic>i,j</italic></sub> to be <italic>M</italic><sub><italic>i,j</italic></sub>, the size of <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic></sub> is fixed to be <italic>M</italic><sub><italic>i,j</italic></sub> &#x000D7; <italic>N</italic><sub><italic>b</italic></sub>, and the subrate <italic>S</italic><sub><italic>i</italic></sub> of <bold><italic>f</italic></bold><sub><italic>i</italic></sub> is defined as
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:mstyle><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <italic>N</italic> is the number of total pixels in <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, <italic>M</italic><sub><italic>i</italic></sub> is the number of CS measurements for <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, and <italic>J</italic> is the number of blocks in <bold><italic>f</italic></bold><sub><italic>i</italic></sub>. In CI applications, an optical device is designed to perform Equation (1) and directly output the CS measurements. To ensure a stable recovery, <italic>L</italic> video frames are gathered to form a GOP, in which the first frame, called the key frame, is sampled at a high subrate, and the others, called non-key frames, are sampled at a low subrate. After quantization, all CS measurements of the GOP are packaged and transmitted to the decoder.</p>
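As a concrete illustration, the block-based sampling of Equations (1) and (2) can be sketched in a few lines of NumPy. The function name, the choice of a Gaussian measurement matrix, and its reuse across all blocks are illustrative simplifications, not the exact encoder of this article.

```python
import numpy as np

def block_cs_measure(frame, B=16, subrate=0.25, rng=None):
    """Block-based CS sampling: y_ij = Phi_ij . x_ij for each B x B block (Eq. 1).

    Returns the list of per-block measurement vectors and the achieved
    subrate S = (sum of M_ij) / (N1 * N2) from Eq. (2).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N1, N2 = frame.shape
    Nb = B * B                                    # pixels per vectorized block
    M = max(1, round(subrate * Nb))               # measurements per block
    Phi = rng.standard_normal((M, Nb)) / np.sqrt(M)  # Gaussian measurement matrix
    measurements = []
    for r in range(0, N1, B):
        for c in range(0, N2, B):
            x = frame[r:r + B, c:c + B].reshape(-1)  # vectorized block x_ij
            measurements.append(Phi @ x)             # y_ij = Phi . x_ij
    S = sum(len(y) for y in measurements) / (N1 * N2)  # subrate (Eq. 2)
    return measurements, S
```

Note the dimensionality reduction: each block of Nb pixels is summarized by only M = subrate * Nb random projections, which is what keeps the encoder light enough for a battery-powered visual sensor.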
<p>At the decoder, by using the received CS measurements, the frame-by-frame, 3D, or distributed strategy is performed to reconstruct the GOP. For frame-by-frame reconstruction, the model can be represented by
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:munder></mml:mstyle></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>{</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:msubsup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A8;</mml:mi></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <bold><italic>&#x003A8;</italic></bold> denotes the 2D sparse representation basis, &#x003B1; is a regularization factor, ||&#x000B7;||<sub>2</sub> denotes the &#x02113;<sub>2</sub> norm, and ||&#x000B7;||<sub>1</sub> denotes the &#x02113;<sub>1</sub> norm. Model (3) can be solved by some non-linear optimization algorithms, e.g., the Alternating Direction Method of Multipliers (ADMM) (Yang et al., <xref ref-type="bibr" rid="B41">2020</xref>), and all reconstructed blocks are spliced into the estimated frame <inline-formula><mml:math id="M5"><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mover accent="true"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. The frame-by-frame model uses only the spatial correlations, so its rate-distortion performance is unsatisfactory. The 3D reconstruction model fully considers the spatial-temporal correlations, and it can be represented by
<disp-formula id="E5"><label>(4)</label><mml:math id="M6"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:munder></mml:mstyle><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mstyle displaystyle="true"><mml:munderover 
accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo stretchy="false">&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" 
accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mo stretchy="false">|</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x00393;</mml:mi></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo stretchy="false">|</mml:mo><mml:mo>}</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <bold><italic>&#x00393;</italic></bold> denotes the 3D sparse representation basis, and it is used to remove the spatial-temporal redundancies between blocks. Though effective, model (4) results in a heavy computational burden. Different from the 3D reconstruction, the distributed reconstruction uses the motion-compensation based prediction technique to expose the spatial-temporal redundancies between blocks. <xref ref-type="fig" rid="F1">Figure 1</xref> shows the mechanism of MH prediction, which is commonly used in distributed reconstruction. MH prediction collects the spatial-temporal neighboring blocks in key frames to construct an MH matrix <bold><italic>H</italic></bold><sub><italic>i,j</italic></sub>. According to the motion vector <bold><italic>v</italic></bold><sub><italic>i</italic></sub> of <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, the motion-aligned windows <bold><italic>W</italic></bold><sub>1</sub> and <bold><italic>W</italic></bold><sub>2</sub> of sizes <italic>W</italic> &#x000D7; <italic>W</italic> are, respectively, located on the previous and the next key frames, and all candidate blocks in <bold><italic>W</italic></bold><sub>1</sub> and <bold><italic>W</italic></bold><sub>2</sub> are extracted as the hypotheses <inline-formula><mml:math id="M8"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>h</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> of <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, producing <bold><italic>H</italic></bold><sub><italic>i,j</italic></sub> &#x0003D; [<bold><italic>h</italic></bold><sub>1</sub>, <bold><italic>h</italic></bold><sub>2</sub>, &#x022EF;&#x02009;, 
<bold><italic>h</italic></bold><sub><italic>T</italic></sub>], in which <italic>T</italic> &#x0003D; <italic>W</italic><sup>2</sup>. By using MH prediction, the distributed reconstruction is modeled as a Least-Squares (LS) problem as follows:
<disp-formula id="E7"><label>(5)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x00175;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>w</mml:mi></mml:mstyle></mml:mrow></mml:munder></mml:mstyle><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>H</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>w</mml:mi></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:mi>&#x003B2;</mml:mi><mml:mo>&#x000B7;</mml:mo><mml:mo>&#x02225;</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x00398;</mml:mi></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:mstyle 
mathvariant="bold-italic"><mml:mi>w</mml:mi></mml:mstyle><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E8"><label>(6)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>H</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x00175;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <bold><italic>&#x00398;</italic></bold> is the Tikhonov matrix, and &#x003B2; is a regularization factor. <bold><italic>&#x00398;</italic></bold> is a diagonal matrix constructed by
<disp-formula id="E9"><label>(7)</label><mml:math id="M11"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x00398;</mml:mi></mml:mstyle><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none none none none none none none none none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>h</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x022F1;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>h</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
With this structure, <bold><italic>&#x00398;</italic></bold> assigns weights of small magnitude to the hypotheses that are most dissimilar from <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>. The LS problem can be solved quickly by the Conjugate Gradient algorithm (Zhang et al., <xref ref-type="bibr" rid="B45">2018</xref>), which significantly reduces the computational complexity of distributed reconstruction. Because it fully exploits the spatial-temporal correlations between blocks, MH prediction enables the distributed reconstruction to provide superior recovery. In summary, distributed reconstruction is a sensible choice for realizing a light decoder while ensuring good recovery at the same time.</p>
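As a concrete illustration of Equations (5)–(7), the following Python sketch projects the hypotheses into the measurement space, builds the diagonal Tikhonov matrix from the hypothesis residuals, and solves the regularized LS problem. It is a minimal sketch under assumed array shapes: a direct solve of the normal equations stands in for the Conjugate Gradient iterations, and the function name is illustrative.

```python
import numpy as np

def mh_reconstruct(y, Phi, H, beta=0.1):
    """Sketch of MH-based LS reconstruction (Equations 5-7); shapes assumed.

    y    : (M,) CS measurements of the current block
    Phi  : (M, N) block measurement matrix
    H    : (N, T) multi-hypothesis matrix, one hypothesis per column
    beta : Tikhonov regularization factor
    """
    A = Phi @ H  # hypotheses projected into the measurement space
    # Equation (7): diagonal Tikhonov matrix of per-hypothesis residual norms,
    # penalizing hypotheses that are dissimilar from the measurements.
    theta = np.diag(np.linalg.norm(y[:, None] - A, axis=0))
    # Normal equations of Equation (5); Conjugate Gradient could replace this.
    w = np.linalg.solve(A.T @ A + beta * (theta.T @ theta), A.T @ y)
    return H @ w  # Equation (6): weighted combination of hypotheses
```

With a small regularization factor the solution approaches the plain least-squares combination of hypotheses, so a block that exactly matches one hypothesis is recovered almost perfectly.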
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Mechanism of Multi-Hypothesis (MH) prediction.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0001.tif"/>
</fig></sec>
<sec>
<title>2.3. Contexts</title>
<p>Compressive Sensing theory indicates that the sparsity <italic>K</italic> of a signal determines the number <italic>M</italic> of CS measurements required for precise recovery. An empirical rule (Becker and Bobin, <xref ref-type="bibr" rid="B5">2011</xref>) is that precise recovery can be achieved if
<disp-formula id="E10"><label>(8)</label><mml:math id="M12"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>M</mml:mi><mml:mo>&#x02265;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:mi>K</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
In block-based CS sampling, this rule can be used to avoid both redundancy and insufficiency of CS measurements for blocks, i.e., guided by its sparsity, each block is allocated an appropriate number of CS measurements. The sparsity is defined as the number of coefficients with significant magnitude in a representation, and there is no strict mathematical formula for computing it. For images, the sparsity can be revealed by simple features, e.g., edges, variance, and gradients, and applying these features to adaptive allocation improves recovery quality. However, such simple features only describe the correlations between pixels and ignore the structures of blocks, so more expressive features are required to improve the efficiency of adaptive allocation. In Romano and Elad (<xref ref-type="bibr" rid="B30">2016</xref>), the self-similarity descriptor (Shechtman and Irani, <xref ref-type="bibr" rid="B32">2007</xref>) is used to extract the contexts of blocks, which represent how similar a central block is to its large surrounding window. Contexts capture both the internal structures of blocks and their external relations, making them a promising feature for revealing the sparsity variation. The following briefly describes how to extract the contexts in an image.</p>
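The rule of Equation (8) can be turned into a per-block measurement budget. The sketch below scales the minimum requirement 4<italic>K</italic> of each block to a fixed frame budget; the proportional-scaling strategy and the helper name are illustrative assumptions, not the paper's exact allocation scheme.

```python
import numpy as np

def allocate_measurements(K, M_total):
    """Hypothetical allocation sketch based on the rule M >= 4K (Equation 8).

    K       : per-block sparsity estimates
    M_total : total measurement budget for the frame
    Returns integer measurement counts per block summing to M_total.
    """
    K = np.asarray(K, dtype=float)
    M = 4.0 * K                          # minimum required by the empirical rule
    M = M * (M_total / M.sum())          # scale proportionally to the budget
    M = np.floor(M).astype(int)
    M[:M_total - M.sum()] += 1           # spread the rounding remainder
    return M
```

A sparser block (larger <italic>K</italic>) thus receives proportionally more measurements, while the frame-level budget stays fixed.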
<p>The context feature expresses the similarities between a central block and the blocks in its large surrounding window. As illustrated in <xref ref-type="fig" rid="F2">Figure 2</xref>, for a central block <bold><italic>x</italic></bold><sub><italic>p</italic></sub> in an image, its similarity weights are computed by
<disp-formula id="E11"><label>(9)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x02200;</mml:mo><mml:mi>q</mml:mi><mml:mo>&#x02208;</mml:mo><mml:msub><mml:mrow><mml:mo>&#x003A9;</mml:mo></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <bold><italic>x</italic></bold><sub><italic>q</italic></sub> denotes the <italic>q</italic>th surrounding block in a neighborhood &#x003A9;<sub><italic>d</italic></sub>(<italic>p</italic>) of size <italic>d</italic> &#x000D7; <italic>d</italic>, and &#x003C3; is a normalization factor. The range of <italic>s</italic><sub><italic>p,q</italic></sub> is [0, 1], in which a large value indicates that the blocks <bold><italic>x</italic></bold><sub><italic>p</italic></sub> and <bold><italic>x</italic></bold><sub><italic>q</italic></sub> are highly similar, and a small value indicates that the two are substantially different. All weights constitute a correlation surface <bold><italic>U</italic></bold><sub><italic>p</italic></sub> &#x0003D; [<italic>s</italic><sub><italic>p,q</italic></sub>|&#x02200;<italic>q</italic> &#x02208; &#x003A9;<sub><italic>d</italic></sub>(<italic>p</italic>)], whose statistics reveal the self-similarity of <bold><italic>x</italic></bold><sub><italic>p</italic></sub>. To measure these statistics, the correlation surface of <bold><italic>x</italic></bold><sub><italic>p</italic></sub> is rearranged into a histogram of <italic>b</italic> bins, whose normalized form is regarded as the context feature <bold><italic>g</italic></bold><sub><italic>p</italic></sub> of <bold><italic>x</italic></bold><sub><italic>p</italic></sub>.</p>
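Equation (9) and the subsequent histogram construction can be sketched in a few lines. Here the blocks are assumed to be pre-flattened vectors, and the window gathering, parameter defaults, and function name are illustrative assumptions.

```python
import numpy as np

def context_feature(x_p, neighbors, sigma=10.0, b=8):
    """Sketch of Equation (9) plus histogram binning.

    x_p       : (N,) central block, flattened
    neighbors : (Q, N) surrounding blocks x_q in the window Omega_d(p)
    sigma     : normalization factor of Equation (9)
    b         : number of histogram bins
    """
    # Similarity weights s_{p,q} in [0, 1], one per surrounding block.
    d2 = np.sum((neighbors - x_p) ** 2, axis=1)
    s = np.exp(-d2 / (2.0 * sigma ** 2))
    # The correlation surface U_p is rearranged into a b-bin histogram,
    # normalized to sum to one, giving the context feature g_p.
    hist, _ = np.histogram(s, bins=b, range=(0.0, 1.0))
    return hist / hist.sum()
```

A block identical to all of its neighbors puts the whole histogram mass into the rightmost bin, while a block far from all of its neighbors concentrates it in the leftmost bin.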
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Illustration on the traditional extraction of contexts.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0002.tif"/>
</fig>
<p>The context feature <bold><italic>g</italic></bold><sub><italic>p</italic></sub> is an empirical distribution of the co-occurrences of <bold><italic>x</italic></bold><sub><italic>p</italic></sub> in its large surroundings, which measures the correlations between <bold><italic>x</italic></bold><sub><italic>p</italic></sub> and its surroundings. When <bold><italic>g</italic></bold><sub><italic>p</italic></sub> is biased toward the left bins, it can be concluded that the majority of <italic>s</italic><sub><italic>p,q</italic></sub> are small, indicating that the block <bold><italic>x</italic></bold><sub><italic>p</italic></sub> is unique, i.e., it originates from a highly textured and non-repetitive area, so its sparsity is relatively high. When <bold><italic>g</italic></bold><sub><italic>p</italic></sub> is biased toward the right bins, most of the <italic>s</italic><sub><italic>p,q</italic></sub> are large, indicating that the block <bold><italic>x</italic></bold><sub><italic>p</italic></sub> has many co-occurrences in its surroundings, i.e., it originates from a large flat area, so its sparsity is low. From the above, we can see that the context feature accurately describes the geometric structure of a block with respect to its surrounding blocks, and it is therefore naturally sensitive to the sparsity variation. However, in CVS, the traditional extraction method is impractical, owing to the unavailability of original pixels or its high computational complexity. It is therefore challenging to extract the context feature by using the CS measurements of blocks.</p></sec></sec>
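The left/right histogram bias just described can be reduced to a single scalar that ranks blocks by likely sparsity. The heuristic below is purely illustrative and not part of the proposed system: it scores a block by one minus the mean similarity implied by its context histogram.

```python
import numpy as np

def sparsity_indicator(g):
    """Illustrative heuristic (not from the paper): map a normalized b-bin
    context histogram g over similarity values in [0, 1] to a score in [0, 1].
    A left-biased histogram (low similarities, unique block) scores high;
    a right-biased one (flat, repetitive block) scores low."""
    g = np.asarray(g, dtype=float)
    b = g.size
    centers = (np.arange(b) + 0.5) / b   # bin centers on the similarity axis
    return 1.0 - float(g @ centers)      # high score = low mean similarity
```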
<sec id="s3">
<title>3. Proposed Context Based CVS System</title>
<sec>
<title>3.1. System Architecture</title>
<p><xref ref-type="fig" rid="F3">Figure 3</xref> shows the architecture of the proposed context-based CVS system. The input video sequence is divided into several GOPs of length <italic>L</italic>, and each GOP<sub><italic>k</italic></sub> is successively encoded as Packet<sub><italic>k</italic></sub>. After receiving this packet, the decoder reconstructs the corresponding <inline-formula><mml:math id="M14"><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mtext>GOP</mml:mtext></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and all reconstructed GOPs are regrouped into the entire video sequence.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Architecture of the proposed context-based Compressive Video Sensing (CVS) system: <bold>(A)</bold> encoder framework, <bold>(B)</bold> decoder framework.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0003.tif"/>
</fig>
<p><xref ref-type="fig" rid="F3">Figure 3A</xref> presents the process of encoding GOP<sub><italic>k</italic></sub>. The key frame <bold><italic>f</italic></bold><sub>1</sub> is split from GOP<sub><italic>k</italic></sub>, and others <inline-formula><mml:math id="M15"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>f</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> are regarded as the non-key frames. The key frame <bold><italic>f</italic></bold><sub>1</sub> and the <italic>i</italic>th non-key frame <bold><italic>f</italic></bold><sub><italic>i</italic></sub> are partitioned into <italic>J</italic> non-overlapping blocks <inline-formula><mml:math id="M16"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula><mml:math id="M17"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> of size <italic>B</italic> &#x000D7; <italic>B</italic>, 
respectively. For the key frame <bold><italic>f</italic></bold><sub>1</sub>, we set a high subrate <italic>S</italic><sub>1</sub> &#x0003D; <italic>S</italic><sub>K</sub> to sample the blocks <inline-formula><mml:math id="M18"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and generate the CS measurements <inline-formula><mml:math id="M19"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> according to Equation (1). 
The blocks <inline-formula><mml:math id="M20"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> in the non-key frame <bold><italic>f</italic></bold><sub><italic>i</italic></sub> are sampled at a low subrate <italic>S</italic><sub><italic>i</italic></sub> &#x0003D; <italic>S</italic><sub>NK</sub>, producing the corresponding CS measurements <inline-formula><mml:math id="M21"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> by Equation (1). For <bold><italic>f</italic></bold><sub>1</sub> and <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, based on the preset subrates, CS measurements are uniformly allocated to each block; however, without considering the structures of blocks, uniform allocation results in either redundancy or insufficiency of CS measurements for some blocks. To improve the efficiency of block-based CS sampling, the core of the encoder is to perform adaptive allocation guided by the contexts of blocks. 
Different from traditional methods, the contexts <bold><italic>U</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>U</italic></bold><sub><italic>i,j</italic></sub> of <bold><italic>x</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> are, respectively, extracted by using the CS measurements <bold><italic>y</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>y</italic></bold><sub><italic>i,j</italic></sub>, which makes the CVS system more practical. After context extraction, according to the contexts <bold><italic>U</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>U</italic></bold><sub><italic>i,j</italic></sub>, the numbers of CS measurements of <bold><italic>x</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> are modified as <italic>M</italic><sub>1,<italic>j</italic></sub> and <italic>M</italic><sub><italic>i,j</italic></sub> by adaptive allocation. 
According to <italic>M</italic><sub>1,<italic>j</italic></sub> and <italic>M</italic><sub><italic>i,j</italic></sub>, by removing the redundancy or supplementing the insufficiency in <bold><italic>y</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>y</italic></bold><sub><italic>i,j</italic></sub>, <bold><italic>x</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> are re-sampled as <inline-formula><mml:math id="M22"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M23"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, respectively. Standard DPCM cannot quantize these adaptive measurements directly, because the number of measurements differs from block to block. 
To overcome this defect of DPCM, we fuse zero padding into DPCM and predictively quantize <inline-formula><mml:math id="M24"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M25"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> as <inline-formula><mml:math id="M26"><mml:msubsup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mtext>q</mml:mtext></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula><mml:math id="M27"><mml:msubsup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mtext>q</mml:mtext></mml:mrow></mml:msubsup></mml:math></inline-formula>. Finally, all quantized CS measurements are encoded as bits by Huffman and packaged as Packet<sub><italic>k</italic></sub>.</p>
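The zero-padding DPCM idea can be sketched as follows: measurement vectors of unequal length are zero-padded to a common length, so each vector can be differentially quantized against the previously reconstructed one. The uniform quantizer, the simple first-order predictor, and the function name are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def zero_padding_dpcm(blocks, step=4.0):
    """Sketch of zero-padding DPCM (assumed uniform quantizer and predictor).

    blocks : list of 1-D measurement vectors with possibly different lengths
    step   : uniform quantization step
    Returns the list of quantized (integer) residual vectors.
    """
    L = max(len(v) for v in blocks)
    prev = np.zeros(L)                      # prediction for the first vector
    out = []
    for v in blocks:
        padded = np.zeros(L)
        padded[:len(v)] = v                 # zero padding to the common length
        q = np.round((padded - prev) / step).astype(int)  # quantized residual
        prev = prev + q * step              # decoder-side reconstruction
        out.append(q)
    return out
```

Because the prediction is formed from the reconstructed (dequantized) vector rather than the original, the encoder and decoder stay in sync, as in conventional closed-loop DPCM.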
<p><xref ref-type="fig" rid="F3">Figure 3B</xref> presents the process of decoding Packet<sub><italic>k</italic></sub>. After unpackaging Packet<sub><italic>k</italic></sub>, the inversions of Huffman coding and zero-padding DPCM are implemented, and the CS measurements of <bold><italic>x</italic></bold><sub>1,<italic>j</italic></sub> and <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> are recovered as <inline-formula><mml:math id="M35"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M36"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, which deviate from their originals <inline-formula><mml:math id="M37"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M38"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> by quantization errors. 
The distributed reconstruction is performed to reconstruct the key frame <bold><italic>f</italic></bold><sub>1</sub> and the non-key frames <inline-formula><mml:math id="M39"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>f</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>. To suppress the blocking artifacts in the reconstructed frames, we realize the recovery of large blocks by merging the CS measurements of the spatially neighboring blocks, so the CS measurements of <bold><italic>f</italic></bold><sub>1</sub> and <bold><italic>f</italic></bold><sub><italic>i</italic></sub> are updated as <bold><italic>z</italic></bold><sub>1,<italic>r</italic></sub> and <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub> for large blocks. Based on <bold><italic>z</italic></bold><sub>1,<italic>r</italic></sub>, the reconstructed key frame <inline-formula><mml:math id="M40"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>f</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is produced by using a linear recovery model, which rapidly recovers each block by a matrix-vector product. 
Taking the previous and the next reconstructed key frames as references, the MH prediction outputs the reconstructed non-key frame <inline-formula><mml:math id="M41"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>f</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> by using <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub>. Finally, all reconstructed frames are combined into <inline-formula><mml:math id="M42"><mml:msub><mml:mrow><mml:mover accent="false"><mml:mrow><mml:mtext>GOP</mml:mtext></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Details of the core parts, including context extraction, measurement allocation, zero-padding DPCM, and distributed reconstruction, are described in the following subsections.</p></sec>
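The key-frame step above recovers each large block by a single matrix-vector product. One plausible instance of such a linear recovery model, shown below, is the least-squares pseudo-inverse of the measurement matrix; the paper does not specify its exact operator, so this choice and the function names are assumptions.

```python
import numpy as np

def linear_recovery_operator(Phi):
    """One plausible linear recovery model: the least-squares pseudo-inverse
    (an assumption; the paper's exact operator may differ). Precomputing R
    once per measurement matrix keeps the decoder light: each block is then
    recovered by a single matrix-vector product."""
    return np.linalg.pinv(Phi)  # (N, M) recovery operator R

def recover_blocks(R, Z):
    """Recover all blocks of a frame at once: each column of Z holds the
    merged CS measurements z of one large block."""
    return R @ Z  # one matrix product recovers every block
```

The pseudo-inverse guarantees that re-measuring the recovered block reproduces the observed measurements exactly, which is the minimum-norm consistency property of linear CS recovery.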
<sec>
<title>3.2. Context Extraction</title>
<p>In the proposed CVS system, the context features are extracted by using the CS measurements of blocks. As illustrated in <xref ref-type="fig" rid="F4">Figure 4</xref>, we compute the correlation surface <bold><italic>U</italic></bold><sub><italic>i,j</italic></sub> of <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> in <bold><italic>f</italic></bold><sub><italic>i</italic></sub> as its contexts, in which <italic>i</italic> &#x0003D; 1, 2, &#x022EF;&#x02009;, <italic>L</italic>. In the surrounding window of size <italic>d</italic><sub>b</sub> &#x000D7; <italic>d</italic><sub>b</sub> centered on <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, the original pixels are unavailable, so we cannot compare the blocks pixel by pixel; instead, we can only use the CS measurements <inline-formula><mml:math id="M43"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:msubsup></mml:math></inline-formula> of the non-overlapping blocks <inline-formula><mml:math id="M44"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:msubsup></mml:math></inline-formula>, in which <inline-formula><mml:math id="M45"><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mtext>b</mml:mtext></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>. According to CS theory, the measurement matrix <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic></sub> satisfies the Restricted Isometry Property (RIP) (Cand&#x000E8;s and Wakin, <xref ref-type="bibr" rid="B7">2008</xref>) for the blocks <inline-formula><mml:math id="M46"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, which implies that all pairwise distances between original blocks are well preserved in the measurement space, i.e.,</p>
<disp-formula id="E12"><label>(10)</label><mml:math id="M47"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x02248;</mml:mo><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>=</mml:mo><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x02200;</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where all blocks share the same measurement matrix <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic></sub> owing to the uniform allocation. Based on Equation (10), the similarity weight between <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub> and <bold><italic>x</italic></bold><sub><italic>i, j</italic>&#x021BA;<italic>n</italic></sub> can be estimated by
<disp-formula id="E15"><label>(11)</label><mml:math id="M50"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo class="qopname">exp</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x02200;</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
All weights constitute the correlation surface <bold><italic>U</italic></bold><sub><italic>i,j</italic></sub> as follows:
<disp-formula id="E16"><label>(12)</label><mml:math id="M51"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>U</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo><mml:mo>&#x02200;</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo><mml:mspace width="0.3em" class="thinspace"/><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
To compactly represent the contexts of <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, we compute the mean <italic>u</italic><sub><italic>i,j</italic></sub> of <bold><italic>U</italic></bold><sub><italic>i,j</italic></sub> as the context feature, i.e.,</p>
<disp-formula id="E17"><label>(13)</label><mml:math id="M52"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x021BA;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
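To make Equations (10)&#x02013;(13) concrete, the following sketch computes the similarity weights and the context feature of one block directly from measurement vectors. It is an illustrative reimplementation under our own naming (context_feature, y_center, y_neighbors, sigma), not the authors' code; the kernel bandwidth sigma is the free parameter of Equation (11).

```python
import numpy as np

def context_feature(y_center, y_neighbors, sigma=1.0):
    """Estimate the context feature u_{i,j} of a block from CS measurements.

    y_center    : (M,) measurement vector of the current block x_{i,j}
    y_neighbors : (N_c, M) measurement vectors of the N_c surrounding blocks
    sigma       : bandwidth of the Gaussian kernel in Eq. (11) (assumed value)
    """
    # Eq. (10): block distances are approximated in the measurement domain
    d2 = np.sum((y_neighbors - y_center) ** 2, axis=1)
    # Eq. (11): similarity weights forming the correlation surface U_{i,j}
    U = np.exp(-d2 / (2.0 * sigma ** 2))
    # Eq. (13): the context feature is the mean of the correlation surface
    return float(U.mean())
```

Identical neighboring measurements give the maximum feature value of 1, while distant neighbors (edge and texture regions) drive the feature toward 0, matching the rule used in subsection 3.3.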
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Illustration of context extraction based on Compressive Sensing (CS) measurements.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0004.tif"/>
</fig>
</sec>
<sec>
<title>3.3. Measurement Allocation</title>
<p>By exploiting the context feature <italic>u</italic><sub><italic>i,j</italic></sub> of <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, we set an appropriate number of CS measurements for <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, removing the redundancy or supplementing the insufficiency in <bold><italic>y</italic></bold><sub><italic>i,j</italic></sub>. The magnitudes of context features are high in smooth regions and low in edge and texture regions, which yields the empirical rule that the context feature is inversely proportional to the block sparsity. Based on this rule, we describe the distribution of the sparsity degrees of blocks by
<disp-formula id="E18"><label>(14)</label><mml:math id="M53"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:mstyle><mml:msubsup><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
According to the present subrate <italic>S</italic><sub><italic>i</italic></sub> of <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, we construct the allocation model of CS measurements for blocks as follows:
<disp-formula id="E19"><label>(15)</label><mml:math id="M54"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder></mml:mstyle><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>s</mml:mtext><mml:mo>.</mml:mo><mml:mtext>t</mml:mtext><mml:mo>.</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" 
accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mi>N</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02264;</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>9</mml:mn><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>b</mml:mtext></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x02115;</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x0002B;</mml:mo></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
where <italic>N</italic> is the total number of pixels in <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, <italic>N</italic><sub>b</sub> is the block length, and <italic>m</italic><sub><italic>i,j</italic></sub> is a positive integer whose upper bound is set to 0.9&#x000B7;<italic>N</italic><sub>b</sub>. Model (15) is solved by <xref ref-type="table" rid="T3">Algorithm 1</xref>, which outputs the final number <italic>M</italic><sub><italic>i,j</italic></sub> of CS measurements for <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>.</p>


<table-wrap position="float" id="T3">
<label>Algorithm 1</label>
<caption><p>Allocating the appropriate numbers of Compressive Sensing (CS) measurements to blocks.</p></caption>
<table frame="hsides" rules="groups">
<tbody>
<tr><td align="left" valign="top"><bold>Require:</bold> <italic>S</italic><sub><italic>i</italic></sub> - Subrate of <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, <italic>P</italic><sub><italic>i,j</italic></sub> - Distribution on the sparsity of blocks <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, <italic>j</italic> &#x0003D; 1, 2, &#x022EF;&#x02009;, <italic>J</italic>, <italic>N</italic> - Total number of pixels in <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, <italic>N</italic><sub>b</sub> - Block length;</td></tr>
<tr><td align="left" valign="top">1: Initial measurement number <inline-formula><mml:math id="M28"><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mtext>Round</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, where Round(&#x000B7;) is a rounding operator;</td></tr>
<tr><td align="left" valign="top">2: Restrict <inline-formula><mml:math id="M29"><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula> to not be larger than 0.9<italic>N</italic><sub>b</sub>, i.e., <inline-formula><mml:math id="M30"><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mtext>Min</mml:mtext><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>9</mml:mn><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mtext>b</mml:mtext></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, in which Min(&#x000B7;) is a minimization operator;</td></tr>
<tr><td align="left" valign="top">3: Set <inline-formula><mml:math id="M31"><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mtext>sup</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mi>N</mml:mi><mml:mo>-</mml:mo><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:munderover><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula>, and <italic>iter</italic> &#x0003D; 0;</td></tr>
<tr><td align="left" valign="top">4: <bold>while</bold> <italic>M</italic><sub>sup</sub> &#x0003E; 0, increment <italic>iter</italic> by 1 <bold>do</bold></td></tr>
<tr><td align="left" valign="top">5: &#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>if</bold> <italic>M</italic><sub>sup</sub> &#x0003C; <italic>J</italic> <bold>then</bold></td></tr>
<tr><td align="left" valign="top">6: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;Randomly select <italic>M</italic><sub>sup</sub> blocks, and their measurement numbers are incremented by 1;</td></tr>
<tr><td align="left" valign="top">7: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;Update <inline-formula><mml:math id="M32"><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula>, and set <inline-formula><mml:math id="M33"><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup></mml:math></inline-formula>;</td></tr>
<tr><td align="left" valign="top">8: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>Break</bold>;</td></tr>
<tr><td align="left" valign="top">9: &#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>else</bold></td></tr>
<tr><td align="left" valign="top">10: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<inline-formula><mml:math id="M34"><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula></td></tr>
<tr><td align="left" valign="top">11: &#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;<italic>M</italic><sub>sup</sub> &#x0003D; <italic>M</italic><sub>sup</sub> &#x02212; <italic>J</italic></td></tr>
<tr><td align="left" valign="top">12: &#x000A0;&#x000A0;&#x000A0;&#x000A0;<bold>end if</bold></td></tr>
<tr><td align="left" valign="top">13: <bold>end while</bold></td></tr>
<tr><td align="left" valign="top">14: <bold>return</bold> <italic>M</italic><sub><italic>i,j</italic></sub>, <italic>j</italic> &#x0003D; 1, 2, &#x022EF;&#x02009;, <italic>J</italic>.</td></tr>
</tbody>
</table>
</table-wrap>
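A minimal Python sketch of Algorithm 1 may clarify the allocation loop. The function name allocate_measurements and the fixed random seed for tie-breaking are our own assumptions; as in the listing above, surplus measurements distributed inside the while loop are not re-clipped against the 0.9&#x000B7;<italic>N</italic><sub>b</sub> bound, and a guard keeping each initial count positive is added for this sketch.

```python
import numpy as np

def allocate_measurements(P, S_i, N, N_b):
    """Sketch of Algorithm 1: distribute S_i*N CS measurements among J blocks.

    P   : (J,) sparsity distribution P_{i,j} from Eq. (14), summing to 1
    S_i : subrate of frame f_i
    N   : total number of pixels in the frame
    N_b : block length (pixels per block); initial counts capped at 0.9*N_b
    """
    J = len(P)
    budget = int(round(S_i * N))
    # Steps 1-2: initial allocation, clipped to the upper bound 0.9*N_b
    m = np.minimum(np.rint(P * budget).astype(int), int(0.9 * N_b))
    m = np.maximum(m, 1)           # keep m_{i,j} positive (guard added here)
    M_sup = budget - int(m.sum())  # Step 3: surplus measurements left over
    rng = np.random.default_rng(0)
    # Steps 4-13: spread the surplus over blocks, one measurement at a time
    while M_sup > 0:
        if M_sup < J:
            picked = rng.choice(J, size=M_sup, replace=False)
            m[picked] += 1         # Step 6: random blocks get one extra
            M_sup = 0
        else:
            m += 1                 # Step 10: every block gets one extra
            M_sup -= J
    return m
```

For a uniform sparsity distribution the initial rounding already consumes the whole budget, so the loop is skipped; for a highly peaked distribution the cap creates a surplus that the loop redistributes.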
</sec>
<sec>
<title>3.4. Zero-Padding DPCM</title>
<p>Due to the adaptive allocation, the lengths of the re-sampled CS measurements <inline-formula><mml:math id="M56"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> vary. Compared with SQ, DPCM provides better rate-distortion performance by incorporating a predictive scheme into the quantization of block-based CS measurements. However, DPCM requires that all blocks have the same number of CS measurements; as a result, it cannot be used directly to quantize <inline-formula><mml:math id="M57"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>. To adapt DPCM to the adaptive allocation, we propose zero-padding DPCM, whose implementation is shown in <xref ref-type="fig" rid="F5">Figure 5</xref>. 
Before inputting <inline-formula><mml:math id="M58"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> into DPCM, we pad zeros at the end of <inline-formula><mml:math id="M59"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> so that its length matches that of the other blocks. After obtaining the de-quantized CS measurements <inline-formula><mml:math id="M60"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, we remove the padded zeros at the end of <inline-formula><mml:math id="M61"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> to recover its original length <italic>M</italic><sub><italic>i,j</italic></sub>. 
By zero padding, each measurement in <inline-formula><mml:math id="M62"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> can be used to predict the corresponding measurement in <inline-formula><mml:math id="M63"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and especially when there is predictive measurement &#x00177;<sub><italic>i, j</italic>&#x02212;1</sub>(<italic>m</italic>) of the <italic>m</italic>-th measurement &#x01EF9;<sub><italic>i,j</italic></sub>(<italic>m</italic>), the residual <inline-formula><mml:math id="M64"><mml:msubsup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mtext>d</mml:mtext></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> can be significantly reduced due to the intrinsic spatial correlation between <inline-formula><mml:math id="M65"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula><mml:math id="M66"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. The rate-distortion curves of the reconstructed <italic>Foreman, Mobile</italic>, and <italic>Football</italic> sequences, obtained when zero-padding DPCM and SQ are, respectively, used to quantize the adaptive CS measurements, are presented in <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure 1</xref>; each curve is measured in terms of the Peak Signal-to-Noise Ratio (PSNR) in dB and the bitrate in bits per pixel (bpp), and the linear recovery algorithm presented in subsection 3.5 is used to recover each video frame. Zero-padding DPCM is competitive with SQ at low bitrates, but as the bitrate increases, its performance gain over SQ becomes increasingly significant. These results show that the efficiency of zero-padding DPCM relies on the correlation between block-based CS measurements. When measurements are insufficient, this correlation is weakened by the padding of excessive zeros, degrading performance; when measurements are sufficient, a high correlation is maintained, and the performance improvement stands out. Overall, zero-padding DPCM is more suitable than SQ for quantizing adaptive measurements.</p>
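The padding, prediction, and un-padding steps above can be sketched as follows, assuming a uniform scalar quantizer with step size step (the exact quantizer used in the paper may differ); the function and variable names are our own.

```python
import numpy as np

def zero_padding_dpcm(blocks, step=4.0):
    """Sketch of zero-padding DPCM over variable-length measurement vectors.

    blocks : list of 1-D arrays, the adaptive CS measurements of the blocks;
             lengths M_{i,j} may differ from block to block.
    step   : uniform quantization step (assumed scalar quantizer).
    Returns the de-quantized measurements with original lengths restored.
    """
    M_max = max(len(b) for b in blocks)
    pred = np.zeros(M_max)            # prediction from the previous block
    recovered = []
    for b in blocks:
        # pad zeros at the end so every block has the same length
        padded = np.concatenate([b, np.zeros(M_max - len(b))])
        residual = padded - pred                # DPCM residual
        q = np.round(residual / step)           # quantize the residual
        dequant = q * step + pred               # de-quantize
        pred = dequant                          # predictor for the next block
        recovered.append(dequant[: len(b)])     # strip the padded zeros
    return recovered
```

Because each block is predicted from its de-quantized neighbor, only the (typically small) residual is quantized, which is where the rate-distortion gain over plain SQ comes from.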
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Illustration on zero-padding Differential Pulse Code Modulation (DPCM).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0005.tif"/>
</fig></sec>
<sec>
<title>3.5. Distributed Reconstruction</title>
<p>At the decoder, the distributed strategy is performed to reconstruct the key frame <bold><italic>f</italic></bold><sub>1</sub> and the non-key frames <inline-formula><mml:math id="M67"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>f</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, in which <bold><italic>f</italic></bold><sub>1</sub> is estimated by a linear recovery model, and <bold><italic>f</italic></bold><sub><italic>i</italic></sub> is produced by MH prediction. To highlight complex structures in the contexts, a small block size is preferred at the encoder. However, a small block size causes serious blocking artifacts because neighboring blocks differ in recovery quality. 
To suppress the blocking artifacts, we merge the CS measurements <inline-formula><mml:math id="M68"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> of the small blocks <inline-formula><mml:math id="M69"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>J</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> into those <inline-formula><mml:math id="M70"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>R</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> of the large blocks <inline-formula><mml:math id="M71"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>R</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, so that sampling operates on small blocks while recovery operates on large blocks. The size <italic>B</italic><sub><italic>lev</italic></sub> &#x000D7; <italic>B</italic><sub><italic>lev</italic></sub> of a large block is set to
<disp-formula id="E21"><label>(16)</label><mml:math id="M72"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msup><mml:mo>&#x000B7;</mml:mo><mml:mi>B</mml:mi><mml:mo>,</mml:mo><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>v</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x022EF;</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <italic>lev</italic> is a positive integer. The number <italic>R</italic> of large blocks is <inline-formula><mml:math id="M73"><mml:mi>N</mml:mi><mml:mo>/</mml:mo><mml:msubsup><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:math></inline-formula>, and it is smaller than the number <italic>J</italic> of small blocks. <xref ref-type="fig" rid="F6">Figure 6</xref> illustrates the block merging when <italic>lev</italic> is set to be 1. The four neighboring blocks <bold><italic>x</italic></bold><sub><italic>i,j</italic></sub>, <bold><italic>x</italic></bold><sub><italic>i,j</italic>&#x0002B;1</sub>, <bold><italic>x</italic></bold><sub><italic>i,j</italic>&#x0002B;<italic>N</italic><sub>1</sub>/<italic>B</italic></sub>, <bold><italic>x</italic></bold><sub><italic>i,j</italic>&#x0002B;1&#x0002B;<italic>N</italic><sub>1</sub>/<italic>B</italic></sub> are merged into a large block <inline-formula><mml:math id="M74"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and their CS measurements <inline-formula><mml:math id="M75"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="M76"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula><mml:math id="M77"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and <inline-formula><mml:math id="M78"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> are spliced into <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub> in rows, i.e.,
<disp-formula id="E22"><label>(17)</label><mml:math id="M79"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>y</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x02248;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x0039B;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E23"><label>(18)</label><mml:math id="M80"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x0039B;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none none none none none none none none none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x003A6;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <bold><italic>&#x0039B;</italic></bold><sub><italic>i,r</italic></sub> is the block-diagonal matrix composed of the block measurement matrices <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic></sub>, <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic>&#x0002B;1</sub>, <bold><italic>&#x003A6;</italic></bold><sub><italic>i,j</italic>&#x0002B;<italic>N</italic><sub>1</sub>/<italic>B</italic></sub>, and <bold><italic>&#x003A6;</italic></bold><sub><italic>i, j</italic>&#x0002B;1&#x0002B;<italic>N</italic><sub>1</sub>/<italic>B</italic></sub>, <italic>N</italic><sub>1</sub> is the total number of rows in <bold><italic>f</italic></bold><sub><italic>i</italic></sub>, and <italic>B</italic> is the small-block size. To make <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub> the CS measurements of <inline-formula><mml:math id="M81"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, we transform <inline-formula><mml:math id="M82"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> as
<disp-formula id="E24"><label>(19)</label><mml:math id="M83"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>/</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>I</mml:mi></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle 
mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <bold><italic>I</italic></bold> is an elementary column transformation matrix. Plugging Equation (19) into Equation (17), we relate <inline-formula><mml:math id="M84"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> to <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub> by
<disp-formula id="E25"><label>(20)</label><mml:math id="M85"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02248;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>&#x0039B;</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>I</mml:mi></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <bold><italic>A</italic></bold><sub><italic>i,r</italic></sub> &#x0003D; <bold><italic>&#x0039B;</italic></bold><sub><italic>i,r</italic></sub>&#x000B7;<bold><italic>I</italic></bold>. According to Equation (20), the large block <inline-formula><mml:math id="M86"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> can be recovered from <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub>. When <italic>lev</italic> is set larger than 1, the block merging is done in a manner similar to the above.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Illustration on block merging when <italic>lev</italic> is set to be 1.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0006.tif"/>
</fig>
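The block merging above can be sketched numerically. The following is a minimal illustration under assumed toy dimensions (it is not the authors' implementation): four neighboring small blocks with Gaussian measurement matrices are merged for <italic>lev</italic> = 1, and the spliced measurements of Equation (17) are checked against <bold><italic>A</italic></bold><sub><italic>i,r</italic></sub>&#x000B7;<bold><italic>x&#x0007E;</italic></bold><sub><italic>i,r</italic></sub> from Equation (20); equality is exact here because no quantization noise is simulated.

```python
import numpy as np

# Toy dimensions (illustrative only; the paper's actual settings differ).
B, M = 4, 6              # small-block size and measurements per small block
n = B * B                # pixels per small block
rng = np.random.default_rng(0)

# Four neighboring small blocks, each with its own Gaussian measurement
# matrix Phi and measurements y = Phi . x.
blocks = [rng.standard_normal((B, B)) for _ in range(4)]
Phis = [rng.standard_normal((M, n)) / np.sqrt(M) for _ in range(4)]
ys = [Phi @ x.ravel() for Phi, x in zip(Phis, blocks)]

# Equation (17): splice the four measurement vectors into z_{i,r}.
z = np.concatenate(ys)

# Equation (18): Lambda_{i,r} is the block-diagonal matrix of the Phis.
Lam = np.zeros((4 * M, 4 * n))
for k, Phi in enumerate(Phis):
    Lam[k * M:(k + 1) * M, k * n:(k + 1) * n] = Phi

# The large block x~_{i,r} is the 2B x 2B merge of the four small blocks.
big = np.block([[blocks[0], blocks[1]], [blocks[2], blocks[3]]])
x_tilde = big.ravel()

# Equation (19): the elementary transformation I reorders the raster-scanned
# large block into the stacked small-block vectors.
def raster_idx(r0, c0):
    rr, cc = np.meshgrid(np.arange(B) + r0, np.arange(B) + c0, indexing="ij")
    return (rr * 2 * B + cc).ravel()

perm = np.concatenate([raster_idx(0, 0), raster_idx(0, B),
                       raster_idx(B, 0), raster_idx(B, B)])
I_perm = np.eye(4 * n)[perm]

# Equation (20): z_{i,r} = Lambda_{i,r} . I . x~_{i,r} = A_{i,r} . x~_{i,r}.
A = Lam @ I_perm
assert np.allclose(z, A @ x_tilde)
```

The check confirms that recovering the large block from <bold><italic>z</italic></bold><sub><italic>i,r</italic></sub> only requires the stacked operator <bold><italic>A</italic></bold><sub><italic>i,r</italic></sub>, so no extra measurements are taken at the encoder.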
<p>After the block merging, we use <inline-formula><mml:math id="M87"><mml:msubsup><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>R</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> to recover the key frame <bold><italic>f</italic></bold><sub>1</sub>. The block <inline-formula><mml:math id="M88"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of <bold><italic>f</italic></bold><sub>1</sub> is linearly estimated by
<disp-formula id="E26"><label>(21)</label><mml:math id="M89"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>P</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <bold><italic>P</italic></bold><sub>1,<italic>r</italic></sub> is the transformation matrix produced by the following model:
<disp-formula id="E27"><label>(22)</label><mml:math id="M90"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>P</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">min</mml:mo></mml:mrow><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>P</mml:mi></mml:mstyle></mml:mrow></mml:munder></mml:mstyle><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mo>&#x02225;</mml:mo><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mstyle mathvariant="bold-italic"><mml:mi>P</mml:mi></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mo>&#x02225;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <italic>E</italic>[&#x000B7;] denotes the expectation operator. Model (22) outputs the optimal transformation matrix that minimizes the mean square error between <inline-formula><mml:math id="M91"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and its estimator <inline-formula><mml:math id="M92"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>; it is solved by setting the gradient of the objective function to zero, producing
<disp-formula id="E28"><label>(23)</label><mml:math id="M93"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>P</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mtext>T</mml:mtext></mml:mrow></mml:msubsup></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>z</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mtext>T</mml:mtext></mml:mrow></mml:msubsup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
Plugging Equation (20) into Equation (23), we get
<disp-formula id="E29"><label>(24)</label><mml:math id="M94"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>P</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>C</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mtext>xx</mml:mtext></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mtext>T</mml:mtext></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>C</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mtext>xx</mml:mtext></mml:mrow></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>A</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mtext>T</mml:mtext></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="E30"><label>(25)</label><mml:math id="M95"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>C</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mtext>xx</mml:mtext></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mtext>T</mml:mtext></mml:mrow></mml:msubsup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which <bold><italic>Cor</italic></bold><sub>xx</sub> is the auto-correlation matrix of <inline-formula><mml:math id="M96"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and its element <italic>Cor</italic><sub>xx</sub>[<italic>m, n</italic>] is estimated as follows:
<disp-formula id="E31"><label>(26)</label><mml:math id="M97"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>C</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mtext>xx</mml:mtext></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>m</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>.</mml:mo><mml:mn>9</mml:mn><mml:msup><mml:mrow><mml:mn>5</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
in which &#x003B4;<sub><italic>m,n</italic></sub> is the Euclidean distance between two pixels <inline-formula><mml:math id="M98"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M99"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> in <inline-formula><mml:math id="M100"><mml:msub><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle></mml:mrow><mml:mo>&#x0007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. When the subrate is large, the linear recovery model provides excellent visual quality at a low computational cost.</p></sec></sec>
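The linear recovery of Equations (21)-(26) can be illustrated as follows. This is a sketch under simplified assumptions, not the authors' code: a plain Gaussian matrix stands in for <bold><italic>A</italic></bold><sub>1,<italic>r</italic></sub>, pixel positions lie on a <italic>B</italic><sub><italic>lev</italic></sub> &#x000D7; <italic>B</italic><sub><italic>lev</italic></sub> grid, and the block sizes are illustrative. The final check exploits that <bold><italic>A</italic></bold><sub>1,<italic>r</italic></sub>&#x000B7;<bold><italic>P</italic></bold><sub>1,<italic>r</italic></sub> is the identity by Equation (24), so the estimate reproduces the measurements exactly.

```python
import numpy as np

# Illustrative sizes; assumed for this sketch, not taken from the paper.
B_lev = 8                      # large-block size
n = B_lev * B_lev              # pixels in the large block
M = 40                         # CS measurements for the large block
rng = np.random.default_rng(1)

# A Gaussian matrix stands in for A_{1,r} = Lambda_{1,r} . I.
A = rng.standard_normal((M, n)) / np.sqrt(M)

# Equation (26): Cor_xx[m, n] = 0.95 ** delta_{m,n}, with delta the Euclidean
# distance between pixel positions m and n on the B_lev x B_lev grid.
pos = np.stack(np.meshgrid(np.arange(B_lev), np.arange(B_lev),
                           indexing="ij"), axis=-1).reshape(-1, 2)
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
Cor = 0.95 ** dist

# Equation (24): P_{1,r} = Cor_xx . A^T . (A . Cor_xx . A^T)^{-1}
P = Cor @ A.T @ np.linalg.inv(A @ Cor @ A.T)

# Equation (21): linear estimate of the large block from its measurements,
# with a test signal drawn from the assumed correlation model.
x_true = np.linalg.cholesky(Cor + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
z = A @ x_true
x_hat = P @ z

# Consistency check: A . P = I, so the estimate reproduces the measurements.
assert np.allclose(A @ x_hat, z)
```

Because <bold><italic>P</italic></bold><sub>1,<italic>r</italic></sub> can be precomputed once per measurement matrix, recovery reduces to a single matrix-vector product per block, which is what keeps the decoder fast at large subrates.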
<sec id="s4">
<title>4. Experimental Results</title>
<p>We evaluate the proposed CVS system on video sequences with various resolutions, including seven CIF (352 &#x000D7; 288) sequences <italic>Akiyo, Bus, Container, Coastguard, Football, Foreman, Hall</italic>, one WQVGA (416 &#x000D7; 240) sequence <italic>BlowingBubbles</italic>, and one 1080p (1920 &#x000D7; 1080) sequence <italic>ParkScene</italic>. In the proposed CVS system, the window size <italic>d</italic><sub>b</sub> &#x000D7; <italic>d</italic><sub>b</sub> and the normalization factor &#x003C3; for context extraction are set to 11 &#x000D7; 11 and 10, respectively; the window size <italic>W</italic> &#x000D7; <italic>W</italic> and the regularization factor &#x003B2; for MH prediction are set to 21 &#x000D7; 21 and 0.25, respectively; and the measurement matrix is drawn from a Gaussian distribution. First, we discuss the effects of different block sizes on the proposed CVS system. Second, we evaluate the performance improvement resulting from the context extraction. Finally, we compare the proposed CVS system with two state-of-the-art CVS systems, SS-CVS (Li et al., <xref ref-type="bibr" rid="B20">2020</xref>) and MH-RTIK (Chen C. et al., <xref ref-type="bibr" rid="B9">2020</xref>), in terms of rate-distortion performance. PSNR is used to evaluate the quality of the reconstructed video sequences, and the bitrate denotes the average number of bits per pixel used to encode a video sequence. The variation of PSNR with bitrate is called the rate-distortion performance. Computational complexity is measured by execution time. Experiments are implemented in MATLAB on a workstation with a 3.30-GHz CPU and 8 GB of RAM.</p>
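The PSNR metric used throughout this section can be computed in a few lines. A minimal sketch follows (in Python rather than the authors' MATLAB, with the standard 8-bit peak value of 255 assumed):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and its
    reconstruction; `peak` is the maximum pixel value (255 for 8-bit video)."""
    ref = np.asarray(ref, dtype=np.float64)
    rec = np.asarray(rec, dtype=np.float64)
    mse = np.mean((ref - rec) ** 2)          # mean square error
    return 10.0 * np.log10(peak ** 2 / mse)  # PSNR in decibels

# Example: a uniform error of one gray level on a CIF-sized 8-bit frame
# gives 20 * log10(255), about 48.13 dB.
ref = np.zeros((288, 352), dtype=np.uint8)
rec = ref + 1
value = psnr(ref, rec)
```

The per-sequence PSNR reported in rate-distortion curves is then the average of this quantity over all reconstructed frames.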
<sec>
<title>4.1. Effects of Block Sizes</title>
<p>In the proposed CVS system, a small block size is desired at the encoder to highlight complex structures through contexts, whereas a large block size is desired at the decoder to suppress blocking artifacts in the reconstructed video frames. We define a block-size pair (<italic>B, B</italic><sub><italic>lev</italic></sub>), in which <italic>B</italic> and <italic>B</italic><sub><italic>lev</italic></sub> are the block sizes for sampling and recovery, respectively, and evaluate the effects of different block-size pairs on the reconstruction quality of key frames and non-key frames.</p>
<p>First, we select the first frames of <italic>Foreman, BlowingBubbles</italic>, and <italic>ParkScene</italic> sequences as the key frames, which are linearly recovered, and show their rate-distortion curves at different block-size pairs in <xref ref-type="fig" rid="F7">Figure 7</xref>. For the low-resolution <italic>Foreman</italic> and <italic>BlowingBubbles</italic>, the block-size pair (4, 16) achieves higher PSNR values than the others at low bitrates, but the rate-distortion curve of the block-size pair (2, 16) rises rapidly as the bitrate increases, yielding significant PSNR gains over the other block-size pairs. These results indicate that the small blocks used in adaptive allocation and the large blocks used in linear recovery fit together well. For the high-resolution <italic>ParkScene</italic>, when the block size <italic>B</italic> for sampling is set to be too small, e.g., <italic>B</italic> &#x0003D; 2, no block can contain sufficient structures, causing the rate-distortion performance to degenerate as the bitrate increases; however, when a suitable block size for sampling is set, e.g., <italic>B</italic> &#x0003D; 8, significant PSNR gains are achieved.</p>
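The splitting of a frame into small sampling blocks and the regrouping of the same pixels into larger recovery blocks can be sketched as follows (a minimal sketch with hypothetical helper names; it illustrates only the pixel regrouping, not the full sampling pipeline):

```python
import numpy as np

def split_into_blocks(frame: np.ndarray, B: int) -> np.ndarray:
    """Split an H x W frame into non-overlapping B x B blocks (H, W divisible by B)."""
    H, W = frame.shape
    return (frame.reshape(H // B, B, W // B, B)
                 .swapaxes(1, 2)          # group the two block axes together
                 .reshape(-1, B, B))

def merge_blocks(blocks: np.ndarray, H: int, W: int) -> np.ndarray:
    """Reassemble blocks produced by split_into_blocks into an H x W frame."""
    B = blocks.shape[1]
    return (blocks.reshape(H // B, W // B, B, B)
                  .swapaxes(1, 2)         # undo the axis grouping
                  .reshape(H, W))
```

For instance, a 352 &#x000D7; 288 CIF frame split with <italic>B</italic> = 2 yields 25,344 sampling blocks, while regrouping the same pixels with <italic>B</italic><sub><italic>lev</italic></sub> = 16 gives 396 larger blocks for recovery.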
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Rate-distortion curves of the reconstructed key frames in <bold>(A)</bold> <italic>Foreman</italic>, <bold>(B)</bold> <italic>BlowingBubbles</italic>, and <bold>(C)</bold> <italic>ParkScene</italic> sequences at different block-size pairs.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0007.tif"/>
</fig>
<p>Then, we select the second frames of <italic>Foreman, BlowingBubbles</italic>, and <italic>ParkScene</italic> sequences as the non-key frames, which are recovered by MH prediction based on the reconstructed previous and next key frames at the subrate 0.7, and show their rate-distortion curves at different block-size pairs in <xref ref-type="fig" rid="F8">Figure 8</xref>. Similar to the results for key frames, for <italic>Foreman</italic> and <italic>BlowingBubbles</italic>, better rate-distortion performance is achieved when the block-size pair is set to be (2, 16), and for <italic>ParkScene</italic>, in order to prevent the loss of structures, the block size for sampling is appropriately set to be 8.</p>
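The measurement-domain MH prediction of one non-key-frame block can be sketched as follows; this is a simplified sketch assuming a plain Tikhonov regularizer with factor &#x003B2; (MH-RTIK itself iteratively reweights the regularizer, and the function name and shapes here are illustrative assumptions):

```python
import numpy as np

def mh_predict(y: np.ndarray, Phi: np.ndarray,
               hypotheses: np.ndarray, beta: float = 0.25) -> np.ndarray:
    """Predict one block of a non-key frame in the measurement domain.

    y          : (M,) CS measurements of the current block
    Phi        : (M, N) measurement matrix
    hypotheses : (N, K) candidate blocks drawn from the reconstructed
                 previous and next key frames, one per column
    beta       : Tikhonov regularization factor
    """
    A = Phi @ hypotheses                  # project hypotheses into the measurement domain
    K = A.shape[1]
    # Closed-form weights: w = argmin_w ||y - A w||^2 + beta ||w||^2
    w = np.linalg.solve(A.T @ A + beta * np.eye(K), A.T @ y)
    return hypotheses @ w                 # predicted block of N pixels
```

With a noiseless block whose true content appears among the hypotheses, the prediction recovers that block almost exactly when &#x003B2; is small.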
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Rate-distortion curves of the reconstructed non-key frames in <bold>(A)</bold> <italic>Foreman</italic>, <bold>(B)</bold> <italic>BlowingBubbles</italic>, and <bold>(C)</bold> <italic>ParkScene</italic> sequences at different block-size pairs.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0008.tif"/>
</fig>
<p>Given the above, we can see that the adverse effects resulting from the small-block extraction of contexts can be suppressed by block merging at the decoder; therefore, the quality improvement from context-based allocation is further enhanced.</p></sec>
<sec>
<title>4.2. Effects of Contexts</title>
<p>In the proposed CVS system, the contexts are extracted from CS measurements and used to adaptively allocate the CS measurements among blocks, leading to an improvement of reconstruction quality. To verify that the contexts extracted from CS measurements are valid for this quality improvement, we evaluate the effects of different allocation schemes on the rate-distortion performance of the proposed CVS system. The uniform allocation is used as a benchmark, and the adaptive allocation uses the contexts extracted from CS measurements and from original pixels, respectively.</p>
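The adaptive allocation can be sketched as follows; this is a hypothetical scheme that distributes a fixed measurement budget in proportion to per-block context values, and the article's exact allocation rule may differ:

```python
import numpy as np

def allocate_measurements(contexts, total_budget: int, floor: int = 1) -> np.ndarray:
    """Distribute a measurement budget over blocks proportionally to their contexts.

    Assumes at least one strictly positive context value.
    """
    contexts = np.asarray(contexts, dtype=np.float64)
    weights = contexts / contexts.sum()
    alloc = np.maximum(floor, np.round(weights * total_budget).astype(int))
    # Correct rounding drift so the budget is met exactly,
    # adjusting the most structured blocks first.
    drift = total_budget - alloc.sum()
    order = np.argsort(-weights)
    for i in range(abs(drift)):
        alloc[order[i % len(alloc)]] += np.sign(drift)
    return alloc
```

A block with twice the context of another thus receives roughly twice the measurements, while uniform allocation would give every block the same share regardless of structure.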
<p><xref ref-type="fig" rid="F9">Figure 9</xref> shows the rate-distortion curves of the reconstructed key frames when using different allocation schemes, in which the key frames are taken from the first frames of <italic>Foreman, BlowingBubbles</italic>, and <italic>ParkScene</italic> sequences, respectively. It can be seen that adaptive allocation outperforms uniform allocation in PSNR at any bitrate, indicating that contexts contribute to quality improvement. Importantly, the contexts from CS measurements are competitive with those from original pixels, and the performance gaps between them are very small, which means that CS measurements can effectively represent the contexts of blocks.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>Rate-distortion curves of the reconstructed key frames in <bold>(A)</bold> <italic>Foreman</italic>, <bold>(B)</bold> <italic>BlowingBubbles</italic>, and <bold>(C)</bold> <italic>ParkScene</italic> sequences when using different allocation schemes. For <italic>Foreman</italic> and <italic>BlowingBubbles</italic>, the block-size pair is set to be (2, 16), and for <italic>ParkScene</italic>, the block-size is set to be (8, 16).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0009.tif"/>
</fig>
<p><xref ref-type="fig" rid="F10">Figure 10</xref> shows the rate-distortion curves of the reconstructed non-key frames when using different allocation schemes, in which the non-key frames are taken from the second frames of <italic>Foreman, BlowingBubbles</italic>, and <italic>ParkScene</italic> sequences, respectively. It can be seen that the adaptive allocation is still effective for MH prediction, and it significantly improves the rate-distortion performance when compared with uniform allocation. The contexts from CS measurements allocate with an efficiency similar to that of the contexts from original pixels, which proves that the merits of adaptive allocation are maintained in the measurement domain.</p>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Rate-distortion curves of the reconstructed non-key frames in <bold>(A)</bold> <italic>Foreman</italic>, <bold>(B)</bold> <italic>BlowingBubbles</italic>, and <bold>(C)</bold> <italic>ParkScene</italic> sequences when using different allocation schemes. For <italic>Foreman</italic> and <italic>BlowingBubbles</italic>, the block-size pair is set to be (2, 16), and for <italic>ParkScene</italic>, the block-size is set to be (8, 16).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0010.tif"/>
</fig>
<p>The above results indicate that the contexts extracted from CS measurements enable the adaptive allocation to improve the reconstruction quality of the CVS system, which makes the proposed CVS system more suitable for applications with limited resources.</p></sec>
<sec>
<title>4.3. Performance Comparisons</title>
<p>We evaluate the performance of the proposed CVS system by comparing it with the two state-of-the-art CVS systems: SS-CVS (Li et al., <xref ref-type="bibr" rid="B20">2020</xref>) and MH-RTIK (Chen C. et al., <xref ref-type="bibr" rid="B9">2020</xref>). To make a fair comparison, we keep the parameter settings of SS-CVS and MH-RTIK as in their original reports; some important details are repeated as follows:
<list list-type="order">
<list-item><p>SS-CVS: the system consists of one base layer and one enhancement layer; the block size is set to be 16; the length of GOP is 10; the subrate of key frame is set to be 0.9; the dimension of the subspace is 10; the number of subspaces is 50.</p></list-item>
<list-item><p>MH-RTIK: the sub-block extraction is used; the number of hypotheses is 40; the block size is set to be 16; the length of GOP is 2; the subrate of key frame is set to be 0.7.</p></list-item>
</list></p>
<p>In addition, we employ SQ and Huffman coding in SS-CVS and MH-RTIK to compress the CS measurements. For the proposed CVS system, the block-size pair is set to be (2, 16) for CIF and WQVGA sequences and (8, 16) for 1080p sequences, the subrate <italic>S</italic><sub>K</sub> of the key frame is set to be 0.7, the results under the GOP length <italic>L</italic> &#x0003D; 2 are compared with those of MH-RTIK, and the results under the GOP length <italic>L</italic> &#x0003D; 10 are compared with those of SS-CVS.</p>
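The GOP arrangement described above can be sketched as a per-frame subrate schedule (a hypothetical helper; the first frame of each GOP is taken as the key frame):

```python
def frame_subrates(num_frames: int, gop_length: int,
                   s_key: float = 0.7, s_nonkey: float = 0.3) -> list:
    """Assign a sampling subrate to each frame: the first frame of every
    GOP is a key frame sampled at s_key, the rest are sampled at s_nonkey."""
    return [s_key if i % gop_length == 0 else s_nonkey
            for i in range(num_frames)]
```

With a GOP length of 2, every other frame is a key frame, matching the comparison against MH-RTIK; with a GOP length of 10, one frame in ten is a key frame, matching the comparison against SS-CVS.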
<p><xref ref-type="table" rid="T1">Table 1</xref> lists the average PSNR values for the video sequences reconstructed by the proposed CVS system, SS-CVS, and MH-RTIK when the subrate <italic>S</italic><sub>NK</sub> of the non-key frame varies from 0.1 to 0.5. Compared with MH-RTIK, the proposed CVS system achieves obvious PSNR gains at any subrate, e.g., the average PSNR gain is 2.824 dB for the <italic>Foreman</italic> sequence. Compared with SS-CVS, the proposed CVS system also presents higher PSNR values at any subrate, and the gains are especially significant at low subrates, e.g., when the subrate is 0.1, the PSNR gains are 9.82, 13.20, and 19.78 dB for the <italic>ParkScene, BlowingBubble</italic>, and <italic>Foreman</italic> sequences, respectively. <xref ref-type="fig" rid="F11">Figures 11</xref>, <xref ref-type="fig" rid="F12">12</xref> show the rate-distortion curves for the proposed CVS system, MH-RTIK, and SS-CVS. Due to the implementation of zero-padding DPCM, the performance improvement of the proposed CVS system is further enhanced when compared with MH-RTIK and SS-CVS. This objective evaluation indicates that the proposed CVS system can significantly improve the quality of the reconstructed video sequences.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Average Peak Signal-to-Noise Ratio (PSNR) (dB) for reconstructed video sequences by the proposed Compressive Video Sensing (CVS) system, Scalable Structured CVS (SS-CVS) (Li et al., <xref ref-type="bibr" rid="B20">2020</xref>), and Multi-Hypothesis Reweighted TIKhonov (MH-RTIK) (Chen C. et al., <xref ref-type="bibr" rid="B9">2020</xref>) at subrates 0.1 to 0.5.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Sequence</bold></th>
<th valign="top" align="left"><bold>Resolution</bold></th>
<th valign="top" align="left"><bold>Algorithm</bold></th>
<th valign="top" align="center" colspan="5" style="border-bottom: thin solid #000000;"><bold>Subrate</bold> <italic><bold>S</bold></italic><sub><bold>NK</bold></sub></th>
</tr>
<tr>
<th/>
<th/>
<th/>
<th valign="top" align="center"><bold>0.1</bold></th>
<th valign="top" align="center"><bold>0.2</bold></th>
<th valign="top" align="center"><bold>0.3</bold></th>
<th valign="top" align="center"><bold>0.4</bold></th>
<th valign="top" align="center"><bold>0.5</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="8"><bold>GOP Length</bold> <italic><bold>L</bold></italic> <bold>&#x0003D; 2</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Container</italic></td>
<td valign="top" align="left">CIF</td>
<td valign="top" align="left">MH-RTIK</td>
<td valign="top" align="center">33.67</td>
<td valign="top" align="center">34.76</td>
<td valign="top" align="center">35.08</td>
<td valign="top" align="center">35.28</td>
<td valign="top" align="center">35.47</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>38.74</bold></td>
<td valign="top" align="center"><bold>39.92</bold></td>
<td valign="top" align="center"><bold>40.38</bold></td>
<td valign="top" align="center"><bold>40.47</bold></td>
<td valign="top" align="center"><bold>40.61</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Coastguard</italic></td>
<td/>
<td valign="top" align="left">MH-RTIK</td>
<td valign="top" align="center">33.12</td>
<td valign="top" align="center">34.26</td>
<td valign="top" align="center">34.69</td>
<td valign="top" align="center">35.08</td>
<td valign="top" align="center">35.43</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>35.80</bold></td>
<td valign="top" align="center"><bold>37.22</bold></td>
<td valign="top" align="center"><bold>38.30</bold></td>
<td valign="top" align="center"><bold>38.89</bold></td>
<td valign="top" align="center"><bold>39.45</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Hall</italic></td>
<td/>
<td valign="top" align="left">MH-RTIK</td>
<td valign="top" align="center">37.10</td>
<td valign="top" align="center">38.01</td>
<td valign="top" align="center">38.39</td>
<td valign="top" align="center">38.65</td>
<td valign="top" align="center">38.91</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>38.26</bold></td>
<td valign="top" align="center"><bold>39.69</bold></td>
<td valign="top" align="center"><bold>40.82</bold></td>
<td valign="top" align="center"><bold>41.23</bold></td>
<td valign="top" align="center"><bold>41.50</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Foreman</italic></td>
<td/>
<td valign="top" align="left">MH-RTIK</td>
<td valign="top" align="center">36.52</td>
<td valign="top" align="center">37.09</td>
<td valign="top" align="center">37.56</td>
<td valign="top" align="center">37.96</td>
<td valign="top" align="center">38.60</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>38.13</bold></td>
<td valign="top" align="center"><bold>39.66</bold></td>
<td valign="top" align="center"><bold>40.87</bold></td>
<td valign="top" align="center"><bold>41.41</bold></td>
<td valign="top" align="center"><bold>41.78</bold></td>
</tr>
<tr>
<td valign="top" align="left" colspan="8"><bold>GOP Length</bold> <italic><bold>L</bold></italic> <bold>&#x0003D; 10</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Akiyo</italic></td>
<td valign="top" align="left">CIF</td>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">17.70</td>
<td valign="top" align="center">24.80</td>
<td valign="top" align="center">33.06</td>
<td valign="top" align="center">36.55</td>
<td valign="top" align="center">39.23</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>40.75</bold></td>
<td valign="top" align="center"><bold>43.50</bold></td>
<td valign="top" align="center"><bold>45.28</bold></td>
<td valign="top" align="center"><bold>45.09</bold></td>
<td valign="top" align="center"><bold>45.56</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Bus</italic></td>
<td/>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">18.65</td>
<td valign="top" align="center">23.57</td>
<td valign="top" align="center">25.71</td>
<td valign="top" align="center">27.67</td>
<td valign="top" align="center">30.10</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>25.65</bold></td>
<td valign="top" align="center"><bold>38.31</bold></td>
<td valign="top" align="center"><bold>30.97</bold></td>
<td valign="top" align="center"><bold>32.97</bold></td>
<td valign="top" align="center"><bold>34.00</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Football</italic></td>
<td/>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">15.52</td>
<td valign="top" align="center">23.95</td>
<td valign="top" align="center">27.87</td>
<td valign="top" align="center">30.33</td>
<td valign="top" align="center">32.93</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>28.98</bold></td>
<td valign="top" align="center"><bold>32.67</bold></td>
<td valign="top" align="center"><bold>35.78</bold></td>
<td valign="top" align="center"><bold>36.55</bold></td>
<td valign="top" align="center"><bold>37.28</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>Foreman</italic></td>
<td/>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">13.40</td>
<td valign="top" align="center">20.51</td>
<td valign="top" align="center">28.07</td>
<td valign="top" align="center">32.90</td>
<td valign="top" align="center">35.25</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>33.18</bold></td>
<td valign="top" align="center"><bold>36.00</bold></td>
<td valign="top" align="center"><bold>38.55</bold></td>
<td valign="top" align="center"><bold>39.54</bold></td>
<td valign="top" align="center"><bold>40.22</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>BlowingBubble</italic></td>
<td valign="top" align="left">WQVGA</td>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">16.93</td>
<td valign="top" align="center">23.50</td>
<td valign="top" align="center">28.47</td>
<td valign="top" align="center">30.70</td>
<td valign="top" align="center">32.84</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>30.13</bold></td>
<td valign="top" align="center"><bold>32.17</bold></td>
<td valign="top" align="center"><bold>33.58</bold></td>
<td valign="top" align="center"><bold>35.01</bold></td>
<td valign="top" align="center"><bold>35.68</bold></td>
</tr>
<tr>
<td valign="top" align="left"><italic>ParkScene</italic></td>
<td valign="top" align="left">1080P</td>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">23.19</td>
<td valign="top" align="center">30.04</td>
<td valign="top" align="center">33.14</td>
<td valign="top" align="center">35.53</td>
<td valign="top" align="center">36.62</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center"><bold>33.01</bold></td>
<td valign="top" align="center"><bold>35.18</bold></td>
<td valign="top" align="center"><bold>36.79</bold></td>
<td valign="top" align="center"><bold>37.97</bold></td>
<td valign="top" align="center"><bold>38.67</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p>Rate-distortion curves obtained by the proposed CVS system and Multi-Hypothesis Reweighted TIKhonov (MH-RTIK) (Chen C. et al., <xref ref-type="bibr" rid="B9">2020</xref>) for <bold>(A)</bold> <italic>Container</italic>, <bold>(B)</bold> <italic>Coastguard</italic>, <bold>(C)</bold> <italic>Hall</italic>, and <bold>(D)</bold> <italic>Foreman</italic> sequences. Note that the length <italic>L</italic> of GOP is set to be 2.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0011.tif"/>
</fig>
<fig id="F12" position="float">
<label>Figure 12</label>
<caption><p>Rate-distortion curves obtained by the proposed CVS system and Scalable Structured CVS (SS-CVS) (Li et al., <xref ref-type="bibr" rid="B20">2020</xref>) for <bold>(A)</bold> <italic>Akiyo</italic>, <bold>(B)</bold> <italic>Bus</italic>, <bold>(C)</bold> <italic>Football</italic>, <bold>(D)</bold> <italic>Foreman</italic>, <bold>(E)</bold> <italic>BlowingBubble</italic>, and <bold>(F)</bold> <italic>ParkScene</italic> sequences. Note that the length <italic>L</italic> of GOP is set to be 10.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpls-13-849606-g0012.tif"/>
</fig>
<p><xref ref-type="table" rid="T2">Table 2</xref> lists the average encoding time (s/frame) and decoding time (s/frame) on video sequences with different resolutions for the proposed CVS system, SS-CVS, and MH-RTIK. We compute the average execution time over the range [0.1, 0.5] of the subrate <italic>S</italic><sub>NK</sub> for the proposed CVS system and compare it with that of MH-RTIK for CIF sequences. The encoding of the proposed CVS system is slowed down by the context-based adaptive allocation, and its encoding time of 0.63 s per frame is larger than that of MH-RTIK. Assisted by the simple linear recovery, the proposed CVS system reduces the decoding complexity and costs only 4.48 s to reconstruct a video frame, whereas MH-RTIK requires 19.34 s per frame. At the subrate <italic>S</italic><sub>NK</sub> &#x0003D; 0.6, the execution time of the proposed CVS algorithm is compared with that of SS-CVS for the CIF, WQVGA, and 1080p video sequences, respectively. Compared with SS-CVS, the proposed CVS system costs less encoding time, and its encoding time does not dramatically increase as the resolution increases, e.g., for the 1080p sequence, the proposed CVS system costs only 1.83 s per frame, but SS-CVS costs 108.10 s. In SS-CVS, the subspace clustering and the basis derivation are implemented at the encoder, and they incur higher encoding costs than the adaptive allocation in the proposed CVS system. The proposed CVS system also costs less decoding time than SS-CVS, and its decoding cost grows more slowly, e.g., for the 1080p sequence, the proposed CVS system costs 162.47 s per frame, while SS-CVS costs 401.8 s. The heavy computational burden of SS-CVS derives from the non-linear subspace learning, whereas the decoding complexity of the proposed CVS system remains limited thanks to the linear recovery and prediction. From the above, we can see that the proposed CVS system keeps a low computational complexity while providing better rate-distortion performance.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Average encoding time (s/frame) and decoding time (s/frame) on video sequences with different resolutions for the proposed CVS system, SS-CVS (Li et al., <xref ref-type="bibr" rid="B20">2020</xref>), and MH-RTIK (Chen C. et al., <xref ref-type="bibr" rid="B9">2020</xref>).</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Resolution</bold></th>
<th valign="top" align="left"><bold>Algorithm</bold></th>
<th valign="top" align="center"><bold>Encoding Time (s/frame)</bold></th>
<th valign="top" align="center"><bold>Decoding Time (s/frame)</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="4"><bold>Average on Subrates</bold> <italic><bold>S</bold></italic><sub><bold>NK</bold></sub> <bold>0.1 to 0.5</bold></td>
</tr>
<tr>
<td valign="top" align="left">CIF</td>
<td valign="top" align="left">MH-RTIK</td>
<td valign="top" align="center">0.17</td>
<td valign="top" align="center">19.34</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">4.48</td>
</tr>
<tr>
<td valign="top" align="left" colspan="4"><bold>Subrate <italic>S</italic><sub>NK</sub> &#x0003D; 0.6</bold></td>
</tr>
<tr>
<td valign="top" align="left">CIF</td>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">5.40</td>
<td valign="top" align="center">21.22</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">7.79</td>
</tr>
<tr>
<td valign="top" align="left">WQVGA</td>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">4.90</td>
<td valign="top" align="center">17.23</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">7.69</td>
</tr>
<tr>
<td valign="top" align="left">1080P</td>
<td valign="top" align="left">SS-CVS</td>
<td valign="top" align="center">108.10</td>
<td valign="top" align="center">401.8</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Proposed</td>
<td valign="top" align="center">1.83</td>
<td valign="top" align="center">162.47</td>
</tr>
</tbody>
</table>
</table-wrap></sec></sec>
<sec sec-type="conclusions" id="s5">
<title>5. Conclusion</title>
<p>In this article, a context-based CVS system is proposed to improve the visual quality of the reconstructed video sequences. At the encoder, the CS measurements are adaptively allocated among blocks according to the contexts of video frames. Innovatively, the contexts are extracted from CS measurements. Although the extraction of contexts is independent of the original pixels, these contexts can still reveal the structural complexity of each block well. To guarantee better rate-distortion performance, the zero-padding DPCM is proposed to quantize these adaptive measurements. At the decoder, the key frames are reconstructed by linear recovery, and the non-key frames are reconstructed by MH prediction. Thanks to the effectiveness of the context-based adaptive allocation, these simple recovery schemes still provide satisfactory visual quality. Experimental results show that the proposed CVS system improves the rate-distortion performance when compared with two state-of-the-art CVS systems, MH-RTIK and SS-CVS, while guaranteeing a low computational complexity.</p>
<p>As the research in this article is exploratory, there are many intriguing questions that future work should consider. First, the estimation of block sparsity should be analyzed mathematically. Second, we will investigate how to fuse the quantization into the adaptive allocation. More importantly, we will deploy the adaptive CVS system on an actual hardware platform.</p></sec>
<sec sec-type="data-availability" id="s6">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article/<xref ref-type="supplementary-material" rid="SM1">Supplementary Material</xref>; further inquiries can be directed to the corresponding author/s.</p></sec>
<sec id="s7">
<title>Author Contributions</title>
<p>RL: designed the study and drafted the manuscript. YY: conducted experiments and analyzed the data. FS: critically reviewed and improved the manuscript. All authors have read and approved the final version of the manuscript.</p></sec>
<sec sec-type="funding-information" id="s8">
<title>Funding</title>
<p>This work was supported in part by the Project of Science and Technology, Department of Henan Province in China (212102210106), National Natural Science Foundation of China (31872704), Innovation Team Support Plan of University Science and Technology of Henan Province in China (19IRTSTHN014), and Guangxi Key Laboratory of Wireless Wideband Communication and Signal Processing of China.</p></sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p></sec>
</body>
<back>
<sec sec-type="supplementary-material" id="s10">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fpls.2022.849606/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fpls.2022.849606/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Akila</surname> <given-names>I.</given-names></name> <name><surname>Sivakumar</surname> <given-names>A.</given-names></name> <name><surname>Swaminathan</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>Automation in plant growth monitoring using high-precision image classification and virtual height measurement techniques</article-title>, in <source>2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS)</source> (<publisher-loc>Coimbatore</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Azghani</surname> <given-names>M.</given-names></name> <name><surname>Karimi</surname> <given-names>M.</given-names></name> <name><surname>Marvasti</surname> <given-names>F.</given-names></name></person-group> (<year>2016</year>). <article-title>Multihypothesis compressed video sensing technique</article-title>. <source>IEEE Trans. Circuits Syst. Video Technol.</source> <volume>26</volume>, <fpage>627</fpage>&#x02013;<lpage>635</lpage>. <pub-id pub-id-type="doi">10.1109/TCSVT.2015.2418586</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baraniuk</surname> <given-names>R. G.</given-names></name></person-group> (<year>2007</year>). <article-title>Compressive sensing</article-title>. <source>IEEE Signal Process. Mag.</source> <volume>24</volume>, <fpage>118</fpage>&#x02013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2007.4286571</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baraniuk</surname> <given-names>R. G.</given-names></name> <name><surname>Goldstein</surname> <given-names>T.</given-names></name> <name><surname>Sankaranarayanan</surname> <given-names>A. C.</given-names></name> <name><surname>Studer</surname> <given-names>C.</given-names></name> <name><surname>Veeraraghavan</surname> <given-names>A.</given-names></name> <name><surname>Wakin</surname> <given-names>M. B.</given-names></name></person-group> (<year>2017</year>). <article-title>Compressive video sensing: algorithms, architectures, and applications</article-title>. <source>IEEE Signal Process. Mag.</source> <volume>34</volume>, <fpage>52</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2016.2602099</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Becker</surname> <given-names>S.</given-names></name> <name><surname>Bobin</surname> <given-names>J.</given-names></name> <name><surname>Cand&#x000E8;s</surname> <given-names>E. J.</given-names></name></person-group> (<year>2011</year>). <article-title>NESTA: a fast and accurate first-order method for sparse recovery</article-title>. <source>SIAM J. Imag. Sci.</source> <volume>4</volume>, <fpage>1</fpage>&#x02013;<lpage>39</lpage>. <pub-id pub-id-type="doi">10.1137/090756855</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bigot</surname> <given-names>J.</given-names></name> <name><surname>Boyer</surname> <given-names>C.</given-names></name> <name><surname>Weiss</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>An analysis of block sampling strategies in compressed sensing</article-title>. <source>IEEE Trans. Inf. Theory</source> <volume>62</volume>, <fpage>2125</fpage>&#x02013;<lpage>2139</lpage>. <pub-id pub-id-type="doi">10.1109/TIT.2016.2524628</pub-id><pub-id pub-id-type="pmid">28921631</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cand&#x000E8;s</surname> <given-names>E. J.</given-names></name> <name><surname>Wakin</surname> <given-names>M. B.</given-names></name></person-group> (<year>2008</year>). <article-title>An introduction to compressive sampling</article-title>. <source>IEEE Signal Process. Mag.</source> <volume>25</volume>, <fpage>21</fpage>&#x02013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2007.914731</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Tramel</surname> <given-names>E. W.</given-names></name> <name><surname>Fowler</surname> <given-names>J. E.</given-names></name></person-group> (<year>2011</year>). <article-title>Compressed-sensing recovery of images and video using multihypothesis predictions</article-title>, in <source>2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR)</source> (<publisher-loc>Pacific Grove, CA</publisher-loc>), <fpage>1193</fpage>&#x02013;<lpage>1198</lpage>.</citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Zhou</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>P.</given-names></name> <name><surname>Zhang</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Iterative reweighted Tikhonov-regularized multihypothesis prediction scheme for distributed compressive video sensing</article-title>. <source>IEEE Trans. Circuits Syst. Video Technol.</source> <volume>30</volume>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1109/TCSVT.2018.2886310</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Huang</surname> <given-names>T.-Z.</given-names></name> <name><surname>He</surname> <given-names>W.</given-names></name> <name><surname>Yokoya</surname> <given-names>N.</given-names></name> <name><surname>Zhao</surname> <given-names>X.-L.</given-names></name></person-group> (<year>2020</year>). <article-title>Hyperspectral image compressive sensing reconstruction using subspace-based nonlocal tensor ring decomposition</article-title>. <source>IEEE Trans. Image Process.</source> <volume>29</volume>, <fpage>6813</fpage>&#x02013;<lpage>6828</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2020.2994411</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Mao</surname> <given-names>Y.</given-names></name> <name><surname>Fan</surname> <given-names>J.</given-names></name> <name><surname>Suo</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Sinusoidal sampling enhanced compressive camera for high speed imaging</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>43</volume>, <fpage>1380</fpage>&#x02013;<lpage>1393</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2019.2946567</pub-id><pub-id pub-id-type="pmid">31603813</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Do</surname> <given-names>T. T.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Nguyen</surname> <given-names>D. T.</given-names></name> <name><surname>Nguyen</surname> <given-names>N.</given-names></name> <name><surname>Gan</surname> <given-names>L.</given-names></name> <name><surname>Tran</surname> <given-names>T. D.</given-names></name></person-group> (<year>2009</year>). <article-title>Distributed compressed video sensing</article-title>, in <source>2009 16th IEEE International Conference on Image Processing (ICIP)</source> (<publisher-loc>Baltimore, MD</publisher-loc>), <fpage>1393</fpage>&#x02013;<lpage>1396</lpage>.</citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Do</surname> <given-names>T. T.</given-names></name> <name><surname>Gan</surname> <given-names>L.</given-names></name> <name><surname>Nguyen</surname> <given-names>N. H.</given-names></name> <name><surname>Tran</surname> <given-names>T. D.</given-names></name></person-group> (<year>2012</year>). <article-title>Fast and efficient compressive sensing using structurally random matrices</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>60</volume>, <fpage>139</fpage>&#x02013;<lpage>154</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2011.2170977</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gan</surname> <given-names>L.</given-names></name></person-group> (<year>2007</year>). <article-title>Block compressed sensing of natural images</article-title>, in <source>2007 15th International Conference on Digital Signal Processing</source> (<publisher-loc>Cardiff</publisher-loc>), <fpage>403</fpage>&#x02013;<lpage>406</lpage>.</citation></ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Che</surname> <given-names>W.</given-names></name> <name><surname>Fan</surname> <given-names>X.</given-names></name> <name><surname>Zhao</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>Block-based compressive sensing coding of natural images by local structural measurement matrix</article-title>, in <source>2015 Data Compression Conference</source> (<publisher-loc>Snowbird, UT</publisher-loc>), <fpage>133</fpage>&#x02013;<lpage>142</lpage>.</citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Girod</surname> <given-names>B.</given-names></name> <name><surname>Aaron</surname> <given-names>A.</given-names></name> <name><surname>Rane</surname> <given-names>S.</given-names></name> <name><surname>Rebollo-Monedero</surname> <given-names>D.</given-names></name></person-group> (<year>2005</year>). <article-title>Distributed video coding</article-title>. <source>Proc. IEEE</source> <volume>93</volume>, <fpage>71</fpage>&#x02013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1109/JPROC.2004.839619</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grimblatt</surname> <given-names>V.</given-names></name> <name><surname>J&#x000E9;go</surname> <given-names>C.</given-names></name> <name><surname>Ferr&#x000E9;</surname> <given-names>G.</given-names></name> <name><surname>Rivet</surname> <given-names>F.</given-names></name></person-group> (<year>2021</year>). <article-title>How to feed a growing population&#x02014;an IoT approach to crop health and growth</article-title>. <source>IEEE J. Emerg. Sel. Top. Circuits Syst.</source> <volume>11</volume>, <fpage>435</fpage>&#x02013;<lpage>448</lpage>. <pub-id pub-id-type="doi">10.1109/JETCAS.2021.3099778</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>L.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Hao</surname> <given-names>H.</given-names></name> <name><surname>Han</surname> <given-names>J.</given-names></name> <name><surname>Liao</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Growth monitoring and planting decision supporting for pear during the whole growth stage based on pie-landscape system</article-title>, in <source>2018 7th International Conference on Agro-geoinformatics (Agro-geoinformatics)</source> (<publisher-loc>Hangzhou</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>James</surname> <given-names>J.</given-names></name> <name><surname>Maheshwar P</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Plant growth monitoring system, with dynamic user-interface</article-title>, in <source>2016 IEEE Region 10 Humanitarian Technology Conference (R10-HTC)</source> (<publisher-loc>Agra</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>5</lpage>.</citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Dai</surname> <given-names>W.</given-names></name> <name><surname>Zou</surname> <given-names>J.</given-names></name> <name><surname>Xiong</surname> <given-names>H.</given-names></name> <name><surname>Zheng</surname> <given-names>Y. F.</given-names></name></person-group> (<year>2020</year>). <article-title>Scalable structured compressive video sampling with hierarchical subspace learning</article-title>. <source>IEEE Trans. Circuits Syst. Video Technol.</source> <volume>30</volume>, <fpage>3528</fpage>&#x02013;<lpage>3543</lpage>. <pub-id pub-id-type="doi">10.1109/TCSVT.2019.2939370</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Yuan</surname> <given-names>X.</given-names></name> <name><surname>Suo</surname> <given-names>J.</given-names></name> <name><surname>Brady</surname> <given-names>D. J.</given-names></name> <name><surname>Dai</surname> <given-names>Q.</given-names></name></person-group> (<year>2019</year>). <article-title>Rank minimization for snapshot compressive imaging</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>41</volume>, <fpage>2990</fpage>&#x02013;<lpage>3006</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2018.2873587</pub-id><pub-id pub-id-type="pmid">30295611</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mun</surname> <given-names>S.</given-names></name> <name><surname>Fowler</surname> <given-names>J. E.</given-names></name></person-group> (<year>2012</year>). <article-title>DPCM for quantized block-based compressed sensing of images</article-title>, in <source>2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO)</source> (<publisher-loc>Bucharest</publisher-loc>), <fpage>1424</fpage>&#x02013;<lpage>1428</lpage>.</citation></ref>
<ref id="B23">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Okayasu</surname> <given-names>T.</given-names></name> <name><surname>Nugroho</surname> <given-names>A. P.</given-names></name> <name><surname>Sakai</surname> <given-names>A.</given-names></name> <name><surname>Arita</surname> <given-names>D.</given-names></name> <name><surname>Yoshinaga</surname> <given-names>T.</given-names></name> <name><surname>Taniguchi</surname> <given-names>R.-I.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Affordable field environmental monitoring and plant growth measurement system for smart agriculture</article-title>, in <source>2017 Eleventh International Conference on Sensing Technology (ICST)</source> (<publisher-loc>Sydney, NSW</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palangi</surname> <given-names>H.</given-names></name> <name><surname>Ward</surname> <given-names>R.</given-names></name> <name><surname>Deng</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>Distributed compressive sensing: a deep learning approach</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>64</volume>, <fpage>4504</fpage>&#x02013;<lpage>4518</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2016.2557301</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peng</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>M.</given-names></name> <name><surname>Zhao</surname> <given-names>G.</given-names></name> <name><surname>Cao</surname> <given-names>G.</given-names></name></person-group> (<year>2022</year>). <article-title>Binocular-vision-based structure from motion for 3-D reconstruction of plants</article-title>. <source>IEEE Geosci. Remote Sens. Lett.</source> <volume>19</volume>, <fpage>1</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1109/LGRS.2021.3105106</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Piermattei</surname> <given-names>L.</given-names></name> <name><surname>Karel</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Wieser</surname> <given-names>M.</given-names></name> <name><surname>Mokros</surname> <given-names>M.</given-names></name> <name><surname>Surov&#x000FD;</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Terrestrial structure from motion photogrammetry for deriving forest inventory data</article-title>. <source>Remote Sens.</source> <volume>11</volume>, <fpage>950</fpage>. <pub-id pub-id-type="doi">10.3390/rs11080950</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Prades-Nebot</surname> <given-names>J.</given-names></name> <name><surname>Ma</surname> <given-names>Y.</given-names></name> <name><surname>Huang</surname> <given-names>T.</given-names></name></person-group> (<year>2009</year>). <article-title>Distributed video coding using compressive sampling</article-title>, in <source>2009 Picture Coding Symposium</source> (<publisher-loc>Chicago, IL</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qiu</surname> <given-names>W.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name> <name><surname>Zhao</surname> <given-names>H.</given-names></name> <name><surname>Fu</surname> <given-names>Q.</given-names></name></person-group> (<year>2015</year>). <article-title>Three-dimensional sparse turntable microwave imaging based on compressive sensing</article-title>. <source>IEEE Geosci. Remote Sens. Lett.</source> <volume>12</volume>, <fpage>826</fpage>&#x02013;<lpage>830</lpage>. <pub-id pub-id-type="doi">10.1109/LGRS.2014.2363238</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rayhana</surname> <given-names>R.</given-names></name> <name><surname>Xiao</surname> <given-names>G. G.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name></person-group> (<year>2021</year>). <article-title>Printed sensor technologies for monitoring applications in smart farming: a review</article-title>. <source>IEEE Trans. Instrum. Meas.</source> <volume>70</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1109/TIM.2021.3112234</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Romano</surname> <given-names>Y.</given-names></name> <name><surname>Elad</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Con-patch: when a patch meets its context</article-title>. <source>IEEE Trans. Image Process.</source> <volume>25</volume>, <fpage>3967</fpage>&#x02013;<lpage>3978</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2016.2576402</pub-id><pub-id pub-id-type="pmid">27295669</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sajith</surname> <given-names>V. V. V.</given-names></name> <name><surname>Gopalakrishnan</surname> <given-names>E. A.</given-names></name> <name><surname>Sowmya</surname> <given-names>V.</given-names></name> <name><surname>Soman</surname> <given-names>K. P.</given-names></name></person-group> (<year>2019</year>). <article-title>A complex network approach for plant growth analysis using images</article-title>, in <source>2019 International Conference on Communication and Signal Processing (ICCSP)</source> (<publisher-loc>Chennai</publisher-loc>), <fpage>0249</fpage>&#x02013;<lpage>0253</lpage>.</citation></ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Shechtman</surname> <given-names>E.</given-names></name> <name><surname>Irani</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Matching local self-similarities across images and videos</article-title>, in <source>2007 IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Minneapolis, MN</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Somov</surname> <given-names>A.</given-names></name> <name><surname>Shadrin</surname> <given-names>D.</given-names></name> <name><surname>Fastovets</surname> <given-names>I.</given-names></name> <name><surname>Nikitin</surname> <given-names>A.</given-names></name> <name><surname>Matveev</surname> <given-names>S.</given-names></name> <name><surname>Seledets</surname> <given-names>I.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Pervasive agriculture: IoT-enabled greenhouse for plant growth control</article-title>. <source>IEEE Pervasive Comput.</source> <volume>17</volume>, <fpage>65</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1109/MPRV.2018.2873849</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sullivan</surname> <given-names>G. J.</given-names></name> <name><surname>Ohm</surname> <given-names>J.-R.</given-names></name> <name><surname>Han</surname> <given-names>W.-J.</given-names></name> <name><surname>Wiegand</surname> <given-names>T.</given-names></name></person-group> (<year>2012</year>). <article-title>Overview of the high efficiency video coding (HEVC) standard</article-title>. <source>IEEE Trans. Circuits Syst. Video Technol.</source> <volume>22</volume>, <fpage>1649</fpage>&#x02013;<lpage>1668</lpage>. <pub-id pub-id-type="doi">10.1109/TCSVT.2012.2221191</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tachella</surname> <given-names>J.</given-names></name> <name><surname>Altmann</surname> <given-names>Y.</given-names></name> <name><surname>M&#x000E1;rquez</surname> <given-names>M.</given-names></name> <name><surname>Arguello-Fuentes</surname> <given-names>H.</given-names></name> <name><surname>Tourneret</surname> <given-names>J.-Y.</given-names></name> <name><surname>McLaughlin</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Bayesian 3D reconstruction of subsampled multispectral single-photon lidar signals</article-title>. <source>IEEE Trans. Comput. Imag.</source> <volume>6</volume>, <fpage>208</fpage>&#x02013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1109/TCI.2019.2945204</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taimori</surname> <given-names>A.</given-names></name> <name><surname>Marvasti</surname> <given-names>F.</given-names></name></person-group> (<year>2018</year>). <article-title>Adaptive sparse image sampling and recovery</article-title>. <source>IEEE Trans. Comput. Imag.</source> <volume>4</volume>, <fpage>311</fpage>&#x02013;<lpage>325</lpage>. <pub-id pub-id-type="doi">10.1109/TCI.2018.2833625</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tramel</surname> <given-names>E. W.</given-names></name> <name><surname>Fowler</surname> <given-names>J. E.</given-names></name></person-group> (<year>2011</year>). <article-title>Video compressed sensing with multihypothesis</article-title>, in <source>2011 Data Compression Conference</source> (<publisher-loc>Snowbird, UT</publisher-loc>), <fpage>193</fpage>&#x02013;<lpage>202</lpage>.</citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tran</surname> <given-names>D. T.</given-names></name> <name><surname>Yama&#x000E7;</surname> <given-names>M.</given-names></name> <name><surname>Degerli</surname> <given-names>A.</given-names></name> <name><surname>Gabbouj</surname> <given-names>M.</given-names></name> <name><surname>Iosifidis</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Multilinear compressive learning</article-title>. <source>IEEE Trans. Neural Netw. Learn. Syst.</source> <volume>32</volume>, <fpage>1512</fpage>&#x02013;<lpage>1524</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2020.2984831</pub-id><pub-id pub-id-type="pmid">32310801</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Trevisi</surname> <given-names>M.</given-names></name> <name><surname>Akbari</surname> <given-names>A.</given-names></name> <name><surname>Trocan</surname> <given-names>M.</given-names></name> <name><surname>Rodr&#x000ED;guez-V&#x000E1;zquez</surname> <given-names>A.</given-names></name> <name><surname>Carmona-Gal&#x000E1;n</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Compressive imaging using RIP-compliant CMOS imager architecture and Landweber reconstruction</article-title>. <source>IEEE Trans. Circuits Syst. Video Technol.</source> <volume>30</volume>, <fpage>387</fpage>&#x02013;<lpage>399</lpage>. <pub-id pub-id-type="doi">10.1109/TCSVT.2019.2892178</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Unde</surname> <given-names>A. S.</given-names></name> <name><surname>Pattathil</surname> <given-names>D. P.</given-names></name></person-group> (<year>2020</year>). <article-title>Adaptive compressive video coding for embedded camera sensors: compressed domain motion and measurements estimation</article-title>. <source>IEEE Trans. Mob. Comput.</source> <volume>19</volume>, <fpage>2250</fpage>&#x02013;<lpage>2263</lpage>. <pub-id pub-id-type="doi">10.1109/TMC.2019.2926271</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Xu</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <article-title>ADMM-CSNet: a deep learning approach for image compressive sensing</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell.</source> <volume>42</volume>, <fpage>521</fpage>&#x02013;<lpage>538</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2018.2883941</pub-id><pub-id pub-id-type="pmid">30507495</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name></person-group> (<year>2010</year>). <article-title>Saliency-based compressive sampling for image signals</article-title>. <source>IEEE Signal Process. Lett.</source> <volume>17</volume>, <fpage>973</fpage>&#x02013;<lpage>976</lpage>. <pub-id pub-id-type="doi">10.1109/LSP.2010.2080673</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zammit</surname> <given-names>J.</given-names></name> <name><surname>Wassell</surname> <given-names>I. J.</given-names></name></person-group> (<year>2020</year>). <article-title>Adaptive block compressive sensing: toward a real-time and low-complexity implementation</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>120999</fpage>&#x02013;<lpage>121013</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3006861</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Zhao</surname> <given-names>D.</given-names></name> <name><surname>Jiang</surname> <given-names>F.</given-names></name></person-group> (<year>2013</year>). <article-title>Spatially directional predictive coding for block-based compressive sensing of natural images</article-title>, in <source>2013 IEEE International Conference on Image Processing</source> (<publisher-loc>Melbourne, VIC</publisher-loc>), <fpage>1021</fpage>&#x02013;<lpage>1025</lpage>.</citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>The kernel conjugate gradient algorithms</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>66</volume>, <fpage>4377</fpage>&#x02013;<lpage>4387</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2018.2853109</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>P.</given-names></name> <name><surname>Gan</surname> <given-names>L.</given-names></name> <name><surname>Sun</surname> <given-names>S.</given-names></name> <name><surname>Ling</surname> <given-names>C.</given-names></name></person-group> (<year>2015</year>). <article-title>Modulated unit-norm tight frames for compressed sensing</article-title>. <source>IEEE Trans. Signal Process.</source> <volume>63</volume>, <fpage>3974</fpage>&#x02013;<lpage>3985</lpage>. <pub-id pub-id-type="doi">10.1109/TSP.2015.2425809</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>R.</given-names></name> <name><surname>Wu</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Jiao</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>High-performance distributed compressive video sensing: jointly exploiting the HEVC motion estimation and the &#x02113;<sub>1</sub>-&#x02113;<sub>1</sub> reconstruction</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>31306</fpage>&#x02013;<lpage>31316</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2973392</pub-id></citation></ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Xie</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>W.</given-names></name> <name><surname>Pan</surname> <given-names>Q.</given-names></name></person-group> (<year>2020</year>). <article-title>A hybrid-3D convolutional network for video compressive sensing</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>20503</fpage>&#x02013;<lpage>20513</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2969290</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhen</surname> <given-names>C.</given-names></name> <name><surname>De-rong</surname> <given-names>C.</given-names></name> <name><surname>Jiu-lu</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <article-title>A deep learning based distributed compressive video sensing reconstruction algorithm for small reconnaissance UAV</article-title>, in <source>2020 3rd International Conference on Unmanned Systems (ICUS)</source> (<publisher-loc>Harbin</publisher-loc>), <fpage>668</fpage>&#x02013;<lpage>672</lpage>.</citation></ref>
</ref-list>
</back>
</article>