Article

Image Virtual Viewpoint Generation Method under Hole Pixel Information Update

Ling Leng, Changlun Gao, Fangren Zhang, Dan Li, Weijie Zhang, Ting Gao, Zhiheng Zeng, Luxin Tang, Qing Luo and Yuxin Duan
1 College of Information Engineering, Zhongshan Polytechnic Institute, Zhongshan 528400, China
2 College of Engineering, South China Agricultural University, Guangzhou 510642, China
3 Guangdong E & T Research Center for Mountainous Orchard Machinery, Guangzhou 510642, China
4 Zhongshan Agricultural Science and Technology Extension Center, Zhongshan 528400, China
5 College of Natural Resources and Environment, South China Agricultural University, Guangzhou 510642, China
6 College of Intelligent Manufacturing and Electrical Engineering, Guangzhou Institute of Science and Technology, Guangzhou 510540, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(1), 34; https://doi.org/10.3390/sym15010034
Submission received: 8 November 2022 / Revised: 8 December 2022 / Accepted: 16 December 2022 / Published: 23 December 2022

Abstract
A virtual viewpoint generation method is proposed to address the low fidelity of virtual viewpoints generated from images with overlapping pixel points. Factors that degrade virtual viewpoint generation, such as overlaps, holes, cracks, and artifacts, are analyzed and preprocessed. When the hole background has a simple texture, the pixel information around the hole is used as support: a pixel at the edge of the hole is detected, the extent of the hole is predicted, and the hole area is filled in blocks. When the hole background has a relatively complex texture, the depth information of the hole pixels is updated with the inverse 3D transformation method, and the pixel information of the updated area is projected onto the auxiliary plane and compared with the known auxiliary parameters of the plane pixels. Hole filling is then performed according to the symmetry of the pixel positions in the auxiliary reference viewpoint plane to obtain the optimized virtual viewpoint. The proposed method was validated using image quality metrics and objective evaluation metrics such as PSNR. The experimental results show that the proposed method could generate virtual viewpoints with high fidelity, excellent quality, and a short image-processing time, effectively enhancing virtual viewpoint generation performance.

1. Introduction

Vision is one of the main ways in which humans perceive multidimensional information about the world and analyze the information elements of images. People perceive the outside world visually, and about 80% of all information about the outside world comes from images. With the rapid development of radio and television technology, the amount of image information available to humans is becoming increasingly abundant. Relevant research shows that information processing and applications are mainly driven by visual perception [1,2]. The human perception of images can be enhanced through image and video transmission technology. With the rapid development of radio, television, and other related technologies, the ways in which the public obtains two-dimensional information have gradually improved. The real world, however, is three-dimensional, and acquiring 3D information that gives people a sense of being there remains an ongoing goal. A workflow illustration of 3D stereo information acquisition is shown in Figure 1.
Virtual viewpoint generation and drawing technology allows images to be drawn from a new viewpoint in a natural scene using existing reference images. With this technology, images from different viewpoints can be converted into three dimensions and presented clearly at the display port, providing users with accurate and objective image information, reducing data redundancy and transmission delay, and giving the viewer a smooth visual experience [3,4,5]. Cai et al. [6] proposed a virtual viewpoint image postprocessing method that uses background information, depth information, and complementary viewpoints to solve the pixel overlap problem. The method can effectively improve image drawing quality but suffers from a long image-processing time. Wang et al. [7] proposed a reference-free virtual viewpoint image quality evaluation method based on skewness and structural features that uses a local binary pattern operator to extract image skewness features and completes feature training with a support vector machine to improve image quality; however, the target viewpoint fidelity of this method is low. Guo [8] proposed a moving-image contour pixel restoration method based on an optical network, using kinetic equations to extract image contour pixels, an FCMC algorithm to determine the damaged pixel area, and a topological gradient method to find the minimal restoration path and complete image reconstruction; however, this method performed poorly in terms of structural similarity. Wang et al. [9] identified distortion in the virtual viewpoint rendering strategy and proposed an algorithm combining spatial weighting with pixel interpolation to address it. Its main idea is to interpolate pixels by weighting the depth values of several projected pixels against their absolute horizontal distance. During interpolation, the number of projected pixels in different areas is taken into account for accuracy, so some distorted pixels are removed, and distortion is detected and corrected at the reference virtual viewpoints on both sides before the image is output.
At the same time, several researchers have turned their attention to the hole problem. Lou et al. [10] focused on pseudo contours and holes in the virtual viewpoint drawing process and proposed an improved virtual viewpoint generation and drawing method. On the basis of depth information, untrustworthy regions in the depth image are marked, and the image is preprocessed using multi-threshold segmentation; a virtual viewpoint image is then obtained through the 3D image transformation, while holes are eliminated from the target image with a local median method. A fusion operation is applied to the drawn image, and the remaining holes are filled with an image restoration method. Zhang et al. [11] addressed the holes in the virtual viewpoint drawing of depth images by first separating the foreground and background with a Gaussian mixture model; the background holes were filled with background values, while the foreground holes were filled with an optimized image restoration method.
In summary, high-precision and high-quality virtual viewpoint generation and drawing are of great significance for the application and development of related fields. On this basis, this paper proposes an image virtual viewpoint generation method under hole pixel information update, building on previous research, and demonstrates through experiments that the proposed method not only maintains virtual viewpoint fidelity and excellent quality, but also shortens the image-processing time, thereby better meeting practical needs.
The remainder of the paper is organized as follows: Section 2.1, Section 2.2 and Section 2.3 discuss the problem of overlaps, holes, cracks, and artifacts. Virtual viewpoint generation for simple and complex textured backgrounds is discussed in Section 3.1 and Section 3.2. Section 4 describes five comparative experiments and analyzes the results. Lastly, the conclusions are drawn in Section 5.

2. Analysis of the Difficulties of Virtual Viewpoint Generation and Preprocessing

During the virtual viewpoint generation of overlapping pixel-point images, problems such as overlapping imaging, pixel information holes, and artifacts are caused by position shifting, occlusion, and dynamic motion. These problems therefore need to be addressed through image preprocessing.

2.1. Analysis and Treatment of Overlapping Problems

In the process of generating a virtual viewpoint image, there is a problem where several pixels in the reference camera are projected onto the same pixel of the target camera image, which is known as the mapping overlap problem.
A natural scene with mutual occlusion can be described according to Figure 2.
In Figure 2, A–E represent five objects in a natural scene, and O1 and O2 represent two viewpoints. When the scene is viewed from Viewpoint O1, E obscures C; when it is viewed from Viewpoint O2, E obscures B. In the O1 imaging plane, the rightmost point of B and the leftmost point of E are projected onto a single point, and in the O2 imaging plane, the leftmost point of C and the rightmost point of E are projected onto a single point. As a result, B is visible in the O1 imaging plane but not in the O2 imaging plane, and C is visible in the O2 imaging plane but not in the O1 imaging plane. Suppose Image O1 is mapped onto Image O2; B cannot find a corresponding mapped pixel, so a hole appears in Image O2.
There is another factor in the overlap problem. When the coordinate system between the reference viewpoint and the virtual viewpoint is inconsistent, the pixels in the reference viewpoint cannot be mapped onto the virtual viewpoint, and the overlap phenomenon occurs [12,13,14,15].
The overlap problem is handled by the Z-buffer algorithm. The basic principle of this algorithm is that, when the virtual viewpoint is drawn, a buffer is provided for each pixel, and the parallax values of the projected pixels are stored in the buffer; the pixel with the largest parallax value is then taken as the virtual viewpoint value.
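As a concrete illustration, the following Python sketch resolves mapping overlaps with a Z-buffer as described above; the `mapping` array of precomputed target coordinates, the array shapes, and the function name are assumptions made for this example rather than details of the original method.

```python
import numpy as np

def warp_with_z_buffer(ref_image, parallax, mapping):
    """Resolve mapping overlaps with a Z-buffer: when several reference pixels
    project to the same target pixel, keep the one with the largest parallax
    value, i.e., the pixel closest to the camera.

    ref_image : (H, W, 3) reference-viewpoint image
    parallax  : (H, W) per-pixel parallax (disparity) values
    mapping   : (H, W, 2) integer target coordinates (row, col) for each
                reference pixel, e.g., produced by a prior 3D warping step
    """
    h, w = parallax.shape
    target = np.zeros_like(ref_image)
    z_buffer = np.full((h, w), -np.inf)        # stored parallax per target pixel
    hole_mask = np.ones((h, w), dtype=bool)    # True where no reference pixel arrived

    for y in range(h):
        for x in range(w):
            ty, tx = mapping[y, x]
            if 0 <= ty < h and 0 <= tx < w and parallax[y, x] > z_buffer[ty, tx]:
                z_buffer[ty, tx] = parallax[y, x]   # keep the largest parallax value
                target[ty, tx] = ref_image[y, x]
                hole_mask[ty, tx] = False

    return target, hole_mask
```

Target pixels that receive no reference pixel remain marked in `hole_mask` and are treated by the hole-filling steps of Section 2.2.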

2.2. Analysis and Treatment of the Hole Problem

Virtual viewpoint images may contain holes, mainly because some points that are not visible at the reference viewpoint are visible at the target viewpoint, and because the depth values of neighboring pixels in the natural scene are discontinuous, which leaves some holes in the target image [16,17]. The hole problem is dealt with using the horizontal filling method.
In practice, the horizontal fill method means that the hole is filled horizontally according to the pixels around the hole.
On the basis of the occlusion relationship between the front and back of an object, the position of a hole relative to the foreground object matches the relative position between the target and reference viewpoints. That is, if the target viewpoint is to the right of the reference viewpoint, the hole is at the right edge of the foreground object; likewise, if the target viewpoint is to the left of the reference viewpoint, the hole is at the left edge of the foreground object.
The overall direction of horizontal filling is identified according to the relative position between the target view and the reference view, and the horizontal filling is realized via the pixel value around the hole.
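A minimal Python sketch of this horizontal filling step is given below; the binary hole mask and the flag indicating on which side of the reference viewpoint the target viewpoint lies are assumed inputs, and the scanning scheme is only one possible implementation.

```python
import numpy as np

def horizontal_fill(image, hole_mask, target_right_of_reference=True):
    """Fill hole pixels row by row from the background side.

    If the target viewpoint lies to the right of the reference viewpoint,
    holes sit at the right edge of foreground objects, so each hole pixel is
    filled from its right-hand (background) neighbor; otherwise it is filled
    from its left-hand neighbor.
    """
    filled = image.copy()
    mask = hole_mask.copy()
    h, w = mask.shape
    # Scan each row so that values propagate from the background into the hole.
    cols = range(w - 2, -1, -1) if target_right_of_reference else range(1, w)
    step = 1 if target_right_of_reference else -1
    for y in range(h):
        for x in cols:
            if mask[y, x] and not mask[y, x + step]:
                filled[y, x] = filled[y, x + step]
                mask[y, x] = False
    return filled
```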
The most direct factor in the generation of holes is the absence of the continuous properties in the depth map, especially in a place where the depth values generate abrupt changes. To better deal with the problem of holes, a filter is introduced, and depth map processing is realized by using a two-dimensional Gaussian filter. Formula (1) describes the two-dimensional Gaussian filter:
$$D(x,y) = \frac{\displaystyle\sum_{u=-w_u/2}^{w_u/2} \sum_{v=-w_v/2}^{w_v/2} G(u,v)\, D(x+u, y+v)}{\displaystyle\sum_{u=-w_u/2}^{w_u/2} \sum_{v=-w_v/2}^{w_v/2} G(u,v)} = \frac{\displaystyle\sum_{v=-w_v/2}^{w_v/2} \sum_{u=-w_u/2}^{w_u/2} G(u,\delta_u)\, D(x+u, y+v)\, G(v,\delta_v)}{\displaystyle\sum_{v=-w_v/2}^{w_v/2} \sum_{u=-w_u/2}^{w_u/2} G(u,\delta_u)\, G(v,\delta_v)}$$
where $w_u$ is the horizontal filtering window, $w_v$ is the vertical filtering window, and $\delta_u$ and $\delta_v$ are the overall filtering intensities in the horizontal and vertical directions, respectively.
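The following NumPy sketch implements the windowed, normalized Gaussian smoothing of Formula (1) applied to the depth map; the window sizes and filtering intensities are illustrative defaults, not values prescribed by the paper.

```python
import numpy as np

def gaussian_smooth_depth(depth, w_u=7, w_v=7, delta_u=2.0, delta_v=2.0):
    """Smooth a depth map with the normalized 2D Gaussian of Formula (1),
    built separably from G(u, delta_u) and G(v, delta_v) and divided by the
    sum of the weights."""
    u = np.arange(-(w_u // 2), w_u // 2 + 1)
    v = np.arange(-(w_v // 2), w_v // 2 + 1)
    g_u = np.exp(-u**2 / (2 * delta_u**2))
    g_v = np.exp(-v**2 / (2 * delta_v**2))
    kernel = np.outer(g_v, g_u)
    kernel /= kernel.sum()                     # normalization term of Formula (1)

    h, w = depth.shape
    pad_v, pad_u = w_v // 2, w_u // 2
    padded = np.pad(depth, ((pad_v, pad_v), (pad_u, pad_u)), mode="edge")
    smoothed = np.zeros((h, w), dtype=float)
    for dy in range(w_v):                      # accumulate the weighted window sum
        for dx in range(w_u):
            smoothed += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return smoothed
```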

2.3. Analysis and Treatment of Cracks and Artifacts

In a virtual viewpoint image, in addition to hole points, there are also some cracks, which are usually small. During the 3D transformation, the projection of a reference viewpoint pixel onto the target viewpoint image does not usually fall on an integer pixel position, which can produce small cracks [18,19]. Another cause of cracks is that the depth information of the depth map pixels is not continuous or sufficiently accurate. These problems are handled with a predictive fill operation based on the pixel values around the cracked point, usually following rounding and polynomial principles: the neighboring pixel values around a crack point are combined under these principles, assigned DC values, and written into the reconstructed image at the corresponding positions. All pixel values around a crack point are traversed in a scanning sequence from the bottom left to the top left, and from the top left to the top right; if the first point is unavailable, it is assigned the reconstructed sample value of the next available point, and the traversal continues until all points are filled.
An artifact is a foreground outline that is not clearly defined. There are two ways to eliminate artifacts. The first is to perform an expansion (dilation) operation on the holes in the target image: the dilation is based on the highlighted parts, each element of the image is scanned with a structuring element, the structuring element is overlaid on the binary image it covers, and the binary image is expanded by one ring, removing some of the pixels that tend to generate artifacts. The second is to remove points with a large depth difference between the foreground and background objects (background depth more than twice the foreground depth) before the image is converted, thus reducing the number of artifacts.
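A Python sketch of these two artifact-reduction steps is shown below, assuming a binary hole mask and a depth map are available; the 3 × 3 structuring element and the local-minimum proxy for the nearby foreground depth are illustrative choices rather than details specified in the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation, minimum_filter

def suppress_artifacts(image, hole_mask, depth, depth_ratio=2.0):
    """Illustrative sketch of the two artifact-reduction steps.

    1. Dilate the hole mask by one ring of pixels so that boundary pixels
       that tend to produce artifacts are discarded and refilled later.
    2. Mark pixels whose depth exceeds depth_ratio times the depth of the
       nearest foreground neighborhood (a strong foreground/background jump)
       so that they can be removed before the 3D transformation.
    """
    expanded_holes = binary_dilation(hole_mask, structure=np.ones((3, 3)))
    cleaned = image.copy()
    cleaned[expanded_holes] = 0                # drop the artifact-prone boundary ring

    local_foreground_depth = minimum_filter(depth, size=3)
    jump_mask = depth > depth_ratio * local_foreground_depth
    return cleaned, expanded_holes | jump_mask
```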

3. Virtual-Viewpoint Generation

On the basis of the preprocessing results for the above problems, the holes are filled on the basis of background texture recognition: virtual viewpoint parameters are optimized with directional straight-line detection for simple textures, and the hole is predicted with the positional correlation method to achieve hole filling under complex background texture conditions, thereby generating the virtual viewpoint.

3.1. Virtual-Viewpoint Generation for Simple Textured Backgrounds

For simple textured backgrounds, no additional auxiliary viewpoint information is used in the hole-filling process; holes can only be repaired with the information available in the reference image itself, which makes it difficult to obtain high-quality virtual viewpoints. Given this, virtual-viewpoint generation is optimized by using directional straight-line detection to fill the holes. The detailed process is as follows:
  • During the preprocessing of one hole, the location of the holes present in the virtual-viewpoint image is detected and extracted, along with their edge locations and edge poles. As shown in Figure 3, the shaded part indicates the hole.
    The edge of a hole can be divided into two types: the first is the uppermost or lowermost position of the hole edge, i.e., the pole position of the edge, denoted $P_1(x,y)$ in the figure; the second is a normal hole-edge location, denoted $P_2(x,y)$ in the figure.
  • The virtual-viewpoint image is converted to grayscale, and the image is masked with the vertical-line detection template and the symmetric ±45° line detection templates; the final detection results are recorded (a code sketch applying these masks is given after this list). Formula (2) is the vertical-line mask operator:
    $$G_1 = \begin{bmatrix} -1 & 2 & -1 \\ -1 & 2 & -1 \\ -1 & 2 & -1 \end{bmatrix}$$
    Formula (3) is the +45° line-detection mask:
    $$G_2 = \begin{bmatrix} -1 & -1 & 2 \\ -1 & 2 & -1 \\ 2 & -1 & -1 \end{bmatrix}$$
    Formula (4) is the −45° line-detection mask:
    $$G_3 = \begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}$$
    When the detection is finished, the results are saved in the $V_e$ matrix. Horizontal-line detection is not carried out, because horizontal filling has already been performed in Section 2.2.
  • Combined with the hole location, whether a background boundary exists is determined. For the first type of hole edge point in Item 1, i.e., the edge pole of the hole, vertical-line and ±45° straight-line detection is carried out at a neighboring location that is not a hole. Details are shown in Figure 4.
    In the figure, $P_1(x,y)$ represents the acquired hole edge pole; according to its relative position, its correlation with the hole can be judged. Straight-line detection is performed at the $P_1(x,y)$ point at 90°, 45°, and 135°: Formula (2) determines the vertical line, and Formulas (3) and (4) determine the ±45° diagonal lines.
    The mean value at a straight-line position is calculated according to Formulas (5) and (6), which are used to decide whether a straight line passes through that position.
    $$V_{L1}(x,y) = \begin{cases} 1, & \text{if } \dfrac{\sum_{s=0}^{m} V_e(x,y)}{\max(s,h)} \geq Th_1 \\ 0, & \text{otherwise} \end{cases}$$
    $$V_{L2}(x,y) = \begin{cases} 1, & \text{if } \dfrac{\sum_{s=0}^{m} \sum_{h=0}^{n} V_e(x, y \pm h)}{\max(s,h)} \geq Th_2 \\ 0, & \text{otherwise} \end{cases}$$
    where $V_e(x,y)$ represents the straight-line detection value, $h$ is the straight-line height value, and $Th_1$ and $Th_2$ are the decision thresholds; in this paper, both were set to 0.7.
    For general hole edges, no vertical-line determination is required, only ±45° and ±135° straight-line determination. The process is the same as that above.
  • At this point, six decision matrices are obtained for the ±90°, ±45°, and ±135° lines. On the basis of these matrices, an extended prediction is performed for the hole position, and the hole is then filled in blocks on the basis of the pixel depth information around the hole position [20,21,22].
  • To render the final virtual viewpoint image more realistic, the filter of Formula (1) is used for filtering.
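As referenced in the list above, the following Python sketch applies the directional line-detection masks of Formulas (2)–(4) to the grayscale virtual view and stores the responses for the subsequent threshold decisions of Formulas (5) and (6); the signed 3 × 3 operators and the use of `scipy.ndimage.convolve` are assumptions of this example.

```python
import numpy as np
from scipy.ndimage import convolve

# Directional line-detection masks of Formulas (2)-(4): vertical, +45°, -45°.
G1 = np.array([[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]])   # vertical lines
G2 = np.array([[-1, -1, 2], [-1, 2, -1], [2, -1, -1]])   # +45° lines
G3 = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])   # -45° lines

def detect_lines(gray, masks=(G1, G2, G3)):
    """Convolve the grayed virtual view with each directional mask and keep
    the per-pixel responses (one channel per mask) as the V_e detection data."""
    gray = gray.astype(float)
    return np.stack([np.abs(convolve(gray, m, mode="nearest")) for m in masks])
```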

3.2. Virtual-Viewpoint Generation for Complex Textured Backgrounds

The inverse 3D transformation method is introduced to address relatively complex background textures, and it can achieve good hole filling while significantly reducing the amount of computation and data compared to those in the current related results. The detailed flow of the method is shown below.
  • Extraction of virtual viewpoint depth map.
    Use the basic principles of 3D image transformation to complete the reference viewpoint conversion.
    $$Z_2 m_2 = Z_1 k_2 R k_1^{-1} P_1 + k_2 t$$
    where $Z_1$ and $Z_2$ represent the depth values of two corresponding points in the same scene, $k_1$ and $k_2$ represent the intrinsic parameter matrices of the two cameras, $t$ represents the translation matrix, $R$ represents the rotation matrix, and $P_1$ is the reference viewpoint pixel. The corresponding depth maps of the virtual viewpoints can also be obtained simultaneously using Formula (7); a code sketch of this warping step follows this list.
  • Projection to secondary reference viewpoint position.
    At this point, the image of the virtual viewpoint containing holes and its relatively complete depth map have been obtained, which satisfies the conditions for projection to the auxiliary reference viewpoint location [23,24,25]. The hole area is recorded, the depth information of the hole pixels is updated using the inverse 3D transformation method, and the projection elements are fused to obtain the auxiliary reference viewpoint location information, thus yielding the auxiliary viewpoint hole map. The left and right viewpoints on either side of the virtual viewpoint are selected as reference viewpoints for the joint generation of the virtual view. Each pixel is projected onto its point in 3D space, e.g., with the top-right corner of the image as the origin, and the virtual-viewpoint pixel coordinates are projected onto the corresponding reference viewpoint position coordinates with the help of the depth map.
  • Find the matching pixel point to complete the fill.
    Pixel points are compared with the known auxiliary reference viewpoint position pixels to find the auxiliary reference viewpoint pixel corresponding to each hole point, and the pixel information is obtained from the hole map. Multiple reference viewpoints are used to separately generate target virtual viewpoint images, which are then fused: the images of the two reference viewpoints selected on both sides of the target viewpoint in the previous step are reprojected to generate virtual images at the target viewpoint position, and the two generated virtual images are fused to complete the matching and filling, achieving virtual viewpoint optimization for complex textured backgrounds.
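The sketch below, referenced in the list above, illustrates the 3D warping of Formula (7) in NumPy: every reference pixel is lifted to 3D with its depth and reprojected into the virtual view, yielding both the target coordinates and the virtual-view depth map. The function name and the rounding of the projected coordinates are assumptions of this example.

```python
import numpy as np

def warp_to_virtual_view(depth_ref, K1, K2, R, t):
    """Apply Formula (7): for each reference pixel p1 = (u, v, 1) with depth Z1,
    compute Z2 * m2 = Z1 * K2 @ R @ inv(K1) @ p1 + K2 @ t, then divide by Z2 to
    obtain the target pixel coordinates and the virtual-view depth."""
    h, w = depth_ref.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    p1 = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])   # 3 x N homogeneous pixels

    M = K2 @ R @ np.linalg.inv(K1)
    warped = depth_ref.ravel() * (M @ p1) + (K2 @ t).reshape(3, 1)

    Z2 = warped[2]                                  # depth in the virtual view
    u2 = np.round(warped[0] / Z2).astype(int)       # target column coordinates
    v2 = np.round(warped[1] / Z2).astype(int)       # target row coordinates
    return u2.reshape(h, w), v2.reshape(h, w), Z2.reshape(h, w)
```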

4. Experimental Results and Analysis

Five experiments were conducted to verify the overall effectiveness of the image virtual-viewpoint generation method under hole pixel information update. The experiments were mainly carried out on the MATLAB R2022a platform (MathWorks). Video images from the standard image test library were used for the study, and the carrier images were 512 × 512 and 40 × 40 pixels in size.
The experimental environment was as follows: the CPU was an Intel Core i3-10105F with a base frequency of 3.7 GHz and 4 cores, with 16 GB of RAM, Windows as the operating system, and MATLAB as the development environment. The experimental data were sourced from interactive visual media.
The experimental indicators were as follows:
  • Comparison of preprocessing capacity.
    In the process of generating the virtual viewpoint of the overlapped pixel image, the problem detection and preprocessing of the image should be carried out first, as shown in Figure 5:
    According to the above figure showing image problem detection and preprocessing, the image preprocessing ability was tested in this process. The preprocessed images generated by the method in this paper were compared with those generated by the methods in the literature [6,7,8], and the comparison results are shown in Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10:
    Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 show that the method in [6] ignored the artifact problem when processing the image; the method in [7] deviated in handling the overlap problem, and the pixel-point parallax values were not handled properly; the image produced by the method in [8] was blurred, and the preprocessing of the pole positions of the hole edges was insufficient. The proposed method, which is based on background texture recognition, achieves comprehensive preprocessing of the overlaps, holes, cracks, artifacts, and other problems that tend to arise in the virtual-viewpoint generation process, and it uses different filling methods for simple and complex textured backgrounds. The quality of the obtained virtual viewpoints was significantly improved compared with that of the general virtual-viewpoint generation methods, demonstrating the feasibility of the method.
  • Target viewpoint fidelity
    As virtual-viewpoint generation is one of the most critical aspects of overlapping image generation, the peak signal-to-noise ratio (PSNR) metric is used to assess the quality of virtual-viewpoint generation for a more objective comparison. This metric measures the fidelity of the target viewpoint: a larger value indicates less distortion, i.e., a closer approximation of the virtual viewpoint to the actual image (a sketch of the PSNR and SSIM computations is given after this list). Figure 11 shows a schematic diagram of the model for objective quality assessment.
    To objectively evaluate the quality of virtual-viewpoint generation, Frames 1 to 20 of a sequence were selected for testing, and the results of the PSNR comparison of the target viewpoint planar images obtained by plotting are shown in Figure 12.
    The analysis of the above experimental results shows that the overall performance of the virtual-viewpoint generation method for overlapping pixel-point images based on background texture recognition was optimal compared to that of the current research results. The image-processing results generally reflect high definition and high resolution, and the PSNR was also higher and more reliable compared to that in the literature results.
  • Overall image-processing performance comparison.
    The final result of the quality of the image to be drawn by the superior virtual-viewpoint generation is expressed in the image itself, so the overall performance of the image processing was tested for comparison. Figure 13 shows a video image selected from the image testing standard image library, and Figure 14, Figure 15, Figure 16 and Figure 17 are the images after processing the literature results and the proposed method.
    Image quality indicators are specific methods for evaluating image deviation and readability and for measuring image quality; they are usually divided into relative and absolute evaluations. Relative evaluations are performed in comparison with reference images, while absolute evaluations are conducted on the basis of conventional evaluation scales and the evaluator's experience.
    In this comparison test, absolute and relative evaluations of the image effect of the method in this paper were carried out. Relative evaluation: as the above figures show, the image-processing effect of this method was significantly better than that of other literature results under the comparison test. Absolute evaluation: the processed image of the proposed method was higher in clarity, with normal exposure and color contrast, and uniform illumination. The evaluation of the above image quality indicators indicates that the image quality of the method in this paper is high.
  • Comparison of image-processing structural similarity performance.
    Structural similarity (SSIM) was used to test the effect of virtual-viewpoint drawing; the closer the value of this indicator is to 1, the higher the quality of the drawn virtual-viewpoint image. Frames 1 to 20 of a sequence were selected for testing, and the SSIM comparison results of the target viewpoint planar images obtained by drawing are shown in Figure 18.
    Analyzing Figure 18 shows that the method in this paper had the highest SSIM values, all above 0.9, compared to other research results. The image-processing results show the advantages of high structural similarity and more reliability in general.
  • Image-processing time comparison
    Six hundred images were randomly selected from the image database and subjected to virtual-viewpoint processing for defects such as holes, overlaps, cracks, and artifacts; the image-processing time of the method in this paper was then compared against the methods in [6,7,8]. The test results are shown in Figure 19.
    The proposed method uses two-dimensional Gaussian filtering to process the depth map, which improves the image restoration effect; its image-processing time was therefore short, only 0.6 s, and it could effectively process the 600 defective images. Its efficiency was significantly higher than that of the other three comparison methods, indicating strong applicability.
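For reference, the PSNR and SSIM measures used in this comparison can be computed as in the following sketch; the SSIM here is the single-window (global) form of the index, whereas practical implementations usually average it over local windows.

```python
import numpy as np

def psnr(reference, rendered, max_val=255.0):
    """Peak signal-to-noise ratio in dB; a larger value means less distortion."""
    mse = np.mean((reference.astype(float) - rendered.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def ssim(reference, rendered, max_val=255.0):
    """Single-window structural similarity; values close to 1 mean the drawn
    virtual view is structurally close to the reference image."""
    x, y = reference.astype(float), rendered.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```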

5. Conclusions

Virtual viewpoint generation and drawing technology is an important direction of future development. Given the problems in current research results, a virtual-viewpoint generation method based on background texture recognition was proposed and optimized for both simple and complex backgrounds. The experimental results demonstrate that the method is effective in image processing and achieves a high PSNR, providing reliable support for research in this field. The next step is to investigate the integration of virtual-viewpoint generation with the corresponding hardware devices to obtain better-quality virtual viewpoints.

Author Contributions

Conceptualization, L.L. and C.G.; methodology, F.Z.; software, L.L. and C.G.; validation, Z.Z.; formal analysis, W.Z. and Y.D.; investigation, D.L.; writing—original draft, C.G.; writing—review and editing, F.Z.; visualization, Q.L.; supervision, T.G.; project administration, L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Special Project on Serving Key Areas of Rural Revitalization for General Universities in Guangdong Province (2019KZDZX2037), the New Generation of Electronics for General Universities in Guangdong Province Special Project in Key Areas of Information (2022ZDZX1076), the Zhongshan City 2020 Provincial Science and Technology Project Fund “Major Project + Task List” Project (2020SDR003), and the Zhongshan Social Welfare Project (2019B2080).

Data Availability Statement

Not applicable.

Acknowledgments

We give our sincere thanks to those who have provided their valuable comments on the writing of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.; Yu, Y.; Wang, X. Research and simulation of 3D image virtual viewpoint generation optimization. Comput. Simul. 2021, 39, 205–209.
  2. Dong, W. Research on algorithm of virtual try-on based on single picture. Sci. Technol. Innov. 2021, 32, 82–84.
  3. Shi, H.; Wang, L.; Wang, G. Blind quality prediction for view synthesis based on heterogeneous distortion perception. Sensors 2022, 22, 7081.
  4. Chen, J.Z.; Chen, Y.; Song, J.R. Panoramic video virtual view synthesis based on viewing angle. Chin. J. Liq. Cryst. Displays 2019, 34, 63–69.
  5. Cui, S.N.; Peng, Z.J.; Zou, W.H.; Chen, F.; Chen, H. Quality assessment of synthetic viewpoint stereo image with multi-feature fusion. Telecommun. Sci. 2019, 35, 104–112.
  6. Cai, L.; Li, X.; Tian, X. Virtual viewpoint image post-processing method using background information. J. Chin. Comput. Syst. 2022, 43, 1178–1184.
  7. Wang, C.; Peng, Z.; Zhang, L.; Chen, F.; Lu, Z. No-reference quality assessment for virtual view images based on skewness and structural feature. J. Comput. Appl. 2021, 41, 226–233.
  8. Guo, Q. Restoration of moving image contour pixels based on optical network. Laser J. 2021, 42, 78–82.
  9. Wang, H.; Chen, F.; Jiao, R.; Peng, Z.; Yu, M. Virtual view rendering algorithm based on spatial weighting. Comput. Eng. Appl. 2016, 52, 174–195.
  10. Lou, D.; Wang, X.; Fu, X.; Zhang, L. Virtual view point rendering based on range image segmentation. Comput. Eng. 2016, 42, 12–19.
  11. Zhang, Q.; Li, S.; Guo, W.; Chen, J.; Wang, B.; Wang, P.; Huang, J. High quality virtual view synthesis method based on geometrical model. Video Eng. 2016, 40, 22–25.
  12. Chen, Y.; Ding, W.; Xu, X.; Hu, S.; Yan, X.; Xu, N. Image super-resolution generative adversarial network based on light loss. J. Tianjin Univ. Sci. Technol. 2022, 37, 55–63.
  13. Zhu, H.; Li, H.; Li, W.; Li, F. Single image super-resolution reconstruction based on generative adversarial network. J. Jilin Univ. 2021, 59, 1491–1498.
  14. Zhu, C.; Li, S. Depth image based view synthesis: New insights and perspectives on hole generation and filling. IEEE Trans. Broadcast. 2015, 62, 82–92.
  15. Zambanini, S.; Loghin, A.M.; Pfeifer, N.; Soley, E.M.; Sablatnig, R. Detection of parking cars in stereo satellite images. Remote Sens. 2020, 12, 2170.
  16. Wu, L.; Yu, L.; Zhu, C. The camera arrangement algorithm based on the central attention in light field rendering. China Sci. 2017, 12, 180–184.
  17. Le, T.H.; Long, V.T.; Duong, D.T.; Jung, S.W. Reduced reference quality metric for synthesized virtual views in 3DTV. ETRI J. 2016, 38, 1114–1123.
  18. Ma, J.; Li, S.; Qin, H.; Hao, A. Unsupervised multi-class co-segmentation via joint-cut over L-1-manifold hyper-graph of discriminative image regions. IEEE Trans. Image Process. 2017, 26, 1216–1230.
  19. Serafin, S.; Erkut, C.; Kojs, J.; Nilsson, N.C.; Rolf, N. Virtual reality musical instruments: State of the art, design principles, and future directions. Comput. Music J. 2016, 40, 22–40.
  20. Guo, Q.; Liang, X. High-quality virtual viewpoint rendering for 3D warping. Comput. Eng. Appl. 2019, 55, 84–90.
  21. Huang, H.; Huang, S. Fast hole filling for view synthesis in free viewpoint video. Electronics 2020, 9, 906.
  22. Zhang, J.; Hou, Y.; Zhang, Z.; Jin, D.; Zhang, P.; Lo, G. Deep region segmentation-based intra prediction for depth video coding. Multimed. Tools Appl. 2022, 81, 35953–35964.
  23. Liu, D.; Wang, G.; Wu, J.; Ai, L. Light field image compression method based on correlation of rendered views. Laser Technol. 2019, 43, 115–120.
  24. Zhou, G.; Song, H.; Wu, Y.; Ren, P. A non-feature fast 3D rigid-body image registration method. Acta Electron. Sin. 2018, 46, 7.
  25. Amanjot, S.; Jagroop, S. Content adaptive deblocking of artifacts for highly compressed images. Multimed. Tools Appl. 2022, 81, 18375–18396.
Figure 1. Illustration of 3D information acquisition.
Figure 2. Mutual shading correlation in natural scenes.
Figure 3. Hole edge and edge pole recognition.
Figure 4. Straight-line detection of hole edge poles.
Figure 5. Schematic diagram of problem detection and preprocessing of images.
Figure 6. Images to be preprocessed.
Figure 7. Effect of pretreatment in the literature [6].
Figure 8. Effect of pretreatment in the literature [7].
Figure 9. Effect of pretreatment in the literature [8].
Figure 10. Effect of pretreatment in this paper.
Figure 11. Schematic diagram of the model for objective quality assessment.
Figure 12. Comparison of PSNR of different research results [6,7,8].
Figure 13. Images to be processed.
Figure 14. Effect of treatment in the literature [6].
Figure 15. Effect of treatment in the literature [7].
Figure 16. Effect of treatment in the literature [8].
Figure 17. Effect of treatment in this paper.
Figure 18. Comparison of SSIM of different research results [6,7,8].
Figure 19. Image-processing time results for different methods [6,7,8].
