Article

Display in the Wild (DIW): An Adaptive Projection-Imaging System to Screen Geometry in Real Time

Department of Computer Science and Engineering, Kyung-Hee University, Gyeonggi-do 17104, Korea
* Author to whom correspondence should be addressed.
Sensors 2018, 18(9), 3063; https://doi.org/10.3390/s18093063
Submission received: 25 July 2018 / Revised: 6 September 2018 / Accepted: 7 September 2018 / Published: 12 September 2018
(This article belongs to the Section Intelligent Sensors)

Abstract

TVs and monitors are among the most widely used displays in various environments. However, they have limitations in their physical display conditions, such as a fixed size/position and a rigid/flat space. In this paper, we suggest a new “Display in the Wild” (DIW) concept to overcome the aforementioned problems. Our proposed DIW system allows us to display a flexibly large screen on dynamic non-planar surfaces at an arbitrary display position. To implement our DIW concept practically, we choose a projector as the hardware configuration in order to generate a screen anywhere with different sizes. However, distortion occurs when the projector displays content on a surface that is dynamic and/or non-planar. Therefore, we propose a distortion correction method for DIW to overcome the aforementioned surface constraints. Since projectors are not capture devices, we propose using a depth camera to determine the distortions on the surfaces quickly. We also propose DIW-specific calibration and fast/precise correction methods. Our calibration method is constructed to detect the projection surface easily and quickly, and also allows our system to accommodate changes in intrinsic parameters such as the display resolution and field of view. We accomplish a fast undistortion process for the projector by considering only surface boundary pixels, which enables our method to run in real time. In our comprehensive experiments, the proposed DIW system generates undistorted screens such as TVs and monitors on dynamic non-planar surfaces at an arbitrary display position with Unmanned Aerial Vehicles (UAVs) in a fast and accurate manner, demonstrating its usefulness in practical DIW scenarios.

1. Introduction

TVs and monitors are among the most widely used display devices in industry and at home. These display devices have evolved toward larger screens and thinner profiles [1], but are still fundamentally built-in displays with rectangular screen shapes. Such conventional display devices have various fundamental constraints. First, they require as much flat space as the device size. Second, the space in which the display is located should be invariant. Third, the screen size is limited by the size of the device, resulting in less flexibility. Fourth, they work only in the installed space [2].
If there is a new display that can produce a TV or monitor screen on a dynamic non-flat surface that is flexible in terms of both screen location and size, it can overcome the aforementioned constraints of conventional display devices. We describe such a new device as “Display in the Wild” (DIW), and this device will be applicable and valuable in various industries. For example, to display a large screen at outdoor performances or events, we typically need to install a large flat screen or connect numerous smaller TVs. With DIW, on the other hand, we can use Unmanned Aerial Vehicles (UAVs) and a lightweight canvas to quickly configure a large screen at a low cost and display a TV-like screen on a dynamic non-planar surface. DIW can also create a TV-like screen on curtains that are not flat while being easily found at home, as well as on tents that are commonly used in outdoor camping.
In this paper, we propose both hardware and software configurations to implement DIW. For the hardware, we choose a projector as a suitable display device to overcome the limitations of fixed positions and sizes, as projectors can generate a variable-sized screen anywhere. However, geometric distortions can occur when the projector displays on a surface that is dynamic and non-planar. To solve this problem, a geometric distortion correction method is required to display an undistorted screen (e.g., the rectangular screen of a TV). Here, a capture device that can predict the distortion is needed, since the projector itself cannot sense the screen geometry. Given that a depth camera can quickly capture 3D surfaces in real time, we decided to use one as the capture device to predict the distortion on dynamic non-planar surfaces. In summary, we utilize a projector and a depth camera as our hardware configuration to implement DIW.
In order to complete the DIW system with the proposed hardware configuration, we developed the essential software configurations for DIW: (i) a quick prediction method for geometric distortion on dynamic non-planar surfaces, and (ii) a fast correction method for geometric distortions. Although there are many distortion-correction methods for projection, the existing methods hardly correct the distortions present on dynamic non-planar surfaces. In fact, they mainly project structured patterns onto surfaces to extract and match features using RGB cameras and find distortions in a reconstructed 3D space [3,4,5,6,7,8,9]. However, it is difficult to cope with dynamic distortion in this way because the projected patterns may interfere with viewing of the screen contents. Some methods [10,11,12,13] can deal with dynamic distortion by projecting non-visible patterns. However, such methods also have drawbacks, in that they often extract and match features unreliably in certain lighting environments, thus failing to correct the distortions. Moreover, the existing feature extraction and matching processes usually require heavy computation, which makes them inappropriate for predicting dynamic distortions at high speed. In addition, to fill in missing information on the surface, they fit 3D mesh shapes to the distorted surface in the correction step [7,9], which also results in high computational complexity. Therefore, the conventional projector correction methods are hardly applicable to DIW systems, which require fast and reliable correction on dynamic non-planar surfaces.
Additionally, most correction methods perform calibration between the projector and the capture device. Some existing manual and/or automatic calibration methods have shown good results with RGB or depth cameras [14,15,16,17,18,19,20,21,22]. However, existing calibration methods are not suitable for DIW, since they focus only on the accuracy of the calibration between the projector and capture device, rather than on fast detection of the geometric distortion of the projection. Furthermore, some calibration methods need to be re-run when the projector's intrinsic parameters (e.g., the display resolution) change, as these methods involve the intrinsic parameters in the calibration process.
Therefore, to overcome the aforementioned problems of the existing methods and complete the DIW system with the proposed hardware, we propose: (i) a quick estimation method for geometric distortions from the projection; (ii) a fast distortion correction method; and (iii) a DIW-specific calibration between the projector and depth camera. Using the depth camera, our method can quickly predict/correct distortions on the projection surface. This is because we specialize our calibration method for fast detection of the projection surface in the depth image. To efficiently find the correction area of the user’s viewpoint, only the boundary pixels on the detected projection surface in the depth image are used. When determining warping and correction, the proposed method can quickly recover the high-resolution depth map of the projection surface by adopting a Gaussian weighted interpolation process that fills out the missing information in the original depth map for a very low computation cost. This method can provide an alternative to the traditional mesh fitting methods that result in high computation costs. In addition, the proposed calibration method focuses on extrinsic parameters (i.e., the transformation matrix for positional relationships between the projector and depth camera), making it easy to respond to display resolution changes. Figure 1 illustrates the overall process of our proposed DIW system. The main contributions of our work are summarized as follows:
  • We first devise the concept of “Display in the Wild” as a new display device that overcomes the limitations of fixed display position and surface conditions present in existing display devices.
  • We propose a new hardware configuration specialized for DIW that couples a projector with a depth camera for fast and reliable corrections to the distortions that can occur due to dynamic non-planar surfaces.
  • To complete this DIW system, we also propose software configurations that allow the DIW display to be deployed in real-time applications. Our software configurations include fast geometric distortion estimation, correction, and a DIW-specific calibration. These allow us to perform robust projector correction without using markers.
  • We prove the usefulness of our DIW system by performing comprehensive quantitative and qualitative tests under dynamic non-planar surface conditions.
This paper is organized as follows: In Section 2 we discuss the existing methods for distortion correction of the projector and for calibrations. Then, in Section 3 we describe our fast correction and calibration methods for the DIW system. We thoroughly conducted quantitative and qualitative experiments to demonstrate the performance of the proposed method, as described in Section 4. Lastly, we discuss limitations and conclusions for our proposed methods in Section 5 and Section 6, respectively.

2. Related Work

2.1. Correction of Projection Distortion

The projector can provide a large display screen at a relatively low cost but has a fatal disadvantage in that projection distortions can easily occur depending on the surface geometry and projection angle. When an image is projected on a flat surface without appropriate projection angles, a trapezoid-like distortion is caused, which is called the keystone effect. This problem can be corrected by adjusting the projector’s position manually or performing 2D projective transformations on an image. In this regard, many correction methods have been proposed to eliminate keystone distortions [23,24,25,26]. These methods slightly differ in the details, but mainly calculate the homography between the projected and original images using RGB cameras.
Besides this, irregular distortions can occur when the projector displays an image on a non-planar surface. Such distortions cannot be corrected by a homography transformation as in the keystone case. Instead, predicting the irregular geometry of the surface is essential for correction. To correct the distortion produced by specific non-planar surfaces, the authors of [27,28,29,30,31] proposed parametric surface models, such as cylindrical, conic, dome, and intersecting planar models. These methods can correct distortion on non-planar surfaces of specific shapes, but it is difficult to apply them to non-planar surfaces that fall outside typical parametric surface models.
It is worth noting that our DIW system aims at automatic projector correction on a non-planar surface, which is different from projector deformation methods [32,33,34,35]. The main difference is that the projector deformation methods aim at texture mapping to deformable objects for Augmented Reality, involving some computer vision algorithms such as object segmentation/detection and template-based 3D shape mapping [32,35]. To overcome inaccurate behaviors of these computer vision algorithms, many recent projector deformation methods rely on adopting markers to easily find target objects and warping points [33,34].
On the other hand, our method belongs to the automatic projector correction methods where neither markers nor sophisticated computer vision algorithms are needed. The proposed methods aim to minimize distortions in a display caused by non-planarity of a target surface and projector geometry.
To correct geometric distortions on arbitrary surfaces, approaches based on structured light patterns have been proposed [3,4,5,6,7,8,9]. These methods project structured light pattern images onto non-planar surfaces and capture the projected patterns using RGB cameras. With these structured light patterns, 3D points on the projection surface can be calculated, and the correction can be performed according to the 3D shape of the surface. However, when the shape of the projection surface changes, the shape information for the projection surface needs to be updated. For this, the projector needs to continually project the structured patterns onto the screen, which creates discontinuity in the displayed contents that the users are watching. This limitation makes it difficult to correct distortions that occur due to dynamic non-planar surfaces, which is exactly the case a DIW system must handle.
To solve the updating problem of dynamic non-planar surfaces, imperceptible pattern-based correction methods have been proposed [10,11,12]. They calculate the 3D points of the distorted surface by inserting imperceptible patterns into the projection image, which are unnoticeable to the users watching the display. However, they require high computation power for estimating the model parameters of the dynamic non-planar surfaces, which often change in real time. On the other hand, Setkov et al. corrected the image geometry by using color invariant features [36]. This method can correct dynamic distortion without beaming structured light patterns. However, it is difficult to apply this method to irregular surfaces since it adopts a global homography, which cannot reflect the geometry of non-planar surfaces. Ito et al. proposed a stereo vision-based correction method to capture the geometry of a surface without disturbing the screen with structured light patterns [13]. This method estimates a perspective projection matrix using SIFT (Scale-Invariant Feature Transform) [37] and obtains the 3D shape of a non-planar surface using POC (Phase-Only Correlation) [38,39,40,41]. Therefore, Ito's method is able to continuously predict and correct the distortion resulting from dynamic projection surfaces. As described above, many methods have been proposed for correcting the distortion, but only some of them can be applied to dynamic non-planar surfaces, which is a desirable functionality for our DIW system.
Although most of the existing methods use RGB cameras to calculate the 3D shape of the surface, they have intrinsic drawbacks when applied to our DIW scenario. First, to correct the distortion for a rapidly varying non-planar surface, it is essential to develop a fast 3D shape acquisition method for the real-time DIW system. However, to calculate the 3D shape of the surface, RGB camera-based correction methods inevitably perform feature-detection/matching processes. In this case, when the image resolutions are sufficiently high, such as HD (720 × 480) and FullHD (1920 × 1080), the traditional feature-detection/matching processes fail to run in real time [42]. Second, the projection area may not have sufficient contrast to detect feature points, as projectors are mostly used in dark environments that make the captured images in the projection area relatively bright and saturated. These two problems cannot be solved using the traditional correction methods with RGB cameras. Instead of using an RGB camera, we incorporate a depth camera into the projection-based DIW system to correct distortions on dynamic non-planar surfaces under real-time and practical dark ambient illumination conditions.

2.2. Camera to Projector Calibration

Calibrating the positional relationship between the camera and projector is an essential part of predicting the projection area in 3D coordinates. RGB cameras are unable to capture 3D geometry information directly, so existing calibration methods beam specific patterns onto a surface that the RGB camera can capture, thus estimating the 3D geometry. To obtain higher accuracy, multiple sets of features are often extracted from the calibration patterns at different orientations. Then, the world coordinates of the features are estimated by minimizing the reprojection errors. Most calibration methods between RGB cameras and projectors [14,15,16,17,43] operate in a manner similar to that described above.
On the other hand, some calibration methods utilize a depth camera, since 3D points can be calculated directly and easily from depth images [20,21,22]. However, existing calibration methods with a depth camera are not specialized for fast prediction of the projection surface, but rather for the accuracy of the calibration parameters. Furthermore, some of these methods require re-calibration when the resolution (i.e., the intrinsic parameters of the projector) changes, which often occurs in display devices. Therefore, we propose a simple calibration process to adopt a depth camera in our hardware configuration for DIW, which is fully described in Section 3.2.

3. Proposed Method

In this study, we developed our DIW system based on the assumptions that: (i) the projector and depth camera are attached and fixed together; (ii) the target surface projected by the projector is non-planar and dynamic; and (iii) the depth camera is placed parallel to the longitudinal axis. Our proposed DIW system first calculates the relationship between the projector and depth camera (the extrinsic parameters) using our proposed calibration method. After that, the projector distortion is corrected in real time through the following iterative process:
  • A target surface area that is projected by the projector is predicted quickly via the depth image with the estimated calibration parameters.
  • The most effective correction area (i.e., the maximum rectangular region inside the projection surface) is calculated for the undistortion processing on the display.
  • Finally, the original projection image is warped into the effective correction area, and the projector outputs the warped image in order to remove the geometric distortion on the target surface.

3.1. Hardware Configuration for the DIW System

We use a projector and a depth camera as the hardware configuration to implement our DIW system. In this paper, we choose Kinect v2 and a portable laser projector (Model: Celluon PicoBit B06X93WFBP), where Kinect v2 captures depth images at 30 frames per second (fps) and the projector displays a screen with a 60 Hz refresh rate. In choosing the hardware configuration, we considered the following important conditions.
First, Kinect v2 has already demonstrated its performance in sensing precise depth images in many fields and is easy to develop with. Even though it is somewhat larger than the recent Intel RealSense Depth Camera D435, we determined that the size of Kinect v2 is not large enough to interfere with mobility, and it also offers the advantages of fast implementation and testing. Second, we use a laser projector instead of an existing LCD (Liquid-Crystal Display) or DLP (Digital Light Processing) projector. These conventional projectors have the disadvantage that one must adjust the focus according to the projection distance from the lens. Because a DIW system should be able to generate the screen on a dynamic non-planar surface, adjusting the focus can be a complex problem for correcting the distortion in a DIW display. Moreover, LCD and DLP projectors are generally large, which hampers the mobility needed to easily create a screen at a desired position. Unlike conventional projectors, the laser projector does not use a lens, so we do not need to adjust the focus. Furthermore, the laser projector is very small, so we decided it was suitable for the DIW system.
It is worth noting that the physically allowable rotation angle and distance between the projector and the screen are limited by the hardware configuration. The allowable rotation angle depends on the vertical/horizontal FOVs (fields of view) of the projector. For example, when the vertical/horizontal FOVs are $Fov_y$ and $Fov_x$, the allowable angular range is $\pm \frac{Fov_y}{2}$ in the vertical direction and $\pm \frac{Fov_x}{2}$ in the horizontal direction. In addition, the allowable distance of the DIW system is determined by the projection limit of the projector and the sensing limit of the depth camera. Kinect v2 typically has an allowable distance of eight meters, and the projector has a longer allowable distance, so eight meters can be regarded as the allowable distance of our DIW system.
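As a minimal sketch of these bounds, the following Python helper evaluates the allowable ranges described above; the function name and the example FOV values are illustrative assumptions, not part of the original implementation.

```python
def allowable_ranges(fov_x_deg, fov_y_deg, max_depth_m=8.0):
    """Allowable rotation ranges (degrees) and working distance (meters) of
    the projector/depth-camera pair, following the bounds described above."""
    return {
        # The rotation must not exceed half of the projector's FOV per direction.
        "horizontal_deg": (-fov_x_deg / 2.0, fov_x_deg / 2.0),
        "vertical_deg": (-fov_y_deg / 2.0, fov_y_deg / 2.0),
        # The Kinect v2 sensing limit (about 8 m) bounds the working distance.
        "max_distance_m": max_depth_m,
    }

# Example with hypothetical projector FOVs of 43 (horizontal) x 24 (vertical) degrees.
print(allowable_ranges(43.0, 24.0))
```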

3.2. Projector to Depth Camera Calibration

We first calibrate the projector and depth camera in closed form. For this, we denote the 3D depth camera coordinates centered at the depth-camera position with points $P_d = (x_d, y_d, z_d, 1)^T \in \mathbb{R}^3$, and the 3D projector coordinates centered at the projector position with points $P_p = (x_p, y_p, z_p, 1)^T \in \mathbb{R}^3$, as shown in Figure 2. The affine transform matrix $M_{dp}$ that transforms $P_d$ into $P_p$ is defined as
$$P_p = M_{dp} P_d, \qquad M_{dp} = \begin{bmatrix} R_{dp} & T_{dp} \\ \mathbf{0} & 1 \end{bmatrix},$$
where $R_{dp} \in \mathbb{R}^{3 \times 3}$ and $T_{dp} \in \mathbb{R}^{3}$ represent the rotation matrix (consisting of $r_{11}, r_{12}, \ldots, r_{33}$) and the translation vector (consisting of $t_1, t_2, t_3$) in the affine transform $M_{dp}$, respectively.
We can easily obtain $P_d$ using the depth image and its intrinsic parameters. On the other hand, $P_p$ is not directly available, since the projector does not capture any geometric information. Instead of $P_p$, we use 2D projection image coordinates for the images projected onto the target surface. We denote the 2D projection image coordinates as $P_i = (x_i, y_i)$, which correspond to $P_p$. The mapping of $P_p$ to $P_i$ is represented as:
$$x_i = f_{px}\, x_p\, z_p^{-1}, \qquad y_i = f_{py}\, y_p\, z_p^{-1},$$
where $f_{px}$ and $f_{py}$ are the horizontal and vertical focal lengths of the projector, respectively. The transformation $P_p = M_{dp} P_d$ above can then be expressed using the 2D projection image coordinates $P_i$ as:
$$\frac{x_i}{f_{px}} = \frac{r_{11} x_d + r_{12} y_d + r_{13} z_d + t_1}{r_{31} x_d + r_{32} y_d + r_{33} z_d + t_3}, \qquad \frac{y_i}{f_{py}} = \frac{r_{21} x_d + r_{22} y_d + r_{23} z_d + t_2}{r_{31} x_d + r_{32} y_d + r_{33} z_d + t_3}.$$
To calculate the elements of the transformation $M_{dp}$ in Equation (3) from multiple pairs of $P_i$ and $P_d$, we form a linear system $A\mathbf{x} = \mathbf{b}$, the details of which are described in Appendix A. Then, after gathering the pairs, we solve the linear system using an SVD solver in our calibration process. It is worth noting that the SVD solver is known to yield a stable solution for a linear system $A\mathbf{x} = \mathbf{b}$.
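As a minimal sketch of this step, the following Python snippet solves an overdetermined system $A\mathbf{x} = \mathbf{b}$ in the least-squares sense via SVD; the construction of $A$ and $\mathbf{b}$ from the gathered point pairs is outlined in Appendix A. The function name and the singular-value cutoff are illustrative choices, not part of the original implementation.

```python
import numpy as np

def solve_calibration_parameters(A, b):
    """Least-squares solution of A x = b via SVD (pseudo-inverse)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Discard near-zero singular values for numerical stability.
    s_inv = np.zeros_like(s)
    nonzero = s > 1e-10 * s.max()
    s_inv[nonzero] = 1.0 / s[nonzero]
    return Vt.T @ (s_inv * (U.T @ b))
```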
To gather the pairs, the proposed DIW system projects a chessboard pattern onto a surface using the projector and captures the projected pattern using the RGB camera embedded in the depth camera. Note that most depth cameras contain built-in RGB cameras for sensing color information. Then, our system detects the corners of the projected chessboard pattern in the captured image using the corner detection method in the OpenCV library [44]. We map the chessboard corners detected in the RGB camera to the corresponding positions in the depth image using the calibration values of the depth camera, which are provided by the manufacturer. Figure 3 shows the overall process of gathering 3D points in the depth camera coordinates and the corresponding 2D coordinates in the projector image.
To collect a large number of sample points, our method generates a chessboard pattern at 25 different positions, moving the chessboard pattern 5 times by 20 pixels in both the horizontal and vertical directions. The chessboard used in this study has 54 corners (9 × 6), which can be obtained from each chessboard image. Therefore, a total of 1350 corners can be gathered from the 25 images, as shown in Figure 4. This calibration with a large number of sample points yields accurate performance, as shown in Section 4.
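A minimal sketch of the corner-gathering step is given below, using OpenCV's chessboard detector. The mapping from RGB pixels to depth-image pixels is device-specific (e.g., the Kinect SDK's coordinate mapper) and therefore omitted, and the function name and refinement parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_projected_corners(rgb_image, pattern_size=(9, 6)):
    """Detect the 54 inner corners of one projected chessboard image.
    Returns an (N, 2) array of sub-pixel corner positions, or None if the
    pattern is not found."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)

# The 25 pattern placements described above: a 5 x 5 grid of 20-pixel shifts
# applied to the projected chessboard image.
pattern_offsets = [(dx * 20, dy * 20) for dx in range(5) for dy in range(5)]
```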
The proposed calibration method can be used directly to detect the projection surface of the projector since it calculates the relationship between P d and P i via Equation (3). In addition, since the intrinsic parameters of both the depth camera and the projector are not included as unknown parameters in our calibration, it is possible to easily cope with a change in the projector resolution.

3.3. Projection Surface Prediction

Once we obtain the transformation matrix $M_{dp}$ through the proposed calibration, we can calculate the $P_i$ corresponding to $P_d$.
We convert the depth image $I_d(x, y)$, where $(x, y)$ represents the pixel position, into 3D point clouds $P_d(x, y)$ in the 3D depth camera coordinates using the predefined intrinsic parameters. Substituting each 3D point of $P_d(x, y)$ into Equation (3), we can calculate the corresponding 2D projection image coordinates $P_i(x, y)$. We then compute the projection surface mask $M_s(x, y)$ for the depth image using the following simple and cost-effective rule:
$$M_s(x, y) = \begin{cases} 255 & \text{if } 0 \le x_i < w \text{ and } 0 \le y_i < h, \; (x_i, y_i) \in P_i(x, y) \\ 0 & \text{otherwise}, \end{cases}$$
where $w$ and $h$ are the width and height of the 2D projection image, respectively, and $x_i$ and $y_i$ are the pixel positions of $P_i(x, y)$. If the projection surface mask $M_s(x, y)$ is 255, the depth-image pixel at $(x, y)$ belongs to the projection surface, as shown in Figure 5. In this way, we can simply detect projection regions based on the depth image in real time.
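A minimal sketch of this mask computation is shown below, assuming the per-pixel 3D points and the calibrated transform $M_{dp}$ are available as NumPy arrays; the function name is illustrative, and the principal-point convention is assumed to match Equation (2).

```python
import numpy as np

def projection_surface_mask(points_d, M_dp, f_px, f_py, w, h):
    """Compute the projection surface mask M_s of Equation (4).

    points_d : (H, W, 3) array of 3D points in depth-camera coordinates
               (invalid depth pixels may be NaN or zero).
    M_dp     : (4, 4) affine transform from depth-camera to projector coordinates.
    f_px, f_py : projector focal lengths; w, h : projection image size.
    Returns an (H, W) uint8 mask that is 255 on the projection surface."""
    H, W, _ = points_d.shape
    ones = np.ones((H, W, 1), dtype=np.float64)
    P_d = np.concatenate([points_d.astype(np.float64), ones], axis=-1).reshape(-1, 4)
    P_p = (M_dp @ P_d.T).T.reshape(H, W, 4)
    # Pinhole mapping of Equation (2); a principal-point offset may be needed
    # depending on the projector's image-coordinate convention.
    with np.errstate(divide="ignore", invalid="ignore"):
        x_i = f_px * P_p[..., 0] / P_p[..., 2]
        y_i = f_py * P_p[..., 1] / P_p[..., 2]
    inside = (
        np.isfinite(x_i) & np.isfinite(y_i) & (P_p[..., 2] > 0)
        & (x_i >= 0) & (x_i < w) & (y_i >= 0) & (y_i < h)
    )
    return np.where(inside, 255, 0).astype(np.uint8)
```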

3.4. User Position

In this paper, we correct the distortion of the projection based on the assumptions that: (i) the user looks at the center of the screen (projection surface); and (ii) the user's position is already known. In order to consider the user's viewpoint in the correction process, we define 3D user coordinates with the user's center as the origin, the viewing direction as the z-axis, left-to-right as the x-axis, and bottom-to-top as the y-axis. The user coordinates are defined relative to the projector coordinates for easy adaptation of the warping-based correction, which is fully described in Section 3.6. We also denote a point in the user coordinates as $P_u = (x_u, y_u, z_u, 1)^T \in \mathbb{R}^3$. Finally, the affine transform $M_{pu}$ that transforms $P_p$ into $P_u$ is defined as
$$P_u = M_{pu} P_p = M_{pu} M_{dp} P_d, \qquad M_{pu} = \begin{bmatrix} R_{pu} & T_{pu} \\ \mathbf{0} & 1 \end{bmatrix},$$
where $R_{pu} \in \mathbb{R}^{3 \times 3}$ and $T_{pu} \in \mathbb{R}^{3}$ represent the rotation matrix (consisting of $r_{11}, r_{12}, \ldots, r_{33}$) and the translation vector (consisting of $t_1, t_2, t_3$) in the affine transform $M_{pu}$, respectively.
From our experiments, we determined that slight variations in the user position and viewpoint produce perceptually unnoticeable geometric distortions on the screen. This is because of the auto-compensation mechanism of the human visual system (HVS) [45]; that is, the distortions produced by viewpoint changes are not noticeable even if the user's position is only roughly approximated.
Note that this paper focuses on flexible and fast corrections with a projector and a depth camera for the DIW system. Estimating a user's position under a variety of poses and occlusions requires advanced user-detection algorithms with very high computation costs [46,47]. Most methods detect the user's position in standing conditions, but this is outside the scope of our paper. Therefore, we manually set the user's position so that our DIW system runs in real time.

3.5. Correction Area Calculation

To correct the geometric distortion on the projection surface, it is necessary to determine the correction area that shows the maximal rectangular image region to the user. In this study, we argue that it is inefficient to calculate the correction area using all of the 3D points that belong to the projection area. Instead, we propose calculating the correction area using only the boundary pixels at the projection area, which is simple and efficient, as shown in Figure 6.
First, we find the boundary pixels of the projection area on the projection surface mask $M_s(x, y)$ using the fast contour detection method [48]. When several contours are detected due to discontinuities of the projection surface, we choose the largest contour as the boundary to find the correction area. Figure 6a shows an example of detecting the boundary of the projection surface mask $M_s(x, y)$. Second, we create a distortion shape image at the user's viewpoint in order to calculate the correction area efficiently from the detected boundary pixels. To do this, we perform a process similar to a computer graphics pipeline. The boundary pixels on the depth image are transformed into 3D points in the 3D user coordinates according to Equation (5), as shown in Figure 6b. These 3D points are projected onto a plane on the viewing volume of the user's perspective. Then, we rasterize the projected points into a 512 × 424 image, as shown in Figure 6c. In the rasterization step, a polygon filled with white is drawn using the projected 2D points in the order provided by the contour detection method [48]. Finally, the largest rectangle inside the white filled region of the rasterized image is calculated by Vandevoorde's method [49].
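The sketch below illustrates this step with OpenCV. The callable `to_user_image` is a hypothetical placeholder for the transform and rasterization of Equations (3) and (5), and the inscribed-rectangle search uses the standard row-by-row histogram/stack method as a stand-in for Vandevoorde's exact formulation [49].

```python
import cv2
import numpy as np

def correction_rectangle(mask, to_user_image):
    # mask: uint8 projection surface mask M_s (255 on the surface).
    # to_user_image: hypothetical callable mapping (N, 2) depth-image boundary
    # pixels to (N, 2) pixel positions in the 512 x 424 user-view image.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # OpenCV 4
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)  # largest contour
    projected = np.round(to_user_image(boundary)).astype(np.int32)
    shape_img = np.zeros((424, 512), dtype=np.uint8)
    cv2.fillPoly(shape_img, [projected], 255)  # the 2D distortion shape image
    return largest_inscribed_rectangle(shape_img > 0)

def largest_inscribed_rectangle(binary):
    """Largest all-True axis-aligned rectangle (x, y, w, h), found with the
    row-by-row histogram/stack method."""
    h, w = binary.shape
    heights = np.zeros(w, dtype=np.int64)
    best, best_area = (0, 0, 0, 0), 0
    for row in range(h):
        heights = np.where(binary[row], heights + 1, 0)
        stack = []  # pairs of (start_column, bar_height)
        for col, bar in enumerate(np.append(heights, 0)):
            start = col
            while stack and stack[-1][1] >= bar:
                s, s_h = stack.pop()
                area = s_h * (col - s)
                if area > best_area:
                    best_area = area
                    best = (s, row - s_h + 1, col - s, s_h)
                start = s
            stack.append((start, bar))
    return best
```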
After the largest rectangle (the correction area) is found on the 2D distortion shape image, as shown in Figure 6, we now define it using the 3D user coordinates for use in the correction process. Therefore, we define the largest rectangle as the correction area using the fields of view and transformations parametrically as
$$T_{x_c} = \left(\frac{x_c}{r_x} - \frac{W_p}{2}\right) + \frac{w_c}{2}, \qquad T_{y_c} = \left(\frac{y_c}{r_y} - \frac{H_p}{2}\right) + \frac{h_c}{2},$$
$$fov_{x_c} = 2\tan^{-1}\!\left(\frac{w_c}{2\, r_x Z_{plane}}\right), \qquad fov_{y_c} = 2\tan^{-1}\!\left(\frac{h_c}{2\, r_y Z_{plane}}\right), \qquad r_x = W_r/W_p, \quad r_y = H_r/H_p,$$
where $fov_{x_c}$, $fov_{y_c}$, $T_{x_c}$, and $T_{y_c}$ are the fields of view and transformations defining the largest rectangle as the correction area in the 3D user coordinates, and $(x_c, y_c, w_c, h_c)$ represents the largest rectangle on the 2D distortion shape image in terms of its starting point, width, and height. $W_p$ and $H_p$ in Equation (6) are the width and height of the projection plane, respectively, and $W_r$ and $H_r$ are the width and height of the rasterized image, respectively. In this way, we can parametrically define the rectangular correction area in the 3D user coordinates by the FOVs ($fov_{x_c}$ and $fov_{y_c}$) and transformations ($T_{x_c}$ and $T_{y_c}$).
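The following is a direct transcription of Equation (6) as printed; the function name is illustrative, and the unit conventions (rasterized-image pixels versus projection-plane units) are assumed to follow the paper's definitions.

```python
import math

def correction_area_parameters(rect, W_p, H_p, W_r, H_r, z_plane):
    """Derive (T_xc, T_yc, fov_xc, fov_yc) from the largest rectangle
    (x_c, y_c, w_c, h_c) on the 2D distortion shape image, per Equation (6).
    W_p, H_p: projection-plane size; W_r, H_r: rasterized-image size;
    z_plane: depth of the user's perspective projection plane."""
    x_c, y_c, w_c, h_c = rect
    r_x, r_y = W_r / W_p, H_r / H_p
    T_xc = (x_c / r_x - W_p / 2.0) + w_c / 2.0
    T_yc = (y_c / r_y - H_p / 2.0) + h_c / 2.0
    fov_xc = 2.0 * math.atan(w_c / (2.0 * r_x * z_plane))
    fov_yc = 2.0 * math.atan(h_c / (2.0 * r_y * z_plane))
    return T_xc, T_yc, fov_xc, fov_yc
```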

3.6. Warping and Correction

In our calibration, we can transform the 3D depth image coordinates into the 2D projection image coordinates using Equation (3), but it is impossible to convert them in the opposite direction, as this is an ill-posed problem. However, the depth information (3D information) of the projection image is essential for our undistortion process. Fortunately, we can partially calculate the depth of the projection image from the sparse 3D information given by our calibration and the depth image. Therefore, we must estimate the unknown parts of the depth information in the projection image.
Existing correction methods [7,9] reconstruct 3D parametric or meshed surfaces by fitting known 3D points to estimate the unknown information. However, these approaches have high computation costs, resulting in slow correction of the geometric distortion of the projection. Instead, we generate a sparse projection depth image from the known depth information in the projection image. Then, we quickly convert the sparse projection depth image into a dense projection depth image using Gaussian-weighted interpolation with 5 × 5 masks. Figure 7 shows the whole process of generating the dense projection depth image.
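A minimal sketch of such a Gaussian-weighted fill is given below, assuming unknown pixels are marked with 0; the function name, the kernel sigma, and the number of passes are illustrative assumptions rather than the original implementation.

```python
import cv2
import numpy as np

def densify_depth(sparse_depth, iterations=3, sigma=1.0):
    """Fill the unknown pixels of a sparse projection depth image with
    5 x 5 Gaussian-weighted interpolation: each unknown pixel becomes the
    Gaussian-weighted average of its known neighbors."""
    ax = np.arange(-2, 3, dtype=np.float64)
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()

    dense = sparse_depth.astype(np.float64)
    for _ in range(iterations):
        known = (dense > 0).astype(np.float64)
        num = cv2.filter2D(dense, -1, kernel, borderType=cv2.BORDER_REPLICATE)
        den = cv2.filter2D(known, -1, kernel, borderType=cv2.BORDER_REPLICATE)
        filled = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        dense = np.where(dense > 0, dense, filled)  # known values stay untouched
    return dense
```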
To generate a warping table that corrects the projection distortion, we transform the dense projection depth image into 3D point clouds $P_u(x, y) = \{x_u(x, y), y_u(x, y), z_u(x, y)\} \in \mathbb{R}^3$ in the user coordinates via the intrinsic parameters of the projector and $M_{pu}$. We then calculate the angles that determine, parametrically, whether each 3D point of $P_u(x, y)$ is included in the correction area:
$$\theta_x(x, y) = 2\tan^{-1}\!\left(\frac{x_u(x, y) - T_{x_c}\, z_u(x, y)\, Z_{plane}^{-1}}{z_u(x, y)}\right), \qquad \theta_y(x, y) = 2\tan^{-1}\!\left(\frac{y_u(x, y) - T_{y_c}\, z_u(x, y)\, Z_{plane}^{-1}}{z_u(x, y)}\right),$$
where $Z_{plane}$ is the z-axis value for the perspective projection of the distortion at the user's viewpoint, $\theta_x(x, y)$ is the angle relative to the x-axis, and $\theta_y(x, y)$ is the angle relative to the y-axis. Then we check whether these angles are inside the correction area as
$$\left|\theta_x(x, y)\right| \le fov_{x_c}, \qquad \left|\theta_y(x, y)\right| \le fov_{y_c}.$$
When Equation (8) is satisfied, $P_u(x, y)$ is inside the correction area (the largest rectangle). Using the fields of view and transformations that define the correction area, we calculate the image positions to be displayed for the correction by
$$x_c = \frac{w \cdot \theta_x(x, y)}{2 \cdot fov_{x_c}} + \frac{w}{2}, \qquad y_c = \frac{h \cdot \theta_y(x, y)}{2 \cdot fov_{y_c}} + \frac{h}{2},$$
where $w$ and $h$ represent the width and height of the projection image, respectively, and $(x_c, y_c)$ is the position of the pixel to be displayed at $(x, y)$. Finally, we make a warped projection image that corrects the distortion, given by
$$g(x, y) = f(x_c, y_c),$$
where $g$ is the warped image and $f$ represents the original projection image. This process warps the projection image through backward mapping, thus constructing the corrected image quickly.
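A minimal sketch of the backward-mapping step is shown below, assuming per-pixel target coordinates $(x_c, y_c)$ have already been computed from Equations (7)-(9); OpenCV's `remap` is used here as a stand-in for the per-pixel lookup of Equation (10), and the function name is illustrative.

```python
import cv2
import numpy as np

def warp_projection_image(original, x_c_map, y_c_map):
    """Backward-mapping warp of Equation (10): each projector pixel (x, y)
    samples the original projection image at (x_c, y_c). Pixels outside the
    correction area should carry an out-of-range coordinate (e.g., -1) so
    that they come out black."""
    map_x = x_c_map.astype(np.float32)
    map_y = y_c_map.astype(np.float32)
    # cv2.remap evaluates g(x, y) = f(map_x(x, y), map_y(x, y)) with bilinear
    # interpolation; out-of-range coordinates are filled with the border value.
    return cv2.remap(original, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```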

4. Result and Analysis

To investigate the effectiveness of the proposed calibration and correction methods in our DIW system, we conducted both quantitative and qualitative experiments under various conditions. The depth and RGB cameras are built into Kinect v2 with 512 × 424 and 1920 × 1080 resolutions, respectively. For testing, the projector and Kinect v2 were connected to a laptop (early-2015 MacBook Pro with an i5 2.7 GHz CPU and 8 GB RAM) on which we ran our DIW system with the proposed calibration and correction methods. Since the depth image produced by Kinect v2 has lens distortion, we corrected the lens distortion as a preprocessing step using Brown's distortion model [50] with five correction parameters, $K_1$, $K_2$, $K_3$, $P_1$, and $P_2$, which are supplied by the manufacturer, as shown in Table 1. As our laser projector does not use a lens for projection [51], we did not consider lens distortion in our tests. For the user position, we set up $M_{pu}$ by assuming the user is located one meter behind the projector for all experiments.
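A minimal sketch of this preprocessing step is given below, using OpenCV's undistortion routine as a stand-in for Brown's model with the manufacturer-supplied coefficients; the function name and parameter names are illustrative.

```python
import cv2
import numpy as np

def undistort_depth_image(depth_image, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Correct the Kinect v2 depth image with Brown's distortion model using
    the manufacturer-supplied coefficients. (fx, fy, cx, cy) are the depth
    camera intrinsics; all values here are placeholders."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    dist = np.array([k1, k2, p1, p2, k3])  # OpenCV order: (k1, k2, p1, p2, k3)
    # cv2.undistort resamples with bilinear interpolation; for raw depth maps a
    # nearest-neighbor remap (initUndistortRectifyMap + remap) may be preferable.
    return cv2.undistort(depth_image, K, dist)
```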

4.1. Quantitative Evaluations

We first performed a comparative evaluation with the methods in [14,21] to determine the reprojection accuracy of the proposed calibration method. We also tested the correction accuracy as we projected and corrected horizontal/vertical line images in Figure 8 and measured the straightness of the lines on the corrected image. Finally, to verify the speed of the distortion correction method, which is essential to our DIW system, we performed a comparison evaluation for the correction time of each image with the methods in [7,9].

4.1.1. Reprojection Errors of Calibration

Our calibration method has the advantages that it can be applied directly to detect the projection surface and can cope with variable intrinsic parameters such as the screen resolution of the projector. Thus, we tested the calibration accuracy of our method while retaining these advantages. We calculated the reprojection errors of our method. Then, we compared our calibration method with the existing calibration methods in [14,21], which use an RGB camera and a depth camera for calibration, respectively. For comparison, all methods tested for calibration accuracy use the same number of samples, which are obtained by projecting a 9 × 6 chessboard at 25 different positions (i.e., a total of 1350 samples under test).
Table 2 shows the calibration results of our proposed method and the existing ones. In addition, we calculated the reprojection errors of the three methods, as shown in Figure 9. These results show that all three methods perform similarly. However, our calibration is specialized to detect the projection surface directly and quickly for DIW, and it can easily cope with projector resolution changes.

4.1.2. Line Straightness Error Ratio Test

In order to test the precision of the proposed correction method quantitatively, we calculated the straightness of lines in vertical and horizontal line images, which were captured at the user position using a camera. To measure the simple gradients of the lines on these images, we propose the Line Straightness Error Ratio ($LSER$), given by:
$$LSER_v = 1 - \left|\frac{\tan(90^{\circ} - k)}{\tan(T_e)}\right|, \qquad LSER_h = 1 - \left|\frac{\tan(k)}{\tan(T_e)}\right|,$$
where $LSER_v$ is the line straightness error ratio for vertical straight lines, $LSER_h$ is the line straightness error ratio for horizontal lines, $k$ is the angle of the line with respect to the image x-axis, and $T_e$ is the tolerance degree allowed by the error ratio. In this study, we set $T_e$ to $10^{\circ}$, which means that $LSER$ is 1 with no distortion ($k = 0$) and $LSER$ is 0 with $k = 10^{\circ}$. If the angle $k$ is larger than the tolerance degree $T_e$, $LSER$ has a negative value. To measure $LSER$, the line pattern images shown in Figure 10 are projected and corrected by the proposed method. Then, we captured them using the camera, and $k$ is calculated from two edge points at the ends of each line. We repeated the $LSER$ experiments five times for the horizontal and vertical images, respectively, for accuracy. Table 3 and Figure 8 show our $LSER$ experimental results. Based on these results, our correction method achieves a good $LSER$ performance of over 0.99 at $T_e = 10^{\circ}$. This means that our correction results deviate from straight lines by less than $0.1^{\circ}$, indicating that the proposed method can accurately correct distortion on non-planar surfaces.
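A small sketch of the metric in Equation (11) is given below; the function name and the default tolerance are illustrative.

```python
import math

def lser(k_deg, tolerance_deg=10.0, vertical=False):
    """Line Straightness Error Ratio of Equation (11). k_deg is the measured
    line angle with respect to the image x-axis; for vertical lines the
    deviation from 90 degrees is evaluated."""
    deviation = 90.0 - k_deg if vertical else k_deg
    return 1.0 - abs(math.tan(math.radians(deviation)) /
                     math.tan(math.radians(tolerance_deg)))

# A perfectly horizontal line (k = 0) gives 1.0; k = 10 degrees gives 0.0.
print(lser(0.0), lser(10.0))
```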

4.1.3. Performance Test for Running Speed

In the proposed DIW, fast correction is considered an important requirement, since the distortion of a dynamic non-planar surface needs to be estimated and corrected in real time. Therefore, we compared the correction speed of the proposed method with that of the conventional RGB camera-based methods. To test the correction speed, we measured the time needed to correct an image with a 1920 × 720 resolution, which is the maximum resolution of our projector. Table 4 shows the correction times of the proposed and existing methods, indicating that the proposed method can correct the distortion of the image very quickly compared to the conventional methods. The main reason for the fast correction of our method is that we use the depth camera to predict the distorted surface directly, followed by filling in the missing information using simple interpolation. This experiment shows that the fast correction of our method is appropriate for dynamic surfaces, which is essential for DIW.

4.2. Qualitative Evaluations

In this paper, we carried out not only a quantitative evaluation, but also a qualitative evaluation of the proposed DIW. For the qualitative test, we set up two scenarios in which our DIW system can be used in practice: (i) we built a large screen using UAVs and a lightweight canvas to evaluate the practical use of our DIW system in space-constrained environments such as celebrations; and (ii) we manually made a dynamic non-planar surface to evaluate extreme environments.
Figure 11 and Figure 12 show the qualitative results of our method, where the left column in each figure depicts the result obtained with the proposed method and the right column shows the output of a conventional projector. Our results show that the proposed DIW system effectively corrects distortion, resulting in a TV-like rectangular screen. Furthermore, our method shows good undistortion results on the dynamic non-planar surfaces made from UAVs and a lightweight canvas. These results demonstrate that our correction method and DIW system can be useful in real and extreme environments.
To show the effectiveness of our DIW system, we uploaded a video clip (Video 1) at “https://youtu.be/RIpj0pED6NA”. The first part of Video 1 demonstrates the performance of our DIW system that consists of UAVs and a lightweight canvas. The second part of Video 1 shows the performance of our DIW system in an extreme case where screen geometries severely change in time. In Video 1, the left part of the screen shows projection images generated by our DIW system, and the right part of the screen shows projection images displayed by a projector only.
As shown in Video 1, the screens built with UAVs and canvas have irregular local and global movements due to the wind from the UAVs. In this case, our DIW system stably produces undistorted rectangular screens in real time. On the other hand, the projection-only system yields clearly noticeable distortion. Regarding the second scenario, the projector-only system causes significant geometric distortions on extremely varying screen surfaces, while the proposed DIW system consistently provides rectangular-shaped images on that surface.

5. Limitations

Our DIW system uses a projector, which is fundamentally difficult to use in bright environments, especially during the daytime under outdoor conditions. This problem is a fundamental limitation of our DIW system. In recent years, off-axis projectors have been introduced that project in a slightly upward direction to create large screens at a short distance. Our DIW system cannot be applied to such off-axis projectors, as they require different calibration and correction formulations than the proposed methods. Finally, in our method, the user position is manually set for use in the user-perspective distortion correction. This means that if the user's position changes continually, our DIW system cannot track the user's position and thus does not generate perceptually correct screens. This problem remains for future work.

6. Conclusions

In this paper, we propose a new display system, namely “Display in the Wild”, to overcome the limitations of conventional displays. To implement DIW, we suggest both hardware and software configurations. For the hardware of DIW, we select a projector and a depth camera in order to generate a screen anywhere with different sizes and to correct the geometric distortion occurring on dynamic non-planar surfaces in real time. For the software configuration, we propose a DIW-specific calibration between the projector and depth camera. In addition, we also propose a fast correction method that corrects the geometric distortions on dynamic non-planar surfaces. Our experimental results prove that the proposed DIW system generates a screen with little distortion on a dynamic non-planar surface in real time, showing its usefulness in practical scenarios (e.g., a lightweight screen using UAVs) and under extremely varying surface conditions. Since the proposed method can be divided into per-pixel processing, we consider extending our DIW system to ultra-high-definition (UHD) projector applications by implementing parallel processing on GPUs as future work.

Author Contributions

Conceptualization, B.R.; Methodology, B.R.; Software, B.R.; Validation, B.R.; Formal Analysis, B.R.; Investigation, B.R.; Resources, S.-H.B.; Data Curation, B.R.; Writing—Original Draft Preparation, B.R.; Writing—Review & Editing, S.-H.B.; Visualization, B.R.; Supervision, S.-H.B.; Project Administration, S.-H.B.; Funding Acquisition, S.-H.B.

Funding

This research was funded by a grant from Kyung Hee University in 2017 (KHU-20171536) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2018R1C1B3008159).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Dividing the numerator and denominator of the right-hand side of Equation (3) by $r_{33}$ and replacing $r_{11}/r_{33}, r_{12}/r_{33}, \ldots, t_3/r_{33}$ with $l_1, l_2, \ldots, l_{11}$, respectively, yields:
$$m_1 z_d = l_1 x_d + l_2 y_d + l_3 z_d + l_4 - m_1 l_9 x_d - m_1 l_{10} y_d - m_1 l_{11},$$
$$m_2 z_d = l_5 x_d + l_6 y_d + l_7 z_d + l_8 - m_2 l_9 x_d - m_2 l_{10} y_d - m_2 l_{11},$$
where $m_1 = x_i f_{px}^{-1}$ and $m_2 = y_i f_{py}^{-1}$. To solve Equation (A1), we set $l_1, \ldots, l_{11}$ as the unknown calibration parameters and produce a system of linear equations $A\mathbf{x} = \mathbf{b}$ as follows:
$$\begin{bmatrix}
x_d^i & y_d^i & z_d^i & 1 & 0 & 0 & 0 & 0 & -m_1^i x_d^i & -m_1^i y_d^i & -m_1^i \\
0 & 0 & 0 & 0 & x_d^i & y_d^i & z_d^i & 1 & -m_2^i x_d^i & -m_2^i y_d^i & -m_2^i \\
\vdots & & & & & & & & & & \vdots \\
x_d^N & y_d^N & z_d^N & 1 & 0 & 0 & 0 & 0 & -m_1^N x_d^N & -m_1^N y_d^N & -m_1^N \\
0 & 0 & 0 & 0 & x_d^N & y_d^N & z_d^N & 1 & -m_2^N x_d^N & -m_2^N y_d^N & -m_2^N
\end{bmatrix}
\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_{11} \end{bmatrix}
=
\begin{bmatrix} m_1^i z_d^i \\ m_2^i z_d^i \\ \vdots \\ m_1^N z_d^N \\ m_2^N z_d^N \end{bmatrix},$$
where $N$ is the number of point correspondences in our linear system, $(x_d^i, y_d^i, z_d^i)$ is the $i$-th 3D point in the depth camera coordinates, and $(m_1^i, m_2^i)$ are the coordinates of the $i$-th 2D image point divided by the corresponding focal lengths of the projector. We can solve Equation (A2) if we know at least eleven pairs of 3D points in the depth camera coordinates with the corresponding 2D points in the image coordinates.
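A minimal sketch of this construction is shown below, assuming the projector focal lengths and paired point lists are available; the resulting $A$ and $\mathbf{b}$ can be passed to the SVD-based solver sketched in Section 3.2. Function and variable names are illustrative.

```python
import numpy as np

def build_calibration_system(points_d, points_i, f_px, f_py):
    """Assemble A and b of Equation (A2) from paired 3D depth-camera points
    and 2D projection-image points. Each correspondence contributes two rows;
    at least eleven rows are required for the eleven unknowns, and many more
    (e.g., the 1350 chessboard corners) are used in practice."""
    rows, rhs = [], []
    for (x_d, y_d, z_d), (x_i, y_i) in zip(points_d, points_i):
        m1, m2 = x_i / f_px, y_i / f_py
        rows.append([x_d, y_d, z_d, 1, 0, 0, 0, 0, -m1 * x_d, -m1 * y_d, -m1])
        rhs.append(m1 * z_d)
        rows.append([0, 0, 0, 0, x_d, y_d, z_d, 1, -m2 * x_d, -m2 * y_d, -m2])
        rhs.append(m2 * z_d)
    return np.asarray(rows, dtype=np.float64), np.asarray(rhs, dtype=np.float64)
```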
We divide all the elements in the numerator and denominator of the right-hand side of Equation (3) by $r_{33}$ in order to avoid a homogeneous system of linear equations with $\mathbf{b} = \mathbf{0}$. Note that we specifically choose $r_{33}$ as the divisor since $r_{33}$ ($= \cos(\theta_x)\cos(\theta_y)$, where $0 \le \theta_x, \theta_y \le \frac{\pi}{2}$) is often greater than zero, thus avoiding unstable results when the denominator is close to zero. Another advantage of choosing $r_{33}$ as the divisor is that we can easily recover $r_{33}$ and $M_{dp}$ from the calibration parameters. Using $l_{10}$, we calculate $\theta_x$ as follows:
$$l_{10} = \frac{r_{32}}{r_{33}} = \frac{\sin(\theta_x)\cos(\theta_y)}{\cos(\theta_x)\cos(\theta_y)} = \tan(\theta_x).$$
Using $l_9$ and $\theta_x$, we calculate $\theta_y$ as follows:
$$l_9 = \frac{r_{31}}{r_{33}} = \frac{\sin(\theta_y)}{\cos(\theta_x)\cos(\theta_y)} = \frac{\tan(\theta_y)}{\cos(\theta_x)}.$$
Then, we calculate $r_{33}$ using $\theta_x$ and $\theta_y$, which allows us to recover the transformation matrix $M_{dp}$ from $l_1, \ldots, l_{11}$.
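A small sketch of this recovery step, following Equations (A3) and (A4); the function name is illustrative.

```python
import math

def recover_r33(l9, l10):
    """Recover theta_x, theta_y, and r_33 from the calibration parameters,
    following Equations (A3) and (A4)."""
    theta_x = math.atan(l10)                     # l10 = tan(theta_x)
    theta_y = math.atan(l9 * math.cos(theta_x))  # l9 = tan(theta_y) / cos(theta_x)
    r33 = math.cos(theta_x) * math.cos(theta_y)
    return theta_x, theta_y, r33
```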

References

  1. Desroches, L.B.; Ganeshalingam, M. The dynamics of incremental costs of efficient television display technologies. Technol. Forecast. Soc. Chang. 2015, 90, 562–574. [Google Scholar] [CrossRef] [Green Version]
  2. Yamamoto, S.; Tsuruse, M.; Takase, K.; Tsumura, N.; Nakaguchi, T.; Miyake, Y. Real-time Control of Appearance on an Object Using a High-Luminance PC Projector and Graphics Hardware. In Proceedings of the 13th Color and Imaging Conference Final Program and Proceedings, Darmstadt, Germany, 19–23 October 2005. [Google Scholar]
  3. Raskar, R.; Brown, M.S.; Yang, R.; Chen, W.C.; Welch, G.; Towles, H.; Seales, W.B.; Fuchs, H. Multi-projector displays using camera-based registration. In Proceedings of the Conference on Visualization’99: Celebrating Ten Years, San Francisco, CA, USA, 24–29 October 1999; pp. 161–522. [Google Scholar] [Green Version]
  4. Tardif, J.P.; Roy, S.; Trudeau, M. Multi-projectors for arbitrary surfaces without explicit calibration nor reconstruction. In Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, Banff, AB, Canada, 6–10 October 2003; pp. 217–224. [Google Scholar] [Green Version]
  5. Brown, M.S.; Seales, W.B. Low-Cost and Easily Constructed Large Format Display System. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.108.7586 (accessed on 9 September 2018).
  6. Yuen, N.P.; Thibault, W.C. Inexpensive immersive projection. In Proceedings of the 2008 IEEE Virtual Reality Conference, Reno, NV, USA, 8–12 March 2008; pp. 237–240. [Google Scholar]
  7. Astre, B.; Sarry, L.; Lohou, C.; Zeghers, E. Automatic calibration of a single-projector catadioptric display system. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  8. Zhao, L.; Weng, D.; Li, D. The auto-geometric correction of multi-projector for cylindrical surface using Bézier patches. J. Soc. Inf. Disp. 2014, 22, 473–481. [Google Scholar] [CrossRef]
  9. Boroomand, A.; Sekkati, H.; Lamm, M.; Clausi, D.A.; Wong, A. Saliency-guided projection geometric correction using a projector-camera system. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2951–2955. [Google Scholar]
  10. Park, H.; Lee, M.H.; Seo, B.K.; Park, J.I. Undistorted projection onto dynamic surface. In Advances in Image and Video Technology; Springer: Berlin, Germany, 2006; pp. 582–590. [Google Scholar]
  11. Park, H.; Lee, M.H.; Seo, B.K.; Park, J.I.; Jeong, M.S.; Park, T.S.; Lee, Y.; Kim, S.R. Simultaneous geometric and radiometric adaptation to dynamic surfaces with a mobile projector-camera system. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 110–115. [Google Scholar] [CrossRef]
  12. Yang, T.J.; Tsai, Y.M.; Chen, L.G. Smart display: A mobile self-adaptive projector-camera system. In Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011. [Google Scholar]
  13. Ito, K.; Takahashi, T.; Aoki, T. A geometric correction method using stereo vision for projected images. In Proceedings of the First Asian Conference on Pattern Recognition, Beijing, China, 28–28 November 2011; pp. 515–519. [Google Scholar]
  14. Falcao, G.; Hurtos, N.; Massich, J.; Fofi, D. Projector-camera calibration toolbox. In Erasmus Mundus Masters in Vision and Robotics; University of Girona: Girona, Spain, 2009. [Google Scholar]
  15. Sadlo, F.; Weyrich, T.; Peikert, R.; Gross, M. A practical structured light acquisition system for point-based geometry and texture. In Proceedings of the 2005 Eurographics/IEEE VGTC Symposium Point-Based Graphics, Stony Brook, NY, USA, 21–22 June 2005; pp. 89–145. [Google Scholar]
  16. Liao, J.; Cai, L. A calibration method for uncoupling projector and camera of a structured light system. In Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Xi’an, China, 2–5 July 2008; pp. 770–774. [Google Scholar]
  17. Moreno, D.; Taubin, G. Simple, accurate, and robust projector-camera calibration. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Zurich, Switzerland, 13–15 October 2012; pp. 464–471. [Google Scholar]
  18. Anwar, H.; Din, I.; Park, K. Projector calibration for 3D scanning using virtual target images. Int. J. Precis. Eng. Manuf. 2012, 13, 125–131. [Google Scholar] [CrossRef]
  19. Draréni, J.; Roy, S.; Sturm, P. Geometric video projector auto-calibration. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops, Miami, FL, USA, 20–21 June 2009; pp. 39–46. [Google Scholar]
  20. Wilson, A.D.; Benko, H. Projectors and Depth Cameras for Deviceless Augmented Reality and Interaction. U.S. Patent US8730309B2, 20 May 2014. [Google Scholar]
  21. Benko, H.; Costa, R.J.J.; Wilson, A.D. Providing an Interactive Experience Using a 3D Depth Camera and a 3D Projector. U.S. Patent US9329469B2, 3 May 2016. [Google Scholar]
  22. Wilson, A.D.; Benko, H. Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 3–6 October 2010; pp. 273–282. [Google Scholar]
  23. Raskar, R.; Beardsley, P. A self-correcting projector. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; Volume 2. [Google Scholar]
  24. Sukthankar, R.; Mullin, M.D. Automatic keystone correction for camera-assisted presentation interfaces. In Advances in Multimodal Interfaces—ICMI 2000; Springer: Berlin, Germany, 2000; pp. 607–614. [Google Scholar]
  25. Duran-Ramirez, V.M.; Malacara-Doblado, D. Keystone aberration correction in overhead projectors. Appl. Opt. 2004, 43, 4123–4126. [Google Scholar] [CrossRef]
  26. Sukthankar, R.; Stockton, R.G.; Mullin, M.D. Smarter presentations: Exploiting homography in camera-projector systems. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 1, pp. 247–253. [Google Scholar]
  27. Harville, M.; Culbertson, B.; Sobel, I.; Gelb, D.; Fitzhugh, A.; Tanguay, D. Practical methods for geometric and photometric correction of tiled projector. In Proceedings of the 2006 IEEE Computer Vision and Pattern Recognition Workshop, New York, NY, USA, 17–22 June 2006. [Google Scholar]
  28. Xie, C.; Wang, Q.; Cheng, W. Simple auto-geometric correction for non-planar projection. In Proceedings of the International Conference on Automatic Control and Artificial Intelligence (ACAI 2012), Xiamen, China, 3–5 March 2012. [Google Scholar]
  29. Sajadi, B.; Majumder, A. Markerless view-independent registration of multiple distorted projectors on extruded surfaces using an uncalibrated camera. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1307–1316. [Google Scholar] [CrossRef] [PubMed]
  30. Sajadi, B.; Majumder, A. Auto-calibration of cylindrical multi-projector systems. In Proceedings of the 2010 IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20–26 March 2010; pp. 155–162. [Google Scholar]
  31. Sajadi, B.; Majumder, A. Autocalibration of multiprojector cave-like immersive environments. IEEE Trans. Vis. Comput. Graph. 2012, 18, 381–393. [Google Scholar] [CrossRef] [PubMed]
  32. Siegl, C.; Lange, V.; Stamminger, M.; Bauer, F.; Thies, J. FaceForge: Markerless Non-Rigid Face Multi-Projection Mapping. IEEE Trans. Vis. Comput. Graph. 2017, 23, 2440–2446. [Google Scholar] [CrossRef] [PubMed]
  33. Oliver Bimber, R.R. Spatial Augmented Reality: Merging Real and Virtual Worlds; AK Peters/CRC Press: Boca Raton, FL, USA, 2005. [Google Scholar]
  34. Narita, G.; Watanabe, Y.; Ishikawa, M. Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker. IEEE Trans. Vis. Comput. Graph. 2017, 23, 1235–1248. [Google Scholar] [CrossRef] [PubMed]
  35. Grundhöfer, A.; Iwai, D. Recent Advances in Projection Mapping Algorithms, Hardware and Applications. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2018; Volume 37, pp. 653–675. [Google Scholar]
  36. Setkov, A.; Gouiffès, M.; Jacquemin, C. Color invariant feature matching for image geometric correction. In Proceedings of the 6th International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications, Berlin, Germany, 6–7 June 2013; p. 7. [Google Scholar]
  37. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  38. Kuglin, C. The phase correlation image alignment method. In Proceedings of the IEEE International Conference on Cybernetics and Society, San Francisco, CA, USA, 23–25 September 1975. [Google Scholar]
  39. Takita, K.; Aoki, T.; Sasaki, Y.; Higuchi, T.; Kobayashi, K. High-accuracy subpixel image registration based on phase-only correlation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2003, 86, 1925–1934. [Google Scholar]
  40. Takita, K.; Muquit, M.A.; Aoki, T.; Higuchi, T. A sub-pixel correspondence search technique for computer vision applications. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2004, 87, 1913–1923. [Google Scholar]
  41. Shibahara, T.; Aoki, T.; Nakajima, H.; Kobayashi, K. A sub-pixel stereo correspondence technique based on 1D phase-only correlation. In Proceedings of the IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; Volume 5. [Google Scholar]
  42. Qu, X.; Soheilian, B.; Habets, E.; Paparoditis, N. Evaluation of SIFT and SURF for vision based localization. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 685. [Google Scholar] [CrossRef]
  43. Li, T.; Hu, F.; Geng, Z. Geometric calibration of a camera-projector 3D imaging system. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, Hong Kong, China, 11–12 December 2011. [Google Scholar]
  44. Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2008. [Google Scholar]
  45. Artal, P.; Benito, A.; Tabernero, J. The human eye is an example of robust optical design. J. Vis. 2006, 6. [Google Scholar] [CrossRef] [PubMed]
  46. Jafari, O.H.; Mitzel, D.; Leibe, B. Real-time RGB-D based people detection and tracking for mobile robots and head-worn cameras. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; pp. 5636–5643. [Google Scholar]
  47. Tian, L.; Li, M.; Hao, Y.; Liu, J.; Zhang, G.; Chen, Y.Q. Robust 3D Human Detection in Complex Environments with Depth Camera. IEEE Trans. Multimed. 2018, 20, 2249–2261. [Google Scholar] [CrossRef]
  48. Suzuki, S. Topological structural analysis of digitized binary images by border following. Comput. Vis. Graph. Image Process. 1985, 30, 32–46. [Google Scholar] [CrossRef]
49. Vandevoorde, D. The maximal rectangle problem. Dr. Dobb’s J. 1998, 23, 30–32. [Google Scholar]
  50. Brown, D.C. Decentering distortion of lenses. Photogramm. Eng. Remote Sens. 1966, 32, 442–462. [Google Scholar]
  51. Freeman, M.; Champion, M.; Madhavan, S. Scanned laser pico-projectors: Seeing the big picture (with a small device). Opt. Photonics News 2009, 20, 28–34. [Google Scholar] [CrossRef]
Figure 1. A practical example of our DIW system. We created a screen in the wild with dynamic and non-planar characteristics using UAVs and a lightweight canvas. Our system quickly predicts and corrects the geometric distortions of the projector.
Figure 2. An example of the relation between the depth camera and projector coordinates, where the affine transform matrix M_dp transforms the depth camera coordinates (P_d) into the projector coordinates (P_p).
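To make the mapping in Figure 2 concrete, the minimal Python/NumPy sketch below back-projects a depth pixel into 3D depth camera coordinates with a pinhole model and then applies the affine transform [R_dp | T_dp]. The intrinsic parameters fx, fy, cx, cy are placeholders, and the R_dp/T_dp numbers are borrowed from Table 2 purely for illustration; this is not a statement of the paper's implementation.

    import numpy as np

    def depth_pixel_to_camera_point(u, v, depth_mm, fx, fy, cx, cy):
        # Back-project depth pixel (u, v) with depth in millimeters into 3D
        # depth camera coordinates using a pinhole model (placeholder intrinsics).
        z = float(depth_mm)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.array([x, y, z])

    def depth_to_projector(P_d, R_dp, T_dp):
        # Affine mapping of Figure 2: P_p = R_dp @ P_d + T_dp.
        return R_dp @ P_d + T_dp

    # Placeholder depth camera intrinsics (not values from the paper).
    fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0
    P_d = depth_pixel_to_camera_point(300, 200, 1500.0, fx, fy, cx, cy)

    # Calibration values borrowed from Table 2 (proposed method) for illustration.
    R_dp = np.array([[0.8856, 0.0116, 0.0109],
                     [0.0102, 0.8583, 0.0072],
                     [0.0091, 0.0142, 0.9998]])
    T_dp = np.array([16.7414, 17.2027, 278.4640])
    P_p = depth_to_projector(P_d, R_dp, T_dp)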
Figure 3. An example of the sampling procedure for calibration between the projector image coordinates and the depth camera coordinates.
Figure 4. An example of generating chessboard patterns at 25 different positions. Our calibration system projects the chessboard pattern while shifting it 5 times in steps of 20 pixels in both the horizontal and vertical directions, so that a total of 1350 samples are gathered in this process.
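A sketch of the sampling bookkeeping behind Figure 4 is given below, using OpenCV to detect chessboard corners in 25 shifted patterns. The projector resolution, square size, and the 9 × 6 inner-corner grid are assumptions; with 54 corners per pattern, 25 positions yield 25 × 54 = 1350 samples, which is consistent with the count in the caption.

    import numpy as np
    import cv2

    PROJ_W, PROJ_H = 1920, 1080   # assumed projector resolution
    CORNERS = (9, 6)              # assumed inner-corner grid: 9 x 6 = 54 corners
    SQUARE = 60                   # assumed square size in projector pixels
    STEP = 20                     # shift step from the Figure 4 caption

    def make_chessboard(offset_x, offset_y):
        # Render a chessboard into the projector image at the given offset.
        img = np.full((PROJ_H, PROJ_W), 255, np.uint8)
        for r in range(CORNERS[1] + 1):
            for c in range(CORNERS[0] + 1):
                if (r + c) % 2 == 0:
                    y0, x0 = offset_y + r * SQUARE, offset_x + c * SQUARE
                    img[y0:y0 + SQUARE, x0:x0 + SQUARE] = 0
        return img

    samples = []
    for i in range(5):            # 5 horizontal positions
        for j in range(5):        # 5 vertical positions -> 25 patterns in total
            pattern = make_chessboard(100 + i * STEP, 100 + j * STEP)
            # In the real system the pattern is projected and then observed by
            # the depth camera; here the corners are detected in the rendered
            # pattern itself, only to illustrate the sample bookkeeping.
            found, corners = cv2.findChessboardCorners(pattern, CORNERS)
            if found:
                samples.append(corners.reshape(-1, 2))

    total = sum(len(s) for s in samples)  # 25 x 54 = 1350 when every detection succeeds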
Figure 5. An example of the projection surface mask M_s(x, y) obtained by Equation (4). The left image is a captured image containing the projection and its background. The red region in the center image represents the projection surface mask. The right image shows the projection surface mask (colored regions) in 3D projector coordinates with five projection rays (white lines) at the boundaries and the center.
Figure 6. An example of maximum rectangular region detection for the correction area: (a) the detected boundary on the projection surface mask depth image M_s(x, y); (b) 3D points of the detected boundary in the 3D user coordinates; (c) the rasterized image of the projection area; (d) the final maximum rectangular region calculated from the correction area.
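Figure 6 combines a boundary detection step (the paper cites Suzuki's border following [48]) with a maximal rectangle search [49]. The hedged sketch below uses OpenCV's findContours, which implements border following, together with the standard row-by-row largest-rectangle-in-histogram technique on a binary mask; the file name and the exact rasterization of the mask are placeholders, not details from the paper.

    import numpy as np
    import cv2

    def largest_rectangle_in_mask(mask):
        # Row-by-row maximal rectangle search over a binary mask using the
        # largest-rectangle-in-histogram technique.
        h, w = mask.shape
        heights = np.zeros(w, dtype=int)
        best = (0, 0, 0, 0, 0)  # (area, x, y, width, height)
        for y in range(h):
            heights = np.where(mask[y] > 0, heights + 1, 0)
            stack = []
            for x in range(w + 1):
                cur = heights[x] if x < w else 0
                while stack and heights[stack[-1]] >= cur:
                    top = stack.pop()
                    height = int(heights[top])
                    left = stack[-1] + 1 if stack else 0
                    width = x - left
                    if height * width > best[0]:
                        best = (height * width, left, y - height + 1, width, height)
                stack.append(x)
        return best[1:]  # x, y, width, height of the maximal axis-aligned rectangle

    # Binary projection surface mask rasterized in user coordinates
    # (placeholder file name; in the paper this corresponds to Figure 6c).
    mask = cv2.imread("projection_mask.png", cv2.IMREAD_GRAYSCALE)
    # Border following (Suzuki's algorithm) as implemented by OpenCV >= 4.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, rect_w, rect_h = largest_rectangle_in_mask(mask)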
Figure 7. An example of generating the dense projection depth image by recursive Gaussian weight interpolation.
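The exact recursion of Figure 7 is not reproduced here; the sketch below shows one plausible reading, assuming normalized convolution: unknown depth pixels are repeatedly filled with a Gaussian-weighted average of the currently known depths until the projection depth image is dense. The kernel width, iteration count, and example resolution are placeholders.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def densify_depth(sparse_depth, sigma=2.0, max_iter=10):
        # One plausible reading of "recursive Gaussian weight interpolation"
        # (an assumption, not the paper's exact algorithm): repeatedly fill
        # unknown pixels with a Gaussian-weighted average of known depths
        # via normalized convolution until the depth image is dense.
        depth = sparse_depth.astype(np.float64).copy()
        known = (depth > 0).astype(np.float64)
        for _ in range(max_iter):
            if known.all():
                break
            num = gaussian_filter(depth * known, sigma)
            den = gaussian_filter(known, sigma)
            filled = np.divide(num, den, out=np.zeros_like(num), where=den > 1e-6)
            newly = (known == 0) & (den > 1e-6)
            depth[newly] = filled[newly]
            known[newly] = 1.0
        return depth

    # Example: a sparse 424 x 512 projection depth image with 5% known pixels.
    rng = np.random.default_rng(0)
    sparse = np.where(rng.random((424, 512)) < 0.05, 1500.0, 0.0)
    dense = densify_depth(sparse)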
Figure 8. Test results of the line straightness error ratio, captured using the camera at the user position. (a) The distorted screen. (b) The undistorted screen.
Figure 9. 2D histograms of the re-projection errors, calculated with respect to the projector image coordinates: (a) the RGB camera-based method (std = 0.027); (b) the depth camera-based method (std = 0.029); (c) the proposed method (std = 0.022).
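Figure 9 reports 2D histograms and standard deviations of re-projection errors in projector image coordinates. A minimal sketch of how such a plot and statistic could be computed is shown below with synthetic data; how the paper aggregates the two error components into a single std value is not specified here, so the sketch simply takes the standard deviation over both components.

    import numpy as np
    import matplotlib.pyplot as plt

    def reprojection_error_stats(predicted_px, measured_px, bins=50):
        # predicted_px, measured_px: (N, 2) projector-image coordinates.
        errors = predicted_px - measured_px
        std = errors.std()  # standard deviation over both error components
        hist, xedges, yedges = np.histogram2d(errors[:, 0], errors[:, 1], bins=bins)
        return std, hist, (xedges, yedges)

    # Synthetic data for illustration only (not the paper's measurements).
    rng = np.random.default_rng(0)
    measured = rng.uniform(0, 1920, size=(1000, 2))
    predicted = measured + rng.normal(0.0, 0.025, size=(1000, 2))
    std, hist, edges = reprojection_error_stats(predicted, measured)
    plt.imshow(hist.T, origin="lower")
    plt.title(f"re-projection error std = {std:.3f}")
    plt.show()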
Figure 10. The line images used for the line straightness error ratio test. (a) The image with seven horizontal lines. (b) The image with fifteen vertical lines.
Figure 11. DIW results from dynamic non-planar surfaces generated from UAVs and a lightweight canvas. The left column depicts the result obtained with the proposed method and the right column shows the output of a conventional projector.
Figure 12. DIW results from dynamic non-planar surfaces manually generated by a human. The left and right columns depict the result obtained with the proposed method and the output of a conventional projector, respectively.
Table 1. Correction parameters of the lens distortion for Brown’s distortion model on Kinect v2, provided by the manufacturer.
Correction Parameter    Value
K_1    0.09558
K_2    −0.26799
K_3    0.08755
P_1    0.00000
P_2    0.00000
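Brown’s distortion model [50] with the Table 1 coefficients can be applied to normalized image coordinates as in the sketch below. This is the standard radial-plus-tangential form (as used, e.g., in OpenCV [44]); whether the correction is applied in exactly this direction in the paper’s pipeline is an assumption on my part.

    # Kinect v2 lens distortion coefficients from Table 1 (Brown's model [50]).
    K1, K2, K3 = 0.09558, -0.26799, 0.08755
    P1, P2 = 0.00000, 0.00000

    def distort_normalized(x, y):
        # Standard Brown radial + tangential distortion applied to normalized
        # image coordinates (pixel coordinates divided by the focal length).
        r2 = x * x + y * y
        radial = 1.0 + K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3
        x_d = x * radial + 2.0 * P1 * x * y + P2 * (r2 + 2.0 * x * x)
        y_d = y * radial + P1 * (r2 + 2.0 * y * y) + 2.0 * P2 * x * y
        return x_d, y_d

    # Example: a point towards the image corner in normalized coordinates.
    print(distort_normalized(0.4, 0.3))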
Table 2. Calibration results of the proposed method and existing methods.
Method    R_dp (rows separated by semicolons)    T_dp
The proposed method    [0.8856 0.0116 0.0109; 0.0102 0.8583 0.0072; 0.0091 0.0142 0.9998]    [16.7414; 17.2027; 278.4640]
The method in [14]    [0.8801 0.0123 0.0138; 0.0135 0.8553 0.0076; 0.0059 0.0101 0.9967]    [16.7350; 17.1968; 278.4688]
The method in [21]    [0.8871 0.0083 0.0070; 0.0151 0.8595 0.0174; 0.0152 0.0088 1.0050]    [16.7446; 17.2056; 278.4625]
Table 3. Line Straightness Error Ratio results of the proposed correction method.
Test Number    Vertical Lines    Horizontal Lines
1    0.99210    1.00000
2    0.99209    0.99111
3    0.99605    0.99111
4    0.99210    0.99115
5    0.98157    1.00000
Average    0.99078    0.99467
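The precise definition of the line straightness error ratio is not restated here, so the sketch below uses one plausible metric, clearly labeled as an assumption: the chord length between the two endpoints of a detected line divided by its arc length, which equals 1.0 for a perfectly straight line and decreases as the line bends.

    import numpy as np

    def line_straightness_ratio(points):
        # points: (N, 2) pixel coordinates sampled along one detected line,
        # ordered from one endpoint to the other. Assumed metric (not taken
        # from the paper): chord length between the endpoints divided by the
        # arc length of the detected curve; 1.0 means perfectly straight.
        points = np.asarray(points, dtype=np.float64)
        segments = np.diff(points, axis=0)
        arc_length = np.linalg.norm(segments, axis=1).sum()
        chord = np.linalg.norm(points[-1] - points[0])
        return chord / arc_length if arc_length > 0 else 1.0

    # Averaging the per-line ratios over the seven horizontal and fifteen
    # vertical test lines of Figure 10 would give one row of Table 3.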
Table 4. The correction time (seconds) results of the proposed method and existing methods.
Method    Projection Surface Prediction    Correction    Total
The proposed method    0.012    0.091    0.103
The method in [7]    4.324    5.421    9.745
The method in [9]    3.875    4.273    8.148
