#### *3.2. Capturing the L-EIA and R-EIA from Each of the Two Test 3-D Objects*

As the test objects, two kinds of 3-D objects were computationally generated with 3ds Max, each composed of a pair of 2-D images with different depths. As seen in Figure 11a, one of them was the 3-D object composed of the two English letters 'V' and 'R', which were located at the depth planes of 50 mm and 100 mm, respectively, and the other was the 3-D object composed of the numeric character '3' and the English letter 'D', which were also located at the same depth planes of 50 mm and 100 mm, respectively, from the pickup lens array.

**Figure 11.** (**a**) Two test 3-D objects composed of two pairs of 2-D images with different depths ('VR' and '3D') and (**b**) on-axis integral imaging pickup system.

Here, the 'V' and '3' images were colored red, whereas the 'R' and 'D' images were colored green for visual separation. In addition, the two 3-D objects 'VR' and '3D' were assigned to the left and right views of the DV 3-D display, respectively.

These two test 3-D objects were computationally picked up with the on-axis integral imaging system of Figure 11b. As mentioned above, the 'V' and '3' images were located at a distance of 50 mm, while the 'R' and 'D' images were located at a distance of 100 mm from the pickup lens array, giving a depth difference of 50 mm between them. The pickup camera was located at a distance of 160 mm from the lens array. The number of lenses of the lens array, the lens pitch, and the focal length of an elemental lens used in the pickup process are listed in Table 1.

Figure 12 shows the two EIAs captured from the two 3-D objects 'VR' and '3D', which were designated as the L-EIA and R-EIA, respectively. That is, the 'VR' image was used for the left viewing zone, while the '3D' image was used for the right viewing zone.

**Figure 12.** Two EIAs captured from each 3-D object of the 'VR' and '3D': (**a**) L-EIA captured from the 'VR', (**b**) R-EIA captured from the '3D'.

#### *3.3. Generation of the DV-EIA from the L-EIA and R-EIA Based on the SSIM Method*

The two captured EIAs, the L-EIA and R-EIA, were then synthesized into a single DV-EIA on the sub-image plane based on the SSIM method. Figure 13 shows the process of synthesizing the DV-EIA from the picked-up L-EIA and R-EIA.

**Figure 13.** Generation of the DV-EIA with L-EIA and R-EIA based on SSIM: (**a**) L-EIA and R-EIA, (**b**) left sub-image array (L-SIA) and right sub-image array (R-SIA), (**c**) dual-view right sub-image array (DV-SIA), and (**d**) DV-EIA.

As seen in Figure 13a,b, the two captured EIAs, the L-EIA and R-EIA, were transformed into their corresponding sub-image arrays (SIAs), the L-SIA and R-SIA, by using the EIA-to-SIA transformation (EST) method. The DV-SIA could then be synthesized from the L-SIA and R-SIA based on the SSIM method of Figure 4. That is, odd- and even-numbered components of the L-SIs and R-SIs were selectively chosen from the L-SIA and R-SIA and mapped into the left and right half parts of the dual-view SIA (DV-SIA), respectively.
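The EST method is not detailed in this section; a minimal sketch of the commonly used EIA-to-SIA pixel rearrangement, in which sub-image (u, v) gathers pixel (u, v) from every elemental image, might look as follows (the array layout and the `eia_to_sia` name are our illustrative assumptions):

```python
import numpy as np

def eia_to_sia(eia, num_x, num_y):
    """EIA-to-SIA (EST) rearrangement: sub-image (u, v) collects pixel
    (u, v) from every elemental image, so each sub-image becomes a
    parallel-projection view of the scene.

    eia: 2-D grayscale array of num_y x num_x elemental images.
    """
    h, w = eia.shape
    py, px = h // num_y, w // num_x           # pixels per elemental image
    # Split into (lens_y, pixel_y, lens_x, pixel_x) blocks, then swap the
    # lens and pixel axes to regroup by pixel index instead of lens index.
    blocks = eia.reshape(num_y, py, num_x, px)
    sia = blocks.transpose(1, 0, 3, 2).reshape(py * num_y, px * num_x)
    return sia
```

Conveniently, applying the same rearrangement with the per-lens pixel counts passed as the array counts inverts it, which is one way the inverse transform used later in this section could be realized.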

Figure 13c shows the DV-SIA synthesized from the L-SIA and R-SIA, where the numbers of L-SIs and R-SIs composing the DV-SIA are 12 and 13, respectively, along the *x*-direction, corresponding to α = 0.5 in Equation (1). This DV-SIA was then transformed into the corresponding DV-EIA by using the inverse EST (IEST) method, as shown in Figure 13d. Since the DV-EIA is an EIA multiplexed from two EIAs, the L-EIA and R-EIA, its resolution and number of EIs are the same as those of the L-EIA and R-EIA.
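The selective mapping step can be sketched as follows, assuming (since the exact indexing rule is not spelled out here) that the first round(α·N) sub-image columns are taken from the L-SIA and the remainder from the R-SIA; the 4-D array layout and the `synthesize_dv_sia` name are illustrative:

```python
import numpy as np

def synthesize_dv_sia(l_sia, r_sia, alpha=0.5):
    """Sketch of the SSIM synthesis step (assumed indexing rule).

    l_sia, r_sia: 4-D stacks of sub-images, shaped
    (sub_y, sub_x, pix_y, pix_x)."""
    sub_x = l_sia.shape[1]
    n_left = int(round(alpha * sub_x))       # sub-image columns from the left view
    dv_sia = r_sia.copy()                    # start from the right-view SIA
    dv_sia[:, :n_left] = l_sia[:, :n_left]   # overwrite left part with L-SIs
    return dv_sia
```

With 25 sub-image columns along the *x*-direction and α = 0.5, this yields 12 L-SI columns and 13 R-SI columns, matching the counts quoted above.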

#### *3.4. Dual-View 3-D Display of the DV-EIA*

The DV-EIA can be reconstructed into two different 3-D object images of 'VR' and '3D' in the left and right viewing directions, respectively, on the proposed CMA-DPII system, which is shown in Figure 10.

As seen in Figure 10, the fabricated 22 CMA was located at a distance of 800 mm from the beam projector (Model: LG PF85K). Since the pitch of the elemental convex mirror and the distance between the projector and the CMA were 7.47 mm and 800 mm, respectively, the far-field condition of *P*/*L* = 0.009 < 0.01 for the PRA method was satisfied. In addition, in the experiments, the parameters α and *K* were set to 0.5 and 15, respectively, and arctan(1/2(*f#*)) of the CM was calculated to be 27.0°. Thus, with these experimental parameters, each of the left and right viewing angles was calculated to be 20° from Equation (5).
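The far-field figure quoted above can be verified with a few lines; note that the f-number back-calculated from the 27.0° half-angle is an inferred value, not one stated in the text:

```python
import math

# Far-field check for the PRA method: elemental-mirror pitch P = 7.47 mm,
# projector-to-CMA distance L = 800 mm (values from the text).
P, L = 7.47, 800.0
ratio = P / L                  # ~0.0093, below the 0.01 far-field bound

# arctan(1/(2*f#)) = 27.0 deg implies f# ~ 0.98 (inferred, not stated).
f_number = 1 / (2 * math.tan(math.radians(27.0)))
print(round(ratio, 4), round(f_number, 2))
```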

The DV-EIA generated from the pickup system was then loaded on the projector, and the divergent beam of the projector containing the information of the DV-EIA was projected onto the CMA. At every convex mirror of the CMA, the left and right-view components of the DV-EIs were separated and reflected back into their viewing directions, and two different 3-D scenes were integrated and displayed in their viewing zones. Figure 14 shows the two kinds of 3-D images of 'VR' and '3D' optically reconstructed from the DV-EIA at two different viewing zones on the proposed display system of Figure 10. That is, Figure 14a–c and Figure 14d–f show the optically reconstructed 3-D images of 'VR' and '3D' at the left and right viewing zones, viewed from the viewing angles of −19.5°, −10.0°, −3.0° and 3.0°, 10.0°, 19.5°, respectively.

**Figure 14.** 3-D images of 'VR' viewed from three different viewing angles of (**a**) −21.0°, (**b**) −10.0° and (**c**) −3.0°, and those of '3D' viewed from those of (**d**) 3.0°, (**e**) 10.0° and (**f**) 21.0°, respectively.

As can be seen, in the left-hand 3-D image of 'VR' viewed from the angle of −19.5°, the 'V' and 'R' images partially overlap, where a part of the 'R' image is blocked by the 'V' image because the 'R' object was originally located 50 mm behind the 'V' object in the pickup process. This shows that the reconstructed 'VR' image retains its depth information. In the center image of 'VR' in Figure 14b, viewed from the angle of −10.0° in the left viewing zone, the 'V' and 'R' images look aligned in parallel, whereas in the right-hand image of 'VR', the 'V' and 'R' images become slightly separated from each other since they have different depths. That is, as we move from the left to the center and right directions, the distance between the two object images of 'V' and 'R' increases, which confirms that these object images carry depth information. In addition, Figure 14d–f show the optically reconstructed 3-D images of '3D' at the right viewing zone from the DV-EIA. The reconstructed '3' and 'D' images at the three different viewing directions behave almost the same as those in the case of the 'VR'. These experimental results of Figure 14 confirm that the proposed system can be applied to a practical dual-view 3-D display.

Moreover, the viewing angles were measured to range from −19.5° to −3.0° in the left viewing zone and from 3.0° to 19.5° in the right viewing zone. These results also confirm that the proposed system can provide relatively large viewing angles with a simple optical configuration composed of only a projector and a CMA.

#### *3.5. Experiments with Two Volumetric 3-D Objects*

To confirm the feasibility of the proposed system in practical applications, two volumetric 3-D objects, 'Dice' and 'Car', were used as the test objects in the experiments. Figure 15 shows these two volumetric test objects, whose sizes are 25 mm × 25 mm × 25 mm and 25 mm × 21 mm × 50 mm, respectively.

**Figure 15.** Two test volumetric 3-D objects of '**Dice**' and '**Car**'.

In the experiments, the same pickup lens array was employed; thus, the pitch and focal length of the pickup lens array were again 1.63 mm and 3.13 mm, respectively, and the lens array was located at a distance of 50 mm from each of the 3-D objects 'Dice' and 'Car'. Here, the 'Dice' and 'Car' objects were used for the left and right views, respectively, in the dual-view 3-D display.

As seen in Equation (5), the viewing angle changes depending on θ*kr*. Under the condition that the pitch and focal length of the convex mirror are given by 7.32 mm and 7.47 mm, respectively, and the CMA is located at a distance of 800 mm from the projector, the maximum *K* value is 38 since it is limited to half the number of lenses (77) along the *x*-direction. As the *K* value increases, the corresponding viewing angle decreases, and vice versa, which confirms that the *K* value is inversely related to the viewing angle.
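The stated bound on *K* follows from simple integer arithmetic; a one-line check (the variable names are ours):

```python
# Maximum K is limited by half the number of lenses along the x-direction.
num_lenses_x = 77
k_max = num_lenses_x // 2      # integer half of 77
print(k_max)                   # 38, matching the value quoted in the text
```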

Figure 16 shows the DV-EIAs synthesized with the SSIM method from the L-EIA and R-EIA captured from the 'Dice' and 'Car' objects, respectively, for the three cases of *K* = 15, 11, and 8.

Here, the DV-EIA with 1920 × 1080 pixels is composed of 77 × 44 EIs, where each EI has a resolution of 25 × 25 pixels. Depending on the *K* value, three kinds of DV-EIAs were synthesized based on the SSIM method, as shown in Figure 16.

**Figure 16.** Synthesized DV-EIA for the 'Dice' and 'Car' objects with different *K* values: (**a**) *K* = 15, (**b**) *K* = 11 and (**c**) *K* = 8.

These DV-EIAs were then reconstructed on the proposed display system. Figure 17 shows the experimental results of the volumetric 3-D object images of 'Dice' and 'Car' reconstructed at the left and right viewing zones, respectively, for the three *K* values.

**Figure 17.** Reconstructed dual-view 3-D object images of 'Dice' and 'Car' at the left and right viewing zones, respectively, for three cases of the *K* values.

As seen in Figure 17, for the case of *K* = 15, the reconstructed object images of 'Dice' and 'Car' at the left and right viewing zones become truncated and disappear just beyond the viewing angles of −19.5° and 19.5°, respectively. These results show that the viewing angles of this dual-view 3-D display system were measured to range from −19.5° to −3.0° and from 3.0° to 19.5° for the left and right viewing zones, respectively. These measured left and right viewing angles of −19.5° and 19.5° are almost the same as the calculated values of −20° and 20°, where the error ratio between the calculated and measured viewing angles was calculated to be around 3.0%. Moreover, as we move from the left to the center and from the center to the right, different perspectives of the object images of 'Dice' and 'Car' can be viewed, as seen in Figure 17, which confirms the volumetric 3-D reconstruction of the 'Dice' and 'Car' objects in the proposed system.

For the case of *K* = 11, the left and right viewing angles were measured to range from −22.0° to −3.0° and from 3.0° to 22.0°, respectively, as seen in Figure 17, while the corresponding values calculated with Equation (5) are −22.2° and 22.2°, respectively. Thus, the error ratio between the calculated and measured viewing angles was calculated to be around 3.6%. Note that the left and right viewing angles increased by about 3° as the *K* value decreased from 15 to 11.

For the case of *K* = 8, the left and right viewing angles were calculated to be −23.5° and 23.5°, respectively, and were measured to range from −23.0° to −3.0° and from 3.0° to 23.0°, respectively. These measured left and right viewing angles of −23.0° and 23.0° are almost the same as the calculated values of −23.5° and 23.5°; thus, for the case of *K* = 8, the error ratio between the calculated and measured viewing angles was calculated to be around 2.1%.
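The error-ratio figure for the *K* = 8 case is consistent with taking the relative difference between the calculated and measured angles; a small check (the formula is our assumption, as the text does not define the error ratio explicitly):

```python
def error_ratio(calculated, measured):
    # Assumed definition: |calculated - measured| / |calculated|, in percent.
    return abs(calculated - measured) / abs(calculated) * 100.0

# K = 8 case: calculated 23.5 deg, measured 23.0 deg.
print(round(error_ratio(23.5, 23.0), 1))  # ~2.1, matching the quoted 2.1%
```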

As seen in Figure 17, there are three kinds of viewing zones: active, inactive, and overlapped regions. The active zones, marked in red, represent the DV zones, where the displayed 3-D objects can be watched. In the other zones, including the inactive and overlapped regions, a 3-D object cannot be properly viewed. These experimental results confirm that two different volumetric 3-D object images with their changing perspectives can be successfully reconstructed in both the left and right viewing zones, just as in the previous experiments, and that each of the left and right viewing angles can be changed depending on the *K* value.

Figure 18 shows two observers sitting in the left and right viewing directions, each separately watching one of the two different 3-D object images of 'Dice' and 'Car' reconstructed from the proposed system, where the projector is located at a distance of 800 mm from the CMA and the two observers are seated at a distance of 1200 mm from the CMA.


**Figure 18.** Photos of two observers watching two different 3-D object images of 'Dice' and 'Car' at the left and right viewing zones with the optical setup of the proposed system.

As seen in Figure 18, each of the left and right viewers can see only the 'Dice' or 'Car' object images reconstructed in their respective viewing zones, which means that two different 3-D object images can be separately viewed by two observers sitting in the left and right viewing zones. Experimental results for the two test 3-D object images of 'Dice' and 'Car' with the proposed system were also recorded and attached to Figure 18 as a video clip. In the video clip, the volumetric 3-D object images of 'Dice' and 'Car' with their changing perspectives, viewed from −23.0° to −3.0° and from 3.0° to 23.0°, respectively, were recorded for the case of *K* = 11. The video frame rate and total recording time are 24 fps (frames per second) and 34 s, respectively, and each of the left and right 3-D images of 'Dice' and 'Car' was recorded for 17 s.

As seen in the experimental results of Figures 17 and 18, the resolution and visibility of the reconstructed 3-D images still look somewhat degraded because the fill factor of the elemental convex mirror is low [33]. Thus, using elemental mirrors with higher fill factors could improve the resolution and visibility of those 3-D images.

### **4. Conclusions**

In this paper, a new type of viewing-angle-enhanced dual-view 3-D display based on the CMA-DPII system has been proposed. Two kinds of elemental image arrays (EIAs) are captured from the two 3-D objects and synthesized into a single dual-view EIA (DV-EIA) with the selective sub-image mapping (SSIM) scheme. A divergent beam of the projector containing this DV-EIA image is then directly projected onto the CMA, where the left and right-view components of the DV-EIA are separately reflected back into their viewing directions and finally form the two different 3-D images displayed in their viewing zones. From the ray-optical analysis with the parallel-ray-approximation method and the successful experimental results with the test 3-D objects on an implemented 22 DV 3-D display prototype, the feasibility of the proposed system in practical applications has been confirmed.

**Author Contributions:** Conceptualization, H.-M.C., J.-G.C., E.-S.K.; methodology, H.-M.C., J.-G.C.; validation H.-M.C., J.-G.C.; formal analysis, H.-M.C.; writing-original draft preparation, H.-M.C., J.-G.C.; writing-review and editing, E.-S.K.; funding acquisition, E.-S.K.

**Funding:** This research was partially funded by Basic Science Research Program through the National Research Foundation of Korea (NRF) supported by the Ministry of Education (No. 2018R1A6A1A03025242) and supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC support program (IITP-2017-01629) supervised by the IITP.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

