Article

Profiling Director’s Style Based on Camera Positioning Using Fuzzy Logic

by Hartarto Junaedi 1,2,3,*,†,‡, Mochamad Hariadi 1,2,‡ and I Ketut Eddy Purnama 1,2,‡

1 Department of Electrical Engineering, Institut Teknologi Sepuluh Nopember Surabaya, Surabaya 60111, Indonesia
2 Department of Computer Engineering, Institut Teknologi Sepuluh Nopember Surabaya, Surabaya 60111, Indonesia
3 Department of Information System, Sekolah Tinggi Teknik Surabaya, Surabaya 60284, Indonesia
* Author to whom correspondence should be addressed.
† Current address: Ngagel Jaya Tengah 73-77, Surabaya 60284, Indonesia.
‡ These authors contributed equally to this work.
Computers 2018, 7(4), 61; https://doi.org/10.3390/computers7040061
Submission received: 6 September 2018 / Revised: 29 October 2018 / Accepted: 2 November 2018 / Published: 14 November 2018

Abstract: Machinima is a computer imaging technology typically used in games and animation. It renders all movie cast and scene properties in a virtual environment by means of camera positioning. Since cinematography is complementary to machinima, it is possible to simulate a director’s style via various camera placements in this environment. In a gaming application, the director’s style is one of the most impressive cinematic factors: a whole different gaming experience can be obtained by applying different styles to the same scene. This paper describes a system capable of automatically profiling a director’s style using fuzzy logic. We employed 19 extracted variables and 15 additional calculated variables from the animation data to profile two different directors’ styles across five scenes. Area plots and histograms were generated, and, by analyzing the histograms, the different directors’ styles could be classified.

1. Introduction

Machinima is a technology for making films or game scenes in a real-time virtual environment. Machinima is not a single piece of software; rather, it is a production approach. Using machinima, we can create a virtual environment with all the desired characters and animate as many actions as necessary. Nowadays, 3D application developers for computer games are increasingly concerned with providing a natural experience in the virtual environment: using various algorithms and methods, they attempt to build every cinematography component as naturally as possible to obtain satisfactory results. Machinima supports this goal, and it is used not only for making films but also for game applications. Machinima [1] uses graphics technology to render 3D images in real time, from which a cinematic product can be produced. Machinima is also a low-cost alternative to full film production [2]. Nevertheless, producing higher-quality cinematic products still requires research on camera control languages, style incorporation in camera placement, and related topics. In the real world, a director—who gives life to a cinematic product—often must create a storyboard to visualize the desired idea [3].
An important element in machinima is the camera controller, which defines where the camera should be placed and how it captures images. There are many styles of camera placement, such as the first-person, third-person, or bird’s-eye view perspective. Each style brings out a different trait of a game scene. Every game genre has its unique style, so applying the style of one genre to another gives it a very different characteristic. For example, applying a bird’s-eye style to first-person shooter games—e.g., Doom, Half-Life, and Counter-Strike—produces a completely different character. Similarly, every movie director has his or her unique style: applying a noir style to a romance movie will certainly create a different impression of the movie.
Machinima has several advantages over other techniques: its results are obtained in real time at lower production cost. Even though there are many current studies on machinima—especially on camera positioning—only a few address positioning a camera based on a director’s style, and there is no research yet on profiling a director’s style. Focusing our research on the camera controller, we propose a novel system for measuring a director’s style by profiling it automatically. The objective of this research was to determine whether a virtual camera placement suits a director’s style. Achieving this objective helps animators measure a director’s style automatically while creating animation in a game. Conversely, if the information defining a certain style can be extracted, that style can be applied to a game. Imagine a game with a customized style, so that every player can have his or her favorite style. For instance, the famous Mario Bros. game has been released in many versions and studied by many researchers [4,5]. It is a simple side-scrolling game with a static camera placement; if we could change the camera engine behavior, this game would have a totally different ambience. Imagine the Mario Bros. camera engine coupled with a Role-Playing Game (RPG) camera engine such as Lufia’s, or even an action-adventure camera engine such as that of Assassin’s Creed [6,7]. Figure 1 shows several camera positioning styles from the Mojopahit Kingdom game. The same game scene with different camera positioning styles gives the gamer a different feel.
Recent research in the domain of animation and games indicates that this is an interesting and challenging topic. Machinima is a technology that supports the production of animation and games: a system that uses real-time 3D graphics rendering to produce a virtual cinematic product or to support game development. Computer technology has shifted from 2D to 3D, which in turn affects game technology; the game perspective has likewise changed from 2D to 3D, and the use of 3D technology is expected to reach an ever higher level. Games and animation come ever closer to real conditions: the virtual world is expected to correspond to the real world.
Even though the novelty of this research—measuring a director’s style—is high, we acknowledge that further studies are still needed to support the whole system. Figure 2 presents a general virtual camera placement pipeline; as the figure shows, several processes must be completed before placing a camera.
Usually, in the production of a game or animation, camera placement or movement is done by an animator or a director of photography. However, manual placement of a virtual camera in the virtual environment requires modeling and calculations that must be repeated for each scene, demanding substantial cost and time [8]. Thus, the authors of [9] proposed a basic set of camera controls for computer graphics. Camera control is an accessible yet challenging problem that any developer of interactive 3D graphics applications has encountered, and a central concern in this area is how closely the virtual world resembles the real world. Barry [10] used a multi-objective Particle Swarm Optimization (PSO) algorithm for virtual photography, applying photographic rules such as the rule of thirds, the horizon line, and the point of interest (POI).
There are many methods for placing a virtual camera in a virtual environment. In [11], the authors used PSO to solve the Virtual Camera Composition (VCC) problem with a hybrid strategy: first, candidate camera positions are constrained by predefined restrictions; then, the camera position is computed using PSO over a predetermined area. The resulting output parameters are the camera position, orientation, and field of view (FOV). Drucker [12] proposed constraints for a virtual camera in the virtual environment.
In [13], a camera placement system was proposed that generates two cameras but places only the secondary one; the first camera has a fixed position. The system uses a static behavior tree method. Fanani [14] added artificial intelligence to the camera using behavior trees and the A* algorithm to follow the main actor. Hu [15] suggested a new semiautomatic camera language to control a virtual camera in a machinima environment. Terziman [16] enhanced camera placement for first-person navigation based on input parameters such as height and weight, using a fixed camera. Christianson [17] described established techniques for camera control.
In [18], the authors used the geometry of an interactive 3D environment in which the subject’s motion is not known in advance, together with a flow of narrative elements describing the actions taken in the 3D environment. A narrative element is a component of the conversation that provides relevant information about an action in the story. Viewpoints and transitions are computed in a four-step process: selecting narrative elements, computing director volumes, editing director volumes, and computing transitions. Lino [19] proposed the Toric space, a novel and compact representation for intuitive and efficient virtual camera control, together with an effective viewpoint interpolation technique that ensures the continuity of visual properties along the generated paths. Benini [20] investigated four inherent characteristics of single shots that carry indirect information about scene depth, using a Support Vector Machine (SVM). Ferreira [21] proposed IVE (Intelligent Virtual Environment), a framework whose goal is to run behavioral simulations of virtual agents based on information from the virtual environment; it comprises four modules: IC, IVE, AgentSim, and Visualizer.
Junaedi [22] suggested a multi-behavior agent using Particle Swarm Optimization (PSO), an approach that could also be applied to a virtual camera. In [23,24], the researchers proposed Darshak, a system that automatically constructs a cinematic narrative discourse of a given story in a 3D virtual environment; nine operator variables are proposed for the fixed camera, and the shots generated by Darshak are visualized in a 3D game engine. Lima [25] proposed an intelligent cinematography director for camera control in plot-based storytelling systems: the director selects, in real time, the camera shots that best fit the scenes and presents the content in an interesting and coherent manner, with the knowledge encoded using an SVM. Jaafar [26] modeled the behaviors (goal seeking and obstacle avoidance) of an autonomous agent navigating a virtual environment using a fuzzy controller.
In [27], the authors built a storytelling system architecture from four main modules—scriptwriter, scenographer, director, and cameraman—with the director module as the focus of the research. The system uses an SVM.
Dib [28] explored the effect of perspective view in educational animations of building construction management tasks by comparing the egocentric (first-person) and exocentric (third-person) perspective views. Cherif [29] classified video shots based on the golden ratio of the human body into seven types: extreme long shot (XLS), long shot (LS), medium long shot (MLS), medium shot (MS), medium close-up (MCU), close-up (CU), and extreme close-up (XCU).
Tsai-Yen [30] proposed a virtual system that automatically generates a sequence of camera shots according to the screenplay; the system is decomposed into three modules imitating the roles in a real filmmaking process, and preference parameters carrying the user’s aesthetic style are supplied for each module. He [31] suggested systems for the virtual camera, noting the difficulty of implementing automatic cinematography and using sixteen different modules to do so. Hornung [32] suggested autonomous camera agents for transferring cinematography rules into interactive narratives and games. Burelli [33] created a virtual camera that predicts the camera position from several parameters, with the data analyzed using machine learning.
Burelli [34] proposed camera path planning in virtual environments by modeling both camera movement and orientation with multiple Artificial Potential Fields (APF); the system supports visibility, projection size, and view angle constraints, but suffers from a local minimum problem in complex environments. An improvement of APF is proposed in [35] by prioritizing frame constraints, calculating the initial position from the first vantage angle or relative position, enforcing frame coherence by interpolating the actor trajectory, and dynamically tuning the weights of the frame constraints in the objective function.
Developing a scene may require several cameras rather than one, because the director sometimes needs to emphasize certain actions or properties over others. A single camera gives only one point of view and needs time to move to another position; each additional camera provides a different viewpoint of the same scene. Tamine [36] showed how to measure viewpoint quality, proposing two evaluation approaches based on the nature of the input information: low-level and middle-level methods. Vazquez [37] developed a system that automatically selects a good viewpoint based on image-based modeling to find the optimal view of each component.
A benchmark for virtual camera control was proposed in [38], measuring accuracy, reliability, and initial convergence time. The simulation uses three scene backgrounds (forest, house, and rocky), covering both static and moving objects. This benchmark differs from other research that relies on viewpoint evaluation.
Fuzzy logic has been widely used in research on automation and manufacturing, optimization, and management problems. Lukovac [39] proposed a neuro-fuzzy model for developing a human resource portfolio; the hybrid algorithm combines fuzzy logic and a neural network, with fuzzy-set input variables. Pamucar [40] used a type-2 neuro-fuzzy network to solve a logistics problem, optimizing this nonlinear problem with fuzzy values as the input. Another neuro-fuzzy approach [41] uses an adaptive neuro-fuzzy model to solve the vehicle route selection problem known as the Vehicle Routing Problem (VRP); since the main difficulty is modeling the language, fuzzy sets represent the input and output variables. Fuzzy membership functions can reflect situations in accordance with real life. Pamucar [42] also used a fuzzy logic system for level-crossing selection, so that investment in safety equipment can be included in the automatic control strategy; the results show that the developed fuzzy logic system can learn and imitate expert evaluations, demonstrating a competence level comparable to that of experts. Sremac [43] developed an ANFIS model to determine the economic order quantity; ANFIS is a modern class of hybrid artificial intelligence system, described as an artificial neural network characterized by fuzzy parameters, which combines two concepts of artificial intelligence to exploit the individual strengths of fuzzy logic and neural networks simultaneously. Fuzzy logic is widely used because it works with graded real values rather than boolean ones. In this paper, fuzzy logic is used to determine variations of camera positioning, both because of the similarity between cinematographic language and fuzzy linguistic variables and because cinematography rules take grey values, not boolean ones.
Many studies discuss how to position a virtual camera in a virtual environment. Some use evolutionary algorithms such as Particle Swarm Optimization; others use machine-learning methods such as the Support Vector Machine. Each method has advantages and disadvantages: swarm approaches, for instance, need more time for repetitive calculation. Although several studies address camera positioning, few discuss a director’s style, and in particular how to measure camera placement to profile a director’s style; other studies only apply cinematography rules to their virtual camera engines without basing them on a director’s style. This paper does not discuss how to position a camera, but how to profile the style. Research of this kind usually relies on questionnaires for measurement; we instead propose an automatic system to recognize the style.
This paper is organized as follows. Section 1 presents the state of the art, motivates the need to profile a director’s style, and reviews related work. Section 2 discusses the basic theory of cinematography, including director’s style. Section 3 is the core of this research, describing the proposed profiling method. Section 4 presents the experiments and results. Section 5 concludes and discusses this research.

2. Cinematography and Director’s Style

A motion picture consists of many shots, and every shot requires the camera to be placed in the best position. Cinematography refers to the lighting and camera arrangement used to record a photographic image for cinema [44]. Film is an art form with both a language and an aesthetic [45]. To produce a good film, several factors must be considered: the best arrangement of cameras and lighting makes a film more interesting and better suited to the storyline or screenplay, and good cinematography greatly helps the audience understand the story. For games, especially 3D or RPG games, cinematography rules are needed to make the game feel real.
Some factors should be considered to produce a good film [46].
  • Camera Angle
    Camera angle means the specific location of the camera when shooting a film scene at a certain time; in other words, the camera angle is the point of view recorded by the camera. A scene can be taken from various angles to give the audience different perspectives. Camera angles include the objective shot, the subjective shot, and the point-of-view shot. Shots can be categorized into close-up, medium, and long shots [47].
  • Continuity
    Continuity is the state of coherence between one frame and the next; without continuity, a frame will not connect with the others [48]. A picture with perfect continuity is preferred because it depicts events realistically, while a picture with wrong continuity is unacceptable because it distracts rather than attracts. This implies that an action should flow smoothly across every cut in a motion picture.
  • Cutting
    Cutting is the process of changing the point of view [49]. It is an important process in filmmaking because it plays a major role in building the plot of a story; without the right cutting, the audience will be distracted from the plot of the film.
  • Close Up
    Close Up is a technique in photography of taking a frame near the object. Several variants exist:
    Medium Close Up frames the target from approximately midway between the waist and shoulders to above the head.
    Head and Shoulder Close Up frames from below the shoulders to above the head.
    Head Close Up captures the head area only.
    Choker Close Up covers the area from below the lips to above the eyes.
    Extreme Close Up shows tiny objects (e.g., eyes, rings) or small portions of large subjects, which appear greatly magnified on the screen.
    Over the Shoulder Close Up is a typical motion picture shot, also used in still photography, presenting the close up of a person as seen over the shoulder of another person in the foreground; it provides an effective transition from objectively filmed shots to point-of-view close ups.
  • Composition
    Good composition is an arrangement of pictorial elements that forms a unified, harmonious whole. Composition concerns how a director directs the players and places the background, properties, and all other elements into a single unity that forms a beautiful harmony, in keeping with the way the story has been written. The placement and movement of players within the setting should be planned to produce favorable audience reactions; a good arrangement of elements creates an impression of stasis, dynamism, or other moods.
Every movie director has a unique style of directing and taking scenes in his or her work. This artistic style distinguishes one director from another—and accordingly one product from another. Currently, developing a cinematic product requires considerable human intervention because of the varying ability and behavior of each camera operator. Thus, the director’s involvement is necessary, and sometimes a director even needs to personally capture the motions to acquire the desired quality.
One famous director is James Cameron, who directed the box office movie Avatar. This movie [50] can be considered a milestone in the birth of film production based on a virtual environment: during the production of Avatar, Cameron created a virtual camera technology to record his desired scenes. This virtual camera has the functions of a normal camera but can be used in the virtual environment. James Cameron is famous for a shooting style that highlights detailed components; in his movie Titanic, we can clearly see the details of the ship. Meanwhile, Christopher Nolan—the director of The Dark Knight and producer of Man of Steel—always highlights the realistic elements in his films.
Another famous director is Quentin Tarantino [51,52,53], with a number of successful box office films, including Kill Bill, Pulp Fiction, From Dusk Till Dawn, and many more. Quentin Tarantino is a brilliant student of filmmaking and an expert at using cinematic language to express his thrilling stories visually. Every cinephile will recognize his style: mostly action thriller and darkness, with an added element of sadism. Figure 3 shows some of Quentin Tarantino’s trademark styles.
The following are some styles of camera angles and shot-making (points of view) that Quentin Tarantino often uses in his films:
  • The Trunk and Hood POV
    In this style, a picture is shot from below, as if taken from inside a car trunk. He has made many films using this style.
  • Corpse POV
    This style is another variation of Trunk and Hood POV, but this one is taken from the eyes of the victim—that is, someone who is dead or lying on the ground. These two styles are variations of low angle shot.
  • Tracking Shot
    A tracking shot is taken from the perspective of someone following the main actor, as if seen through the eyes of a person trailing him or her. This style is sometimes called the following shot.
  • God’s Eye Shot
    This shot is recorded with the camera positioned directly high above the actors to convey that something bigger than them is the subject, or in other words, as though a god is watching what the actors are doing.
  • Black and White Shot
    Black and white style is a shot in monochrome to establish a certain ambience in the course of the story. It can be a flashback—that is, recalling past events—or a special emphasis on a scene before scene transition.
  • Close Up on Lips
    A close up shot on the lips is a shooting style in which the actor’s lips fill the frame. It gives the impression of a mysterious person or a sensual effect, and it is usually used at the beginning of the movie when a mysterious character appears. Another name for this shot is the Choker Close Up style.
  • Violent Awakening
    This style takes a close up view of someone who suddenly wakes from sleep or a coma, to convey tension and surprise.
Besides the styles above, Quentin has a preference for adding effects (e.g., blood splashes) and recurring objects (e.g., cars). However, for this research, we used only five different styles of camera positioning from Quentin Tarantino; other styles, such as black and white shooting and recurring objects, were not considered.

3. Profiling Methods

We propose a novel approach to profile, or recognize, a director’s style automatically. Usually, recognizing or validating the result of camera positioning in machinima uses a questionnaire approach: every respondent watches the shot and is asked to assess the result. However, this process takes considerable time and effort, so we want it to be automatic.
A fuzzy logic system has been applied in many fields, including control, optimization, and artificial intelligence. It consists of four modules: fuzzifier, rule base, inference engine, and defuzzifier. Figure 4 shows the schematic diagram of the fuzzy inference system used in this research. The advantages of the fuzzy logic approach are the similarity between fuzzy language and cinematographic language, and its simplicity and speed compared to other approaches: we need neither a training process (as in machine learning) nor a repetitive calculation process (as in evolutionary approaches such as Genetic Algorithms or Particle Swarm Optimization). The disadvantage is that knowledge acquisition requires an expert or judge, which is difficult to carry out, and translating that knowledge into a fuzzy inference system is also challenging.
In this section, we discuss the design of the 3D simulation, its distinctive features, and the storyline of our experiments. We also discuss the proposed fuzzy inference system block.

3.1. Design Simulation

Several inputs are processed by the system. In this experiment, there are five different scenes and two different styles. Every scene has a moving camera, a main actor, and/or a secondary actor; hence, each input consists of three sets of coordinates (main actor, camera, and secondary actor) plus a timestamp expressed in number of frames.
The characters for the simulation are shown in Figure 5: the main character on the left and the second character on the right. The complexity of the characters and the background area, measured by the constituent numbers of objects, triangles, and vertices, is given in Table 1.
There are 19 inputs to the system: for every 3D object there are x, y, z coordinates and rx, ry, rz rotations, as shown in Figure 6. The position in 3D is represented by x, y, z and the orientation of the object by rx, ry, rz, as shown in Figure 7 and Figure 8. We need all these coordinates to relate the camera placement to cinematographic language. For example, taking a front shot means placing the camera based on the y-axis, whereas taking a high shot means placing the camera based on the z-axis. In other words, we cannot depend on only one axis, because three axes influence the virtual camera position.
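To make the data layout concrete, the following is a minimal sketch of how such a per-frame record could be organized: three objects with six values each, plus the frame timestamp, giving the 19 raw inputs. The class and field names are illustrative assumptions, not the authors’ actual data schema.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Position (x, y, z) and rotation (rx, ry, rz) of one 3D object.
    x: float
    y: float
    z: float
    rx: float
    ry: float
    rz: float

@dataclass
class FrameRecord:
    # 3 objects x 6 values each, plus the frame timestamp = 19 raw inputs.
    frame: int
    camera: Pose
    main_actor: Pose
    second_actor: Pose
```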
The simulation area (Figure 9) is complex enough to allow various actions to be performed there. The area is a warehouse, so there are many rooms and paths to discover.
To develop a short animation whose result would be profiled, we used the storyboard method. We did not develop a camera-style engine; instead, we used the storyboard to describe the camera positioning, which an animator then used as a reference to create the animation. For each scene, we created two styles: the first based on Quentin Tarantino’s style and the second on general cinematic rules. Figure 10 shows the storyboard for Scene 1 in the two styles. The player’s movements are the same in both (we use the same moving path), but the camera positioning and movement differ. In Figure 10, the player walks ahead, then in the middle of the scene turns around and moves away. This scene is approximately 24 s long and is the simplest of the scenes.
In the other scenes, we added more complex events, such as searching for something and fighting with another character. Scene 2 is about 25 s, Scene 3 about 27 s, and Scene 4 about 30 s. The storyboard for Scene 2 is shown in Figure 11: the main character walks and turns to the right. Figure 12 is the storyboard for Scene 3, at the end of which the character searches for or opens something. The storyboard for Scene 4, in which two characters fight, is shown in Figure 13.
Lastly, Figure 14 shows the storyboard for Scene 5. This last scene is approximately 1 min long—the longest of the five. Style 1 is based on Quentin Tarantino, while the other style is based on generic cinematography rules.

3.2. Fuzzy Logic for Profiling

In this section, we discuss the fuzzy logic approach used in our research. We designed five fuzzy logic controllers to profile the shooting style, with membership functions extracted from the angles shown in Figure 7 and Figure 15:
  • The Tracking/Following Shot
    This fuzzy logic will decide whether the scene is a tracking shot. The output variable is Follow Shot.
  • Close Up Shot
    This is to profile whether the shot is a choker shot, a different close up shot, or neither. The output variable is Lip Shot.
  • High Angle Shot
    This is to profile God View or an ordinary high angle shot. The output variable is God View.
  • Low Angle Shot
    This is to profile the low angle shot from the first person’s view. The output variable is Low First Player.
  • Trunk Shot
    This is to profile the low angle from the trunk shot. The output variable is Trunk Player.
Table 2 shows the fuzzy output memberships for our simulation. There are five output variables, each with three membership functions; the output variables are used to profile the shot style. In the table, Control lists the parameter values that define each membership function type. Trimf denotes the triangular membership function, which needs the three control points shown in Figure 16: the first parameter is the bottom-left point (variable a), the second is the peak (variable b), and the last is the bottom-right point (variable c). Trapmf denotes the trapezoidal membership function, which needs the four control points (variables a, b, c, and d) shown in Figure 17.
Figure 18 shows the membership function of the Follow Shot output variable. It has three membership functions: the first trapezoid is Unfollow, the triangular one is Pseudo, and the last trapezoid is Follow. The control values in Table 2 give the coordinates of these points; for Pseudo, the first value (2) is the left point (a), the second (4) is the peak (b), and the last (6) is the right point (c).
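As an illustration, here is a minimal Python sketch of these two membership function types, instantiated with the Follow Shot control values from Table 2. The trimf/trapmf semantics follow the standard textbook definitions; this is not the authors’ code.

```python
def trimf(x, a, b, c):
    # Triangular membership: 0 outside [a, c], rising to 1 at the peak b.
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x == b:
        return 1.0
    return (c - x) / (c - b)

def trapmf(x, a, b, c, d):
    # Trapezoidal membership: rises over [a, b], flat at 1 over [b, c],
    # falls over [c, d]; a == b (or c == d) gives a shoulder at that edge.
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

# Follow Shot output memberships from Table 2.
unfollow = lambda v: trapmf(v, 0, 0, 2, 3)
pseudo   = lambda v: trimf(v, 2, 4, 6)
follow   = lambda v: trapmf(v, 5, 7, 10, 10)

print(pseudo(4.0))  # 1.0 at the peak b = 4
print(follow(6.0))  # 0.5, halfway up the rising edge from 5 to 7
```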
Of the 34 inputs to the system, we used only eight as fuzzy inputs; in the future, the system could be optimized using all of them. The inputs are as follows (a sketch of how two of them might be computed appears after the list):
  • Distance P1: The distance between the main actor and the virtual camera. The range of this input is 0–20.
  • Different P1: The difference between the previous and current positions across frames, indicating the consistency of tracking or following.
  • Angle Y Axis P1: The angle between the virtual camera and the main actor in y-axis.
  • Distance P2: The distance between the second actor and the camera.
  • Angle Y Axis P2: The angle between the camera and the second actor in y-axis.
  • Angle X Axis P1: The angle between the camera and the main actor in x-axis.
  • Coordinat Y: The elevation height of the camera, based on the y-axis of the camera.
  • Angle X Axis P2: The angle between the camera and the second actor in x-axis.
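The paper does not give explicit formulas for these inputs, but as a rough sketch, two of them might be computed from the raw frame coordinates as follows. The y-up axis convention and the zero-angle-behind convention are assumptions made for illustration.

```python
import math

def distance_p1(cam_xyz, actor_xyz):
    # Euclidean camera-to-main-actor distance (Distance P1).
    return math.dist(cam_xyz, actor_xyz)

def angle_y_p1(cam_xyz, actor_xyz, actor_ry):
    # Camera bearing around the actor's vertical (y) axis relative to the
    # actor's facing direction actor_ry (degrees), shifted so that a camera
    # directly behind the actor gives 0 -- lining up with the Rear
    # membership of Table 3 under this assumed convention.
    dx = cam_xyz[0] - actor_xyz[0]
    dz = cam_xyz[2] - actor_xyz[2]
    bearing = math.degrees(math.atan2(dx, dz))
    offset = bearing - actor_ry + 180.0
    return ((offset + 180.0) % 360.0) - 180.0  # wrap to [-180, 180)

# Example: camera 2 m directly behind an actor facing +z at the origin.
print(distance_p1((0.0, 1.5, -2.0), (0.0, 1.5, 0.0)))      # 2.0
print(angle_y_p1((0.0, 1.5, -2.0), (0.0, 1.5, 0.0), 0.0))  # 0.0 (Rear)
```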
Table 3 shows the input membership functions of our simulation. The fuzzy rules of the system are combinations of the input membership functions, as shown in Figure 4. These combinations yield around 40,500 rules (the product of the membership function counts of I1–I8: 3 × 3 × 5 × 3 × 5 × 5 × 3 × 4 = 40,500), but we reduced them to 47 significant rules. Table 4 shows some of the reduced rules of the fuzzy logic system, and Table 5 gives the IF–THEN representation of Table 4.
In Table 5, Rule 1 reads: IF distance_p1=Medium AND different_p1=Short AND angle_Y_P1=Rear THEN follow_shot=follow. That is, if the distance between the virtual camera and the player is medium, the difference between the previous and current positions is short, and the camera sits behind the player on the y-axis, we call it a follow shot. In cinematographic language, a follow shot is a shot from behind the character at a constant medium distance—as if the audience were following the character.
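To make the inference step concrete, below is a hedged sketch of how Rule 1 could be evaluated in standard Mamdani fashion: min for AND, the output membership clipped by the firing strength, and centroid defuzzification. The membership parameters come from Tables 2 and 3, and trimf/trapmf are the helpers sketched earlier in this section; the paper does not specify its exact inference operators, so these are common defaults rather than the authors’ implementation.

```python
def rule1_strength(distance_p1, different_p1, angle_y_p1):
    # Antecedent memberships from Table 3; AND is taken as min (Mamdani).
    medium = trimf(distance_p1, 1.7, 2.1, 2.5)    # distance_p1 = Medium
    short  = trapmf(different_p1, 0, 0, 20, 40)   # different_p1 = Short
    rear   = trimf(angle_y_p1, -70, 0, 70)        # angle_Y_P1 = Rear
    return min(medium, short, rear)

def defuzzify_follow(strength, samples=1001):
    # Centroid of the Follow output membership (Table 2), clipped at the
    # rule's firing strength, sampled over the output universe [0, 10].
    xs = [10.0 * i / (samples - 1) for i in range(samples)]
    mu = [min(strength, trapmf(x, 5, 7, 10, 10)) for x in xs]
    area = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / area if area else 0.0

s = rule1_strength(2.1, 10.0, 0.0)  # medium distance, steady, directly behind
print(s, defuzzify_follow(s))       # 1.0 and a crisp value of about 7.96
```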

4. Results

In this research, we designed a simple movie scene using a 3D game engine to simulate our system. We used several scenes, based on the aforementioned storyboards, to generate movie clips and their profiles.
Figure 19 shows the whole proposed system; the main focus of this research is the profiling process. Assuming a director’s style dataset already exists, any approach—such as fuzzy logic, swarm methods, or machine learning—can be used for this process, which requires elaborate experiments and is challenging. For this research, we used the styles of an expert director: two different sets of director’s style from the expert can be seen in the storyboards. For every scene, there are several actions, and a different shooting style is applied; the output of this process is the camera positioning based on the applied style.
Before developing an animation or movie clip, we can add effects such as transitions, sound, and lighting. We then develop the animation based on the storyboard path. For every frame of the animation, we extract coordinate values and feed them to our proposed system for profiling the director’s style, using the fuzzy logic approach. The outputs of the system are an area graph and a histogram; using the histogram, we then decide the director’s style.
In this research, the animation and the experiment were developed using the Unity 3D game engine. For the experiments, we used five different scenes and two different styles; the same scene and action rendered in the two styles are shown in Figure 20. The first style is based on Quentin Tarantino and the second is a generic style. We developed one moving path for the character’s actions and two different moving-path styles for the virtual camera.
In Figure 20, we can see the same action in the same scene with different camera positions. From the visual perspective, we can observe the difference between Quentin Tarantino’s style and the generic style: the walk action in Figure 20a is captured from behind (a follow shot), whereas in Figure 20b it is captured from the left side of the character—a left-side-scrolling point-of-view shot. The same holds for the fighting action: the first is captured as an over-the-shoulder medium shot, the second from the left side as a long shot. We designed five different scenes, approximately 24 s to 1 min long, at a 30 fps rate; hence, each clip contains roughly 720–1800 frames.
Every scene and style is visualized using two diagrams: an area plot and a histogram. In the area plot, the x-axis is the animation frame number and the y-axis is the fuzzy output value; in the histogram, the x-axis is the fuzzy output value and the y-axis is its frequency of occurrence.
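As a sketch of how such diagrams could be generated from the per-frame fuzzy outputs (using matplotlib; the bin count and figure layout are arbitrary choices, not taken from the paper):

```python
import matplotlib.pyplot as plt

def plot_profile(fuzzy_values, title="Follow Shot"):
    # fuzzy_values: one defuzzified output value per animation frame.
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    # Area plot: frame number on x, fuzzy output value on y.
    ax1.fill_between(range(len(fuzzy_values)), fuzzy_values)
    ax1.set_xlabel("frame")
    ax1.set_ylabel("fuzzy output value")
    ax1.set_title(f"{title} (area plot)")
    # Histogram: fuzzy output value on x, frequency of occurrence on y.
    ax2.hist(fuzzy_values, bins=20)
    ax2.set_xlabel("fuzzy output value")
    ax2.set_ylabel("frequency")
    ax2.set_title(f"{title} (histogram)")
    plt.tight_layout()
    plt.show()
```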
Figure 21 shows the fuzzy results of the first style (Quentin Tarantino’s), and Figure 22 shows those of the second style. Although the scenes are the same, the graphs differ, because the two styles produce different fuzzy output values.
From the fuzzy results, we create another diagram—the histogram—for each scene and style. The histograms of the first style are shown in Figure 23 and those of the second style in Figure 24; they show the frequency with which each fuzzy value appears. For the profiling result, the threshold value is one: for Quentin Tarantino’s style (Style 1), most values fall to the right of one, as shown in Figure 23, whereas for the other style most values fall to the left of one, as shown in Figure 24. From this visualization, we can profile the two different styles.
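The decision itself then reduces to comparing how much of the histogram mass lies on each side of the threshold. A minimal sketch of that rule, assuming the threshold of one described above:

```python
def profile_style(fuzzy_values, threshold=1.0):
    # Count frames whose fuzzy output falls on each side of the threshold;
    # Style 1 (Tarantino-like) dominates on the right, Style 2 on the left.
    right = sum(1 for v in fuzzy_values if v > threshold)
    left = len(fuzzy_values) - right
    return "Style 1" if right > left else "Style 2"

print(profile_style([0.2, 1.5, 3.0, 7.9, 0.4]))  # Style 1 (3 of 5 values > 1)
```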

5. Conclusions and Discussion

In this paper, we have described a novel approach for profiling a director’s style using fuzzy logic. Research related to camera positioning usually measures its results with a questionnaire approach; using our approach, we can profile or classify a director’s style automatically. This is the main benefit of our research.
We use fuzzy logic because of the similarities between cinematographic language and fuzzy memberships. The advantage of the fuzzy logic approach over other approaches is that we can obtain the result faster, in real time, and with less effort. A machine learning approach, for instance, requires training phases and enough datasets to feed them, and preparing such datasets takes considerable time and effort. Compared to an evolutionary approach such as a swarm algorithm, fuzzy logic is faster because it needs no repetitive calculation process—and camera positioning in a game requires real-time calculation. However, the fuzzy logic approach also has disadvantages: knowledge must be extracted from an expert and converted into fuzzy rules, and this knowledge acquisition takes time and requires an expert in the field.
Successfully profiling a director’s style will help us extract it. We use five cinematography rules over 34 variables retrieved for every frame of a scene, with five different scenes and two different styles per scene. We have shown that, using the area plots and histograms generated by fuzzy logic, we can successfully classify the director’s style: every style has a different histogram, distinguished by the histogram’s dominant value. In this research we use only five rules because we profile Quentin Tarantino’s style against a generic style; for other directors’ styles, new knowledge acquisition and further rule development are needed. Visualization using histograms makes it easier to profile the director’s style automatically.
For future work, we are planning to use more variables—for both input and output—to generate more fuzzy rules in cinematography for profiling director’s style. In addition, we want to reverse the process where the rules can be produced from a profiled director’s style.

Author Contributions

Conceptualization, H.J. and M.H.; Formal analysis, H.J., M.H. and I.K.E.P.; Investigation, H.J.; Methodology, H.J. and I.K.E.P.; Software, H.J.; Supervision, M.H. and I.K.E.P.; Visualization, H.J. and M.H.; Writing—original draft, H.J., M.H. and I.K.E.P.

Funding

This research received no external funding.

Acknowledgments

This study was supported in part by the BPPS scholarship of the Ministry of Research, Technology and Higher Education, Republic of Indonesia. The authors would like to thank the students of the Department of Electrical Engineering (Institut Teknologi Sepuluh Nopember Surabaya), the members of the Business Center and Multimedia Research Center at Sekolah Tinggi Teknik Surabaya, and Focaloid (Photography and Videography Studio) at Sekolah Tinggi Teknik Surabaya.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MED     Medium
FR      Front Right
FL      Front Left
UNFOL   Unfollow

References

  1. Hancock, H.; Ingram, J. Machinima for Dummies; For Dummies; Wiley: Hoboken, NJ, USA, 2007; ISBN 978-0-470-19583-3. [Google Scholar]
  2. Elson, D.K.; Riedl, M.O. A lightweight intelligent virtual cinematography system for machinima production. In Proceedings of the Third AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Stanford, CA, USA, 6–8 June 2007; AAAI Press: Stanford, CA, USA, 2007; pp. 8–13. [Google Scholar]
  3. Hart, J. The Art of the Storyboard: A Filmmaker’s Introduction; Elsevier/Focal Press: Waltham, MA, USA, 2008; ISBN 978-0-240-80960-1. [Google Scholar]
  4. Summerville, A.; Mariño, J.R.H.; Snodgrass, S.; Ontañón, S.; Lelis, L.H.S. Understanding Mario: An Evaluation of Design Metrics for Platformers. In Proceedings of the 12th International Conference on the Foundations of Digital Games, Hyannis, MA, USA, 14–17 August 2017; ACM: New York, NY, USA, 2017; pp. 8:1–8:10. [Google Scholar]
  5. Karakovskiy, S.; Togelius, J. The Mario AI Benchmark and Competitions. IEEE Trans. Comput. Intell. AI Games 2012, 4, 55–67. [Google Scholar] [CrossRef] [Green Version]
  6. Miller, M. Assassin’s Creed: The Complete Visual History; Insight Editions: Dallas, TX, USA, 2015; ISBN 978-1-60887-600-6. [Google Scholar]
  7. Davies, P. The Art of Assassin’s Creed Unity; Titan Books: London, UK, 2014; ISBN 1-78116-690-0. [Google Scholar]
  8. Ranon, R.; Chittaro, L.; Buttussi, F. Automatic camera control meets emergency simulations. Comput. Graph. 2015, 48, 23–34. [Google Scholar] [CrossRef]
  9. Christie, M.; Olivier, P. Camera control in computer graphics: models, techniques and applications. In ACM SIGGRAPH ASIA 2009 Courses; ACM: Yokohama, Japan, 2009; pp. 1–197. [Google Scholar]
  10. Barry, W.; Ross, B.J. Virtual photography using multi-objective particle swarm optimization. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, Vancouver, BC, Canada, 12–16 July 2014; ACM: Vancouver, BC, Canada, 2014; pp. 285–292. [Google Scholar]
  11. Burelli, P.; Di Gaspero, L.; Ermetici, A.; Ranon, R. Virtual Camera Composition with Particle Swarm Optimization. In Smart Graphics; Butz, A., Fisher, B., Krüger, A., Olivier, P., Christie, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 130–141. [Google Scholar]
  12. Drucker, S.M.; Zeltzer, D. Intelligent Camera Control in a Virtual Environment. In Proceedings of the Graphics Interface’94, Banff, AB, Canada, 18–20 May 1994; pp. 190–199. [Google Scholar]
  13. Prima, D.A.; Ferial Java, B.B.; Suryapto, E.; Hariadi, M. Secondary camera placement in Machinema using behavior trees. In Proceedings of the 2013 International Conference on QiR, Yogyakarta, Indonesia, 25–28 June 2013; pp. 94–98. [Google Scholar]
  14. Fanani, A.Z.; Prima, D.A.; Java, B.B.F.; Suryapto, E.; Hariadi, M.; Purnama, I.K.E. Secondary camera movement in machinema using path finding. In Proceedings of the 2013 International Conference on Technology, Informatics, Management, Engineering and Environment, Bandung, Indonesia, 23–26 June 2013; pp. 136–139. [Google Scholar]
  15. Hu, W.; Zhang, X. A Semiautomatic Control Technique for Machinima Virtual Camera. In Proceedings of the 2012 International Conference on Computer Science and Electronics Engineering, Hangzhou, China, 23–25 March 2012; Volume 1, pp. 112–115. [Google Scholar]
  16. Terziman, L.; Marchal, M.; Multon, F.; Arnaldi, B.; Lécuyer, A. Personified and Multistate Camera Motions for First-Person Navigation in Desktop Virtual Reality. IEEE Trans. Vis. Comput. Graph. 2013, 19, 652–661. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Christianson, D.B.; Anderson, S.E.; He, L.; Salesin, D.H.; Weld, D.S.; Cohen, M.F. Declarative camera control for automatic cinematography. In Proceedings of the Thirteenth National Conference on Artificial Intelligence—Volume 1, Portland, OR, USA, 4–8 August 1996; AAAI Press: Portland, OR, USA, 1996; pp. 148–155. [Google Scholar]
  18. Lino, C.; Christie, M.; Lamarche, F.; Schofield, G.; Olivier, P. A Real-time Cinematography System for Interactive 3D Environments. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Madrid, Spain, 2–4 July 2010; Eurographics Association: Goslar, Germany, 2010; pp. 139–148. [Google Scholar]
  19. Lino, C.; Christie, M. Intuitive and efficient camera control with the toric space. ACM Trans. Graph. 2015, 34, 1–12. [Google Scholar] [CrossRef] [Green Version]
  20. Benini, S.; Canini, L.; Leonardi, R. Estimating cinematographic scene depth in movie shots. In Proceedings of the 2010 IEEE International Conference on Multimedia and Expo, Suntec City, Singapore, 19–23 July 2010; pp. 855–860. [Google Scholar]
  21. Ferreira, F.P.; Gelatti, G.; Raupp Musse, S. Intelligent Virtual Environment and Camera Control in behavioural simulation. In Proceedings of the XV Brazilian Symposium on Computer Graphics and Image Processing, Fortaleza-CE, Brazil, 10 October 2002; pp. 365–372. [Google Scholar] [Green Version]
  22. Junaedi, H.; Hariadi, M.; Purnama, I.K.E. Multi agent with multi behavior based on particle swarm optimization (PSO) for crowd movement in fire evacuation. In Proceedings of the 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), Beijing, China, 9–11 June 2013; pp. 366–372. [Google Scholar]
  23. Jhala, A.; Young, R.M. Cinematic Visual Discourse: Representation, Generation, and Evaluation. IEEE Trans. Comput. Intell. AI Games 2010, 2, 69–81. [Google Scholar] [CrossRef]
  24. Jhala, A.; Young, R.M. Intelligent Machinima Generation for Visual Storytelling. In Artificial Intelligence for Computer Games; González-Calero, P.A., Gómez-Martín, M.A., Eds.; Springer: New York, NY, USA, 2011; pp. 151–170; ISBN 978-1-4419-8188-2. [Google Scholar]
  25. Lima, E.E.S.; Pozzer, C.T.; d’Ornellas, M.C.; Ciarlini, A.E.M.; Feijó, B.; Furtado, A.L. Support Vector Machines for Cinematography Real-Time Camera Control in Storytelling Environments. In Proceedings of the 2009 VIII Brazilian Symposium on Games and Digital Entertainment, Rio de Janeiro, Brazil, 8–10 October 2009; pp. 44–51. [Google Scholar]
  26. Jaafar, J.; McKenzie, E. Behaviour Coordination of Virtual Agent Navigation using Fuzzy Logic. In Proceedings of the 2006 IEEE International Conference on Fuzzy Systems, Vancouver, BC, Canada, 16–21 July 2006; pp. 1139–1145. [Google Scholar]
  27. De Lima, E.E.; Pozzer, C.T.; d’Ornellas, M.C.; Ciarlini, A.E.; Feijó, B.; Furtado, A.L. Virtual cinematography director for interactive storytelling. In Proceedings of the International Conference on Advances in Computer Enterntainment Technology, Salzburg, Austria, 15–17 June 2009; ACM: Athens, Greece, 2009; pp. 263–270. [Google Scholar]
  28. Dib, H.N.; Adamo-Villani, N.; Yu, J. Computer Animation for Learning Building Construction Management: A Comparative Study of First Person Versus Third Person View. In E-Learning, E-Education, and Online Training; Vincenti, G., Bucciero, A., Vaz de Carvalho, C., Eds.; Springer International Publishing: New York, NY, USA, 2014; pp. 76–84. [Google Scholar]
  29. Cherif, I.; Solachidis, V.; Pitas, I. Shot type identification of movie content. In Proceedings of the 2007 9th International Symposium on Signal Processing and Its Applications, Sharjah, UAE, 12–15 February 2007; pp. 1–4. [Google Scholar]
  30. Li, T.-Y.; Xiao, X.-Y. An Interactive Camera Planning System for Automatic Cinematographer. In Proceedings of the 11th International Multimedia Modelling Conference, Melbourne, Australia, 12–14 January 2005; pp. 310–315. [Google Scholar]
  31. He, L.; Cohen, M.F.; Salesin, D.H. The virtual cinematographer: A paradigm for automatic real-time camera control and directing. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 4–9 August 1996; pp. 217–224. [Google Scholar]
  32. Hornung, A.; Lakemeyer, G.; Trogemann, G. An Autonomous Real-Time Camera Agent for Interactive Narratives and Games. In Intelligent Virtual Agents; Rist, T., Aylett, R.S., Ballin, D., Rickel, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 236–243. [Google Scholar]
  33. Burelli, P.; Yannakakis, G.N. Adapting virtual camera behaviour through player modelling. User Model. User-Adapt. Interact. 2015, 25, 155–183. [Google Scholar] [CrossRef] [Green Version]
  34. Burelli, P.; Jhala, A. Dynamic Artificial Potential Fields for Autonomous Camera Control. In Proceedings of the Fifth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Stanford, CA, USA, 14–16 October 2009; AAAI Press: Stanford, CA, USA, 2009; pp. 8–13. [Google Scholar]
  35. Burelli, P. Implementing game cinematography: technical challenges and solutions for automatic camera control in games. In Proceedings of the Eurographics Workshop on Intelligent Cinematography and Editing, Zurich, Switzerland, 4 May 2015; Eurographics Association: Zurich, Switzerland, 2015; pp. 59–62. [Google Scholar]
  36. Tamine, K.; Sokolov, D.; Plemenos, D. Viewpoint quality and global scene exploration strategies. In Proceedings of the International Conference on Computer Graphics and Applications, Setúbal, Portugal, 25–28 February 2006; pp. 184–191. [Google Scholar]
  37. Vázquez, P.-P.; Feixas, M.; Sbert, M.; Heidrich, W. Automatic View Selection Using Viewpoint Entropy and its Application to Image-Based Modelling. Comput. Graph. Forum 2004, 22, 689–700. [Google Scholar] [CrossRef]
  38. Burelli, P.; Yannakakis, G.N. A Benchmark for Virtual Camera Control. In Applications of Evolutionary Computation; Mora, A.M., Squillero, G., Eds.; Springer International Publishing: New York, NY, USA, 2015; pp. 455–467. [Google Scholar]
  39. Lukovac, V.; Pamučar, D.; Popović, M.; Đorović, B. Portfolio model for analyzing human resources: An approach based on neuro-fuzzy modeling and the simulated annealing algorithm. Expert Syst. Appl. 2017, 90, 318–331. [Google Scholar] [CrossRef]
  40. Pamučar, D.; Vasin, L.; Atanasković, P.; Miličić, M. Planning the City Logistics Terminal Location by Applying the Green p-Median Model and Type-2 Neurofuzzy Network. Available online: https://www.hindawi.com/journals/cin/2016/6972818/cta/ (accessed on 23 October 2018).
  41. Pamucar, D.; Ćirović, G. Vehicle route selection with an adaptive neuro fuzzy inference system in uncertainty conditions. Decis. Mak. Appl. Manag. Eng. 2018, 1, 13–37. [Google Scholar] [CrossRef]
  42. Pamučar, D.; Atanasković, P.; Miličić, M. Modeling of fuzzy logic system for investment management in the railway infrastructure. User Model. User-Adapt. Interact. 2015, 22, 1185–1193. [Google Scholar] [CrossRef]
  43. Sremac, S.; Tanackov, I.; Kopić, M.; Radović, D. ANFIS model for determining the economic order quantity. Decis. Mak. Appl. Manag. Eng. 2018, 1. [Google Scholar] [CrossRef]
  44. Mascelli, J.V. The Five C’s of Cinematography: Motion Picture Filming Techniques; Silman-James Press: Hollywood, CA, USA, 1998; ISBN 978-1-879505-41-4. [Google Scholar]
  45. Bordwell, D.; Thompson, K. Film Art: An Introduction; McGraw Hill: New York, NY, USA, 2008; ISBN 978-0-07-331027-5. [Google Scholar]
  46. Arijon, D. Grammar of the Film Language; Silman-James Press: Hollywood, CA, USA, 1991; ISBN 978-1-879505-07-0. [Google Scholar]
  47. Canini, L.; Benini, S.; Leonardi, R. Classifying cinematographic shot types. Multimed. Tools Appl. 2013, 62, 51–73. [Google Scholar] [CrossRef] [Green Version]
  48. Galvane, Q.; Ronfard, R.; Lino, C.; Christie, M. Continuity editing for 3D animation. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; AAAI Press: Austin, TX, USA, 2015; pp. 753–761. [Google Scholar]
  49. Brown, B. Cinematography: Theory and Practice: Imagemaking for Cinematographers, Directors & Videographers; Focal Press: Waltham, MA, USA, 2002; ISBN 978-0-240-80500-9. [Google Scholar]
  50. Bennett, J.; Carter, C.P. Adopting Virtual Production for Animated Filmmaking; Prakash, E., Ed.; Creative Industries Faculty: Singapore, 2014. [Google Scholar]
  51. Pratt, M.K. How to Analyze the Films of Quentin Tarantino; ABDO Publishing Company: Edina, MN, USA, 2011; ISBN 978-1-61613-529-4. [Google Scholar]
  52. Tarantino, Q.; Peary, G. Quentin Tarantino: Interviews; University Press of Mississippi: Jackson, MS, USA, 1998; ISBN 978-1-57806-051-1. [Google Scholar]
  53. Woods, P. Quentin Tarantino: The Film Geek Files; Plexus: New Jersey, USA, 2000. [Google Scholar]
Figure 1. Mojopahit Kingdom Game point of view: (a) front view; (b) god view; (c) side view; and (d) bird’s eye view.
Figure 2. The general block system of virtual camera placement.
Figure 3. Some of Quentin Tarantino’s directing styles: (a) Trunk and Hood POV; (b) Corpse POV; (c) God’s Eye POV; (d) Black and White Shot; (e) Lip Shot; and (f) Violent Awakening.
Figure 4. Fuzzy inference system block.
Figure 5. Characters of the simulation.
Figure 6. Design modeling for character and camera modeling.
Figure 7. Shot direction angle: (a) Z-axis angle; (b) X-axis angle; and (c) Y-axis angle.
Figure 8. System coordinates and rotation axes in 3D.
Figure 9. Warehouse design for the simulation area: (a) top view; and (b) perspective view.
Figure 10. Storyboard for Scene 1.
Figure 11. Storyboard for Scene 2.
Figure 12. Storyboard for Scene 3.
Figure 13. Storyboard for Scene 4.
Figure 14. Storyboard for Scene 5: (a) Style 1; and (b) Style 2.
Figure 15. Angle quadrant of the simulation.
Figure 16. Triangular membership function.
Figure 17. Trapezoidal membership function.
Figure 18. The Follow Shot membership function.
Figure 19. Proposed architectural system for the experiment.
Figure 20. Same scene and action but different director’s style: (a) Style 1; and (b) Style 2.
Figure 21. Area plot diagrams for Style 1: (a) Scene 1; (b) Scene 2; (c) Scene 3; (d) Scene 4; and (e) Scene 5.
Figure 22. Area plot diagrams for Style 2: (a) Scene 1; (b) Scene 2; (c) Scene 3; (d) Scene 4; and (e) Scene 5.
Figure 23. Histograms for Style 1: (a) Scene 1; (b) Scene 2; (c) Scene 3; (d) Scene 4; and (e) Scene 5.
Figure 24. Histograms for Style 2: (a) Scene 1; (b) Scene 2; (c) Scene 3; (d) Scene 4; and (e) Scene 5.
Table 1. Character and scene complexity.

Character/Scene     Objects   Triangles   Vertices
Background          720       629K        451K
Main Character      1         35K         21K
Second Character    1         8970        6154
Table 2. Fuzzy output membership.

Output Variable        MF              Type     Control
Follow Shot (O1)       Unfollow        trapmf   [0,0,2,3]
                       Pseudo          trimf    [2,4,6]
                       Follow          trapmf   [5,7,10,10]
Lip Shot (O2)          Unlip Shot      trapmf   [0,0,2,4]
                       Pseudo          trimf    [2,5,8]
                       Lip Shot        trapmf   [6,8,10,10]
God View (O3)          Not High Angle  trapmf   [0,0,2,4]
                       High Angle      trimf    [2,5,8]
                       God View        trapmf   [6,8,10,10]
Low First Player (O4)  Unlow           trapmf   [0,0,2,4]
                       Middle Low      trimf    [2,5,8]
                       High Low        trapmf   [6,8,10,10]
Trunk Player (O5)      Untrunk         trapmf   [0,0,2,4]
                       Semi            trimf    [2,5,8]
                       Trunk           trapmf   [6,8,10,10]
Table 3. Fuzzy input membership.

Input Variable     MF           Type     Control
Distance_P1 (I1)   Near         trapmf   [0,0,1.7,2]
                   Medium       trimf    [1.7,2.1,2.5]
                   Far          trapmf   [2.3,3,20,20]
Different_P1 (I2)  Short        trapmf   [0,0,20,40]
                   Medium       trimf    [20,50,80]
                   Long         trapmf   [60,80,100,100]
Angle_Y_P1 (I3)    Front Left   trapmf   [−180,−180,−160,−110]
                   Left         trimf    [−160,−90,−20]
                   Rear         trimf    [−70,0,70]
                   Right        trimf    [20,90,160]
                   Front Right  trapmf   [110,160,180,180]
Distance_P2 (I4)   Near         trapmf   [0,0,20,40]
                   Medium       trimf    [20,50,80]
                   Far          trapmf   [60,80,100,100]
Angle_Y_P2 (I5)    Front Left   trapmf   [−180,−180,−160,−110]
                   Left         trimf    [−160,−90,−20]
                   Rear         trimf    [−70,0,70]
                   Right        trimf    [20,90,160]
                   Front Right  trapmf   [110,160,180,180]
Angle_X_P1 (I6)    Rear Upper   trapmf   [−180,−180,−160,−110]
                   Upper        trimf    [−160,−90,−20]
                   Front        trimf    [−70,0,70]
                   Below        trimf    [20,90,160]
                   Rear Below   trapmf   [110,160,180,180]
Coordinat_Y (I7)   Low          trapmf   [0,0,20,40]
                   Eye View     trimf    [20,50,80]
                   High         trapmf   [60,80,100,100]
Angle_X_P2 (I8)    Front Upper  trapmf   [0,0,45,110]
                   Rear Upper   trimf    [70,135,200]
                   Rear Below   trimf    [160,225,290]
                   Front Below  trapmf   [250,315,360,360]
Table 4. Reduced fuzzy rule sample (empty cells omitted; abbreviations as in the Abbreviations list).

Rule   Input Fuzzy                   Output Fuzzy
1      I1=MED, I2=Short, I3=Rear     O1=Follow
2      I1=MED, I2=Short, I3=Right    O1=Pseudo
3      I1=MED, I2=Short, I3=Left     O1=Pseudo
4      I1=MED, I2=Short, I3=FR       O1=UNFOL
5      I1=MED, I2=Short, I3=FL       O1=UNFOL
6      I2=Long                       O1=UNFOL
7      I2=MED                        O1=UNFOL
8      I1=Near                       O1=UNFOL
9      I1=Far                        O1=UNFOL
10     I4=Near, I5=FL                O2=Lip Shot
11     I4=Near, I5=FR                O2=Lip Shot
12     I4=Near, I5=Right             O2=Pseudo
Table 5. IF–THEN rule sample.

Rule   IF–THEN Rule
1      IF distance_p1=Medium AND different_p1=Short AND angle_Y_P1=Rear THEN follow_shot=follow
2      IF distance_p1=Medium AND different_p1=Short AND angle_Y_P1=Right THEN follow_shot=pseudo
3      IF distance_p1=Medium AND different_p1=Short AND angle_Y_P1=Left THEN follow_shot=pseudo
4      IF distance_p1=Medium AND different_p1=Short AND angle_Y_P1=Front Right THEN follow_shot=unfollow
5      IF distance_p1=Medium AND different_p1=Short AND angle_Y_P1=Front Left THEN follow_shot=unfollow
6      IF different_p1=Long THEN follow_shot=unfollow
7      IF different_p1=Medium THEN follow_shot=unfollow
8      IF distance_p1=Near THEN follow_shot=unfollow
9      IF distance_p1=Far THEN follow_shot=unfollow
10     IF distance_p2=Near AND angle_Y_P2=Front Left THEN lip_shot=lip shot
11     IF distance_p2=Near AND angle_Y_P2=Front Right THEN lip_shot=lip shot
12     IF distance_p2=Near AND angle_Y_P2=Right THEN lip_shot=pseudo
