1. Introduction
Affective computing is the study and development of systems and devices that can recognise, interpret, process, and simulate affective phenomena [1]. While some developments in the field may be traced back to early philosophical inquiries into emotion [2], the modern branch of computer science originated with Picard's extensive works in [3] and flourished into an interdisciplinary field spanning computer science, psychology, and cognitive science [4]. One motivation for the field is the ability to ingrain emotional intelligence into machines so that they can interpret the emotional states of humans, adapt their behaviour accordingly, and provide appropriate responses to those emotions [5].
In artificial intelligence, interaction between artificial systems and their users is improved by making such systems not just intelligent but also emotionally sensitive [6]. Research in affective computing is encumbered by the fundamentally serial nature of conventional computational hardware, whereas the human brain works quite differently [7]. The physical make-up of the human brain (often divided into rational and emotional brains) is such that information is processed rationally and emotionally at the same time; in the area of social robotics, this conundrum manifests as clumsiness in a robot's behaviour, since conventional computers cannot effectively process information in parallel. This suggests the incongruity of using such frameworks to model the human affective system, which is known to be capable of manifesting concurrent emotional experiences (for example, happy and sad) [8].
When all is said and done, the massive amount of data that will be generated by future sensors, the rising complexity associated with controlling, planning, interacting, and reasoning systems, and the fact that future robots will operate in all kinds of environments, especially those designed to facilitate interactions with humans, all point to the enormous computing resources that future robotic systems will require. This demand will be further aggravated when these robots are connected via internet of things (IoT) frameworks. Meanwhile, quantum systems are credited with astounding capabilities ascribed to their properties of entanglement, superposition, and parallelism [9]. Therefore, there is a growing consensus and interest in harnessing quantum algorithms, quantum sensors, and quantum controls for important roles in future robotics and automated integrated systems.
Exploiting the overwhelming superiority that quantum computing offers [10], numerous studies exploring the use of quantum mechanisms in affective computing have been proposed. For example, in [11], Schwartz et al. argued that current neuropsychology needs to incorporate the mathematics of quantum physics to account for human observational bias in the measurement of physical properties of the human brain. Building on his previous efforts, Aerts asserted that a number of quantum mechanical principles are at the origin of specific effects in cognition, based on which a general hypothesis about the quantum structure of human thought was investigated [12,13]. Similarly, Narens et al. argued that psychology may not be effective in transferring contextual dialogues into probabilistic models but found solace in the belief that quantum probability theory could better handle the dynamics of contextual impact on behaviours [14].
Following these insights, in [15], Lukac et al. provided a quantum mechanical model of robot-specific emotions based on quantum cellular automata, where a robot is described as a set of functional agents, each acting on behalf of its own emotional function. Years later, Raghuvanshi et al. proposed concepts that apply quantum circuits to model fuzzy sets, which birthed a new method to model emotional behaviour for a humanoid robot [16]. More recently, in [17], Yan et al. proposed a framework portraying emotion transition and fusion, encoded as a quantum emotion space with three entangled qubit sequences that locate an emotion point within a three-dimensional emotion space. They surmised that such a framework facilitates easier tracking of emotion transitions over different intervals in the emotion space. As a drawback, however, their definition of emotion relies on the two qubit sequences, which is insufficient when more sophisticated emotions are desired.
To summarise, to date, efforts to integrate quantum mechanics into different aspects of affective computing can be divided into two groups: (1) applications that exploit some of the properties responsible for the potency of quantum computing algorithms as tools to improve available affective computing tasks, and (2) applications that derive their inspiration from the expectation that quantum computing hardware will soon be physically realised and, hence, focus on extending affective computing tasks to the quantum computing framework. These directions motivate two fundamental questions regarding the notion of quantum emotion: First, what advantages does the resulting quantum emotion offer us? And, second, how does this "quantumness" relate to possible descriptions of future quantum robots [18]?
While there may be other questions, answering these two, as well as exploring the integration of some of the remarkable advances recorded in quantum machine learning [19], quantum artificial intelligence [20], and quantum image processing [21], provides some of the other motivations for this study. For the first question, the use of quantum mechanics in describing emotion presupposes the potential use of parallel processing power and demands lower computing resources, since quantum computing offers exponential storage and speed-up relative to the required number of qubits, as well as a sufficient (polynomial) number of CNOT gates needed to manipulate a quantum emotion. Regarding the second question, by conceptualising a quantum robot as a mobile quantum system that includes an on-board quantum computer and its required ancillary systems [22], it is reasonable to imagine that quantum robots will carry out tasks whose goals include specified changes and adjustments to emotions in conformity with the state of the environment, or even carrying out measurements on the environment itself. Similarly, integrating quantum mechanics into descriptions of robot emotion supports the potential development of quantum robots as well as their use to manipulate classical robot devices to perform advanced emotion-related actions [23].
Consequent upon the foregoing survey, in this study, we propose a conceptual framework for quantum affective computing (QAC) as a quantum computing unit within a quantum robot system (defined here as a quantum robot that includes many quantum computing units, each of which accomplishes specific tasks and exchanges information with the others). While affective computing is defined as presented at the beginning of this section, the primary purpose of QAC is to enhance the capacity for storing, processing, and retrieving affective information in human-robot interaction, either by transitioning from traditional to quantum paradigms (e.g., as used for quantum robots) or by complementing traditional procedures with quantum techniques (e.g., the quantum machine learning algorithms mentioned earlier). A QAC unit is one ingrained with the capability to receive external stimuli and manipulate them as quantum signals that can motivate (or compel) the quantum robot to carry out actions on its environment, such as emotion expression and path planning. The first step in accomplishing this is formulating a representation of emotion within the precepts of quantum mechanics and then realising its manipulation and fusion for different applications in quantum-based affective computing.
The remainder of the study is organised as follows. In Section 2, we present some quantum properties and propose our framework for QAC. In Section 3, we define the quantum representation of robot emotion and discuss its basic transitions. In Section 4, we discuss the intricacies involved in fusing multiple robot emotions and illustrate both the fusion and retrieval procedures.
3. Quantum Representation for Robot Emotion
A traditional two-dimensional pleasure-arousal (PA) plane is characterised by a pleasure-displeasure axis and an arousal-sleep axis that indicate both specific emotions and general features common to many different emotions [36]. In addition, unlike the expression of sensitive feelings in human communication, discussions about robot emotion are relatively straightforward. Therefore, we modify the original PA plane in [36] and represent it in the form depicted in Figure 5a. As seen in that figure, there are four quadrants, labelled "Excitement", "Distress", "Depression", and "Contentment". Since our study is focused on robot emotion, each quadrant of the modified PA plane can, for now, be roughly divided into two key emotions, as illustrated in the figure.
We adopt the PA plane with 8 emotions (using angles in different intervals) in the four quadrants, as shown in Figure 5b. The colours are selected according to established psychological interpretations, and the colour information corresponds to the selected emotion for visualisation [37]. It is noteworthy that researchers have divergent opinions regarding the use of colour coding in descriptions of emotions; its use here is purely for clarity in visualising the emotions. The actual parameters used to interpret the emotions are the divisions represented by the angles in Figure 5b, which, as already explained, are encoded as quantum superposition states. The type of emotion is chosen according to the coordinate values in the original PA plane. Since angle values in the range $0$ to $2\pi$ are continuous, we can add more intervals (representing more emotions) to this spectrum according to the situation.
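To make this angle-based labelling concrete, the minimal Python sketch below assigns the eight emotions to equal angle intervals on the modified PA plane and rescales a PA-plane angle to the interval used later for the quantum encoding. The split of each quadrant into two unnamed sub-emotions, the anchoring of "Excitement" at angle 0, and the function names are assumptions made purely for illustration.

```python
import numpy as np

# The four quadrant labels come from Figure 5a; splitting each quadrant into two
# sub-emotions and anchoring "excitement" at angle 0 are assumptions for this sketch.
QUADRANTS = ["excitement", "distress", "depression", "contentment"]

def emotion_label(angle):
    """Map a PA-plane angle in [0, 2*pi) to one of 8 equal intervals."""
    index = int((angle % (2 * np.pi)) // (np.pi / 4))
    quadrant, half = divmod(index, 2)
    return f"{QUADRANTS[quadrant]}-{half + 1}"

def to_quantum_parameter(angle):
    """Bijectively rescale a PA-plane angle in [0, 2*pi) to theta in [0, pi/2)."""
    return (angle % (2 * np.pi)) / 4.0

print(emotion_label(0.3), to_quantum_parameter(0.3))   # excitement-1, 0.075
```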
Recalling the single-qubit state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ and its discussion in Section 2, we let $\alpha = \cos\theta$ and $\beta = \sin\theta$, $\theta \in [0, 2\pi)$; then $|\alpha|^2 + |\beta|^2 = \cos^2\theta + \sin^2\theta = 1$. Further, if we let $\theta \in [0, \pi/2]$, then both $\cos\theta$ and $\sin\theta$ are non-negative. We do this because the trigonometric functions are monotonically changing in this period. In this case, we need to proportionally map the emotions in the range $0$ to $2\pi$ to a new interval of $[0, \pi/2]$. Since the angles in these two intervals are continuous, a bijection is a useful tool to accomplish the desired transformation. In other words, we can map all eight types of emotion via angles in the interval $[0, \pi/2]$. Consequently, mathematically, the quantum emotion of a robot can be expressed as
$$|E(\theta)\rangle = \cos\theta\,|0\rangle + \sin\theta\,|1\rangle, \qquad \theta \in \left[0, \tfrac{\pi}{2}\right], \tag{1}$$
where $\theta$ is regarded as the emotional parameter (or variable), and $|0\rangle$ and $|1\rangle$ are the 2-D computational basis quantum states. Following this formalism, it is important to discuss how a desired quantum emotion state of a robot can be generated. Given an angle $\theta$, a rotation matrix (rotation around the $y$-axis of a Bloch sphere by angle $2\theta$) is defined as
$$R_y(2\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \tag{2}$$
which transforms the computational basis state $|0\rangle$ into the desired quantum emotion state, expressed as
$$R_y(2\theta)\,|0\rangle = \cos\theta\,|0\rangle + \sin\theta\,|1\rangle = |E(\theta)\rangle. \tag{3}$$
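As a quick numerical check of Equations (1)-(3), the following NumPy sketch builds the rotation $R_y(2\theta)$ and applies it to $|0\rangle$ to obtain the emotion state; the helper names `ry` and `emotion_state` are ours and not part of the formalism.

```python
import numpy as np

def ry(angle):
    """Rotation about the y-axis of the Bloch sphere by `angle`."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def emotion_state(theta):
    """|E(theta)> = cos(theta)|0> + sin(theta)|1>, obtained as R_y(2*theta)|0>."""
    ket0 = np.array([1.0, 0.0])
    return ry(2 * theta) @ ket0

theta = np.pi / 8                          # an emotional parameter in [0, pi/2]
state = emotion_state(theta)
print(state)                               # [cos(pi/8), sin(pi/8)]
assert np.allclose(state, [np.cos(theta), np.sin(theta)])
```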
Now, to transform the emotional state, we consider several single quantum gates and apply them on the quantum emotion state in Equation (1).
The X gate exhibits the property that $X|0\rangle = |1\rangle$ and $X|1\rangle = |0\rangle$; so, when it is applied on an emotion state, its impact can be expressed as
$$X\,|E(\theta)\rangle = \sin\theta\,|0\rangle + \cos\theta\,|1\rangle = \left|E\!\left(\tfrac{\pi}{2} - \theta\right)\right\rangle, \tag{4}$$
where $|E(\theta)\rangle$ is the emotion state as defined in Equation (1). The function of the X gate is like the emotion inversion operation that flips every given emotion.
Mathematically, the Z gate has the properties that $Z|0\rangle = |0\rangle$ and $Z|1\rangle = -|1\rangle$; so, when it is applied on the emotion state, we have the transformation
$$Z\,|E(\theta)\rangle = \cos\theta\,|0\rangle - \sin\theta\,|1\rangle = |E(-\theta)\rangle, \tag{5}$$
whose function is the negation operation, i.e., it changes the sign of the angle. In the QAC framework, when combined with other transformations, this operation produces many useful outcomes, as will be discussed later.
The H gate executes the transformations $H|0\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ and $H|1\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$. Therefore, when it is applied on the emotion state, the outcome can be expressed as
$$H\,|E(\theta)\rangle = \frac{\cos\theta + \sin\theta}{\sqrt{2}}\,|0\rangle + \frac{\cos\theta - \sin\theta}{\sqrt{2}}\,|1\rangle = \left|E\!\left(\tfrac{\pi}{4} - \theta\right)\right\rangle. \tag{6}$$
As we shall see later, this operation can be used to neutralise (i.e., cancel out) an emotional state.
The general form resulting from the combination of the three transformations (i.e., X, Z, and H) can be expressed as a unitary matrix of the form
$$U(\varphi) = \begin{pmatrix} \cos\varphi & \sin\varphi \\ \sin\varphi & -\cos\varphi \end{pmatrix}, \tag{7}$$
where $\varphi$ denotes the transformation angle. When applied on the emotion state of a robot, the $U(\varphi)$ operator transforms the emotion to a state
$$U(\varphi)\,|E(\theta)\rangle = \cos(\varphi - \theta)\,|0\rangle + \sin(\varphi - \theta)\,|1\rangle = |E(\varphi - \theta)\rangle. \tag{8}$$
This operation transforms a given emotion to a new one, within the bounds encoded by $\varphi$ and $\theta$, respectively. As inferred from Equation (5), combining this operation with the Z gate changes the emotion to a new state whose angle is $\varphi + \theta$. Moreover, the transformations X, Z, and H are the special cases of $U(\varphi)$ in which $\varphi$ is equal to $\pi/2$, $0$, and $\pi/4$, respectively.
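The effect of the X, Z, H, and $U(\varphi)$ operators on an emotion state can be verified numerically. The sketch below is our own illustration (the helper `angle_of`, which reads the emotional parameter back from the real amplitudes, is hypothetical); it reproduces Equations (4)-(8) and the stated special cases.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def U(phi):
    """General operator of Equation (7): U(phi)|E(theta)> = |E(phi - theta)>."""
    return np.array([[np.cos(phi),  np.sin(phi)],
                     [np.sin(phi), -np.cos(phi)]])

def emotion_state(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def angle_of(state):
    """Recover the emotional parameter from a real-amplitude emotion state."""
    return np.arctan2(state[1], state[0])

theta = np.pi / 5
e = emotion_state(theta)
print(angle_of(X @ e))       # pi/2 - theta  (emotion inversion, Equation (4))
print(angle_of(Z @ e))       # -theta        (negation, Equation (5))
print(angle_of(H @ e))       # pi/4 - theta  (Equation (6))
print(angle_of(U(0.3) @ e))  # 0.3 - theta   (Equation (8))
# X, Z, H are U(phi) at phi = pi/2, 0, pi/4:
assert np.allclose(U(np.pi / 2), X) and np.allclose(U(0), Z) and np.allclose(U(np.pi / 4), H)
```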
Furthermore, by using the definition of $U(\varphi)$, that of the rotation matrix in Equation (2), and the property of the Pauli Z matrix, we obtain the useful relation
$$U(\varphi) = R_y(2\varphi)\,Z. \tag{9}$$
An implication arising from Equation (9) is that trivial tasks showing an increase or decrease of emotion from $\theta$ to $\theta + \varphi$ or from $\theta$ to $\theta - \varphi$ can be achieved by using $R_y(2\varphi)$ or $R_y(-2\varphi)$, respectively. These two operations are expressed by
$$R_y(2\varphi)\,|E(\theta)\rangle = |E(\theta + \varphi)\rangle \tag{10}$$
and
$$R_y(-2\varphi)\,|E(\theta)\rangle = |E(\theta - \varphi)\rangle. \tag{11}$$
The matrix $R_y$ has unit determinant and exhibits the following properties:
$$R_y(2\varphi_1)\,R_y(2\varphi_2) = R_y\!\left(2(\varphi_1 + \varphi_2)\right), \qquad X\,R_y(2\varphi)\,X = R_y(-2\varphi). \tag{12}$$
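A short numerical check of Equations (9)-(12) as reconstructed above (the specific properties listed in Equation (12) are our reading of how they are used in the circuit discussion that follows):

```python
import numpy as np

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def U(phi):
    return np.array([[np.cos(phi), np.sin(phi)], [np.sin(phi), -np.cos(phi)]])

phi, theta = 0.4, 0.7
e = np.array([np.cos(theta), np.sin(theta)])       # |E(theta)>

# Equation (9): U(phi) = R_y(2*phi) Z
assert np.allclose(U(phi), ry(2 * phi) @ Z)

# Equations (10)-(11): R_y(+/-2*phi) shifts the emotional parameter by +/-phi
assert np.allclose(ry(2 * phi) @ e, [np.cos(theta + phi), np.sin(theta + phi)])
assert np.allclose(ry(-2 * phi) @ e, [np.cos(theta - phi), np.sin(theta - phi)])

# Equation (12): unit determinant, additivity, and conjugation by X
assert np.isclose(np.linalg.det(ry(2 * phi)), 1.0)
assert np.allclose(ry(2 * phi) @ ry(2 * theta), ry(2 * (phi + theta)))
assert np.allclose(X @ ry(2 * phi) @ X, ry(-2 * phi))
```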
Following the above mathematical formalism, in the quantum circuit model of computation, circuit elements can be used to illustrate the execution of the same operations highlighted earlier. The controlled-$R_y(2\varphi)$ operation can be realised using a combination of two rotation gates and two controlled-NOT (CNOT) gates, as presented in Figure 6a. Here, when the control wire has an input $|0\rangle$ state, the operation $R_y(\varphi)\,R_y(-\varphi)$, which is equal to the identity operation, is executed; this translates to the first property in Equation (12). Conversely, when the control wire is in the $|1\rangle$ state, the operation $R_y(\varphi)\,X\,R_y(-\varphi)\,X$, which is equal to $R_y(2\varphi)$, is executed. An extended version of this operation with multiple control conditions is presented in Figure 6b.
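Assuming the gate ordering just described for Figure 6a, the decomposition of the controlled rotation into two $R_y$ gates and two CNOTs can be checked directly with matrices; the qubit ordering (control first, target second) is a convention chosen for this sketch.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

P0 = np.array([[1, 0], [0, 0]])                    # |0><0|
P1 = np.array([[0, 0], [0, 1]])                    # |1><1|
CNOT = np.kron(P0, I2) + np.kron(P1, X)            # control: first qubit, target: second

def controlled_ry(angle):
    """Apply R_y(angle) to the target iff the control qubit is |1>."""
    return np.kron(P0, I2) + np.kron(P1, ry(angle))

phi = 0.9
# Assumed circuit of Figure 6a (time order, left to right): CNOT, R_y(-phi), CNOT, R_y(phi)
decomposition = np.kron(I2, ry(phi)) @ CNOT @ np.kron(I2, ry(-phi)) @ CNOT
assert np.allclose(decomposition, controlled_ry(2 * phi))
```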
In the circuit model of quantum computation, computation is steered by a sequence of unitary operations represented as quantum gates (presented in Figure 7) that, together with connecting wires, act on one or more qubits to simultaneously affect each element of a superposition and generate massive parallel data processing [38]. In this model, the complexity (and, conversely, the simplicity) of executing a quantum algorithm is determined by the ability to concatenate and break down layers of the circuit in terms of basic or elementary quantum gates. In Figure 7, such complex layers (on the left) are decomposed in terms of the simpler NCT gate library (on the right), i.e., composed of NOT, CNOT, and Toffoli gates. In this manner, seemingly complex layers of a circuit can be decomposed into simpler ones, as illustrated in Figure 7c, where an n-controlled NOT gate is decomposed into 2(n−1) Toffoli gates and 1 CNOT gate, and each Toffoli gate can be further simulated approximately by 6 CNOT gates [39].
4. Fusion of Quantum Emotions for Multi-Robots
Emotion fusion entails the combination (i.e., mixture) of the emotions of multiple robots in a specified communication scenario or environment. The fused emotion state of all robots (or a subset of them) can be controlled or changed using external stimuli.
We assume an enclosure (such as a room) containing N robots, each identified by a number. To simplify things, we divide the enclosure into $2^n \times 2^n$ lattice cells (so that $N \le 2^{2n}$), as shown in Figure 8. Figure 8a shows the multi-robot setting in a room, where the X, Y, and Z axes constitute the H, W, and V planes. As an example, Figure 8b shows the details of the H plane, where the rows and columns along the plane indicate the levels of the qubits encoding the emotions in the affected area. For instance, spatially, the robots in the square "ABCD" can be represented by one qubit sequence, while the four robots in the square "AEFG" can be simultaneously controlled by a qubit sequence with fewer specified qubits. This illustrates that the more robots we want to control at once, the fewer qubits we need.
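The point that larger squares can be addressed with fewer specified qubits can be illustrated with a toy binary-addressing sketch; the Y-then-X bit ordering of the position register is an assumption made here for illustration.

```python
n = 3                                    # 2^n x 2^n lattice -> 2n position qubits
positions = [(y, x) for y in range(2 ** n) for x in range(2 ** n)]

def bits(value, width):
    return format(value, f"0{width}b")

def address(y, x):
    """2n-bit address of a lattice cell: Y bits followed by X bits (assumed order)."""
    return bits(y, n) + bits(x, n)

# Fixing all 2n bits singles out one robot; fixing only the leading bit of each
# coordinate (a 2-bit pattern) addresses a whole 4x4 quadrant at once.
quadrant = [(y, x) for (y, x) in positions
            if address(y, x)[0] == "0" and address(y, x)[n] == "0"]
print(len(quadrant))                     # 16 robots controlled by only 2 fixed qubits
```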
Therefore, the emotion fusion of these N robots can be represented as
$$|\Psi(\theta)\rangle = \frac{1}{2^n} \sum_{i=0}^{2^{2n}-1} |E(\theta_i)\rangle \otimes |i\rangle, \qquad |E(\theta_i)\rangle = \cos\theta_i\,|0\rangle + \sin\theta_i\,|1\rangle, \tag{13}$$
where $\otimes$ is the tensor product notation, the $|i\rangle$ are $2^{2n}$-D computational basis quantum states, and $\theta = (\theta_0, \theta_1, \ldots, \theta_{2^{2n}-1})$ is the vector of angles encoding the emotions. There are two parts in a typical scenario for multi-robot emotion representation: $|E(\theta_i)\rangle$ encodes the emotional information, and $|i\rangle$ encodes the corresponding location (i.e., the identity) of each robot in the scenario. Therefore, facilitating the emotion fusion of these N robots requires $2n + 1$ qubits, where $2n$ qubits specify each robot's spatial location in the enclosure and the remaining qubit is used to identify the emotional states of all robots, i.e., controlled operations conditioned on the locations (Y and X axes) are used to confine the present emotion operation to a targeted robot. In this manner, transformations on a multi-robot quantum atmosphere facilitate parallel manipulation (a communication atmosphere is described as the psychological factors and feelings that can affect the behavioural process and outcomes in the space, and it is usually obtained by fusing individual emotional states). Finally, the fused emotional state of the multi-robots is a normalised state (i.e., similar to one composed of the intricacies of many humans in some predefined environment or scenario), i.e., $\||\Psi(\theta)\rangle\| = 1$, as given by
$$\||\Psi(\theta)\rangle\| = \frac{1}{2^n}\sqrt{\sum_{i=0}^{2^{2n}-1}\left(\cos^2\theta_i + \sin^2\theta_i\right)} = 1. \tag{14}$$
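A direct NumPy construction of the fused state in Equation (13) (with the emotion qubit written first, matching the tensor order used above) confirms the normalisation in Equation (14); the function names are ours.

```python
import numpy as np

def emotion_state(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def fused_emotion_state(thetas):
    """Equation (13): (1/2^n) * sum_i |E(theta_i)> (x) |i> over 2^(2n) lattice cells."""
    num_cells = len(thetas)                        # must be 2^(2n)
    state = np.zeros(2 * num_cells)
    for i, theta in enumerate(thetas):
        position = np.zeros(num_cells)
        position[i] = 1.0                          # |i>
        state += np.kron(emotion_state(theta), position)
    return state / np.sqrt(num_cells)

thetas = np.random.uniform(0, np.pi / 2, size=16)  # n = 2: a 4 x 4 lattice of robots
psi = fused_emotion_state(thetas)
print(np.linalg.norm(psi))                         # 1.0, as in Equation (14)
```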
Next, we discuss how to realise the emotion fusion. Given the vector $\theta = (\theta_0, \theta_1, \ldots, \theta_{2^{2n}-1})$ of angles, we take a quantum computer from the initialised state $|0\rangle^{\otimes 2n+1}$ to the quantum emotion state in Equation (13) using a polynomial number of simple gates. First, using the two-dimensional identity matrix $I$ and $2n$ Hadamard matrices $H$, the transform $\mathcal{A} = I \otimes H^{\otimes 2n}$ applied on $|0\rangle^{\otimes 2n+1}$ produces the state $|\Psi_1\rangle$, presented as
$$|\Psi_1\rangle = \mathcal{A}\,|0\rangle^{\otimes 2n+1} = \frac{1}{2^n} \sum_{i=0}^{2^{2n}-1} |0\rangle \otimes |i\rangle. \tag{15}$$
So far, we have formally separated the room into lattice cells, i.e., each robot in the enclosure has a unique identification. Subsequently, we relate each robot to a specified emotion.
The rotation matrices $R_y(2\theta_i)$ as defined in Equation (2) and controlled-rotation matrices $R_i$ with $0 \le i \le 2^{2n}-1$ are considered, i.e.,
$$R_i = \left(I \otimes \sum_{j=0,\, j \ne i}^{2^{2n}-1} |j\rangle\langle j|\right) + R_y(2\theta_i) \otimes |i\rangle\langle i|. \tag{16}$$
This controlled rotation $R_i$ is a unitary matrix, since $R_i R_i^{\dagger} = I$. Meanwhile, applying $R_i$ and the composite transform $\mathcal{R}$ on $|\Psi_1\rangle$ produces
$$R_i\,|\Psi_1\rangle = \frac{1}{2^n}\left(\sum_{j=0,\, j \ne i}^{2^{2n}-1} |0\rangle \otimes |j\rangle + |E(\theta_i)\rangle \otimes |i\rangle\right) \tag{17}$$
and, with
$$\mathcal{R} = \prod_{i=0}^{2^{2n}-1} R_i, \tag{18}$$
$$\mathcal{R}\,|\Psi_1\rangle = \frac{1}{2^n} \sum_{i=0}^{2^{2n}-1} |E(\theta_i)\rangle \otimes |i\rangle = |\Psi(\theta)\rangle. \tag{19}$$
In addition, it is clear from Equation (19) that
$$\mathcal{R}\,\mathcal{A}\,|0\rangle^{\otimes 2n+1} = |\Psi(\theta)\rangle. \tag{20}$$
Therefore, the unitary transform $\mathcal{R}\mathcal{A}$ turns a quantum computing hardware from the initialised state $|0\rangle^{\otimes 2n+1}$ to the emotion fusion state $|\Psi(\theta)\rangle$. The total number of simple operations used to prepare the fusion emotion state grows quadratically with the number of lattice cells $2^{2n}$. Consequently, the computational complexity for fusing the emotional states of multi-robots can be calculated as $O(2^{4n})$.
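The two-step preparation of Equations (15)-(19) can be simulated with explicit matrices for a small lattice. The sketch below is a brute-force state-vector illustration under the qubit ordering assumed above (emotion qubit first), not an efficient circuit implementation.

```python
import numpy as np
from functools import reduce

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def prepare_fused_state(thetas):
    """Drive |0...0> to the fused emotion state of Equation (13) via Hadamards on
    the position register and one position-controlled rotation R_i per cell."""
    num_cells = len(thetas)                        # 2^(2n) lattice cells
    two_n = int(np.log2(num_cells))                # number of position qubits
    state = np.zeros(2 * num_cells)
    state[0] = 1.0                                 # |0>^(2n+1), emotion qubit first

    # Step 1 (Equation (15)): A = I (x) H^{(x)2n} spreads the position register
    H_2n = reduce(np.kron, [H] * two_n)
    state = np.kron(np.eye(2), H_2n) @ state

    # Step 2 (Equations (16)-(19)): R_i rotates the emotion qubit by 2*theta_i
    # only on the branch where the position register equals |i>
    for i, theta in enumerate(thetas):
        P_i = np.zeros((num_cells, num_cells))
        P_i[i, i] = 1.0                            # |i><i|
        R_i = np.kron(np.eye(2), np.eye(num_cells) - P_i) + np.kron(ry(2 * theta), P_i)
        state = R_i @ state
    return state

thetas = np.random.uniform(0, np.pi / 2, size=16)  # n = 2: a 4 x 4 lattice
psi = prepare_fused_state(thetas)
expected = sum(np.kron([np.cos(t), np.sin(t)], np.eye(16)[i])
               for i, t in enumerate(thetas)) / 4
assert np.allclose(psi, expected)
```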
While the storage of an emotional state is accomplished during the preparation process, the measurement of the quantum emotion state produces a probability distribution that is used to retrieve the emotion. As presented earlier, measurement of a quantum state produces only one result, i.e., one entry in a set of basis vectors. Therefore, with only one copy of a quantum state, it is impossible to obtain full information about that state; hence, the measurement process requires many identical quantum states, and multiple measurement operations on these identical states recover information about the quantum state [40], which in our case is information about the emotion of each robot in the form of a probability distribution.
As discussed earlier, transformations on a multi-robot quantum emotion facilitate parallel manipulation in each enclosure or setting. In the sequel, we illustrate the use of control conditions to confine the rotation operation so as to transform the emotion of a single robot or of multiple robots in an environment. Suppose an $8 \times 8$ enclosure accommodating 64 robots, whose lattice is divided into 4 squares, i.e., 16 robots in each square, as shown in the top figure in Figure 9a. Further, we suppose that all 16 robots in each square have similar emotions, which are encoded using the same colour. This is similar to a competitive match where, for example, there are four teams that each share some emotional states (e.g., expectation) while also having emotions unique to them. After the match, each team's emotion will change according to the result, e.g., the winner will be happy and the loser will be sad or even angry. To encode this and further explain the example, we further divide the lattice into an upper and a lower half, which can be used to constrict the operation by confining one rotation operation (using the state "1" control condition) and another (using the state "0" control condition) to the upper and lower halves of the plane, respectively, as shown in Figure 9b; using these, we can transform an emotion to its opposite state on the emotion spectrum. Together with Equation (11), the bottom figure in Figure 9a illustrates the original and resultant emotion states (we only use the central $2 \times 2$ lattice, i.e., 4 robots, to show their locations and the change in their emotional states).
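A matrix sketch of such half-conditioned rotations (a toy 4 x 4 lattice rather than the 8 x 8 example above; the choice of rotation angles and the row-bit convention are assumptions) shows how a single pass applies different rotations to the two halves while remaining unitary.

```python
import numpy as np

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

# Toy 4 x 4 lattice: 4 position qubits; the leading row bit splits the plane into
# two halves (which half is "upper" depends on the indexing convention).
P0 = np.diag([1.0, 0.0])                 # projector onto |0> of the row bit
P1 = np.diag([0.0, 1.0])                 # projector onto |1> of the row bit
I8 = np.eye(8)                           # the remaining 3 position qubits
phi = np.pi / 4

# One pass applies R_y(2*phi) where the row bit is 1 (state "1" condition)
# and R_y(-2*phi) where it is 0 (state "0" condition).
half_conditioned = (np.kron(ry(2 * phi), np.kron(P1, I8))
                    + np.kron(ry(-2 * phi), np.kron(P0, I8)))
assert np.allclose(half_conditioned @ half_conditioned.T, np.eye(32))   # unitary
```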
Extracting and subsequently analysing the distribution of probability readouts resulting from measurement operations provides the information needed to retrieve a new emotion state. As mentioned in Section 2.1, the total probability of the states $|0\rangle$ and $|1\rangle$ in an emotion state equals 1. Therefore, when the fused emotion state $|\Psi(\theta)\rangle$ is measured, the probability distribution of the kth robot can be recovered using
$$P_0(k) = \frac{\cos^2\theta_k}{2^{2n}}, \qquad P_1(k) = \frac{\sin^2\theta_k}{2^{2n}}. \tag{21}$$
Figure 10 presents the probability distributions for the fused emotion environment described earlier, where (a) and (b) show the fused emotions prior to and after the transformation in Figure 9 (i.e., via the circuit in Figure 9b). These distributions provide a visual impression of the impact of the quantum rotation operation on the emotional states of each robot.
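The retrieval step can be mimicked by computing the Born-rule probabilities directly from the state vector, following Equation (21) as reconstructed above; in practice these probabilities would be estimated from repeated measurements of identically prepared copies, and the helper names below are ours.

```python
import numpy as np

def measurement_probabilities(psi, num_cells):
    """Born-rule probabilities P(e, k) of reading emotion value e together with
    position k from the fused state (emotion qubit first, as in Equation (13))."""
    amplitudes = psi.reshape(2, num_cells)          # row e = 0, 1; column k
    return np.abs(amplitudes) ** 2

def recover_angles(probs):
    """Invert P(0,k) = cos^2(theta_k)/2^(2n) and P(1,k) = sin^2(theta_k)/2^(2n)."""
    num_cells = probs.shape[1]
    return np.arctan2(np.sqrt(probs[1] * num_cells), np.sqrt(probs[0] * num_cells))

thetas = np.random.uniform(0, np.pi / 2, size=16)
psi = sum(np.kron([np.cos(t), np.sin(t)], np.eye(16)[i])
          for i, t in enumerate(thetas)) / 4
probs = measurement_probabilities(psi, 16)
assert np.allclose(recover_angles(probs), thetas)
```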
While we have devoted a lot of attention to describing the potential use of quantum mechanics in affective computing, we would also like to reiterate that the 'quantumness' in these operations remains the focal point of any future quantum affective computing (QAC) framework. This is especially useful in stimulating the interest of practitioners with little or no background in quantum computing towards harnessing the potency of quantum computing to enhance traditional affective computing paradigms. In addition to the potential use of the proposed module in future quantum robotic systems, we emphasise two advantages that the proposed QAC paradigm offers. First, referring to Equation (13), in discussing the emotion states of N robots, we saw how quantum superposition works: we used only one superposition qubit to encode the emotional states of all N robots and used control functions to differentiate (or restrict) each robot to certain emotions. This saves a lot of computing resources, especially when many robots are involved. In addition, as shown in Figure 9, and referring to the discussion in Section 2.1, quantum computation supports the simultaneous evaluation of a function $f(x)$ for many values of $x$, which is referred to as quantum parallelism. Therefore, in the example shown in Figure 9, performing the rotation operation once can change the emotional state of all the robots in the system, and, via the use of control conditions, two rotation operations can further divide the lattice into two parts, each with its own transformation. In this manner, the more controls a transformation has, the smaller the area affected by it. However, such a transformation comes with additional costs in terms of the depth and the number of basic gates required to confine the circuit to a predetermined location [26,38].
5. Concluding Remarks
The massive amount of data that will be generated by future sensors, the rising complexity associated with controlling, planning, interacting, and reasoning systems, and the fact that future robots will operate in all kinds of environments, especially those designed to facilitate interactions with humans, provide evidence of the huge computing demands of future robotic systems. This will be further compounded by the integration of the internet of things (IoT) into robotics for advanced smart homes, assisted living, industrial IoT, and other future technologies. Meanwhile, quantum systems are credited with astounding capabilities ascribed to their properties of entanglement, superposition, and parallelism. Therefore, there is a growing consensus and interest in harnessing quantum algorithms, quantum sensors, and quantum controls for important roles in future robotics and automated integrated systems.
While the early ideas of quantum robots are credited to Benioff [18], his ruminations did not posit the robot having an awareness, nor did they envision the robot's ability to make decisions or measurements. Meanwhile, in Dong's work [29], which is more widely accepted by engineers, an alternative definition for quantum robots was put forward in which the robots' interaction with the external environment via sensing and information processing was considered for the first time. In this reasoning, a quantum robot is a mobile physical apparatus designed to exploit the quantum effects of quantum systems, which can sense the environment and its own state, process quantum information, and accomplish meaningful tasks. With such an engineering perspective, they formulated several fundamental components composing a quantum robot's information acquisition and communication.
In an attempt to offer some modest contributions to the above discourse, our study explores the rudimentary processes for interpreting standard notations and operations in affective computing using properties of quantum computation. The main motivations are twofold. First, to stimulate interest in the microcosmic interpretation of emotions and affective computing in terms of the precepts of quantum mechanics. This is important considering that quantum technology provides veritable tools to model microcosmic phenomena that coalesce into colossally erratic registers, which align with the unpredictable and oftentimes arbitrary or fluctuating nature of emotions that can be easily triggered, changed, or even extinguished depending on impulses, stimuli, etc. Second, in their quantum structure of robots, Dong et al. proposed the notion of multi-quantum computing units (MQCU), which exhibit marked similarities with the human brain. However, they did little to present a one-to-one correspondence between information processing in the two worlds. In this study, we advanced this effort by demonstrating how emotions in a quantum affective computing setting could be encoded and processed using quantum unitary operations and, by doing so, we outlined frameworks for communication between robots and their ability to make emotional decisions. While facile in its presentation, the study presents a first step in quantum affective computing (QAC), where many of the bewildering properties of quantum computing could be coalesced into distributions and descriptions of quantum emotion as well as the fusion of emotions in different robot-robot and robot-human interactive environments.
Among others, the advantages of the proposed QAC include exploiting the parallel computing capability inherent to quantum computers to conserve computing resources, as well as the considerable increase in processing speed it offers. Furthermore, the QAC framework facilitates the representation and fusion of multi-robot emotions in an enclosure.
Despite the enumerated potentials, on the long road to harnessing the potency of quantum computation for efficient affective computing, our ongoing and future work is motivated by the following considerations. First, the proposed QAC framework is supposed to be a dependent, interactive unit that exchanges information with other QCUs in the quantum robot system. Therefore, we may need to study how such a unit can communicate with others, including the necessary information acquisition, fusion, and distribution. In other words, we should define the interface of the QAC unit with other QCUs to make the affective processing of quantum robots and the corresponding responses seamless and practical. Second, the four hypothetical devices of the QAC framework need to be further studied to ascertain which technologies are suited for processing quantum sensory information. For example, typical multi-robot or robot-human scenarios, as well as changes in emotion states, are influenced by external factors and stimuli. Following that, we must determine how to design the ES to extract useful information from external stimuli and encode it for further computing in the quantum system. Third, in the fusion of multi-robot emotion, it is known that the multiple measurement operations required to recover an emotion state impose additional computing costs. This must be carefully studied for efficiency. Ancilla-driven and measurement-based quantum computation [35] are viable alternatives to the circuit model of quantum computation that have been considered in similar quantum readers. Consequently, it is expedient to study the refinements required for their integration to help provide the necessary trade-offs between accuracy and computational cost.
As argued in [41], to build a universal emotion space model with robust emotion recognition and expression systems, it is necessary to discriminate the different types of emotion and establish some basic modes of emotion. However, emotions are predilections that, even in humans, vary from person to person. They permeate, interact, and transition into each other, often with quite complex and frequent changes. In this regard, quantifying an emotion requires adequate emotion datasets as well as a large amount of computing overhead. Consequently, cross-disciplinary techniques, such as data mining and machine learning, must be considered to enhance the robot's capability for better multi-robot as well as human-robot interaction (HRI). Similarly, an important objective of emotional space modelling is to support the development of emotional robots and the design of human-robot emotional interaction systems [42]. Therefore, in order to realise practicable and computationally efficient emotional models, effective HRI ingrained with robust emotion space models and standards (i.e., technical and ethical) must be considered. Furthermore, extensive models for multimodal emotion fusion, communication atmosphere, intention understanding, and computational systems of emotions, etc., that can meet personalised and non-personalised user needs are required for the development of emotional robots.