Review

Fintech Agents: Technologies and Theories

Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore 639798, Singapore
* Author to whom correspondence should be addressed.
Electronics 2023, 12(15), 3301; https://doi.org/10.3390/electronics12153301
Submission received: 27 May 2023 / Revised: 24 July 2023 / Accepted: 25 July 2023 / Published: 31 July 2023
(This article belongs to the Section Computer Science & Engineering)

Abstract
Many financial technology (fintech) applications have incorporated interactive computer agents to act as mediators between the user and the fintech system. This paper provides a comprehensive review of interactive fintech agents from technological and social science perspectives. First, we explain the general fintech landscape and define interactive fintech agents. Next, we review the major technologies involved in creating fintech: (1) artificial intelligence and machine learning, (2) big data, (3) cloud computing, and (4) blockchain; as well as the specific key technologies enabling the following aspects of interactive fintech agents: (1) intelligence, (2) understanding of users, and (3) manifestation as social actors. Following the technology review, we examine issues and theories related to human-fintech agent interaction in the following areas: (1) agents’ understanding of users, (2) agents’ manifestation as social actors (via embodiment, emotion, and personality), and (3) users’ social interaction with agents. Finally, we suggest directions for future research on fintech agents.

1. Introduction

Financial technology (fintech) is an emerging field where novel technologies are used to improve the business operations or services offered by financial institutions and enterprises. Some applications of fintech include e-commerce, crowdfunding, insurance-related technology, and automated investment apps (usually referred to as robo-advisors) [1]. These developments have had a significant impact on the traditional financial landscape. For example, the digital financial institution Nubank provides financial services without the high interest rates and fees of traditional banks, enabling financial inclusion for more sections of society [2]. Globally, an expanded user base is turning to technology for its financial needs. Hence, user experience with fintech services has become an important area for research.
As technology develops as a tool for optimizing services and cutting costs in the financial sector [3], financial services have seen increased usage of computer agents. Agents are traditionally defined as computer programs that can imitate human action and communication and act on behalf of the user [4]. The development of agents led to an important shift in human-computer interaction [5], from direct manipulation (e.g., controlling computers with a keyboard) to indirect manipulation (e.g., controlling smartphones via voice assistants), allowing the automation of mundane tasks such as email filtering, scheduling, and bank account checking [6]. Direct manipulation is not suited to complex computer environments, but indirect manipulation increases accessibility and allows for easier social interaction [7].
Agents acquire knowledge about users and predict their needs [8]. They have been deployed to enable more sophisticated services in many industries, including education, e-commerce, finance, and transport. In short, agents simplify computer use by allowing users to delegate tasks to the computer [9]. This is particularly useful for tasks that humans cannot do [10], or for tasks for which it is expensive to hire humans. Agents are designed to imitate the range of human abilities required for the task [4], whether simple or complex. The financial industry can be complicated and intensive even for experienced users, but agents hold promise as intermediaries that complete financial tasks for users.
Traditionally, agents have been studied in the context of computing and highly specialized users, or as personal assistants that act in the background (e.g., email spam filters) [11]. With the increase of interactive technology, however, many agents, such as website chatbots, have been developed to interact with users in diverse contexts. In contrast to traditional agents that worked in the background of an interface, many current agents have a social interaction element, operate in the foreground, and communicate directly with users.
In the fintech industry, agent applications range from backend operations such as financial cybersecurity to client-facing services such as mobile payments, crowdfunding, cryptocurrency, peer-to-peer lending, stock trading, small business financing, insurance technologies, mortgage lending, and robo-advisors [1,3]. Interactive fintech agents are an important area for research for client-facing services, as the industry is moving towards the automation of services but still needs to maintain good interaction with clients. The financial industry is a unique context in which to examine human-agent interaction because it involves significant levels of trust from users who are asked to entrust a computer with their money. In addition, high market volatility can lead to investors making emotional decisions or adopting a herd mentality, which may be disadvantageous to them. These may be mitigated by a fintech agent that is able to interact with the users smoothly, understand their goals and emotions, and encourage rational decisions.
As the use of interactive fintech agents increases, the psychology behind human-fintech agent interaction demands further examination. Although fintech agents are just digital objects, they need to be studied from a social science perspective because their communicative elements make users perceive them as social actors, a term that refers to anyone who engages in intentional action [12]. Issues and theories related to our interaction with social actors are crucial for the development of fintech agents that are accepted, trusted, and enjoyed by users. This paper contributes to the existing literature by providing an overview of the state-of-the-art in terms of technological considerations in fintech agent development. Additionally, an important contribution of this paper is to highlight the relevance of theories of interaction to the development of interactive fintech agents that are liked and trusted by users. Interaction theories are crucial to developing successful fintech agents and must be examined on par with technical advancements. These social theories are essential references to guide the design of products that will eventually interact socially with financial clients, who will have varying needs and objectives.
This paper is the first known review of fintech agents. Hence, it will serve as a useful guide and reference for scholars as well as practitioners from different fields such as electronic engineering, facial expression detection algorithms, user experience design, entrepreneurship, and psychology. We also highlight where further research can improve the current state of interactive fintech agents.
This review uses the terms “interactive fintech agents” or “fintech agents” to refer to computer algorithms that (a) signal their presence as interaction partners (social actors); and (b) communicate with the user to perform financial tasks or analysis on the user’s behalf. The forthcoming sections will discuss the key technological components of fintech, the technologies involved in fintech agent development, and theories and issues related to human-fintech agent interaction.

2. Methodology

Interactive fintech agents are an emerging area of research, with limited theoretical or empirical research work in this specific area. Hence, there is currently insufficient material for a formal systematic review. Instead, we conducted a narrative review to provide an overview of the developments in this emerging field. This review covers both the technologies and the social issues relevant to interactive fintech agents. The aim of this research was to understand both the technologies required in the development of interactive agents as well as the factors to note in designing fintech agents to be effective interaction partners. The insights presented here can serve as a reference for future systematic reviews and empirical work on interactive fintech agents, both in terms of technical development and improving the quality of the interaction experience.
Section 3 presents a broad overview of the technologies underlying fintech in general. Search terms used for this section were ‘fintech’, ‘technology’, ‘origin’, and ‘application’. Section 4 discusses how technology is used to create fintech agents specifically. Using the search term ‘fintech agent’ did not yield many results due to the lack of specific work in this area. Hence, the keywords ‘agent’, ‘computer agent’, ‘banking agent’, ‘finance agent’, ‘chatbot’, ‘financial chatbot’, and ‘banking chatbot’ were used. To identify the technologies relevant to user interaction, we first used the keywords ‘computer agent’ and ‘interaction’. The initial results provided further keywords that we used to define the scope of the review: ‘user intention’, ‘user emotion’, ‘user financial behavior’, ‘embodiment’, ‘emotion’, and ‘personality’. Section 5 discusses issues of social interaction. The keywords used for this section were ‘human agent interaction’, ‘human computer interaction’, ‘interaction theory’, ‘intention’, ‘emotion’, ‘personality’, ‘financial behavior’, ‘financial bias’, ‘embodiment’, ‘social actors’, and ‘manifest’. Across all the sections, the Boolean operators “AND” and “OR” were used to form combinations of the key search terms, such as ‘computer AND agent AND manifest AND emotion’ to look for prior work on computer agents that display emotions.
We retained peer-reviewed empirical articles, review articles, and credible news sources or websites that were published in English. Articles from non-financial domains (i.e., sources that we found when not including finance-related keywords) were retained if they provided insights into human-agent interaction in the fintech context. As this research area is recent, we did not restrict the time range of the articles. We retained 112 papers for Section 3 and Section 4, and 81 papers for Section 5. This review was conducted on Google Scholar and the authors’ university library database. The keywords that emerged from our initial search on human-computer and human-agent interaction (e.g., embodiment, personality, emotion) were used to structure this review.

3. Financial Technologies (Fintech)

This section will provide a broad overview of the key technologies underlying fintech. Artificial intelligence (AI), blockchain, and cloud services have caused process disruption, while big data enables greater customer acquisition and retention and enhances robo-advisor-driven investment solutions [13]. We briefly review these four main technologies enabling fintech and their application in the field: AI and machine learning (ML), big data, cloud computing, and blockchain.

3.1. Artificial Intelligence and Machine Learning

According to computer scientist and AI pioneer John McCarthy, “(AI) is the science and engineering of making intelligent machines, especially intelligent computer programs” [14] (p. 2). The concept of ML emerged from the idea that “programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort” [15] (p. 535). Hence, ML, a subcategory of AI [16], is used to make sense of data based on experience. Deep learning is a further subset of ML where computers learn using algorithms modeled on the human brain’s biological structure and functioning [17].
In recent times, AI has become widespread across various technological domains, primarily because of its capability to carry out computations in distributed systems or the cloud [18]. These technologies are being put to a wide range of uses in various fields.
The advancement of AI and ML enables real-time analysis of multimedia streaming data, facilitating informed decision-making. Diverse sources generate vast amounts of valuable data, which AI and ML techniques efficiently process to extract meaningful insights. This empowers organizations to identify trends, anomalies, and critical events, optimizing processes and services [19].
Another application of AI is with electromyogram (EMG) signals, which are generated by the electrical activity of muscles and are widely used in applications such as prosthetics, rehabilitation, and human-computer interaction. Traditionally, hardware processing techniques have been employed to analyze and interpret EMG signals. However, with the advancements in AI and edge computing, intelligent embedded processing has emerged as a superior approach [20].
AI applications in marketing have revolutionized the ability to customize services and content on websites and apps, serving as a crucial initial step in driving personalized marketing campaigns and fostering meaningful consumer engagement. ML-powered AI chatbots play a vital role in this process by continuously improving and becoming smarter over time. These chatbots are vast, adaptable, and intelligent, enhancing user experiences with a more lifelike interaction [21].
Similarly, AI and ML have far-reaching implications for fintech. AI systems can process and analyze large amounts of financial data in a consistent and accurate manner that is not possible for humans [22]. AI-powered financial apps provide a greater range of tailor-made services and products at a lower cost by leveraging personal customer data [23]. AI and ML can be applied to data such as the client’s income, saving and spending habits, assets, and liabilities, and can give investment recommendations that match their needs [24] as well as more customized advice than traditional advisors offer [25]. AI and ML also power conversational interfaces, automatically providing relevant and increasingly more accurate information over time [26,27].
For example, Bank of America’s AI-driven virtual assistant, Erica, is used by millions of customers to answer basic banking questions. The chatbot is fed with customer data, including past financial history and location information. Applying ML and deep learning, Erica can provide tailor-made services [28]. The BlackRock Robo-Advisor 4.0 also uses AI and ML and can outperform human stock-pickers in the task of buying stocks whose estimated intrinsic value is higher than the market value [29].
While AI has a diverse range of applications, including fintech, AI technology faces the challenge of needing to be human-centered and placing human well-being at its core. This approach entails designing AI systems responsibly, respecting privacy, adhering to human-centered design principles, implementing appropriate governance and oversight, and ensuring that an AI system’s interactions with individuals consider and respect users’ cognitive capacities. By adopting such an approach, stakeholders can navigate the complexities of AI while prioritizing ethical considerations and harnessing the full potential of these technologies to benefit humanity [30]. Other challenges regarding the development of AI include security, privacy, energy consumption, morality, and ethics [31].

3.2. Big Data

Big data refers to massive data sets that are complex, varied, and fast-moving, requiring advanced management and analysis techniques. Big data analytics refers to a set of technologies and techniques used to find patterns and information from data sets that are substantially larger and more complex than usual data sets [32].
Big data helps banks provide improved services to their customers, boost their security systems, and gauge customer sentiments from social media data. For example, Q.ai, a robo-advisory app, uses AI and big data to provide customized portfolio recommendations and maximize returns on investments [33]. Banks also use big data analytics to study consumption patterns and customer behavior [34,35]. Similarly, health insurance companies use data from wearable technologies to provide superior customer service and product innovations [36], and some insurance companies track driving data to reward safe driving [37]. It is also possible to build a system that recommends buying, selling, or holding a stock at specific times of the day [38].

3.3. Cloud Computing

The U.S. National Institute of Standards and Technology (NIST) defines cloud computing as “ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [39] (p. 2). Cloud computing lets clients access their personal financial files through the internet from anywhere. It is used extensively in finance, especially by banks, to reduce hardware, software, and human resource costs.
Cloud computing improves cash flows for banks, allowing them to rapidly provide and scale up services [40] and adopt newer technologies effectively [41]. Coupled with big data analytics, cloud computing enables banks to provide customized services and sound financial advice services [42]. The service Temenos Banking Cloud, for example, allows banks to launch and scale banking services quickly and at a low cost [43].

3.4. Blockchain

A blockchain comprises data sets, each composed of data packages or blocks. A block consists of multiple transactions. Each additional block extends the chain, and together the blocks constitute a full ledger of the transaction history. These blocks can be validated by the network using cryptography [44]. Thus, a blockchain is a decentralized, open ledger where anyone can transact or validate transactions.
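The hash-linked ledger structure described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the block fields, transactions, and function names are invented for the example.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """A block bundles transactions with the hash of its predecessor."""
    return {"transactions": transactions, "prev_hash": prev_hash}

def validate_chain(chain: list) -> bool:
    """Tampering with any earlier block breaks every later hash link."""
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

# A tiny three-block ledger.
genesis = make_block(["A pays B 5"], prev_hash="0" * 64)
b1 = make_block(["B pays C 2"], prev_hash=block_hash(genesis))
b2 = make_block(["C pays A 1"], prev_hash=block_hash(b1))
ledger = [genesis, b1, b2]
```

Because each block embeds the hash of its predecessor, altering an earlier transaction invalidates every subsequent link, which is what makes tampering detectable by the network.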
The impact of blockchain on the financial industry is far-reaching, promising lower costs and improved security [45]. When one block is added to another, it is through a verified transaction; hence, attackers cannot tamper with it once registered [46]. Also, blockchain allows transactions to be automated based on mathematical rules that are self-enforced; hence, transactions are largely secure, free of errors and illegal practices, and do not require verification from a reliable third party [47].
One of the most prominent uses of blockchain in finance is cryptocurrency [48], one example of which is Bitcoin, launched in 2008. It established a peer-to-peer system of payments based on electronic transactions, enabling different entities to send payments to one another without a central authority [49]. There are several other cryptocurrencies like Ethereum, Litecoin, Dash, and Ripple, and the industry is worth hundreds of billions of dollars [50].

4. Fintech Agent Technologies

This section will provide an overview of the key technological elements of fintech agents, specifically. Reviewing technologies related to fintech agents will deepen our understanding of theories and issues related to human-fintech agent interaction. First, we discuss a major design consideration in developing agents—embodiment. Then, we discuss the technological factors involved in creating disembodied and embodied fintech agents that display intelligence, understand the user, and manifest as social actors.

4.1. Types of Fintech Agents: Disembodied or Embodied

A key design decision in how an agent signals its status as an interaction partner is whether the agent should be embodied or disembodied. Embodiment refers to physical instantiation, i.e., bodily presence [51], whereas disembodiment refers to the absence of physical instantiation in either real or virtual forms. Embodied fintech agents can have bodily presence in either virtual form (the agent has a digital body perceivable via a computer, mobile, virtual reality goggles, etc.) or physical form (with three-dimensional form in the actual world; we do not need specific technology to perceive and interact with it). Physically embodied agents include robots that interact with users, although this remains an area for further development in fintech.
As computing power increased and graphics interfaces became more widely used, embodied agents were proposed as ideal for digital collaborative environments and to engage in conversation with users [52,53], as they displayed body, shape, face, or other variations of form. Examples of virtually embodied fintech agents are HSBC Hong Kong’s virtual chatbot assistant, Amy (Figure 1), and Rachel (Figure 2), a digital assistant for the mortgage process faced by home buyers. Currently, most fintech agents are designed for virtual embodiment. One example of a physically embodied fintech agent is Xiaoi (Figure 3), an intelligent banking robot developed in China that can communicate verbally, guide customers to relevant queues, and make use of facial recognition and identification cards to check account balances [54].
Unlike embodied fintech agents, disembodied fintech agents do not have bodily form and thus rely on speech, text, or other modalities such as emoticons to simulate their presence as social actors. A prominent example is bank chatbots, which have become popular over the last decade. Despite not having bodily form, disembodied agents can display emotion or personality through verbal or textual cues and can successfully signal their presence as a social interaction partner [55,56]. For example, the chat-based personal finance management app Cleo adopts a personality that appeals to younger users, with “roast” and “hype” options that can be brutally honest (Figure 4) yet make the agent strongly socially present in younger users’ minds.
It is possible that in the future, we will see more physically embodied fintech agents deployed to aid customers at financial institutions’ physical locations since physical embodiment may allow the user to experience a more positive interaction with the agent [51,57]. Whether physical or virtual, the use of embodied agents is likely to be the trend [58].

4.2. Fintech Agent Technology

Fintech agents need to be intelligent enough to perform financial activities [59], be able to understand users quickly [60], and manifest themselves as social actors so that users can easily understand their behaviors and intentional stance quickly [61,62]. Hence, among the many dimensions of agent technologies, we focus on the following three areas in this section: technologies for agent intelligence, agents’ understanding of users, and their manifestation as social actors.

4.2.1. Technologies for Agent Intelligence

To be perceived as intelligent, agents must handle complex information, work in online environments, process large data sets, and be fast, efficient, and accurate when performing tasks [63,64]. Intelligent fintech agents start by analyzing vast amounts of data based on programmed models to provide responses during customer interaction [65,66]. For example, the automated insurance chatbot Magda, deployed by Polish insurer Link 4, provides constant customer support based on analyses of a vast knowledge database of motor, vehicle, and property insurance [67].
Beyond programmed intelligence, fintech agents learn from experience and the environment [68,69]. Using advanced ML, fintech agents continuously learn and draw from past knowledge, developing improved problem-solving capabilities [70] and overcoming the limitations of programmed intelligence. Such agents can handle ambiguous or novel questions using natural language processing. The more they interact with users, the more information and accuracy they gain [71,72]. An example of a self-learning fintech agent is HSBC Bank’s virtual assistant Amy, introduced earlier, which has an in-built customer feedback mechanism to enhance its knowledge over time and answer complex queries [70].

4.2.2. Technologies for Agents to Understand Users

In this section, we review the technologies required for agents to understand user intention [73] and user emotion [74]. Further, since fintech agents provide financial information and advice, they need technologies to analyze users’ financial behavior.

Technologies for Understanding User Intention

To be useful, an agent needs to gauge user intention as soon as possible, determine how to collaborate with the user, and respond to the requirements of the user based on his or her current goal [73]. All the information regarding the user’s intention, including his or her actions in the environment, is used to provide an agent with a ranked list of the most probable user goals at every instant. ML is used to construct user models on an incremental basis by studying users as they perform their tasks [75]. Along with AI and ML, natural language understanding (NLU) is used to analyze the text users submit and match it to a certain intent, which the agent is programmed to respond to [76]. Agents that use voice interaction have a speech recognizer component that converts the customer’s voice input into a text message. The NLU component receives the text message, processes its meaning, and performs intent recognition and entity recognition (identifying numbers or names input by the user) to understand the user request or action [77].
For example, Capital One bank developed its own natural language processing technology for its intelligent agent, Eno, which can understand 2500 possible ways a user may ask for his or her bank balance, including misspellings and autocorrections, while simultaneously learning new variations and identifying user intent [78].
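The intent-and-entity pipeline described above can be illustrated with a minimal keyword-matching sketch. The intent names and patterns here are hypothetical; systems such as Eno use trained language models rather than hand-written rules.

```python
import re

# Hypothetical intents and keyword patterns; a production NLU component
# uses trained classifiers instead of hand-written rules like these.
INTENT_PATTERNS = {
    "check_balance": r"\bbalance\b",
    "transfer_funds": r"\b(transfer|send)\b.*\bto\b",
}

def recognize_intent(utterance: str) -> str:
    """Intent recognition: match the user's text to a known intent."""
    text = utterance.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        if re.search(pattern, text):
            return intent
    return "fallback"

def extract_entities(utterance: str) -> list:
    """Entity recognition: pull numeric amounts out of the request."""
    return re.findall(r"\d+(?:\.\d+)?", utterance)
```

In this sketch, "Please transfer 50 to Alice" would resolve to the `transfer_funds` intent with the entity "50", while an unrecognized request falls back to a default intent for clarification.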

Technologies for Understanding User Emotion

When humans interact with machines, they follow many of the same social principles as when communicating with one another [79]. Emotion recognition—an essential aspect of human communication—is thus a critical ability for interactive agents. Agents can recognize human emotions through facial emotions captured by a camera, through voice recorded by a microphone, or by measuring heart rate, sweat, and other physiological traits through electrodes, etc. [80]. Additionally, text-based fintech agents such as chatbots must understand the emotions a user conveys via text to provide emotionally appropriate responses, as non-verbal communication and cues can also play an important role in detecting human emotions [81]. These agents use deep learning and big data to detect users’ emotions via a keyword-based analysis of words and sentiments [82,83]. Fintech agents currently use emotion detection techniques at a rudimentary level, and a lot of potential exists in this space. For example, Rosbank, a Russian universal bank, is working with Neurodata Labs, an AI solution company, to detect real-time customer emotions using several parameters [84]. China-based Emotibot has created a chatbot capable of reading 22 emotional patterns in text and seven patterns from voice and facial expressions and has partnered with China Minsheng Bank [85].
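A keyword-based analysis of words and sentiments, as referenced above, can be sketched as follows. The emotion lexicon is a toy example invented for illustration; deployed systems learn these associations from large corpora via deep learning.

```python
import re

# Toy emotion lexicon; real systems learn such associations from
# large corpora with deep learning rather than a hand-written list.
EMOTION_KEYWORDS = {
    "anxiety": {"worried", "nervous", "scared", "crash"},
    "anger": {"furious", "unacceptable", "angry"},
    "joy": {"thanks", "great", "happy", "excellent"},
}

def detect_emotion(message: str) -> str:
    """Score each emotion by keyword hits; return the best, else neutral."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {e: len(words & kws) for e, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"
```

Even this crude scoring lets a chatbot distinguish an anxious message about a market crash from a routine account query and adjust its tone accordingly.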

Technologies for Understanding Users’ Financial Behaviors

Fintech agents often take on the traditional role of a human financial advisor and collect information related to users’ financial behaviors, including investment preferences, goals, and risk appetite [86]. After collecting financial and demographic data, fintech agents undertake user analyses (e.g., risk assessments based on demographics, investment history and preferences, and risk appetite), suggest suitable recommendations, keep track of the user’s behaviors (e.g., portfolio modification), and adjust the recommendations (e.g., asset allocation, managing taxes, product selection, and even trade execution) according to the user’s needs, goals, and behavioral tendencies [87,88]. As discussed, AI and ML are key technological mechanisms behind these capabilities [23]. For example, the robo-advisory app Wealthfront evaluates customer information in detail with the help of AI algorithms before recommending which stock to buy [89]. It assesses the risk appetite of the customer by asking them how they would react to substantial losses, which can happen due to a market decline, and whether they prefer to capitalize on the market and maximize their gains. Based on answers to these questions and other data like the number of years to retirement, income, and so on, an investment risk metric is built [90]. The app then uses a proprietary AI algorithm to recommend a portfolio to its clients [25].
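The questionnaire-to-portfolio logic described above can be sketched as below. The scoring weights, answer categories, and allocation bands are invented purely for illustration and do not reflect Wealthfront's proprietary algorithm.

```python
def risk_score(years_to_retirement: int, reaction_to_loss: str,
               maximize_gains: bool) -> float:
    """Combine questionnaire answers into a 0-10 risk metric.
    The weights here are invented for illustration only."""
    score = min(years_to_retirement, 40) / 40 * 5  # longer horizon -> more risk capacity
    score += {"sell": 0.0, "hold": 1.5, "buy_more": 3.0}[reaction_to_loss]
    score += 2.0 if maximize_gains else 0.0
    return round(score, 1)

def recommend_allocation(score: float) -> dict:
    """Map the risk metric to a stock/bond split, capped at 90% stocks."""
    stocks = min(0.9, 0.2 + score * 0.07)
    return {"stocks": round(stocks, 2), "bonds": round(1 - stocks, 2)}
```

A young investor who would buy more after a loss ends up with a stock-heavy split, while a risk-averse investor near retirement is steered toward bonds, mirroring the advisory logic described in the text.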

4.2.3. Technologies for Agents’ Manifestation as Social Actors

In addition to having financial intelligence and the capability to understand users, fintech agents should be able to present themselves as social actors for efficient and natural interaction with users. Non-human entities such as fintech agents are able to elicit social responses from humans, although users clearly know that they are interacting with a non-human [91]. This is because humans have evolved to have social modules in the brain (e.g., the Theory of Mind Module, discussed in a later section), which enable a natural and efficient understanding of other humans or human-like entities [92]. When an agent presents itself with certain social cues, users’ brains selectively attend to that information and respond socially. An agent can present itself as a social actor through embodiment or by presenting human-like personalities and emotions. This section explores technologies for agent embodiment, personality, and emotion.

Technologies for Agent Embodiment

Many financial institutions use disembodied fintech agents in the form of chatbots. However, due to the importance of embodiment—having a perceptible physical body [93]—in human-computer interaction, computer games, and other technology applications [94], there is a growing trend of using virtually embodied agents such as animated characters in fintech. Currently, very few commercially available physically embodied fintech agents (i.e., fintech robots) exist, but they may become popular in the future, especially for assisting older adults [57,95]. Embodiment can involve more than giving an agent a perceptible body or face. For an embodied agent, conforming to cultural and social norms is essential. It must exhibit appropriate behavior and communication that is consistent with its physical embodiment, such as providing consistent verbal, non-verbal, and other behavioral cues [96].
Many technologies are used to create the look, voice, and behaviors of embodied agents [97]. For example, agents’ bodies are usually created with computer animation software (e.g., Adobe Character Animator, version 23.1), and their behaviors are controlled by a fixed set of algorithms [98]. Such software platforms can simulate embodied agents in 3D virtual environments [99].
For voice, synthetic voice, pre-recorded human voice, or even a mix of the two can be used [100]. Affective [80] and emotion AI technology [101] are used to make agents’ embodiment more compelling by enabling them to understand and react appropriately to user emotions [102]. Examples of virtually embodied fintech agents include the following: Raiffeisen Bank International AG (RBI) in Serbia developed REA, an AI-based digital assistant with blond hair and colored eyes. REA is available anytime to answer questions within five seconds and is popular among younger users [103]. The Royal Bank of Scotland’s virtual chatbot, Cora, answers customer queries and brings a human element to the digital banking experience. The digital teller wears the branded uniform of the bank, has ear piercings, and helps to answer queries on mortgages and how a customer should block a card if it is lost [104].

Technologies for Agent Personality

Personality refers to a set of qualities and character traits that give an individual a distinctive character [105]. Personality plays an important role in building and maintaining interpersonal relationships [106,107], as well as human-agent [52,108] relationships. Agent personalities are typically created by simulations of human personalities through neural networks [109,110,111] and then expressed via visual appearance, voice, or behavioral cues such as facial expression, gesture, or interaction styles [112]. Conversational agents can be designed in such a way that their personalities align with an organization’s image, which would be distinct for a financial institution or service as opposed to other contexts such as commerce or transport [113].
An example of a fintech chatbot with a rich personality is Adam, a virtual banker at Tatra Bank in Slovakia. The chatbot speaks the Slovak language, provides all the required information about the bank’s products, and resolves customer queries. It has an agreeable and rich personality and is modest, eager to help, and self-confident [114]. Similarly, the Australian bank ANZ has launched Jamie, an agent with a human face, voice, and facial expressions (Figure 5), which is capable of two-way voice communication with users. By combining neural networks and models of the human brain, it can express a distinct personality [115].

Technologies for Agent Emotion

Virtual agents are perceived as more believable and relatable when they express emotions [116]. Emotions like anger, anxiety, thoughtfulness, and confidence can be expressed by a virtual character through simple body gestures. This increases the agent’s effectiveness, as the facial expressions of a virtual agent can provide feedback to users [117]. Moreover, emotions can be communicated vocally by disembodied agents that cannot display visual cues [118]. Thus, AI-based agents can display emotions in terms of facial expressions, tone of voice, or physical postures based on the design and hardware [119,120]. Several computational models of emotions are used to enable agents to process emotional stimuli and generate emotional responses. Apart from computational technologies, such models are also based on findings from other branches of science, especially psychology [121]. One proposed way of instilling emotions in agents is for the system to apply a sentiment analysis model to extract emotions from user input. These data are then sent to another module known as the action selection module. Using the Artificial Intelligence Markup Language (AIML) format, the module searches for the best answer based on emotions, which is then sent to the output system [122]. Currently, there are no commercially available fintech agents that express their emotional states to users, possibly because of the serious nature of financial tasks.
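The sentiment-extraction and action-selection pipeline described above can be sketched in a few lines of Python. The emotion keywords and response templates below are hypothetical stand-ins for a trained sentiment model and an AIML category store, not an implementation of the cited system:

```python
# Illustrative emotion lexicon; a deployed system would use a trained
# sentiment/emotion model rather than keyword matching.
EMOTION_KEYWORDS = {
    "anger": {"furious", "angry", "unacceptable"},
    "anxiety": {"worried", "anxious", "lost", "stolen"},
    "joy": {"great", "thanks", "happy"},
}

# Emotion-keyed response templates, standing in for an AIML category store.
RESPONSES = {
    "anger": "I am sorry for the trouble. Let me escalate this right away.",
    "anxiety": "I understand this is stressful. I can block your card now.",
    "joy": "Glad to help! Is there anything else you need?",
    "neutral": "Sure, let me look into that for you.",
}

def detect_emotion(user_input: str) -> str:
    """Sentiment-analysis stand-in: return the first emotion whose
    keywords appear in the input, or 'neutral' if none match."""
    tokens = set(user_input.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if tokens & keywords:
            return emotion
    return "neutral"

def select_action(user_input: str) -> str:
    """Action-selection module: choose the response template that best
    matches the detected emotion."""
    return RESPONSES[detect_emotion(user_input)]
```

The two functions mirror the two-module design in the text: emotion extraction feeds a separate action-selection step, so either module can be upgraded independently.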

5. Issues and Theories Related to Human-Fintech Agent Interaction

This section examines the issues and theories related to: (1) agents’ understanding of user intention, emotion, and financial behavior, (2) agents’ manifestation as social actors (embodiment, emotion, and personality), and (3) users’ social interaction with agents. As fintech agent interaction is a new and emerging field of study, studies on general human-agent interaction will also be included.

5.1. Issues and Theories Concerning Fintech Agents Understanding Users

5.1.1. Understanding User Intention

Understanding user intention enables an agent to predict the next interaction required, estimate the user’s learning and performance of the task, and predict and prepare for potential mistakes [123]. User intention is largely deduced from user inputs or overt signals to the system. Older input devices included the mouse and keyboard, but input methods have since expanded to include speech, touch, and gestures, which are more natural ways of interacting and easier to learn and perform [124]. Studies have also employed search behaviors [125] and eye tracking data [126] to estimate user intention. However, while user inputs and eye tracking can provide detailed information, the underlying cognitive processes and intentions are not explicitly revealed and need to be inferred [127]. Modeling these semantic intentions [128] accurately and thoroughly remains a challenge for machines.
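As a minimal illustration of deducing intention from overt input signals, the Python sketch below scores a user utterance against hand-written keyword patterns. The intent labels and patterns are hypothetical; production fintech agents would rely on trained natural-language-understanding models rather than keyword rules:

```python
# Hypothetical fintech intents and keyword patterns for illustration only.
INTENT_PATTERNS = {
    "check_balance": ["balance", "how much"],
    "transfer_funds": ["transfer", "send money"],
    "block_card": ["block", "lost card", "stolen"],
}

def infer_intent(utterance: str) -> str:
    """Score the utterance against each intent's patterns and return the
    best-matching intent, or 'unknown' when nothing matches."""
    text = utterance.lower()
    scores = {
        intent: sum(pattern in text for pattern in patterns)
        for intent, patterns in INTENT_PATTERNS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

Even this toy example surfaces the limitation noted above: surface signals reveal only the literal request, not the semantic intention behind it.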
Humans are much more adept than computers at understanding human semantic intention because the structure of the human brain contains specialized neural mechanisms (mental modules) [92]. One of these, the Theory of Mind Module (ToMM) [95], allows humans with typical brain development to deduce and predict the intentions, desires, and beliefs of others. Researchers do not yet have a full picture of the mechanisms of ToMM, but aspects of it have been simulated in interactive technology. These include educational game agents that demonstrate that they are aware of other agents’ intentions and mental models [129], a robot equipped with an internal model of itself and the actors in its environment to anticipate the consequences of an action [130], and the robots Kismet and Cog, designed to demonstrate their mental states via movement and facial expression [131]. ToMM has also been discussed as a requirement for inclusion in the design of technology such as social robots [132] and autonomous robots [133] to better facilitate human interaction.
Along this line of research and development, we propose that future research should consider designing fintech agents with specialized modules to observe specific social cues from users and better determine their intentions. A user’s internal states may be manifested via eye movements, speech, touch, handwriting, and facial expression. An agent with a simulation of ToMM could include dedicated sensors for detecting these cues. A thorough understanding of user intention is crucial in the financial context, as user behavior is not uniform and individual user factors have been found to influence financial decision-making [134]. While current robo-advisory apps can identify different risk profiles among their users, agents that can discern and predict user intention more accurately on an individual level may have the potential to minimize losses.
The concept of user intention has also been explored in the broader sense of whether a user plans to adopt a technology. The Unified Theory of Acceptance and Use of Technology (UTAUT), originally conceived for organizational contexts, has also been applied as a framework to analyze adoption intention in consumer contexts. For instance, UTAUT has been utilized to investigate the intention to adopt mobile financial services, online banking, and blockchain technology [135]. Additionally, social cues have been incorporated into a fintech chatbot to determine if users intend to continue using the technology [136].

5.1.2. Understanding User Emotion

Emotion, a subset of affect, is generally conceptualized as having valence (a positive–negative continuum), intensity (mild to strong), and duration (brief to enduring). It can significantly arouse and orient humans towards behavioral responses such as flight when experiencing fear [137]. Studies have shown that human consciousness arises from the interplay of cognitive processes and emotions. Emotional activity in the brain has a significant impact on cognitive mechanisms such as learning, memory, and decision-making [138]. This makes it crucial for interactive agents to understand users’ emotions [84], especially in the financial context, as emotional biases may influence rational thought processes [139].
Developments in affective computing have enabled objective and reliable measures of human emotion with little human involvement [140], providing significant opportunity for the development of fintech agents that can assess user emotion. Computers can assess emotional data from visual, audio, textual, physiological, and behavioral modalities. Visual modalities (images and videos) provide information such as facial expression [141] and body gestures [142]. Audio modalities provide data such as speech, everyday sounds, acoustics, and elements of vocal data such as pitch and intensity, which are used in emotion and sentiment analysis. Textual modalities, including client interaction logs, social media posts, and reviews, have been used to gain insights into emotion [143]. One drawback of these modalities is that participants may not produce a continuous stream of visual, speech, or text data for emotion analysis. These modalities can be complemented by physiological measures such as heart rate [144], galvanic skin response [145], electrocardiography and electroencephalography [146], respiration rate and eye gaze [147], among others. Finally, behavioral modalities such as the user’s gestures, postures, and actions with the computer mouse and keyboard have been used to determine affective states [148].
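One simple way to combine these complementary modalities is confidence-weighted late fusion, sketched below; the modality names and the [-1, 1] valence scale are illustrative assumptions, not a specification from the cited work:

```python
def fuse_valence(modality_scores: dict) -> float:
    """Confidence-weighted late fusion of per-modality valence estimates.

    modality_scores maps a modality name (e.g., 'face', 'voice', 'text')
    to a (valence, confidence) pair, with valence in [-1, 1] and
    confidence in [0, 1]. Returns a fused valence, or 0.0 (neutral)
    when no modality reports any confidence.
    """
    numerator = sum(valence * conf for valence, conf in modality_scores.values())
    denominator = sum(conf for _, conf in modality_scores.values())
    return numerator / denominator if denominator > 0 else 0.0
```

Because each modality carries its own confidence, a user who falls silent (zero audio confidence) is still covered by facial or physiological channels, addressing the intermittent-data drawback noted above.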
Once agents have identified user emotion, this data can be integrated into the agent’s response [149]. Agents can thus be a useful means of providing cognitive and emotional feedback and enhancing online learning and interaction. For example, a speech-based emotion recognition system was developed for an interactive robot, which aided the human-robot interaction process [150]. To enhance their usefulness, agents should be taught sub-categories of emotions to enable better inferences and provide more tailored responses [151]. Integrating affective measurement capabilities specifically into fintech agents is in the preliminary stages of exploration, usually involving only one modality, but results from other domains provide promising opportunities for the development of next-generation fintech agents. In addition, future studies should address the shortcomings of using a single modality. People express their emotions in different ways; for example, some may use the tonal range of their voices to express their emotions, while others may rely more on facial expression [152]. Fintech agents equipped with multimodal sensors, which can analyze various forms of data such as tone of voice, facial expressions, eye gaze, physiological responses, and other tactile cues, will be capable of accounting for individual differences in emotional expression. This will enable further exploration of fintech agents as social actors that can adapt to a user’s emotional style, assess their emotional state, and deliver an appropriate response [144].

5.1.3. Understanding Users’ Financial Behaviors

Apart from understanding users’ intentions and emotions, fintech agents must also understand users’ financial behaviors, such as their decision biases and investment history. Traditional utility theory assumes the rationality of practitioners. In contrast, behavioral finance scholars have examined how psychology influences the behavior of financial practitioners [141,153]. Fintech agents, especially robo-advisors that automate investments for users, must be programmed with theoretical knowledge of these biases and identify individual user characteristics to minimize financial losses. It is unclear if current robo-advisory services are specifically programmed to account for human biases, but preliminary research suggests that fintech agents may have the potential to mitigate them [88,154]. Some financial biases include overconfidence, or overestimating one’s skills and chances of success [155]; conservatism bias, which is when investors choose to maintain their initial forecasts or views without paying adequate attention to new information [156]; confirmation bias, where people give attention to or actively seek information that is in line with their beliefs or the hypothesis at hand [157]; and loss aversion bias, which is the phenomenon of people preferring to avoid losses than to make gains [158]. Fintech agents may be used to further our understanding of these financial biases and to examine what advisory approach is best suited to different user emotions and intentions. Relative to a human advisor, the ease with which a fintech agent can customize its approach to different users and incorporate data from users’ past behaviors holds promise for further research.
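One of these biases, loss aversion, can be quantified from a user’s trading history via the disposition-effect measure used in behavioral-finance research, which compares the proportion of gains realized with the proportion of losses realized. The Python sketch below is a minimal illustration under simplified assumptions (simple trade counts), not a description of any cited robo-advisory system:

```python
def disposition_effect(realized_gains: int, paper_gains: int,
                       realized_losses: int, paper_losses: int) -> float:
    """Disposition-effect measure of loss aversion from trade counts.

    PGR (proportion of gains realized) is compared with PLR (proportion
    of losses realized); a positive difference indicates the user sells
    winners more readily than losers, a behavioral signature of loss
    aversion.
    """
    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr - plr
```

A robo-advisor that tracks such a score per user could, for example, flag accounts where the bias is pronounced and adjust its advisory prompts accordingly.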
Beyond human biases, financial behaviors can also be influenced by market conditions such as high volatility, as well as by the investor’s personality. A preliminary study [159] found a complementary-attraction effect in fund allocation: investors with dominant personalities allocated more funds for investment when they interacted with robo-advisors with a submissive personality. This suggests that fintech agents should ascertain the personality of the user along with their risk appetite to foster a better interaction experience and profit-making behavior. The same study also demonstrated that during high market volatility, investors prefer submissive robo-advisors regardless of their own personality. These findings suggest that a fintech agent may be perceived better if it takes a more controlled approach to its recommendations, tailoring its advice not only to the personality of its users but also to changing market conditions.

5.2. Issues and Theories Concerning Agents Manifesting as Social Actors

Once agents have determined what the user wishes to achieve, they must respond in the best way possible. As experts in social interaction, humans are more likely to enjoy the interaction with agents and feel that they are competent when agents follow social conventions and expectations [106]. This section explores three issues—agent embodiment, agent emotion, and agent personality—which have been dominant topics of study in human-agent interaction.

5.2.1. Embodiment

While embodiment is not a prerequisite for social interaction, it can quickly establish the possibility of interaction and has therefore received significant attention [106]. Agent embodiment can be anthropomorphic (human-like), zoomorphic (animal-like), caricatured (character-like), or functional (task-oriented). Among these, the human-like method—anthropomorphism—has been regarded as an efficient way of enhancing users’ experiences with an agent. Anthropomorphism originally referred to the human tendency to superimpose human functional and behavioral characteristics on animals or objects to allow humans to rationalize their actions with greater ease [160]. In human-computer interaction (HCI), however, the term has been used to refer to equipping agents with human-like characteristics such as face, body, voice, emotion, personality, and even identity. Anthropomorphism in HCI increases social bonding and user perception of agent competence [161], which is crucial for the serious nature of financial transactions. Hence, we believe that anthropomorphism is likely to be the dominant method for designing fintech agents.
Research on embodied agents in fintech is very limited, but studies in related areas such as e-commerce and e-learning (which can be likened to a financial advisory relationship) can provide insights. A study found that online shoppers had a better experience when interacting with an anthropomorphic agent with a human voice and trusted the agent more to help them with their purchase decisions [162]. Most studies on the effects of anthropomorphic agents in e-commerce show that anthropomorphism has generally positive impacts on user experience and buying intention [163,164,165]. In the context of e-learning, a study found that students’ performance improved with an anthropomorphic agent (human-voiced, human-like gestures, facial expression, eye gaze, and body movement) compared to an agent with only a human voice [166]. Most studies on the effects of anthropomorphic agents in e-learning show that anthropomorphism has positive impacts on learning and memory [167,168,169].
As technologies progress, fintech agents have developed from static images on the screen to fully animated characters incorporating facial expressions and lip movements that synchronize with speech (e.g., Figure 2). This form factor allows clients to have more natural face-to-face conversations at any time, combining the features of human interaction with the benefits of agent interaction. Furthermore, the inclusion of human-like behaviors such as eye movements has the potential to boost users’ communication with agents [170].
In spite of recent technological developments, the implementation of highly realistic anthropomorphic agents has its challenges. These include having a negative effect if the accompanying conversational abilities are underdeveloped [171], difficulties in scaling up due to differences in manners, tone, and speech across different populations [172], needing to maintain a degree of artificiality to prevent unrealistic expectations [162,173], and needing to avoid the uncanny valley effect, where an agent begins to look eerie [174]. An alternative is to use caricatured agents, as in the case of REA, the Serbian fintech agent, and Joy, a virtual assistant on DBS Bank’s website for corporate banking (Figure 6), to control unwarranted user expectations and prevent the uncanny valley effect.
As technologies to embody agents develop and become scalable, we expect that more financial firms will move towards embodiment. Given the lack of studies directly testing the effects of embodiment in fintech, future research should examine the effects of embodiment on trust, liking, perceived competence, and other important dimensions of financial relationships [175].

5.2.2. Agent Emotion

When humans interact, emotions are used to infer an interaction partner’s internal states, behavior, or traits. Similarly, when computers portray emotions, this can act as a proxy to indicate the computer’s internal state. This can foster productive and engaging interactions between users and technology [176]. Typically, artificial emotions are created based on emotional theories that categorize emotions as distinct types such as fear, anger, joy, sadness, disgust, and surprise [143]. Alternatively, emotions may be viewed as a system consisting of two dimensions, namely arousal and valence [177]. Some approaches to designing artificial emotions in machines combine both methods.
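The two modeling approaches can be combined by placing the discrete categories in the valence-arousal plane, as in the Python sketch below. The coordinates are rough, illustrative placements loosely inspired by circumplex-style accounts, not empirically calibrated values:

```python
import math

# Illustrative valence-arousal coordinates for discrete emotion categories.
EMOTION_COORDS = {
    "joy":      (0.8, 0.5),
    "anger":    (-0.6, 0.8),
    "fear":     (-0.7, 0.7),
    "sadness":  (-0.7, -0.4),
    "surprise": (0.2, 0.9),
    "disgust":  (-0.6, 0.2),
}

def nearest_emotion(valence: float, arousal: float) -> str:
    """Map a dimensional (valence, arousal) state to the closest
    discrete emotion category by Euclidean distance."""
    return min(
        EMOTION_COORDS,
        key=lambda emotion: math.dist((valence, arousal), EMOTION_COORDS[emotion]),
    )
```

An agent can then reason internally in the continuous dimensional space while expressing itself through a small set of recognizable categorical displays.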
In human communication, mimicry fosters liking for the interaction partner [178]. An experiment using fMRI techniques found that even when users are explicitly informed that they are interacting with a computer, agents that display positive emotion in response to the user’s smile allowed the user to experience positive emotion [179]. This finding suggests that the robust theory of human mimicry may apply to human-agent communication as well, making the inclusion of emotional display mechanisms in fintech agents a crucial area for research. It complements previous research [180] which found that agents displaying self-oriented emotion had little or no effect on a user’s reactions to the agent, whereas agents displaying empathic emotion had major positive effects on both liking and trust. Additionally, the emotion displayed by an agent must be calibrated according to its intended users’ demographic variables. For example, older adults may need more emotionally expressive empathic agents since they may be less adept than younger adults at pinpointing the emotions manifested by certain agent designs [181]. In the financial context, agents displaying emotion are not common yet. A preliminary attempt can be seen in the Dutch bank ING’s fintech chatbot, Inga (Figure 7), which is designed to respond with empathy when a customer loses a credit card [182]. Although Inga is less embodied than the agent in Figure 2, which displays more body language, Inga uses emoticons to convey empathy for the user.
In other contexts, interactive agents have primarily displayed emotion via speech, facial expression, and body language. When it comes to identifying an agent’s emotion, a disembodied voice agent was found to be just as effective as an embodied agent in conveying happy, content, and bored emotions [183]. A study found that students recognized the emotional state of the human or agent instructor, experienced the same emotional state as the human or agent instructor, and felt more motivated when the instructors displayed positive emotion [184]. Given that different channels, such as facial expression and voice, can contribute to emotional expression, a study examined which channel plays the most important role in emotion display [185]. Examining three channels (torso/limbs/head, face, and speech) and their contribution to five emotions (happiness, sadness, anger, fear, and surprise), this research found that the biggest contributor to the perceived believability of the animated emotion was the agent’s body, followed by its face and speech. This is in line with previous research stating that humans have a strong tendency to respond to motion and to find semantic significance in motor action [106]. Studies examining the effects of fintech agents’ body movements on user trust and liking will meaningfully extend the current literature. In addition, future studies should compare the distinctive effects of different categories of agent emotion (e.g., joy versus sadness) and dimensions of agent emotion (e.g., valence and arousal) in diverse financial contexts (e.g., bull versus bear markets; gain versus loss situations). This research can lead to the development of suitably emotional agents in finance.

5.2.3. Agent Personality

Personality has been defined as the characteristic patterns of thinking, feeling, and behaving that distinguish individuals. Personality could serve as a useful affordance of the technology that guides users towards understanding an agent’s behavior and easing the interaction [106]. The proliferation of computers will demand a more natural form of communication, for example, using embodied agents that are psychologically sensitive to the user [186]. Using personality to categorize humans significantly reduces cognitive load in an interaction, making it a useful construct to implement in human-computer interaction as well [187].
The Big Five model is a widely accepted typology of personality that comprehensively represents the fundamental traits of human personality [188]. This model identifies five main dimensions of personality: extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Each of these broad dimensions is further broken down into more specific characteristics. Personality has also been defined as patterns of interpersonal interaction styles, which is a useful model for studying human-agent interaction [189]. In this area of research, two critical dimensions of personality are affiliation, which refers to how agreeable or quarrelsome an interaction partner is, and dominance, which refers to how dominant or submissive an interaction partner is. Research on personality in human-fintech agent interaction has focused primarily on the user’s personality and how it impacts one’s decision to use fintech agents. However, there is currently limited research on the design and effects of fintech agents’ personalities.
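The affiliation and dominance dimensions can be operationalized as a simple agent configuration, as in the hypothetical Python sketch below. The thresholds and phrasings are illustrative design choices, not empirically derived mappings:

```python
from dataclasses import dataclass

@dataclass
class AgentPersonality:
    """Agent personality on the two interpersonal dimensions:
    affiliation (-1 = quarrelsome .. +1 = agreeable) and
    dominance (-1 = submissive .. +1 = dominant)."""
    affiliation: float
    dominance: float

    def phrase_recommendation(self, asset: str) -> str:
        # Dominant agents lead confidently; submissive agents defer to the user.
        if self.dominance > 0:
            lead = f"I recommend allocating to {asset}."
        else:
            lead = f"One option you might consider is {asset}."
        # Agreeable agents add a warm, helpful tone.
        tone = " Happy to walk you through it!" if self.affiliation > 0 else ""
        return lead + tone
```

Encoding personality as two numeric parameters makes it straightforward to vary an agent's interaction style experimentally, for instance, to test dominant versus submissive framings across market conditions.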
Advisory service is a highly esteemed profession. If an agent is to take over this role, its personality should be in line with what users expect from a human advisor. In terms of the affiliation dimension, fintech agents clearly need to be agreeable. In terms of the dominance dimension, however, it is unclear if a fintech agent should be dominant (taking leads, confidently making decisions) or submissive (letting customers take more leads, suggesting financial decisions). Preliminary research suggests that submissive robo-advisors may be preferred, although the user’s own personality and the volatility of the market should be considered as well [190]. Future research on fintech agents should examine the personality of the agent as a factor affecting human-fintech agent interaction.
Preliminary qualitative research [159] found that a general-context chatbot that identified users’ personalities, adapted to them, and demonstrated its own personality through language cues provided a positive interaction experience for the participants. Examples from industry use are Rachel, Jamie, and Cleo, agents discussed earlier in this review [191]. Research on personalizing a mobile learning user interface according to the user’s personality showed that the personalization helped to stimulate learning [192]. This holds potential for future exploration in the fintech context.

5.3. Issues and Theories Concerning Social Interaction with Fintech Agents

The previous two sections discussed the importance of fintech agents understanding their users and how agents should present themselves as interaction partners to users. These cues trigger heuristics in the human mind, whereby the human-interactive agent interaction becomes social. In the following section, we review two major research paradigms relevant to people’s social interaction with technology, which could have direct implications for studies on human-fintech agent interaction.

5.3.1. The Media Equation (TME) and Computers Are Social Actors (CASA) Paradigms

The Media Equation (TME) theory and its derivative, the Computers Are Social Actors (CASA) research paradigm [79], demonstrate that people process mediated experiences as though they were real and engage in fundamentally social interactions with interactive technologies like computers. The primary focus of TME and CASA revolves around how people react to the physical and social attributes of media and interactive technologies. Physical attributes include features like screen size, audio quality, and synchrony, while social attributes involve factors such as gender, personality, and manners. People tend to respond to virtual or nonhuman stimuli in the same way they would respond to actual human beings or real objects, despite being aware of their virtual or nonhuman nature [79]. For example, the automatic human tendency to pay attention to motion in our surroundings applies to media too—we pay more attention to motion on a screen even though we know it is not real. Similarly, when a computer provides fundamental social cues such as politeness and flattery, the user evaluates the computer more favorably, just as people evaluate polite humans more favorably than impolite ones.
CASA has been studied and verified in many HCI contexts, including human-agent interaction [93], human-smartphone interaction [193], and human-robot interaction [51]. People respond socially to these interactive technologies due to mindless behavior, whereby the user does not pay attention to all relevant features of the situation, such as the fact that the social cue is from an inanimate object. Instead, the user focuses on the social cues and relies on the overuse of human social categories, overlearned social behaviors, and premature cognitive commitments that are made based on the salience of a technology’s social cues [62]. While TME explains that the human brain has not evolved enough to distinguish between actual stimuli and technology-mediated stimuli [80], other research argues that with continued interaction with an agent, users’ responses to social cues change and are different from their responses to other humans. This suggests that users could develop agent-specific social responses that are subtly different from general social responses to other humans [194]. Research in this area will provide insight into how technology should be designed to create natural and enjoyable interactions.
Human-fintech agent interaction is an emerging area of interest and is likely to attract more scholarly attention, given that finance is a very personally relevant context for users. Examining the principles of TME and CASA in a fintech context (such as giving an agent motion where necessary or programming it to be polite) would lead to an expanded understanding of the mechanisms underlying human-agent interaction and how they may function in the financial context. This will allow financial institutions to create agents that are better accepted, trusted, and liked. Future studies should also examine the effects of long-term interaction with fintech agents, given the recent developments in CASA literature and the long-term nature of the human-fintech agent relationship.

5.3.2. Social Presence Theory

Following the robust findings of TME and CASA, scholars have examined the phenomenon where people automatically behave as though they are interacting with another human, even though the experience cannot exist without human-made technology. This phenomenon is called presence, a psychological state where the virtual nature of the objects goes unnoticed, and they are experienced as though they are actual objects [195]. Presence can be experienced with objects, social actors, and representations of one’s own self. Among these three types of presence—physical, social, and self—social presence is highly relevant to the study of human-fintech agent interaction. Exploring the factors that increase social presence and the effects of social presence when interacting with fintech agents will have important theoretical and practical implications.
Several factors can affect social presence in human-agent communication, including the behavioral realism of the agent (e.g., nodding in response), the level of interactivity provided by the agent’s design, psychological factors such as similarity attraction principles (e.g., agents of the same ethnicity), and individual user factors such as gender and familiarity [196]. While embodiment in general can increase presence, highly anthropomorphic agents may create unrealistic expectations that the agents cannot fulfill, ultimately leading to lower levels of presence [197]. In text-based communication, higher synchronicity (immediate responses) and the use of emoticons have been shown to increase social presence with agents [173]. In e-commerce, agents that signal their expertise in their tasks can create higher social presence, which mediates trust [198].
The effects of social presence have been examined in various technology-mediated contexts, including finance, e-learning, and healthcare. Feelings of social presence lead to increased enjoyment, trust, perceived usefulness, positive evaluation, intention to use the technology, memory and task performance, persuasion, and message processing [187,196,199,200,201,202].
Given the well-recorded effects of social presence in various HCI contexts, future studies should further examine the effects of social presence in fintech. More nuanced approaches are needed, however, because of the private and sensitive nature of financial matters. A study showed that participants interacting with fintech chatbots prefer a mechanical chatbot over a human-like one when it comes to sharing sensitive financial information [203], suggesting that the role of social presence in fintech might be subtly different from its role in other contexts.
In addition, individual user factors are an important consideration in designing agents with social presence. For example, a user’s financial knowledge or previous investment experience should be considered, as financial transactions can be complex; beginner investors might prefer interacting with an anthropomorphic agent with a high level of social presence. Other factors, such as market conditions or interaction duration (long-term versus short-term), should also be considered in future studies on the effects of social presence in fintech.

6. Discussion

This review article demonstrated the ecosystem of technical and social factors involved in the development of interactive fintech agents. It identified the different aspects or elements through which fintech agents are perceived as interaction partners. These elements provide a theory-based reference for the development of technologies for interactive fintech agents.
First, interactive agents can employ embodied and disembodied modes of communication, each of which has its advantages and disadvantages. Understanding these differences will aid financial institutions and scholars in developing user-friendly interactive agents. For example, embodied agents can provide a richer interaction experience by making use of nonverbal communication modalities such as facial expressions. Embodied agents may also be personalized based on users’ culture or preferences. On the other hand, disembodied agents may be preferred in situations where online accessibility or technical infrastructure is less developed.
Second, this review highlighted that interactive agents must have the intelligence to understand and execute different financial tasks. Beyond this, it would be useful for engineers and designers to examine how agents can demonstrate their intelligence to users, for example, by displaying the user’s question while the answer is being processed, by having embodied agents provide visual cues such as nods to show that they understand the user’s request, or by predicting the user’s subsequent requests. Demonstrating intelligence can help agents gain users’ trust and encourage continued use.
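The acknowledge–answer–anticipate pattern described above can be sketched in a few lines. This is a toy example: the dictionary lookups and the function names stand in for real natural-language-understanding and next-request-prediction components.

```python
def respond(query: str, knowledge: dict) -> str:
    """Answer a query, first echoing it back so the user sees it was understood."""
    print(f'You asked: "{query}" -- looking that up...')  # show the question while processing
    # Fall back gracefully when the agent has no answer.
    return knowledge.get(query.lower(), "Let me connect you to a human advisor.")

def predict_followups(query: str, transitions: dict) -> list:
    """Suggest likely next requests based on previously observed question sequences."""
    return transitions.get(query.lower(), [])
```

A production agent would replace the lookups with trained intent classifiers and sequence models, but the interaction pattern that signals intelligence to the user is the same.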
Third, apart from understanding the financial tasks assigned to them, interactive agents must aim to understand their users. This review identified three domains of user-related knowledge that interactive agents must focus on: users’ intentions, their emotions, and their financial behaviors. Technology development in these domains will allow interactive agents to perform their tasks in a closer approximation of a traditional human financial advisor.
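The three domains of user-related knowledge could be represented in a minimal user model. The sketch below is purely illustrative: the class and field names are hypothetical, and the keyword rules are toy stand-ins for trained intent and emotion classifiers.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Tracks the three user-knowledge domains: intention, emotion, and behavior."""
    intent: str = "unknown"     # e.g., "check_balance"
    emotion: str = "neutral"    # e.g., "anxious"
    behaviors: list = field(default_factory=list)  # interaction/transaction history

    def update_from_utterance(self, text: str) -> None:
        """Toy keyword rules standing in for real intent/emotion classifiers."""
        lowered = text.lower()
        if "balance" in lowered:
            self.intent = "check_balance"
        if any(word in lowered for word in ("worried", "scared", "nervous")):
            self.emotion = "anxious"
        self.behaviors.append(text)  # accumulate behavioral history
```

Keeping the three domains as separate, explicitly updated fields makes it clear which component (intent detection, emotion recognition, behavior logging) each technology reviewed earlier would supply.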
Fourth, to further bridge the gap between interactive fintech agents and human financial advisors, this review explained why it is important for agents to manifest as social actors, i.e., to take on human-like traits to some degree. By presenting these traits or social cues, agents can trigger heuristics in the minds of users and make the interaction seem more natural. Three ways that agents can present themselves as social actors were discussed in this review: presenting themselves with a form, displaying a personality, and expressing pseudo-emotions.
These aspects of human-financial agent interaction provide a critical overview of the areas where technical development should focus, which can lead to more efficient and enjoyable interaction experiences.
This review also highlighted some areas for future research and development. Foremost among these are the need to develop embodied agents and the need for research to determine whether anthropomorphic agents are perceived positively in the financial industry. Furthermore, agents with mechanisms to display emotions such as empathy may be successful and well-liked, although further empirical research in this area is required. In highly volatile market conditions, for example, users may prefer agents that display less emotion; a less emotional agent could be an advantage, as users may appreciate its purely rational decision-making, in contrast to human decision-making, which can be fueled by fear or herd behavior. Similarly, the effects of motion and animation, as well as agent personality, must be further investigated in the context of interactive fintech agents. Moreover, research examining users’ feelings of social presence when interacting with a fintech agent would help determine the extent to which fintech agents must present themselves as actual interaction partners. This is an important consideration in finance, a field where the human touch of a financial advisor may hold as much importance as the neutrality and rationality that a computer program can provide. Hence, how human-like or machine-like a fintech agent should present itself remains an open question.
In addition, tailored experiences are another important area for future research. Agents that are intelligent enough to personalize themselves to each user would allow for greater ease of interaction. For example, agents that can learn about a user’s personality over time and adapt to suit the user or market conditions would be beneficial. Currently, some fintech agents have limited interaction, such as robo-advisors that present general information to users but do not interact on a more personalized level with individual users. This is a major area for future development. We can also expect to see hybrid models, where fintech agents and human financial advisors work in tandem to cater to and assist users.
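One simple way an agent could adapt to a user over time is an exponentially weighted update of an inferred trait such as risk tolerance. The sketch below is illustrative only; the `alpha` parameter, the [0, 1] scale, and the function name are assumptions, not a method from the reviewed literature.

```python
def update_risk_profile(current: float, observed: float, alpha: float = 0.2) -> float:
    """Exponentially weighted update of an inferred risk tolerance in [0, 1].

    `alpha` sets how quickly the agent adapts to new evidence, e.g., the
    riskiness of a trade the user just approved. Illustrative sketch only.
    """
    if not (0.0 <= current <= 1.0 and 0.0 <= observed <= 1.0):
        raise ValueError("risk values must lie in [0, 1]")
    # Blend the old estimate with the new observation.
    return (1.0 - alpha) * current + alpha * observed
```

A small `alpha` keeps the profile stable against one-off decisions, while a larger `alpha` lets the agent track genuine shifts in user preference or market conditions more quickly.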
Other areas for research include examining the effects of agents that detect user emotion and convey artificial or pseudo-emotion as a means of reflecting their internal states. A modular approach to designing fintech agents may allow for increased levels of social presence, enhancing the interaction experience. This would aid the development of physically embodied fintech agents, whose presence in the same physical space as the user may create opportunities for novel and enjoyable financial experiences. Social presence could also be examined as a mediator of the effects of interaction with a fintech agent on users’ financial behaviors. As fintech agents continue to develop and seek to differentiate themselves from both human advisors and competing agents, they may use customized voice modalities to interview new users and obtain detailed responses instead of relying on traditional, static questionnaires [204]. Such interactions would require advances in technology as well as careful implementation.

7. Conclusions

This paper examined a fast-expanding category of media technology—computer agents used in finance to interact with users and assist in completing financial tasks and goals. The use of interactive agents has seen considerable growth, even in contexts where the interaction could have potentially critical outcomes for users, such as healthcare [205]. However, there is a lack of review articles on the range of technologies and social interaction considerations that are important to the development of interactive agents in the context of finance. The financial context, like healthcare, has a significant degree of personal relevance to users, as they may rely on the agents for information and to make crucial decisions. Hence, both the back-end technology and the user-facing interactive aspect of the technology can have severe consequences for financial institutions and users. Considering the growth of interactive agents in this context and the serious nature of financial interactions, this review aimed to contribute to the understanding of fintech agents by providing an overview of the technologies and theoretical issues involved in creating successful interactive agents. Furthermore, this review offers a unique contribution to fintech agent understanding as it includes a social-scientific perspective. By discussing the major theoretical issues related to interaction, this review can guide the understanding of interactive agents as social actors, which will benefit scholars as well as practitioners.
The main technologies required to develop fintech, and fintech agents in particular, were presented. These emphasize the technological advancements enabling these agents’ crucial role as interaction partners with users. This was followed by a discussion of how issues and theories from a social science perspective are necessary and relevant to human-fintech agent interaction. These theoretical issues explain how an interactive agent may understand the user, how agents can manifest as social actors, and how users can perceive virtual objects as though they were actual social actors during their interaction with the technology.
Some issues are beyond the scope of this paper but must be taken into consideration when agents are used in daily life. For example, while robo-advisors are meant to overcome the behavioral biases of human investors, researchers argue that since agents are programmed by humans, they are not free from biases, such as giving more allocation weight to domestic stocks [206]. It remains to be seen whether fintech agents can drastically reduce the errors that a human financial advisor may be prone to and how their performance influences the interaction with users. Furthermore, when users interact with agents that take an in-depth approach to understanding them, there is a risk of users’ data being leaked or misused. The ethics of artificial conversation partners should also be examined to ensure that humans who have grown to trust their interactions with fintech agents do not become victims of malicious programmers looking to exploit less savvy users.
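The domestic-stock overweight mentioned above can be detected with simple arithmetic: compare a portfolio’s domestic weight against a global benchmark weight. The sketch below is illustrative only (the function name, labels, and benchmark value are hypothetical, and this is not a compliance tool).

```python
def home_bias(portfolio: dict, domestic: str, benchmark_weight: float) -> float:
    """Overweight of domestic assets relative to a global benchmark weight.

    `portfolio` maps asset-class labels to weights; a positive return value
    indicates the domestic overweight (home bias) discussed above.
    """
    total = sum(portfolio.values())          # normalize in case weights don't sum to 1
    domestic_weight = portfolio.get(domestic, 0.0) / total
    return domestic_weight - benchmark_weight
```

For example, a portfolio holding 70% domestic equities against a 60% benchmark would show a +0.10 overweight, the kind of programmed bias an audit of a robo-advisor might flag.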
Moreover, an ethical challenge persists with embodied agents. The examples of anthropomorphic fintech agents discussed in this review and available in the industry are predominantly female, which can raise concerns about perpetuating gender stereotypes such as women being assistants or taking on service roles [182]. A similar trend can be observed with disembodied chatbots, many of which have been given female names. Future research must consider the social implications of gendered agents in the finance industry, which is already male-dominated [207].
Finally, the use of fintech agents brings attention to the field of machine ethics, which aims to create agents that follow a set of ethical principles when making decisions or taking actions [208]. To establish an ethical frame of reference, organizations must have a clear understanding of their ethical stance, including how business should be conducted and what standards should be followed [209]. Fintech agents such as robo-advisors may raise ethical concerns, such as whether their interactions should prioritize the performance of a single user’s portfolio or the overall performance of all portfolios they manage, or perhaps even prioritize the overall stability of the financial market for the long-term benefit of all users. Furthermore, robo-advisors may not fulfill their fiduciary duty when advising clients if they are not advanced enough to provide adequately personalized financial advice [210]. One potential solution is to develop AI technology that has a strong moral code and understands the consequences of violating it. This would require significantly more data and continuous scenario simulation for introspective guidance [206].
Nevertheless, interactive fintech agents are likely to be an important element of the financial landscape for users of all levels of experience. Fintech provides a good opportunity for practitioners and social scientists to test social science theories in a novel context that is deeply personal to users. Human-fintech agent interaction may also function as a platform for advancing our understanding of behavioral economics and examining ways to overcome human limitations with the aid of agents. By incorporating social science theories, developers and researchers will have the opportunity to create successful fintech platforms that promote a natural and enjoyable interaction with users.
The emerging area of human-fintech agent interaction is an exciting opportunity for scholars to advance the state of research. Findings from human-fintech agent interaction will also inform other emerging fields, such as autonomous vehicle agents and agents in the healthcare context. Like finance, both these areas involve high levels of trust between users and service providers. Understanding the interaction between users and fintech agents using behavioral science will provide a good starting point for research in these other critical areas.

Author Contributions

Conceptualization, K.M.L. and A.P.; methodology, A.P. and S.G.; writing—original draft preparation, S.G. and A.P.; writing—review and editing, S.G. and K.M.L.; supervision, K.M.L.; funding acquisition, K.M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Wee Kim Wee School of Communication and Information, Nanyang Technological University [WKWSCI-SUG-2023-01]; and the Ministry of Education, Singapore [RG78/18 (NS)].

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Harroch, R.D.; Guzy, M. 10 Key Issues for Fintech Startup Companies. Forbes. Available online: https://www.forbes.com/sites/allbusiness/2019/10/12/fintech-startup-companies-key-challenges/?sh=63918d243e45 (accessed on 22 September 2021).
  2. O’Grady, M.A. How Fintech Became a Hit in Brazil. Wall Str. J. Available online: https://www.wsj.com/articles/how-fintech-became-a-hit-in-brazil-nubank-innovation-credit-cards-banking-accessibility-11638714533 (accessed on 6 December 2021).
  3. Gai, K.; Qiu, M.; Sun, X. A Survey on FinTech. J. Netw. Comput. Appl. 2018, 103, 262–273. [Google Scholar] [CrossRef]
  4. Isbister, K.; Layton, T. Agents: What (or Who) Are They? In Advances in Human Computer Interaction; Nielsen, J., Norwood, J.J., Eds.; Ablex Publishing Corporation: New York, NY, USA, 1995; Volume 5, pp. 67–86. [Google Scholar]
  5. Maes, P.; Wexelblat, A. Interface Agents. In Proceedings of the Conference Companion on Human Factors in Computing Systems (CHI’96), Vancouver, BC, Canada, 13–18 April 1996; Association for Computing Machinery; pp. 369–370. [Google Scholar] [CrossRef]
  6. Koch-Rogge, M.; Westermann, G. Digitalization in the Service Economy—The Case of Banking Services. In Dienstleistungen 4.0; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2017; pp. 465–479. [Google Scholar] [CrossRef]
  7. Shneiderman, B.; Maes, P. Direct Manipulation vs. Interface Agents. Interactions 1997, 4, 42–61. [Google Scholar] [CrossRef]
  8. Milewski, A.E.; Lewis, S.H. Delegating to Software Agents. Int. J. Hum. Comput. Stud. 1997, 46, 485–500. [Google Scholar] [CrossRef] [Green Version]
  9. Gilbert, D. Intelligent Agents: The Right Information at the Right Time. IBM Intelligent Agent White Paper. 1997. Available online: https://fmfi-uk.hq.sk/Informatika/Uvod%20Do%20Umelej%20Inteligencie/clanky/ibm-iagt.pdf (accessed on 31 December 2021).
  10. Laurel, B. Interface Agents: Metaphors with Character. In The Art of Human—Computer Interface Design; Laurel, B., Ed.; Addison-Wesley: Boston, MA, USA, 1990. [Google Scholar]
  11. Maes, P. Agents That Reduce Work and Information Overload. Commun. ACM 1994, 37, 30–40. [Google Scholar] [CrossRef]
  12. Chandler, D.; Munday, R. Social Actor. In A Dictionary of Media and Communication; Oxford University Press: Oxford, UK, 2020; Available online: https://www.oxfordreference.com/view/10.1093/acref/9780198841838.001.0001/acref-9780198841838-e-3410 (accessed on 10 April 2023).
  13. Gomber, P.; Kauffman, R.J.; Parker, C.; Weber, B.W. On the Fintech Revolution: Interpreting the Forces of Innovation, Disruption, and Transformation in Financial Services. J. Manag. Inf. Syst. 2018, 35, 220–265. [Google Scholar] [CrossRef]
  14. McCarthy, J. “What Is Artificial Intelligence?” Computer Science Department. Stanford University. 1997. Available online: http://www-formal.stanford.edu/jmc/whatisai/whatisai.html (accessed on 25 April 2023).
  15. Samuel, A.L. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Dev. 1959, 3, 210–229. [Google Scholar] [CrossRef]
  16. Schuld, M.; Sinayskiy, I.; Petruccione, F. An Introduction to Quantum Machine Learning. Contemp. Phys. 2015, 56, 172–185. [Google Scholar] [CrossRef] [Green Version]
  17. Deng, L.; Yu, D. Deep Learning: Methods and Applications. Found. Trends® Signal Process. 2014, 7, 197–387. [Google Scholar] [CrossRef] [Green Version]
  18. Zheng, Y.; Wen, X. The Application of Artificial Intelligence Technology in Cloud Computing Environment Resources. J. Web Eng. 2021, 20. [Google Scholar] [CrossRef]
  19. Mansour, R.F.; Soto, C.; Soto-Díaz, R.; Escorcia Gutierrez, J.; Gupta, D.; Khanna, A. Design of Integrated Artificial Intelligence Techniques for Video Surveillance on IoT Enabled Wireless Multimedia Sensor Networks. Int. J. Interact. Multimed. Artif. Intell. 2022, 7, 14. [Google Scholar] [CrossRef]
  20. Proaño-Guevara, D.; Blanco Valencia, X.P.; Rosero-Montalvo, P.D.; Peluffo-Ordóñez, D.H. Electromiographic Signal Processing Using Embedded Artificial Intelligence: An Adaptive Filtering Approach. Int. J. Interact. Multimed. Artif. Intell. 2022, 7, 40. [Google Scholar] [CrossRef]
  21. Haleem, A.; Javaid, M.; Asim Qadri, M.; Pratap Singh, R.; Suman, R. Artificial Intelligence (AI) Applications for Marketing: A Literature-Based Study. Int. J. Intell. Netw. 2022, 3, 119–132. [Google Scholar] [CrossRef]
  22. Noreen, U.; Shafique, A.; Ahmed, Z.; Ashfaq, M. Banking 4.0: Artificial Intelligence (AI) in Banking Industry & Consumer’s Perspective. Sustainability 2023, 15, 3682. [Google Scholar] [CrossRef]
  23. Kapsis, I. Artificial Intelligence in Financial Services: Systemic Implications and Regulatory Responses. Bank. Financ. Serv. Policy Rep. 2020, 39, 1–21. [Google Scholar]
  24. Phoon, K.; Koh, F. Robo-Advisors and Wealth Management. J. Altern. Investig. 2017, 20, 79–94. [Google Scholar] [CrossRef]
  25. Koksal, I. How AI Is Expanding the Applications of Robo Advisory. Forbes. 18 April 2020. Available online: https://www.forbes.com/sites/ilkerkoksal/2020/04/18/how-ai-is-expanding-the-applications-of-robo-advisory/ (accessed on 20 May 2021).
  26. Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The Impact of Chatbot Conversational Skill on Engagement and Perceived Humanness. J. Manag. Inf. Syst. 2020, 37, 875–900. [Google Scholar] [CrossRef]
  27. Mhlanga, D. Industry 4.0 in finance: The impact of artificial intelligence (AI) on digital financial inclusion. Int. J. Financ. Stud. 2020, 8, 45. [Google Scholar] [CrossRef]
  28. Hwang, S.; Kim, J. Toward a Chatbot for Financial Sustainability. Sustainability 2021, 13, 3173. [Google Scholar] [CrossRef]
  29. Tokic, D. BlackRock Robo-Advisor 4.0: When Artificial Intelligence Replaces Human Discretion. Strateg. Chang. 2018, 27, 285–290. [Google Scholar] [CrossRef]
  30. Ozmen Garibay, O.; Winslow, B.; Andolina, S.; Antona, M.; Bodenschatz, A.; Coursaris, C.; Falco, G.; Fiore, S.M.; Garibay, I.; Grieman, K.; et al. Six Human-Centered Artificial Intelligence Grand Challenges. Int. J. Hum. Comput. Interact. 2023, 39, 391–437. [Google Scholar] [CrossRef]
  31. Luan, H.; Geczy, P.; Lai, H.; Gobert, J.; Yang, S.J.H.; Ogata, H.; Baltes, J.; Guerra, R.; Li, P.; Tsai, C.-C. Challenges and Future Directions of Big Data and Artificial Intelligence in Education. Front. Psychol. 2020, 11, 580820. [Google Scholar] [CrossRef] [PubMed]
  32. Verma, J.P.; Agrawal, S. Big Data Analytics: Challenges and Applications for Text, Audio, Video, and Social Media Data. IJSCAI 2016, 5, 41–51. [Google Scholar] [CrossRef]
  33. Tenn, J. Q.ai Launches Beta Version of Its AI-Powered Robo Investing App. Yahoo Finance. Available online: https://finance.yahoo.com/news/q-ai-launches-beta-version-142748487.html (accessed on 17 February 2021).
  34. Srivastava, U.; Gopalkrishnan, S. Impact of Big Data Analytics on Banking Sector: Learning for Indian Banks. Procedia Comput. Sci. 2015, 50, 643–652. [Google Scholar] [CrossRef] [Green Version]
  35. Aziz, N.A.; Long, F.; Wan Hussain, W.M.H. Examining the Effects of Big Data Analytics Capabilities on Firm Performance in the Malaysian Banking Sector. Int. J. Financ. Stud. 2023, 11, 23. [Google Scholar] [CrossRef]
  36. Nayak, B.; Bhattacharyya, S.S.; Krishnamoorthy, B. Integrating Wearable Technology Products and Big Data Analytics in Business Strategy. J. Syst. Inf. Technol. 2019, 21, 255–275. [Google Scholar] [CrossRef]
  37. Snapshot Rewards You for Good Driving. Progressive. Available online: https://www.progressive.com/auto/discounts/snapshot/ (accessed on 25 April 2023).
  38. Hernández-Nieves, E.; Parra-Domínguez, J.; Chamoso, P.; Rodríguez-González, S.; Corchado, J.M. A Data Mining and Analysis Platform for Investment Recommendations. Electronics 2021, 10, 859. [Google Scholar] [CrossRef]
  39. Mell, P.M.; Grance, T. The NIST Definition of Cloud Computing; NIST: Gaithersburg, MD, USA, 2011. [Google Scholar] [CrossRef]
  40. Apostu, A.; Rednic, E.; Puican, F. Modeling Cloud Architecture in Banking Systems. Procedia Econ. Fin. 2012, 3, 543–548. [Google Scholar] [CrossRef] [Green Version]
  41. Cheng, M.; Qu, Y.; Jiang, C.; Zhao, C. Is Cloud Computing the Digital Solution to the Future of Banking? J. Financ. Stab. 2022, 63, 101073. [Google Scholar] [CrossRef]
  42. Misra, S.C.; Doneria, K. Application of Cloud Computing in Financial Services: An Agent-Oriented Modelling Approach. J. Modell. Manag. 2018, 13, 994–1006. [Google Scholar] [CrossRef]
  43. Temenos. 15 November 2021. Available online: https://www.temenos.com/ (accessed on 21 November 2021).
  44. Nofer, M.; Gomber, P.; Hinz, O.; Schiereck, D. Blockchain. Bus. Inf. Syst. Eng. 2017, 59, 183–187. [Google Scholar] [CrossRef]
  45. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R.; Khan, S. A Review of Blockchain Technology Applications for Financial Services. BenchCouncil Trans. Benchmarks Stand. Eval. 2022, 2, 100073. [Google Scholar] [CrossRef]
  46. Rawat, D.B.; Njilla, L.; Kwiat, K.; Kamhoua, C. IShare: Blockchain-Based Privacy-Aware Multi-agent Information Sharing Games for Cybersecurity. In Proceedings of the International Conference on Computing, Networking and Communications (ICNC), Maui, HI, USA, 5–8 March 2018; Volume 2018. [Google Scholar] [CrossRef]
  47. Kowalski, M.; Lee, Z.W.Y.; Chan, T.K.H. Blockchain Technology and Trust Relationships in Trade Finance. Technol. Forecast. Soc. Chang. 2021, 166, 120641. [Google Scholar] [CrossRef]
  48. Chen, W.; Xu, Z.; Shi, S.; Zhao, Y.; Zhao, J. A Survey of Blockchain Applications in Different Domains. In Proceedings of the 2018 International Conference on Blockchain Technology and Application—ICBTA, Xi’an, China, 10–12 December 2018; pp. 17–21. [Google Scholar] [CrossRef] [Green Version]
  49. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 25 April 2023).
  50. Milutinović, M. Cryptocurrency. Ekonomika 2018, 64, 105–122. [Google Scholar] [CrossRef] [Green Version]
  51. Lee, K.M.; Jung, Y.; Kim, J.; Kim, S.R. Are Physically Embodied Social Agents Better than Disembodied Social Agents? The Effects of Physical Embodiment, Tactile Interaction, and People’s Loneliness in Human–Robot Interaction. Int. J. Hum.-Comput. Stud. 2006, 64, 962–973. [Google Scholar] [CrossRef]
  52. Benford, S.; Bowers, J.; Fahlén, L.E.; Greenhalgh, C.; Snowdon, D. Embodiments, Avatars, Clones and Agents for Multi-user, Multi-sensory Virtual Worlds. Multimed. Syst. 1997, 5, 93–104. [Google Scholar] [CrossRef]
  53. Cassell, J.; Bickmore, T.; Billinghurst, M.; Campbell, L.; Chang, K.; Vilhjálmsson, H.; Yan, H. Embodiment in Conversational Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99), Pittsburgh, PA, USA, 15–20 May 1999; Association for Computing Machinery; pp. 520–527. [Google Scholar] [CrossRef]
  54. Xie, F. Zhu Pinpin: Making Robots with ‘Brains’. China Today. 10 October 2016. Available online: http://www.chinatoday.com.cn/english/report/2016-10/10/content_728688.htm (accessed on 5 January 2022).
  55. Isbister, K.; Nass, C. Consistency of Personality in Interactive Characters: Verbal Cues, Non-verbal Cues, and User Characteristics. Int. J. Hum. Comput. Stud. 2000, 53, 251–267. [Google Scholar] [CrossRef] [Green Version]
  56. Nass, C.; Lee, K.M. Does Computer-Synthesized Speech Manifest Personality? Experimental Tests of Recognition, Similarity-Attraction, and Consistency-Attraction. J. Exp. Psychol. Appl. 2001, 7, 171–181. [Google Scholar] [CrossRef] [PubMed]
  57. Li, J. The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots and Virtual Agents. Int. J. Hum. Comput. Stud. 2015, 77, 23–37. [Google Scholar] [CrossRef]
  58. Pfeifer, R.; Iida, F. Embodied Artificial Intelligence: Trends and Challenges. In Embodied Artificial Intelligence; Iida, F., Pfeifer, R., Steels, L., Kuniyoshi, Y., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3139. [Google Scholar] [CrossRef] [Green Version]
  59. Bradshaw, J.M. Software Agents; AAAI Press: Washington, DC, USA, 1997. [Google Scholar]
  60. Clark, L.; Pantidi, N.; Cooney, O.; Doyle, P.; Garaialde, D.; Edwards, J.; Spillane, B.; Gilmartin, E.; Murad, C.; Munteanu, C.; et al. What Makes a Good Conversation? Challenges in Designing Truly Conversational Agents. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–12. [Google Scholar] [CrossRef] [Green Version]
  61. Dennett, D.C. Intentional Systems. J. Philos. 1971, 68, 87–106. [Google Scholar] [CrossRef]
  62. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  63. Kumar, V.; Dixit, A.; Javalgi, R.G.; Dass, M. Research Framework, Strategies, and Applications of Intelligent Agent Technologies (IATs) in Marketing. J. Acad. Mark. Sci. 2016, 44, 24–45. [Google Scholar] [CrossRef]
  64. Quah, J.T.; Chua, Y.W. Chatbot Assisted Marketing in Financial Service Industry. In Services Computing–SCC 2019: 16th International Conference, Held as Part of the Services Conference Federation, SCF 2019, San Diego, CA, USA, June 25–30, 2019, Proceedings 16; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 107–114. [Google Scholar] [CrossRef]
  65. Alaaeldin, R.; Asfoura, E.; Kassem, G.; Abdel-Haq, M.S. Developing Chatbot System to Support Decision Making Based on Big Data Analytics. Acad. Inf. Manag. Sci. J. 2021, 24, 1–15. [Google Scholar]
  66. Ridha, M.; Haura Maharani, K. Implementation of Artificial Intelligence Chatbot in Optimizing Customer Service in Financial Technology Company PT. FinAccel Finance Indonesia. ICVEAST 2022. Proceedings 2022, 83, 21. [Google Scholar] [CrossRef]
  67. Riikkinen, M.; Saarijärvi, H.; Sarlin, P.; Lähteenmäki, I. Using Artificial Intelligence to Create Value in Insurance. Int. J. Bank Mark. 2018, 36, 1145–1168. [Google Scholar] [CrossRef]
  68. Chen, Z.; Liu, B. Lifelong Machine Learning, 2nd ed.; Morgan & Claypool Publishers: San Rafael, CA, USA, 2018. [Google Scholar]
  69. Suhel, S.F.; Shukla, V.K.; Vyas, S.; Mishra, V.P. Conversation to Automation in Banking Through Chatbot Using Artificial Machine Intelligence Language. In Proceedings of the 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 4–5 June 2020; pp. 611–618. [Google Scholar] [CrossRef]
  70. Marous, J. Meet 11 of the Most Interesting Chatbots in Banking. The Financial Brand. 2018. Available online: https://thefinancialbrand.com/news/banking-technology/chatbots-banking-trends-ai-cx-71251/ (accessed on 4 November 2021).
  71. Ahmadvand, A. User Intent Inference for Web Search and Conversational Agents. In Proceedings of the 13th International Conference on Web Search and Data Mining, Houston, TX, USA, 3–7 February 2020; pp. 911–912. [Google Scholar] [CrossRef] [Green Version]
  72. Margaret, D.S.; Elangovan, N.; Balaji, V.; Sriram, M. The Influence and Impact of AI-Powered Intelligent Assistance for Banking Services. In Proceedings of the International Conference on Emerging Trends in Business & Management (ICETBM 2023), Kalavakkam, India, 24–25 February 2023; pp. 374–385. [Google Scholar] [CrossRef]
  73. Armentano, M.G.; Amandi, A.A. Personalized Detection of User Intentions. Knowl. Based Syst. 2011, 24, 1169–1180. [Google Scholar] [CrossRef]
  74. Kalsum, T.; Anwar, S.M.; Majid, M.; Khan, B.; Ali, S.M. Emotion Recognition from Facial Expressions Using Hybrid Feature Descriptors. IET Image Process. 2018, 12, 1004–1012. [Google Scholar] [CrossRef]
  75. Schiaffino, S.; Armentano, M.; Amandi, A. Building Respectful Interface Agents. Int. J. Hum. Comput. Stud. 2010, 68, 209–222. [Google Scholar] [CrossRef]
  76. Bansal, S.; Borysowich, C. Financial Chatbots: A Landscape of White Label Banking Products. White Paper. 2017. Available online: https://www.capco.com/Intelligence/Capco-Intelligence/Financial-Chatbots (accessed on 18 November 2021).
  77. Iovine, A.; Degemmis, M.; Narducci, F.; Semeraro, G.; Filisetti, D.; Ingoglia, D.; Lekkas, G.P. A Virtual Customer Assistant for the Wealth Management Domain in the UWMP Project. In Proceedings of the IUI ’20 Workshops, Cagliari, Italy, 17 March 2020; ACM: New York, NY, USA, 2020. [Google Scholar]
  78. Castellanos, S. Capital One’s Tech Transformation. Wall St. J. 30 October 2018. Available online: https://www.wsj.com/articles/capital-ones-tech-transformation-1540865280 (accessed on 15 December 2021).
  79. Reeves, B.; Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  80. Picard, R.W. Affective Computing: Challenges. Int. J. Hum. Comput. Stud. 2003, 59, 55–64. [Google Scholar] [CrossRef]
  81. Teye, M.T.; Missah, Y.M.; Ahene, E.; Frimpong, T. Evaluation of Conversational Agents: Understanding Culture, Context and Environment in Emotion Detection. IEEE Access 2022, 10, 24976–24984. [Google Scholar] [CrossRef]
  82. Chatterjee, A.; Gupta, U.; Chinnakotla, M.K.; Srikanth, R.; Galley, M.; Agrawal, P. Understanding Emotions in Text Using Deep Learning and Big Data. Comput. Hum. Behav. 2019, 93, 309–317. [Google Scholar] [CrossRef]
  83. Pophale, S.; Gandhi, H.; Gupta, A.K. Emotion Recognition Using Chatbot System. In Proceedings of the International Conference on Recent Trends in Machine Learning, IoT, SMART Cities and Applications; Springer: Singapore, 2021; pp. 579–587. [Google Scholar] [CrossRef]
  84. Iribarren, M. Analyzing Emotion in Customers’ Voices: Rosbank and AI Startup Neurodata Lab. 4 March 2019. Available online: https://voicebot.ai/2019/03/04/analyzing-emotion-in-customers-voices-rosbank-and-ai-startup-neurodata-lab/ (accessed on 19 May 2021).
  85. Udemans, C. AI Startup Emotibot Raises $45 Million in Series B+. Up and Comers. 25 October 2019. Available online: https://technode.com/2019/10/25/emotibot-raises-45-million-series-b/ (accessed on 25 April 2023).
  86. Jung, D.; Dorner, V.; Glaser, F.; Morana, S. Robo-Advisory: Digitalization and Automation of Financial Advisory. Bus. Inf. Syst. Eng. 2018, 60, 81–86. [Google Scholar] [CrossRef]
  87. Jung, D.; Dorner, V.; Weinhardt, C.; Pusmaz, H. Designing a Robo-Advisor for Risk-Averse, Low-Budget Consumers. Electron. Mark. 2018, 28, 367–380. [Google Scholar] [CrossRef]
  88. Shanmuganathan, M. Behavioural Finance in an Era of Artificial Intelligence: Longitudinal Case Study of Robo-Advisors in Investment Decisions. J. Behav. Exp. Fin. 2020, 27, 100297. [Google Scholar] [CrossRef]
  89. Akkerman, F. Developing a Robo-Advisor Typology—Lessons from Action Design Research. In Proceedings of the 11th IBA Bachelor Thesis Conference, Virtual, 17–18 January 2018; Beterinbeleggen.nl. Available online: http://purl.utwente.nl/essays/75436 (accessed on 25 April 2023).
  90. Alsabah, H.; Capponi, A.; Ruiz Lacedelli, O.; Stern, M. Robo-Advising: Learning Investors’ Risk Preferences via Portfolio Choices. J. Financ. Econ. 2021, 19, 369–392. [Google Scholar] [CrossRef] [Green Version]
  91. Cosmides, L.; Tooby, J. Origins of Domain Specificity: The Evolution of Functional Organization. In Mapping the Mind: Domain Specificity in Cognition and Culture; Hirschfeld, L.A., Gelman, S.A., Eds.; Cambridge University Press: Cambridge, UK, 1994; pp. 85–116. [Google Scholar] [CrossRef]
  92. Pfeifer, R.; Scheier, C. Understanding Intelligence; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  93. Deng, E.; Mutlu, B.; Mataric, M.J. Embodiment in Socially Interactive Robots. Found. Trends® Robot. 2019, 7, 251–356. [Google Scholar] [CrossRef]
  94. Van Pinxteren, M.M.E.; Pluymaekers, M.; Lemmink, J.G.A.M. Human-Like Communication in Conversational Agents: A Literature Review and Research Agenda. J. Serv. Manag. 2020, 31, 203–225. [Google Scholar] [CrossRef]
  95. Baron-Cohen, S. Mindblindness: An Essay on Autism and Theory of Mind; The MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  96. André, E.; Pelachaud, C. Interacting with Embodied Conversational Agents. In Speech Technology; Springer: New York, NY, USA, 2010; pp. 123–149. [Google Scholar] [CrossRef]
  97. Huisman, G.; Bruijnes, M.; Kolkmeier, J.; Jung, M.; Darriba Frederiks, A.; Rybarczyk, Y. Touching Virtual Agents: Embodiment and Mind. In Innovative and Creative Developments in Multimodal Interaction Systems: 9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013, Lisbon, Portugal, July 15–August 9, 2013. Proceedings 9; Springer: Berlin/Heidelberg, Germany, 2014; pp. 114–138. [Google Scholar] [CrossRef] [Green Version]
98. Gillies, M.; Robertson, D.; Ballin, D. Direct Manipulation Like Tools for Designing Intelligent Virtual Agents. In International Workshop on Intelligent Virtual Agents; Springer: Berlin/Heidelberg, Germany, 2005; pp. 430–441. [Google Scholar] [CrossRef] [Green Version]
  99. Kaur, D.P.; Singh, N.P.; Banerjee, B. A Review of Platforms for Simulating Embodied Agents in 3D Virtual Environments. Artif. Intell. Rev. 2022, 56, 3711–3753. [Google Scholar] [CrossRef]
  100. Chérif, E.; Lemoine, J. Anthropomorphic Virtual Assistants and the Reactions of Internet Users: An Experiment on the Assistant’s Voice. Rech. Appl. Mark. 2019, 34, 28–47. [Google Scholar] [CrossRef]
101. Somers, M. Emotion AI, Explained; MIT Sloan School of Management: Cambridge, MA, USA, 2019; Available online: https://mitsloan.mit.edu/ideas-made-to-matter/emotion-ai-explained (accessed on 15 February 2022).
102. Randhavane, T.; Bera, A.; Kapsaskis, K.; Sheth, R.; Gray, K.; Manocha, D. EVA: Generating Emotional Behavior of Virtual Agents Using Expressive Features of Gait and Gaze. In Proceedings of the ACM Symposium on Applied Perception, Barcelona, Spain, 19–20 September 2019. [Google Scholar] [CrossRef] [Green Version]
  103. Lapčević, J. REA—An AI-Based Avatar as the Most Popular Banking Officer in Serbia. Discover CEE. 11 June 2019. Available online: https://www.discover-cee.com/rea-an-ai-based-avatar-as-the-most-popular-banking-officer-in-serbia/ (accessed on 19 May 2021).
  104. Rumney, E. British Bank RBS Hires ‘Digital Human’ Cora on Probation. Reuters. 21 February 2018. Available online: https://www.reuters.com/article/us-rbs-avatar-idUSKCN1G523L (accessed on 19 May 2021).
  105. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A Survey of Socially Interactive Robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
  106. Ruane, E.; Farrell, S.; Ventresque, A. User Perception of Text-Based Chatbot Personality. In Chatbot Research and Design: 4th International Workshop, CONVERSATIONS 2020, Virtual Event, November 23–24, 2020, Revised Selected Papers 4; Springer: Berlin/Heidelberg, Germany, 2021; pp. 32–47. [Google Scholar] [CrossRef]
  107. Tang, J.; Chen, M.; Yang, C.; Chung, T.; Lee, Y. Personality Traits, Interpersonal Relationships, Online Social Support, and Facebook Addiction. Telemat. Inform. 2016, 33, 102–108. [Google Scholar] [CrossRef]
  108. Roy, Q.; Ghafurian, M.; Li, W.; Hoey, J. Users, Tasks, and Conversational Agents: A Personality Study. In Proceedings of the 9th International Conference on Human-Agent Interaction, Virtual Event, Japan, 9–11 November 2021; pp. 174–182. [Google Scholar] [CrossRef]
  109. Bennett, C. Emergent Robotic Personality Traits via Agent-Based Simulation of Abstract Social Environments. Information 2021, 12, 103. [Google Scholar] [CrossRef]
110. Safron, A.; DeYoung, C.G. Integrating Cybernetic Big Five Theory with the Free Energy Principle: A New Strategy for Modeling Personalities as Complex Systems. In Measuring and Modeling Persons and Situations; Wood, D., Read, S.J., Harms, P., Slaughter, A., Eds.; Academic Press: Cambridge, MA, USA, 2021. [Google Scholar]
  111. Read, S.J.; Miller, L.C. Virtual Personalities: A Neural Network Model of Personality. Pers. Soc. Psychol. Rev. 2002, 6, 357–369. [Google Scholar] [CrossRef]
  112. Ahmad, R.; Siemon, D.; Gnewuch, U.; Robra-Bissantz, S. A Framework of Personality Cues for Conversational Agents. In Proceedings of the Annual Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2022. [Google Scholar] [CrossRef]
  113. Dryer, D.C. Getting Personal with Computers: How to Design Personalities for Agents. Appl. Artif. Intell. 1999, 13, 273–295. [Google Scholar] [CrossRef] [Green Version]
  114. Minarik, D. Artificial Intelligence: How to Teach AdamTB to Understand Slovak. 18 June 2020. Available online: https://www.discover-cee.com/artificial-intelligence-how-to-teach-adamtb-to-understand-slovak/ (accessed on 19 November 2021).
115. Hot off the Press: Introducing Jamie—ANZ's New Digital Assistant. Soul Machines. 2018. Available online: https://www.soulmachines.com/2018/07/hot-off-the-press-introducing-jamie-anzs-new-digital-assistant/ (accessed on 19 November 2021).
  116. Nayak, V.; Turk, M. Emotional Expression in Virtual Agents Through Body Language. In Advances in Visual Computing; Springer: Berlin/Heidelberg, Germany, 2005; pp. 313–320. [Google Scholar] [CrossRef]
  117. Hortensius, R.; Hekele, F.; Cross, E.S. The Perception of Emotion in Artificial Agents. IEEE Trans. Cogn. Dev. Syst. 2017, 10, 852–864. [Google Scholar] [CrossRef] [Green Version]
  118. Torre, I.; Goslin, J.; White, L. If Your Device Could Smile: People Trust Happy-Sounding Artificial Agents More. Comput. Hum. Behav. 2020, 105, 106215. [Google Scholar] [CrossRef]
  119. Liu-Thompkins, Y.; Okazaki, S.; Li, H. Artificial Empathy in Marketing Interactions: Bridging the Human-AI Gap in Affective and Social Customer Experience. J. Acad. Mark. Sci. 2022, 50, 1198–1218. [Google Scholar] [CrossRef]
  120. Weber-Guskar, E. How to Feel About Emotionalized Artificial Intelligence? When Robot Pets, Holograms, and Chatbots Become Affective Partners. Ethics Inf. Technol. 2021, 23, 601–610. [Google Scholar] [CrossRef]
  121. Rodríguez, L.; Ramos, F. Computational Models of Emotions for Autonomous Agents: Major Challenges. Artif. Intell. Rev. 2015, 43, 437–465. [Google Scholar] [CrossRef]
  122. Sutoyo, R.; Chowanda, A.; Kurniati, A.; Wongso, R. Designing an Emotionally Realistic Chatbot Framework to Enhance Its Believability with AIML and Information States. Procedia Comput. Sci. 2019, 157, 621–628. [Google Scholar] [CrossRef]
  123. Fu, E.Y.; Kwok, T.C.K.; Wu, E.Y.; Leong, H.V.; Ngai, G.; Chan, S.C.F. Your Mouse Reveals Your Next Activity: Towards Predicting User Intention from Mouse Interaction. In Proceedings of the IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), Turin, Italy, 4–8 July 2017; Volume 2017, pp. 869–874. [Google Scholar] [CrossRef]
  124. Shi, Y. Interpreting User Input Intention in Natural Human Computer Interaction. In Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization (UMAP’18), Singapore, 8–11 July 2018; Association for Computing Machinery. pp. 277–278. [Google Scholar] [CrossRef]
  125. Caruccio, L.; Deufemia, V.; Polese, G. Understanding User Intent on the Web Through Interaction Mining. J. Vis. Lang. Comput. 2015, 31, 230–236. [Google Scholar] [CrossRef]
  126. Mele, M.L.; Federici, S. Gaze and Eye-Tracking Solutions for Psychological Research. Cogn. Process. 2012, 13 (Suppl. S1), S261–S265. [Google Scholar] [CrossRef]
  127. van Gog, T.; Jarodzka, H. Eye Tracking as a Tool to Study and Enhance Cognitive and Metacognitive Processes in Computer-Based Learning Environments. In International Handbook of Metacognition and Learning Technologies; Azevedo, R., Aleven, V., Eds.; Springer: New York, NY, USA, 2013; Volume 28, pp. 143–156. [Google Scholar] [CrossRef]
  128. Chen, Z.; Lin, F.; Liu, H.; Liu, Y.; Ma, W.-Y.; Wenyin, L. User Intention Modeling in Web Applications Using Data Mining. World Wide Web 2002, 5, 181–191. [Google Scholar] [CrossRef]
  129. Si, M.; Marsella, S.C. Encoding Theory of Mind in Character Design for Pedagogical Interactive Narrative. Adv. Hum. Comput. Interact. 2014, 2014, 1–10. [Google Scholar] [CrossRef] [Green Version]
130. Winfield, A.F.T. Experiments in Artificial Theory of Mind: From Safety to Story-Telling. Front. Robot. AI 2018, 5, 75. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  131. Turkle, S.; Breazeal, C.; Dasté, O.; Scassellati, B. First Encounters with Kismet and Cog: Children’s Relationship with Humanoid Robots. In Digital Media: Transfer in Human Communication; Messaris, P., Humphreys, L., Eds.; Peter Lang Publishing: Pieterlen and Bern, Switzerland, 2006. [Google Scholar]
  132. Scassellati, B. Theory of Mind for a Humanoid Robot. Auton. Robot. 2002, 12, 13–24. [Google Scholar] [CrossRef]
133. Cuzzolin, F.; Morelli, A.; Cîrstea, B.; Sahakian, B.J. Knowing Me, Knowing You: Theory of Mind in AI. Psychol. Med. 2020, 50, 1057–1061. [Google Scholar] [CrossRef] [PubMed]
  134. De Bondt, W.F.M.; Thaler, R.H. Chapter 13. Financial Decision-Making in Markets and Firms: A Behavioral Perspective. In Handbooks in Operations Research and Management Science; Elsevier: Amsterdam, The Netherlands, 1995; Volume 9, pp. 385–410. [Google Scholar] [CrossRef]
135. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef] [Green Version]
  136. Huang, S.Y.B.; Lee, C.-J. Predicting Continuance Intention to Fintech Chatbot. Comput. Hum. Behav. 2022, 129, 107027. [Google Scholar] [CrossRef]
  137. Burleson, B.R.; Rack, J.J. Emotion. In The International Encyclopedia of Communication; Donsbach, W., Ed.; Blackwell Publishing: Oxford, UK, 2008. [Google Scholar] [CrossRef]
138. Plass, J.L.; Kaplan, U. Chapter 7. Emotional Design in Digital Media for Learning. In Emotions, Technology, Design, and Learning; Tettegah, S.Y., Gartmeier, M., Eds.; Academic Press: Cambridge, MA, USA, 2016; pp. 131–161. [Google Scholar] [CrossRef]
  139. Filbeck, G.; Ricciardi, V.; Evensky, H.R.; Fan, S.Z.; Holzhauer, H.M.; Spieler, A. Behavioral Finance: A Panel Discussion. J. Behav. Exp. Fin. 2017, 15, 52–58. [Google Scholar] [CrossRef]
  140. D’Mello, S.; Kappas, A.; Gratch, J. The Affective Computing Approach to Affect Measurement. Emot. Rev. 2018, 10, 174–183. [Google Scholar] [CrossRef] [Green Version]
  141. Ekman, P.; Rosenberg, E.L. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS); Oxford University Press: Oxford, UK, 1997. [Google Scholar]
  142. Kapur, A.; Kapur, A.; Virji-Babul, N.; Tzanetakis, G.; Driessen, P.F. Gesture-Based Affective Computing on Motion Capture Data. In Affective Computing and Intelligent Interaction; Lect. Notes Comput. Sci. ACII 2005; Tao, J., Tan, T., Picard, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3784. [Google Scholar] [CrossRef] [Green Version]
  143. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef] [Green Version]
144. Hamdi, H.; Richard, P.; Suteau, A.; Allain, P. Emotion Assessment for Affective Computing Based on Physiological Responses. In Proceedings of the IEEE International Conference on Fuzzy Systems, Brisbane, QLD, Australia, 10–15 June 2012; Volume 2012, pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  145. Kutt, K.; Nalepa, G.J.; Giżycka, B.; Jemiolo, P.; Adamczyk, M. BandReader—A Mobile Application for Data Acquisition from Wearable Devices in Affective Computing Experiments. In Proceedings of the 11th International Conference on Human System Interaction (HSI), Gdansk, Poland, 4–6 July 2018; Volume 2018, pp. 42–48. [Google Scholar] [CrossRef]
  146. Marín-Morales, J.; Higuera-Trujillo, J.L.; Greco, A.; Guixeres, J.; Llinares, C.; Scilingo, E.P.; Alcañiz, M.; Valenza, G. Affective Computing in Virtual Reality: Emotion Recognition from Brain and Heartbeat Dynamics Using Wearable Sensors. Sci. Rep. 2018, 8, 13657. [Google Scholar] [CrossRef]
  147. Bota, P.J.; Wang, C.; Fred, A.L.N.; Placido Da Silva, H.P. A Review, Current Challenges, and Future Possibilities on Emotion Recognition Using Machine Learning and Physiological Signals. IEEE Access 2019, 7, 140990–141020. [Google Scholar] [CrossRef]
  148. Zimmermann, P.; Guttormsen, S.; Danuser, B.; Gomez, P. Affective Computing—A Rationale for Measuring Mood with Mouse and Keyboard. Int. J. Occup. Saf. Ergon. 2003, 9, 539–551. [Google Scholar] [CrossRef] [PubMed]
  149. Soltani, M.; Zarzour, H.; Babahenini, M.C. Facial Emotion Detection in Massive Open Online Courses. In Trends and Advances in Information Systems and Technologies; Rocha, Á., Adeli, H., Reis, L.P., Costanzo, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 277–286. [Google Scholar] [CrossRef]
150. Skillicorn, D.; Alsadhan, N.; Billingsley, R.; Williams, M.A. Measuring Human Emotion in Short Documents to Improve Social Robot and Agent Interactions. In Lecture Notes in Computer Science; Advances in Artificial Intelligence; Meurs, M.J., Rudzicz, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11489. [Google Scholar] [CrossRef]
  151. Anjum, M. Emotion Recognition from Speech for an Interactive Robot Agent; IEEE Publications: Piscataway, NJ, USA, 2019; Volume 2019, pp. 363–368. [Google Scholar] [CrossRef]
  152. Morency, L.-P.; Mihalcea, R.; Doshi, P. Towards Multimodal Sentiment Analysis: Harvesting Opinions from the Web. In Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI ’11), Alicante, Spain, 14–18 November 2011; Association for Computing Machinery. pp. 169–176. [Google Scholar] [CrossRef]
  153. Sewell, M. Behavioural Finance. 2007. Available online: http://www.behaviouralfinance.net/behavioural-finance.pdf (accessed on 12 December 2021).
  154. Bhatia, A.; Chandani, A.; Chhateja, J. Robo Advisory and Its Potential in Addressing the Behavioral Biases of Investors—A Qualitative Study in Indian Context. J. Behav. Exp. Fin. 2020, 25, 100281. [Google Scholar] [CrossRef]
  155. Barber, B.M.; Odean, T. Boys Will Be Boys: Gender, Overconfidence, and Common Stock Investment. Q. J. Econ. 2001, 116, 261–292. [Google Scholar] [CrossRef]
  156. Luo, G.Y. Conservatism Bias in the Presence of Strategic Interaction. Quant. Fin. 2013, 13, 989–996. [Google Scholar] [CrossRef]
  157. Nickerson, R.S. Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Rev. Gen. Psychol. 1998, 2, 175–220. [Google Scholar] [CrossRef]
  158. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision Under Risk. Econometrica 1979, 47, 263–291. [Google Scholar] [CrossRef] [Green Version]
  159. Xu, Y.; Gopi, S.; Lee, K.M. Social Responses to Robo-Advisors: Effects of Robo-Advisor Personality and Participant Personality in Volatile Market Conditions. In Proceedings of the ICA Conference, Gold Coast, Australia, 21–25 May 2020. [Google Scholar]
  160. Duffy, B.R. Anthropomorphism and the Social Robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
  161. Epley, N.; Waytz, A.; Cacioppo, J.T. On Seeing Human: A Three-Factor Theory of Anthropomorphism. Psychol. Rev. 2007, 114, 864–886. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  162. Qiu, L.; Benbasat, I. Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. J. Manag. Inf. Syst. 2009, 25, 145–182. Available online: http://www.jstor.org/stable/40398956 (accessed on 25 April 2023).
  163. Nguyen, H.S.; Mladenow, A.; Strauss, C.; Auer-Srnka, K. Voice Commerce: Anthropomorphism Using Voice Assistants. In Proceedings of the 23rd International Conference on Information Integration and Web Intelligence (iiWAS2021), Linz, Austria, 29 November–1 December 2021; Association for Computing Machinery. pp. 434–442. [Google Scholar] [CrossRef]
  164. Yuan, L.; Dennis, A.R. Acting Like Humans? Anthropomorphism and Consumer’s Willingness to Pay in Electronic Commerce. J. Manag. Inf. Syst. 2019, 36, 450–477. [Google Scholar] [CrossRef]
  165. Roy, R.; Naidoo, V. Enhancing Chatbot Effectiveness: The Role of Anthropomorphic Conversational Styles and Time Orientation. J. Bus. Res. 2021, 126, 23–34. [Google Scholar] [CrossRef]
  166. Mayer, R.E.; DaPra, C.S. An Embodiment Effect in Computer-Based Learning with Animated Pedagogical Agents. J. Exp. Psychol. Appl. 2012, 18, 239–252. [Google Scholar] [CrossRef]
  167. Krämer, N.C.; Bente, G. Personalizing E-learning: The Social Effects of Pedagogical Agents. Educ. Psychol. Rev. 2010, 22, 71–87. [Google Scholar] [CrossRef]
  168. Chae, S.W.; Lee, K.C.; Seo, Y.W. Exploring the Effect of Avatar Trust on Learners’ Perceived Participation Intentions in an E-learning Environment. Int. J. Hum. Comput. Interact. 2016, 32, 373–393. [Google Scholar] [CrossRef]
  169. Shiban, Y.; Schelhorn, I.; Jobst, V.; Hörnlein, A.; Puppe, F.; Pauli, P.; Mühlberger, A. The Appearance Effect: Influences of Virtual Agent Features on Performance and Motivation. Comput. Hum. Behav. 2015, 49, 5–11. [Google Scholar] [CrossRef]
  170. Ghiglino, D.; Willemse, C.; De Tommaso, D.; Wykowska, A. Mind the Eyes: Artificial Agents’ Eye Movements Modulate Attentional Engagement and Anthropomorphic Attribution. Front. Robot. AI 2021, 8, 642796. [Google Scholar] [CrossRef] [PubMed]
  171. Diederich, S.; Brendel, A.B.; Kolbe, L.M. Designing Anthropomorphic Enterprise Conversational Agents. Bus. Inf. Syst. Eng. 2020, 62, 193–209. [Google Scholar] [CrossRef] [Green Version]
  172. Culley, K.E.; Madhavan, P. A Note of Caution Regarding Anthropomorphism in HCI Agents. Comput. Hum. Behav. 2013, 29, 577–579. [Google Scholar] [CrossRef]
  173. Nowak, K.L.; Biocca, F. The Effect of the Agency and Anthropomorphism on Users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence Teleoperators Virtual Environ. 2003, 12, 481–494. [Google Scholar] [CrossRef]
  174. Mori, M. The Uncanny Valley. IEEE Robot. Autom. Mag. 2012, 19, 98–100, (original work published 1970). [Google Scholar] [CrossRef]
  175. Gimbel, H. Key Strategies to Build Client Relationships for Financial Planners. Forbes. 11 August 2020. Available online: https://www.forbes.com/sites/forbesfinancecouncil/2020/08/11/key-strategies-to-build-client-relationships-for-financial-planners/?sh=3343d0d730dd (accessed on 13 December 2021).
  176. Beale, R.; Creed, C. Affective Interaction: How Emotional Agents Affect Users. Int. J. Hum. Comput. Stud. 2009, 67, 755–776. [Google Scholar] [CrossRef]
  177. Barrett, L.F. Discrete Emotions or Dimensions? The Role of Valence Focus and Arousal Focus. Cogn. Emot. 1998, 12, 579–599. [Google Scholar] [CrossRef]
  178. Salazar Kämpf, M.; Liebermann, H.; Kerschreiter, R.; Krause, S.; Nestler, S.; Schmukle, S.C. Disentangling the Sources of Mimicry: Social Relations Analyses of the Link Between Mimicry and Liking. Psychol. Sci. 2018, 29, 131–138. [Google Scholar] [CrossRef]
179. Numata, T.; Asa, Y.; Kitagaki, T.; Hashimoto, T.; Karasawa, K. Young and Elderly Users' Emotion Recognition of Dynamically Formed Expressions Made by Non-Human Virtual Agent. In Proceedings of the 7th International Conference on Human-Agent Interaction (HAI '19), Kyoto, Japan, 6–10 October 2019; Association for Computing Machinery. pp. 253–255. [Google Scholar] [CrossRef]
  180. Brave, S.; Nass, C.; Hutchinson, K. Computers That Care: Investigating the Effects of Orientation of Emotion Exhibited by an Embodied Computer Agent. Int. J. Hum. Comput. Stud. 2005, 62, 161–178. [Google Scholar] [CrossRef]
  181. Greco, C.; Buono, C.; Buch-Cardona, P.; Cordasco, G.; Escalera, S.; Esposito, A.; Fernandez, A.; Kyslitska, D.; Kornes, M.S.; Palmero, C.; et al. Emotional Features of Interactions with Empathic Agents. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 2021, Montreal, BC, Canada, 11–17 October 2021. [Google Scholar] [CrossRef]
  182. Olson, P. Banks Are Promoting ‘Female’ Chatbots to Help Customers, Raising Concerns of Stereotyping. Forbes. Available online: https://www.forbes.com/sites/parmyolson/2019/02/27/banks-are-promoting-female-chatbots-to-help-customers-raising-concerns-of-stereotyping (accessed on 14 December 2021).
  183. Lawson, A.P.; Mayer, R.E. The Power of Voice to Convey Emotion in Multimedia Instructional Messages. Int. J. Artif. Intell. Educ. 2022, 32, 971–990. [Google Scholar] [CrossRef]
  184. Horovitz, T.; Mayer, R.E. Learning with Human and Virtual Instructors Who Display Happy or Bored Emotions in Video Lectures. Comput. Hum. Behav. 2021, 119, 106724. [Google Scholar] [CrossRef]
  185. Anasingaraju, S.; Adamo-Villani, N.; Dib, H.N. The Contribution of Different Body Channels to the Expression of Emotion in Animated Pedagogical Agents. Int. J. Technol. Hum. Interact. 2020, 16, 70–88. [Google Scholar] [CrossRef]
  186. Ball, G.; Breese, J. Emotion and Personality in a Conversational Agent. In Embodied Conversational Agents; Cassell, J., Sullivan, J., Churchill, E., Prevost, S., Eds.; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  187. Lee, K.M. Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence Teleoperators Virtual Environ. 2004, 13, 494–505. [Google Scholar] [CrossRef]
  188. Hall, A.E. Big Five Personality Traits and Media Use. In The International Encyclopedia of Media Psychology; Bulck, J., Ed.; John Wiley: Hoboken, NJ, USA, 2020. [Google Scholar] [CrossRef]
  189. Carson, R.C. Interaction Concepts of Personality; Aldine Publishing: Chicago, IL, USA, 1969. [Google Scholar]
  190. Ahmad, M.; Shah, S.Z.A. Overconfidence Heuristic-Driven Bias in Investment Decision-Making and Performance: Mediating Effects of Risk Perception and Moderating Effects of Financial Literacy. J. Econ. Admin. Sci. 2022, 38, 60–90. [Google Scholar] [CrossRef]
  191. UneeQ Case Study: InstaMortgage. Available online: https://www.digitalhumans.com/case-studies/instamortgage (accessed on 25 April 2023).
  192. Sarsam, S.M.; Al-Samarraie, H. A First Look at the Effectiveness of Personality Dimensions in Promoting Users’ Satisfaction with the System. SAGE. Open 2018, 8, 2158244018769125. [Google Scholar] [CrossRef] [Green Version]
  193. Carolus, A.; Schmidt, C.; Schneider, F.; Mayr, J.; Muench, R. Are People Polite to Smartphones? In Human-Computer Interaction: Interaction in Context; Lect. Notes Comput. Sci. H.C.I.; Kurosu, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2018; Volume 10902. [Google Scholar] [CrossRef]
  194. Gambino, A.; Fox, J.; Ratan, R.A. Building a Stronger CASA: Extending the Computers Are Social Actors Paradigm. Hum.-Mach. Commun. 2020, 1, 71–86. [Google Scholar] [CrossRef] [Green Version]
  195. Lee, K.M. Presence, Explicated. Commun. Theor. 2004, 14, 27–50. [Google Scholar] [CrossRef]
196. Oh, C.S.; Bailenson, J.N.; Welch, G.F. A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Front. Robot. AI 2018, 5, 114. [Google Scholar] [CrossRef] [Green Version]
  197. Park, E.K.; Sundar, S.S. Can Synchronicity and Visual Modality Enhance Social Presence in Mobile Messaging? Comput. Hum. Behav. 2015, 45, 121–128. [Google Scholar] [CrossRef]
  198. Tan, S.-M.; Liew, T.W. Designing Embodied Virtual Agents as Product Specialists in a Multi-product Category E-commerce: The Roles of Source Credibility and Social Presence. Int. J. Hum. Comput. Interact. 2020, 36, 1136–1149. [Google Scholar] [CrossRef]
  199. Skalski, P.; Tamborini, R. The Role of Social Presence in Interactive Agent-Based Persuasion. Media Psychol. 2007, 10, 385–413. [Google Scholar] [CrossRef]
  200. Heerink, M.; Krose, B.; Evers, V.; Wielinga, B. The Influence of Social Presence on Enjoyment and Intention to Use of a Robot and Screen Agent by Elderly Users. In Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 695–700. [Google Scholar] [CrossRef] [Green Version]
  201. Hassanein, K.; Head, M.; Ju, C. A Cross-Cultural Comparison of the Impact of Social Presence on Website Trust, Usefulness and Enjoyment. Int. J. Electron. Bus. 2009, 7, 625–641. [Google Scholar] [CrossRef]
202. Lee, K.M.; Park, N. Presence. In The International Encyclopedia of Communication Theory and Philosophy; Jensen, K.B., Craig, R.T., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar] [CrossRef]
  203. Ng, M.; Coopamootoo, K.P.L.; Toreini, E.; Aitken, M.; Elliot, K.; van Moorsel, A. Simulating the Effects of Social Presence on Trust, Privacy Concerns & Usage Intentions in Automated Bots for Finance. In Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Genoa, Italy, 7–11 September 2020; Volume 2020, pp. 190–199. [Google Scholar] [CrossRef]
  204. Bin Sawad, A.; Narayan, B.; Alnefaie, A.; Maqbool, A.; Mckie, I.; Smith, J.; Yuksel, B.; Puthal, D.; Prasad, M.; Kocaballi, A.B. A Systematic Review on Healthcare Artificial Intelligent Conversational Agents for Chronic Conditions. Sensors 2022, 22, 2625. [Google Scholar] [CrossRef] [PubMed]
  205. Beilfuss, L. The Future Robo Adviser: Smart and Ethical? Wall St. J. 19 June 2018. Available online: https://www.wsj.com/articles/the-future-robo-adviser-smart-and-ethical-1529460240 (accessed on 15 December 2021).
  206. Scholz, P.; Grossmann, D.; Goldberg, J. Robo Economicus? The Impact of Behavioral Biases on Robo-Advisory. In Robo-Advisory; Scholz, P., Ed.; Palgrave Macmillan: London, UK, 2021; pp. 53–69. [Google Scholar] [CrossRef]
  207. King, M.; Ortenblad, M.; Ladge, J.J. What Will It Take to Make Finance More Gender-Balanced? Harv. Bus. Rev. 10 December 2018. Available online: https://hbr.org/2018/12/what-will-it-take-to-make-finance-more-gender-balanced (accessed on 14 December 2021).
208. Anderson, M.; Anderson, S.L. Machine Ethics: Creating an Ethical Intelligent Agent. AI Mag. 2007, 28, 15. [Google Scholar] [CrossRef]
  209. Brendel, A.B.; Mirbabaie, M.; Lembcke, T.-B.; Hofeditz, L. Ethical Management of Artificial Intelligence. Sustainability 2021, 13, 1974. [Google Scholar] [CrossRef]
  210. Clarke, D. Robo-Advisors—Market Impact and Fiduciary Duty of Care to Retail Investors. SSRN Electron. J. 2020. [Google Scholar] [CrossRef]
Figure 2. InstaMortgage’s virtual assistant, Rachel. Source: https://www.digitalhumans.com/case-studies/instamortgage (accessed on 25 April 2023).
Figure 3. Embodied fintech agent, Xiaoi. Source: http://www.chinatoday.com.cn/english/report/2016-10/10/content_728688.htm (accessed on 25 April 2023).
Figure 4. Cleo’s personality-driven interaction design. Source: https://moneytothemasses.com/banking/cleo-review-the-ai-chatbot-that-manages-your-money-for-you (accessed on 20 April 2023).
Figure 5. Jamie, ANZ Bank’s digital assistant. Source: https://www.finews.asia/finance/27781-anz-ai-artificial-intelligence-assistant-digital-experience (accessed on 25 April 2023).
Figure 6. Joy, a caricatured fintech agent for corporate banking. Source: https://www.dbs.com.hk/sme/business-banking/frequently-asked-questions.page (accessed on 30 April 2023).
Figure 7. Fintech agent Inga, designed to respond with empathy when a client loses a card. Source: https://medium.com/design-ing/how-we-designed-inga-a-delightful-banking-chatbot-for-ing-941d18c4646f (accessed on 25 April 2023).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Pal, A.; Gopi, S.; Lee, K.M. Fintech Agents: Technologies and Theories. Electronics 2023, 12, 3301. https://doi.org/10.3390/electronics12153301