Big Data Cogn. Comput., Volume 4, Issue 2 (June 2020) – 12 articles

Cover Story: In this work, a novel, hybrid recommender system for cultural places is proposed, combining user preference with cultural tourist typologies. Starting with the McKercher typology as a user classification research base, which extracts five categories of heritage tourists from two variables (cultural centrality and depth of user experience), and using a questionnaire, an enriched cultural tourist typology is developed, in which three additional variables governing cultural visitor types are also proposed (frequency of visits, visiting knowledge and duration of the visit). The extracted categories per user are fused into a robust collaborative filtering, matrix factorization-based recommendation algorithm as extra user features. The obtained results on reference data collected from eight cities exhibit an improvement in system performance, thereby indicating the robustness of the presented approach.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
14 pages, 1323 KiB  
Article
Leveraging the Organisational Legacy: Understanding How Businesses Integrate Legacy Data into Their Big Data Plans
by Sanjay Jha, Meena Jha, Liam O’Brien, Michael Cowling and Marilyn Wells
Big Data Cogn. Comput. 2020, 4(2), 15; https://doi.org/10.3390/bdcc4020015 - 23 Jun 2020
Cited by 1 | Viewed by 5406
Abstract
Big Data can help users attain a competitive advantage, and evidence suggests that by utilising Big Data, organisations can generate insights that help strengthen their decision-making capabilities. However, a key issue remains that much data is trapped in legacy systems and is hence not being appropriately retrieved and utilised. This paper builds on the existing literature to investigate the challenges and issues organisations face in utilising Big Data. Through the results of a survey of 97 respondents, this work shows that these issues can be categorised into six areas, including the format and structure of the data, and identifies a key need for a framework and architecture for organising Big Data. Full article

23 pages, 3397 KiB  
Article
#lockdown: Network-Enhanced Emotional Profiling in the Time of COVID-19
by Massimo Stella, Valerio Restocchi and Simon De Deyne
Big Data Cogn. Comput. 2020, 4(2), 14; https://doi.org/10.3390/bdcc4020014 - 16 Jun 2020
Cited by 46 | Viewed by 9747
Abstract
The COVID-19 pandemic forced countries all over the world to take unprecedented measures, like nationwide lockdowns. To adequately understand the emotional and social repercussions, a large-scale reconstruction of how people perceived these unexpected events is necessary but currently missing. We address this gap through social media by introducing MERCURIAL (Multi-layer Co-occurrence Networks for Emotional Profiling), a framework which exploits linguistic networks of words and hashtags to reconstruct the social discourse describing real-world events. We use MERCURIAL to analyse 101,767 tweets from Italy, the first country to react to the COVID-19 threat with a nationwide lockdown. The data were collected between the 11th and 17th March, immediately after the announcement of the Italian lockdown and the WHO declaring COVID-19 a pandemic. Our analysis provides unique insights into the psychological burden of this crisis, focussing on: (i) the Italian official campaign for self-quarantine (#iorestoacasa), (ii) the national lockdown (#italylockdown), and (iii) social denunciation (#sciacalli). Our exploration unveils the emergence of complex emotional profiles, where anger and fear (towards political debates and socio-economic repercussions) coexisted with trust, solidarity, and hope (related to the institutions and local communities). We discuss our findings in relation to mental well-being issues and coping mechanisms, like instigation to violence, grieving, and solidarity. We argue that our framework represents an innovative thermometer of emotional status, a powerful tool for policy makers to quickly gauge feelings in massive audiences and devise appropriate responses based on cognitive data. Full article
(This article belongs to the Special Issue Knowledge Modelling and Learning through Cognitive Networks)
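To make the idea of network-enhanced emotional profiling concrete, the hedged sketch below builds a small hashtag co-occurrence network from a handful of invented tweets and tallies a crude emotion profile from a tiny placeholder lexicon. It is illustrative only: the tweets, the lexicon, and the use of networkx are assumptions, not the MERCURIAL implementation.

```python
# Illustrative sketch only: builds a hashtag co-occurrence network from tweets
# and tallies a crude emotion profile from a small placeholder lexicon.
import itertools
import re
from collections import Counter

import networkx as nx

tweets = [
    "Restiamo uniti! #iorestoacasa #italylockdown",
    "Prezzi assurdi nei supermercati #sciacalli #italylockdown",
    "Grazie ai medici negli ospedali #iorestoacasa",
]

# Placeholder word -> emotion lexicon; a real study would use a validated
# resource such as the NRC emotion lexicon.
lexicon = {"uniti": "trust", "grazie": "trust", "assurdi": "anger"}

graph = nx.Graph()
emotions = Counter()

for tweet in tweets:
    tokens = re.findall(r"\w+", tweet.lower())
    hashtags = re.findall(r"#(\w+)", tweet.lower())
    # Link hashtags that co-occur within the same tweet.
    for a, b in itertools.combinations(sorted(set(hashtags)), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)
    # Accumulate emotions evoked by words co-occurring with the hashtags.
    emotions.update(lexicon[t] for t in tokens if t in lexicon)

print(graph.edges(data=True))
print(emotions.most_common())
```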

12 pages, 227 KiB  
Case Report
The “Social” Side of Big Data: Teaching BD Analytics to Political Science Students
by Giampiero Giacomello and Oltion Preka
Big Data Cogn. Comput. 2020, 4(2), 13; https://doi.org/10.3390/bdcc4020013 - 5 Jun 2020
Cited by 6 | Viewed by 5062
Abstract
In an increasingly technology-dependent world, it is not surprising that STEM (Science, Technology, Engineering, and Mathematics) graduates are in high demand. This state of affairs, however, has led the public to overlook the fact that not only are computing and artificial intelligence naturally interdisciplinary, but a huge portion of generated data comes from human–computer interactions and is thus social in character and nature. Hence, social science practitioners should be in demand too, but this does not seem to be the case. One of the reasons for this situation is that political and social science departments worldwide tend to remain in their “comfort zone” and view their disciplines quite traditionally, but by doing so they cut themselves off from many of the positions available today. The authors believed that these conditions should and could be changed and thus, over a few years, created a specifically tailored course for students of Political Science. This paper examines the experience of the last year of such a program, which, after several tweaks and adjustments, is now fully operational. The results and the students' appreciation are quite remarkable. Hence, the authors considered the experience worth sharing, so that colleagues in social and political science departments may feel encouraged to follow and replicate such an example. Full article
19 pages, 450 KiB  
Article
A Personalized Heritage-Oriented Recommender System Based on Extended Cultural Tourist Typologies
by Markos Konstantakis, Georgios Alexandridis and George Caridakis
Big Data Cogn. Comput. 2020, 4(2), 12; https://doi.org/10.3390/bdcc4020012 - 4 Jun 2020
Cited by 18 | Viewed by 7539
Abstract
Recent developments in digital technologies in the cultural heritage domain have driven technological trends in comfortable and convenient traveling by offering interactive and personalized user experiences. The emergence of big data analytics, recommendation systems and personalization techniques has created a smart research field, augmenting the cultural heritage visitor's experience. In this work, a novel, hybrid recommender system for cultural places is proposed that combines user preference with cultural tourist typologies. Starting with the McKercher typology as a user classification research base, which extracts five categories of heritage tourists from two variables (cultural centrality and depth of user experience), and using a questionnaire, an enriched cultural tourist typology is developed, in which three additional variables governing cultural visitor types are also proposed (frequency of visits, visiting knowledge and duration of the visit). The extracted categories per user are fused into a robust collaborative filtering, matrix factorization-based recommendation algorithm as extra user features. The obtained results on reference data collected from eight cities exhibit an improvement in system performance, thereby indicating the robustness of the presented approach. Full article
(This article belongs to the Special Issue Big Data Analytics for Cultural Heritage)
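The fusion of tourist typologies into a matrix factorization recommender can be pictured with a short sketch: each user's latent vector is augmented with the embedding of a categorical user feature (here, an assumed tourist-type label) and the model is trained with plain SGD on toy ratings. This is a minimal illustration of the general technique, not the authors' algorithm or data.

```python
# Minimal sketch of matrix factorization where each user's latent vector is
# augmented with the embedding of a categorical user feature (an assumed
# cultural-tourist type). Toy data, plain SGD.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_types, k = 50, 40, 5, 8

# Toy data: (user, item, rating) triples and one tourist type per user.
ratings = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
           for _ in range(500)]
user_type = rng.integers(n_types, size=n_users)

P = rng.normal(0, 0.1, (n_users, k))    # user factors
Q = rng.normal(0, 0.1, (n_items, k))    # item factors
T = rng.normal(0, 0.1, (n_types, k))    # tourist-type factors (extra feature)

lr, reg = 0.01, 0.05
for _ in range(30):                      # SGD epochs
    for u, i, r in ratings:
        user_vec = P[u] + T[user_type[u]]        # fuse type into user profile
        err = r - user_vec @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        T[user_type[u]] += lr * (err * Q[i] - reg * T[user_type[u]])
        Q[i] += lr * (err * user_vec - reg * Q[i])

def predict(u, i):
    return (P[u] + T[user_type[u]]) @ Q[i]

print(round(predict(0, 0), 2))
```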

15 pages, 758 KiB  
Article
Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks
by Shayan Taheri, Aminollah Khormali, Milad Salem and Jiann-Shiun Yuan
Big Data Cogn. Comput. 2020, 4(2), 11; https://doi.org/10.3390/bdcc4020011 - 22 May 2020
Cited by 8 | Viewed by 6201
Abstract
In this work, we propose a novel defense system against adversarial examples, leveraging the unique power of Generative Adversarial Networks (GANs) to generate new adversarial examples for model retraining. To do so, we develop an automated pipeline using a combination of a pre-trained convolutional neural network and an external GAN, namely the Pix2Pix conditional GAN, to determine the transformations between adversarial examples and clean data, and to automatically synthesize new adversarial examples. These adversarial examples are employed to strengthen the model, attack, and defense in an iterative pipeline. Our simulation results demonstrate the success of the proposed method. Full article
(This article belongs to the Special Issue Big Data and Cognitive Computing: Feature Papers 2020)
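A hedged sketch of the iterative attack/augment/retrain loop described above is given below in PyTorch. For brevity, the fast gradient sign method (FGSM) stands in for the paper's Pix2Pix-based generator, and the model and data are placeholders rather than the authors' setup.

```python
# Sketch of the "attack -> augment -> retrain" loop. FGSM stands in for the
# paper's Pix2Pix-based generator; model and data are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(64, 1, 28, 28)            # placeholder images
y = torch.randint(0, 10, (64,))          # placeholder labels

def fgsm(x, y, eps=0.1):
    """Generate adversarial examples with the fast gradient sign method."""
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach().clamp(0, 1)

for round_ in range(3):                  # iterative attack/defense rounds
    x_adv = fgsm(x, y)                   # 1. attack the current model
    x_aug = torch.cat([x, x_adv])        # 2. augment training data
    y_aug = torch.cat([y, y])
    for _ in range(5):                   # 3. retrain on clean + adversarial data
        opt.zero_grad()
        loss = loss_fn(model(x_aug), y_aug)
        loss.backward()
        opt.step()
    print(f"round {round_}: loss {loss.item():.3f}")
```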

17 pages, 2294 KiB  
Concept Paper
Seven Properties of Self-Organization in the Human Brain
by Birgitta Dresp-Langley
Big Data Cogn. Comput. 2020, 4(2), 10; https://doi.org/10.3390/bdcc4020010 - 10 May 2020
Cited by 20 | Viewed by 9232
Abstract
The principle of self-organization has acquired a fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains in science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purpose may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: (1) modular connectivity, (2) unsupervised learning, (3) adaptive ability, (4) functional resiliency, (5) functional plasticity, (6) from-local-to-global functional organization, and (7) dynamic system growth. These are defined here in light of insights from neurobiology, cognitive neuroscience and Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward. Full article
(This article belongs to the Special Issue Knowledge Modelling and Learning through Cognitive Networks)
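As one classic computational illustration of unsupervised, local-to-global self-organization, the toy Kohonen self-organizing map below orders an 8x8 grid of units from unlabeled 2-D inputs. It is illustrative only and not taken from the paper, which discusses brain systems and Adaptive Resonance Theory rather than this particular model.

```python
# Toy self-organizing map (Kohonen SOM): local updates to a winning unit and
# its neighbours gradually produce a globally ordered map without supervision.
import numpy as np

rng = np.random.default_rng(1)
grid = 8                                  # 8x8 map of units
weights = rng.random((grid, grid, 2))     # each unit holds a 2-D weight vector
data = rng.random((1000, 2))              # unlabeled 2-D inputs

ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))        # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5
    # Best-matching unit: the unit whose weights are closest to the input.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    # Neighbourhood function: nearby units are pulled toward the input too.
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

print(weights[0, 0], weights[grid - 1, grid - 1])
```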

16 pages, 1425 KiB  
Article
A Dynamic Intelligent Policies Analysis Mechanism for Personal Data Processing in the IoT Ecosystem
by Konstantinos Demertzis, Konstantinos Rantos and George Drosatos
Big Data Cogn. Comput. 2020, 4(2), 9; https://doi.org/10.3390/bdcc4020009 - 27 Apr 2020
Cited by 8 | Viewed by 5624
Abstract
The evolution of the Internet of Things is significantly affected by legal restrictions imposed on personal data handling, such as the European General Data Protection Regulation (GDPR). The main purpose of this regulation is to give people in the digital age greater control over their personal data, requiring their freely given, specific, informed and unambiguous consent to collect and process the data concerning them. ADVOCATE is an advanced framework that fully complies with the requirements of the GDPR and, through the extensive use of blockchain and artificial intelligence technologies, aims to provide an environment that supports users in maintaining control of their personal data in the IoT ecosystem. This paper proposes and presents the Intelligent Policies Analysis Mechanism (IPAM) of the ADVOCATE framework, which, in an intelligent and fully automated manner, can identify conflicting rules or consents of the user that may lead to the collection of personal data that can be used for profiling. To specify and implement IPAM, the problem of recording user data from smart entertainment devices was simulated using Fuzzy Cognitive Maps (FCMs). FCMs are an intelligent decision-making technique that simulates the processes of a complex system by modelling its causal relationships, drawing on expert knowledge of the system's behaviour and equilibrium. Correspondingly, for identifying conflicting rules that can lead to a profile, training is performed using Extreme Learning Machines (ELMs), which are highly efficient neural networks with small and flexible architectures that can operate optimally in complex environments. Full article
(This article belongs to the Special Issue Advanced Data Mining Techniques for IoT and Big Data)
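A minimal sketch of a fuzzy cognitive map is shown below: concepts are nodes whose activations are iterated through a weighted causal matrix until the state settles. The concept names and weights are invented for illustration and are not the paper's model of smart entertainment devices.

```python
# Minimal fuzzy cognitive map (FCM): activations are repeatedly pushed through
# a weighted causal matrix and squashed until they reach a (near) fixed point.
import numpy as np

concepts = ["viewing_history_shared", "profiling_risk", "consent_given",
            "data_minimisation"]

# W[i, j]: assumed causal influence of concept i on concept j (-1..1).
W = np.array([
    [0.0,  0.8,  0.0, -0.3],
    [0.0,  0.0,  0.0,  0.0],
    [0.6,  0.0,  0.0,  0.4],
    [0.0, -0.7,  0.0,  0.0],
])

def sigmoid(x, lam=2.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

state = np.array([0.2, 0.0, 1.0, 0.1])   # initial concept activations
for _ in range(20):                       # iterate until the state settles
    state = sigmoid(state + state @ W)

for name, value in zip(concepts, state):
    print(f"{name}: {value:.2f}")
```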

21 pages, 2678 KiB  
Article
Artificial Intelligence-Enhanced Predictive Insights for Advancing Financial Inclusion: A Human-Centric AI-Thinking Approach
by Meng-Leong How, Sin-Mei Cheah, Aik Cheow Khor and Yong Jiet Chan
Big Data Cogn. Comput. 2020, 4(2), 8; https://doi.org/10.3390/bdcc4020008 - 27 Apr 2020
Cited by 31 | Viewed by 8691
Abstract
According to the World Bank, a key factor to poverty reduction and improving prosperity is financial inclusion. Financial service providers (FSPs) offering financially-inclusive solutions need to understand how to approach the underserved successfully. The application of artificial intelligence (AI) on legacy data can help FSPs to anticipate how prospective customers may respond when they are approached. However, it remains challenging for FSPs who are not well-versed in computer programming to implement AI projects. This paper proffers a no-coding human-centric AI-based approach to simulate the possible dynamics between the financial profiles of prospective customers collected from 45,211 contact encounters and predict their intentions toward the financial products being offered. This approach contributes to the literature by illustrating how AI for social good can also be accessible for people who are not well-versed in computer science. A rudimentary AI-based predictive modeling approach that does not require programming skills will be illustrated in this paper. In these AI-generated multi-criteria optimizations, analysts in FSPs can simulate scenarios to better understand their prospective customers. In conjunction with the usage of AI, this paper also suggests how AI-Thinking could be utilized as a cognitive scaffold for educing (drawing out) actionable insights to advance financial inclusion. Full article
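For readers who do prefer code, the sketch below shows the same kind of prediction in a few lines of scikit-learn: a simple classifier estimating whether a contacted customer takes up the offered product. The file name and column names are assumptions for illustration, not the paper's no-coding workflow or dataset schema.

```python
# Sketch of predicting customer intent from contact-encounter records.
# "contact_encounters.csv" and the column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("contact_encounters.csv")            # hypothetical path
X = pd.get_dummies(df.drop(columns=["subscribed"]))   # one-hot categoricals
y = (df["subscribed"] == "yes").astype(int)           # assumed target column

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```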

28 pages, 10611 KiB  
Article
Hydria: An Online Data Lake for Multi-Faceted Analytics in the Cultural Heritage Domain
by Kimon Deligiannis, Paraskevi Raftopoulou, Christos Tryfonopoulos, Nikos Platis and Costas Vassilakis
Big Data Cogn. Comput. 2020, 4(2), 7; https://doi.org/10.3390/bdcc4020007 - 23 Apr 2020
Cited by 15 | Viewed by 6457
Abstract
Advancements in cultural informatics have significantly influenced the way we perceive, analyze, communicate and understand culture. New data sources, such as social media, digitized cultural content, and Internet of Things (IoT) devices, have allowed us to enrich and customize the cultural experience, but at the same time have created an avalanche of new data that needs to be stored and appropriately managed in order to be of value. Although data management plays a central role in driving forward the cultural heritage domain, the solutions applied so far are fragmented, physically distributed, require specialized IT knowledge to deploy, and entail significant IT experience to operate even for trivial tasks. In this work, we present Hydria, an online data lake that allows users without any IT background to harvest, store, organize, analyze and share heterogeneous, multi-faceted cultural heritage data. Hydria provides a zero-administration, zero-cost, integrated framework that enables researchers, museum curators and other stakeholders within the cultural heritage domain to easily (i) deploy data acquisition services (like social media scrapers, focused web crawlers, dataset imports, questionnaire forms), (ii) design and manage versatile customizable data stores, (iii) share whole datasets or horizontal/vertical data shards with other stakeholders, (iv) search, filter and analyze data via an expressive yet simple-to-use graphical query engine and visualization tools, and (v) perform user management and access control operations on the stored data. To the best of our knowledge, this is the first solution in the literature that focuses on collecting, managing, analyzing, and sharing diverse, multi-faceted data in the cultural heritage domain and targets users without an IT background. Full article
(This article belongs to the Special Issue Big Data Analytics for Cultural Heritage)
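The horizontal/vertical data shards mentioned in point (iii) can be illustrated with a tiny sketch: a horizontal shard keeps a subset of records, while a vertical shard keeps a subset of fields, so a dataset can be shared selectively. This is not Hydria's API; the records and helper functions are invented.

```python
# Tiny illustration of horizontal vs. vertical data shards (invented data).
records = [
    {"id": 1, "artifact": "amphora", "city": "Athens", "donor": "anonymous"},
    {"id": 2, "artifact": "fresco",  "city": "Rome",   "donor": "J. Doe"},
    {"id": 3, "artifact": "mosaic",  "city": "Rome",   "donor": "anonymous"},
]

def horizontal_shard(rows, predicate):
    """Row-wise subset, e.g. only records from one city."""
    return [r for r in rows if predicate(r)]

def vertical_shard(rows, fields):
    """Column-wise subset, e.g. hide donor details before sharing."""
    return [{k: r[k] for k in fields} for r in rows]

rome_only = horizontal_shard(records, lambda r: r["city"] == "Rome")
public_view = vertical_shard(records, ["id", "artifact", "city"])
print(rome_only)
print(public_view)
```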

22 pages, 4394 KiB  
Article
A Semantic Mixed Reality Framework for Shared Cultural Experiences Ecosystems
by Costas Vassilakis, Konstantinos Kotis, Dimitris Spiliotopoulos, Dionisis Margaris, Vlasios Kasapakis, Christos-Nikolaos Anagnostopoulos, Georgios Santipantakis, George A. Vouros, Theodore Kotsilieris, Volha Petukhova, Andrei Malchanau, Ioanna Lykourentzou, Kaj Michael Helin, Artem Revenko, Nenad Gligoric and Boris Pokric
Big Data Cogn. Comput. 2020, 4(2), 6; https://doi.org/10.3390/bdcc4020006 - 20 Apr 2020
Cited by 6 | Viewed by 6451
Abstract
This paper presents SemMR, a semantic framework for modelling interactions between human and non-human entities and managing reusable and optimized cultural experiences, towards a shared cultural experience ecosystem that might seamlessly accommodate mixed reality experiences. The SemMR framework synthesizes and integrates interaction data into semantically rich reusable structures and facilitates the interaction between different types of entities in a symbiotic way, within a large, virtual, and fully experiential open world, promoting experience sharing at the user level, as well as data/application interoperability and low-effort implementation at the software engineering level. The proposed semantic framework introduces methods for low-effort implementation and the deployment of open and reusable cultural content, applications, and tools, around the concept of cultural experience as a semantic trajectory or simply, experience as a trajectory (eX-trajectory). The methods facilitate the collection and analysis of data regarding the behaviour of users and their interaction with other users and the environment, towards optimizing eX-trajectories via reconfiguration. The SemMR framework supports the synthesis, enhancement, and recommendation of highly complex reconfigurable eX-trajectories, while using semantically integrated disparate and heterogeneous related data. Overall, this work aims to semantically manage interactions and experiences through the eX-trajectory concept, towards delivering enriched cultural experiences. Full article
(This article belongs to the Special Issue Big Data Analytics for Cultural Heritage)
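One way to picture "experience as a trajectory" (eX-trajectory) is as a time-ordered sequence of semantically annotated points, as in the hypothetical sketch below. The classes, fields, and tags are illustrative assumptions, not SemMR's actual data model.

```python
# Hypothetical sketch of a cultural experience as a semantic trajectory:
# a time-ordered list of annotated points. Not SemMR's data model.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ExperiencePoint:
    timestamp: datetime
    place: str                       # e.g. a gallery or exhibit identifier
    activity: str                    # e.g. "viewing", "listening", "interacting"
    tags: List[str] = field(default_factory=list)   # semantic annotations

@dataclass
class ExTrajectory:
    visitor: str
    points: List[ExperiencePoint] = field(default_factory=list)

    def places(self):
        """The ordered places visited; a simple basis for recommendation."""
        return [p.place for p in self.points]

traj = ExTrajectory("visitor-42", [
    ExperiencePoint(datetime(2020, 4, 20, 10, 0), "entrance_hall", "viewing",
                    ["Bronze Age", "introductory"]),
    ExperiencePoint(datetime(2020, 4, 20, 10, 30), "room_3", "interacting",
                    ["mixed reality", "pottery"]),
])
print(traj.places())
```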

22 pages, 3322 KiB  
Article
Big Data Analytics for Search Engine Optimization
by Ioannis C. Drivas, Damianos P. Sakas, Georgios A. Giannakopoulos and Daphne Kyriaki-Manessi
Big Data Cogn. Comput. 2020, 4(2), 5; https://doi.org/10.3390/bdcc4020005 - 2 Apr 2020
Cited by 26 | Viewed by 15506
Abstract
In the Big Data era, search engine optimization deals with the encapsulation of datasets that are related to website performance in terms of architecture, content curation, and user behavior, with the purpose of converting them into actionable insights and improving visibility and findability on the Web. In this respect, big data analytics expands the opportunities for developing new methodological frameworks that are composed of valid, reliable, and consistent analytics that are practically useful for developing well-informed strategies for organic traffic optimization. In this paper, a novel methodology is implemented in order to increase organic search engine visits based on the impact of multiple SEO factors. To achieve this purpose, the authors examined 171 cultural heritage websites and the retrieved analytics about their performance and the user experience within them. Massive Web-based collections are included and presented by cultural heritage organizations through their websites. Subsequently, users interact with these collections, producing behavioral analytics in a variety of data types that come from multiple devices, with high velocity, in large volumes. Nevertheless, prior research efforts indicate that these massive cultural collections are difficult to browse while exhibiting low visibility and findability in the semantic Web era. Against this backdrop, this paper proposes the computational development of a search engine optimization (SEO) strategy that utilizes the generated big cultural data analytics and improves the visibility of cultural heritage websites. Going one step further, the statistical results of the study are integrated into a predictive model that is composed of two stages. First, a fuzzy cognitive mapping process is generated as an aggregated macro-level descriptive model. Secondly, a micro-level data-driven agent-based model follows up. The purpose of the model is to predict the most effective combinations of factors that achieve enhanced visibility and organic traffic on cultural heritage organizations' websites. To this end, the study contributes to the knowledge expansion of researchers and practitioners in the big cultural analytics sector with the purpose of implementing potential strategies for greater visibility and findability of cultural collections on the Web. Full article
(This article belongs to the Special Issue Big Data Analytics for Cultural Heritage)
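The micro-level agent-based stage can be imagined along the lines of the small sketch below, in which each agent is a website with a few SEO factor scores and organic visits respond through an assumed weighted relationship. The factor names, weights, and response function are invented for illustration and are not the study's fitted model.

```python
# Very small agent-based sketch: website agents tweak SEO factors and organic
# visits respond through an assumed (invented) weighted relationship.
import random

FACTORS = ["load_speed", "content_freshness", "backlinks"]
WEIGHTS = {"load_speed": 0.5, "content_freshness": 0.3, "backlinks": 0.2}

class WebsiteAgent:
    def __init__(self, name):
        self.name = name
        self.factors = {f: random.random() for f in FACTORS}

    def visits(self):
        # Assumed response: weighted sum of factor scores plus noise.
        base = sum(WEIGHTS[f] * v for f, v in self.factors.items())
        return 1000 * base * random.uniform(0.9, 1.1)

    def step(self):
        # Each step the agent improves one randomly chosen factor a little.
        f = random.choice(FACTORS)
        self.factors[f] = min(1.0, self.factors[f] + 0.05)

random.seed(0)
agents = [WebsiteAgent(f"site-{i}") for i in range(5)]
for t in range(10):
    for a in agents:
        a.step()
print({a.name: round(a.visits()) for a in agents})
```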

53 pages, 5668 KiB  
Review
Big Data and Its Applications in Smart Real Estate and the Disaster Management Life Cycle: A Systematic Analysis
by Hafiz Suliman Munawar, Siddra Qayyum, Fahim Ullah and Samad Sepasgozar
Big Data Cogn. Comput. 2020, 4(2), 4; https://doi.org/10.3390/bdcc4020004 - 26 Mar 2020
Cited by 92 | Viewed by 24991
Abstract
Big data is the concept of enormous amounts of data being generated daily in different fields due to the increased use of technology and internet sources. Despite the various advancements and the hopes of better understanding, big data management and analysis remain a challenge, calling for more rigorous and detailed research, as well as the identification of methods and ways in which big data could be tackled and put to good use. The existing research falls short in discussing and evaluating the pertinent tools and technologies for analyzing big data efficiently, which calls for a comprehensive and holistic analysis of the published articles to summarize the concept of big data and identify field-specific applications. To address this gap and keep a recent focus, research articles published in the last decade, belonging to top-tier and high-impact journals, were retrieved using the search engines of Google Scholar, Scopus, and Web of Science and narrowed down to a set of 139 relevant research articles. Different analyses were conducted on the retrieved papers, including bibliometric analysis, keyword analysis, big data search trends, and the authors, countries, and affiliated institutes contributing the most to the field of big data. The comparative analyses show that, conceptually, big data lies at the intersection of the storage, statistics, technology, and research fields and emerged as an amalgam of these four fields with interlinked aspects such as data hosting and computing, data management, data refining, data patterns, and machine learning. The results further show that the major characteristics of big data can be summarized using the seven Vs, which include variety, volume, variability, value, visualization, veracity, and velocity. Furthermore, the existing methods for big data analysis, their shortcomings, and the possible directions that could be taken to harness technology and ensure that data analysis tools are fast and efficient were also explored. The major challenges in handling big data include efficient storage, retrieval, analysis, and visualization of large heterogeneous data, which can be tackled through authentication such as Kerberos and encrypted files, logging of attacks, secure communication through Secure Sockets Layer (SSL) and Transport Layer Security (TLS), data imputation, building learning models, dividing computations into sub-tasks, checkpointing applications for recursive tasks, and using Solid State Drives (SSD) and Phase Change Memory (PCM) for storage. In terms of frameworks for big data management, two frameworks exist, Hadoop and Apache Spark, which must be used simultaneously to capture the holistic essence of the data and make the analyses meaningful, swift, and speedy. Further field-specific applications of big data in two promising and integrated fields, i.e., smart real estate and disaster management, were investigated, and a framework for field-specific applications, as well as a merger of the two areas through big data, was highlighted. The proposed frameworks show that big data can tackle the ever-present issue of customer regret related to poor quality or lack of information in smart real estate and increase customer satisfaction by using an intermediate organization that can process and keep a check on the data being provided to customers by sellers and real estate managers.
Similarly, for disaster and risk management, data from social media, drones, multimedia, and search engines can be used to tackle natural disasters such as floods, bushfires, and earthquakes, as well as to plan emergency responses. In addition, a merger framework for smart real estate and disaster risk management shows that big data generated from smart real estate, in the form of occupant data, facilities management, and building integration and maintenance, can be shared with disaster risk management and emergency response teams to help prevent, prepare for, respond to, or recover from disasters. Full article
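As a small illustration of the framework usage discussed above, the PySpark sketch below reads a hypothetical HDFS-hosted collection of disaster reports and aggregates it with Spark. The path and column names are placeholders, not the review's data.

```python
# Minimal PySpark sketch: read a (hypothetical) HDFS-hosted dataset and
# aggregate it with Spark. Path and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("disaster-reports").getOrCreate()

# Placeholder input: JSON records such as {"region": "...", "hazard": "flood"}.
df = spark.read.json("hdfs:///data/disaster_reports/*.json")

summary = (df.groupBy("region", "hazard")
             .agg(F.count("*").alias("reports"))
             .orderBy(F.desc("reports")))
summary.show(10)

spark.stop()
```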
