Big Data Cogn. Comput., Volume 3, Issue 2 (June 2019) – 15 articles

Cover Story: Word embeddings have been successful in many natural language processing tasks, although they characterize the meaning of a word by uninterpretable “context signatures”. Such a representation can render results obtained using embeddings difficult to interpret. Neighboring word vectors may have similar meanings, but in what way are they similar? That similarity may represent a synonymy, metonymy, or even antonymy relation. In the cognitive psychology literature, in contrast, concepts are frequently represented by their relations with properties. These properties are produced by test subjects when asked to describe the important features of concepts. As such, they form a natural, intuitive feature space. In this work, we present a neural network-based method for mapping a distributional semantic space onto a human-built property space automatically.
12 pages, 1086 KiB  
Article
Peacekeeping Conditions for an Artificial Intelligence Society
by Hiroshi Yamakawa
Big Data Cogn. Comput. 2019, 3(2), 34; https://doi.org/10.3390/bdcc3020034 - 22 Jun 2019
Cited by 2 | Viewed by 6540
Abstract
In a human society with emergent technology, the destructive actions of some pose a danger to the survival of all of humankind, increasing the need to maintain peace by overcoming universal conflicts. However, human society has not yet achieved complete global peacekeeping. Fortunately, a new possibility for peacekeeping among human societies using the appropriate interventions of an advanced system will be available in the near future. To achieve this goal, an artificial intelligence (AI) system must operate continuously and stably (condition 1) and have an intervention method for maintaining peace among human societies based on a common value (condition 2). However, as a premise, it is necessary to have a minimum common value upon which all of human society can agree (condition 3). In this study, an AI system to achieve condition 1 was investigated. This system was designed as a group of distributed intelligent agents (IAs) to ensure robust and rapid operation. Even if common goals are shared among all IAs, each autonomous IA acts on its own local values to adapt quickly to the environment that it faces. Thus, conflicts between IAs are inevitable, and this situation sometimes interferes with the achievement of commonly shared goals. Even so, the IAs can maintain peace within their own societies if all the dispersed IAs believe that all other IAs aim for socially acceptable goals. However, communication channel problems, comprehension problems, and computational complexity problems are barriers to realization. These problems can be overcome by introducing an appropriate goal-management system in the case of computer-based IAs. Then, an IA society could achieve its goals peacefully, efficiently, and consistently. Therefore, condition 1 will be achievable. In contrast, humans are restricted by their biological nature and tend to interact with others similar to themselves, so the eradication of conflicts is more difficult. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
12 pages, 2941 KiB  
Article
Tensor Decomposition for Salient Object Detection in Images
by Junxiu Zhou, Yangyang Tao and Xian Liu
Big Data Cogn. Comput. 2019, 3(2), 33; https://doi.org/10.3390/bdcc3020033 - 19 Jun 2019
Viewed by 4246
Abstract
The fundamental challenge of salient object detection is to find the decision boundary that separates the salient object from the background. Low-rank recovery models address this challenge by decomposing an image or image feature-based matrix into a low-rank matrix representing the image background and a sparse matrix representing salient objects. This method is simple and efficient in finding salient objects. However, it needs to convert the high-dimensional feature space into a two-dimensional matrix, and therefore does not take full advantage of image features in discovering the salient object. In this article, we propose a tensor decomposition method that considers spatial consistency and makes full use of image feature information in detecting salient objects. First, we arrange high-dimensional image features in a tensor to preserve the spatial information of the image features. Following this, we use a tensor low-rank and sparse model to decompose the image feature tensor into a low-rank tensor and a sparse tensor, where the low-rank tensor represents the background and the sparse tensor is used to identify the salient object. To solve the tensor low-rank and sparse model, we employ a heuristic strategy that relaxes the definitions of the tensor trace norm and the tensor l1-norm. Experimental results on three saliency benchmarks demonstrate the effectiveness of the proposed tensor decomposition method. Full article
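A minimal sketch in the spirit of the decomposition described above (not the authors' exact solver): the feature tensor is split into a low-rank background part and a sparse salient part by alternating singular-value thresholding on the mode unfoldings (a relaxed tensor trace norm) with elementwise soft thresholding (the tensor l1-norm). All parameter values and the toy data are assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft thresholding: proximal operator of the l1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_threshold(mat, tau):
    """Singular value thresholding: proximal operator of the trace norm."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(mat, mode, shape):
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(mat.reshape(full), 0, mode)

def tensor_rpca(X, lam=0.05, tau=1.0, n_iter=50):
    """Split X into a low-rank background L and a sparse saliency part S."""
    L, S = np.zeros_like(X), np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank step: average SVT over all mode unfoldings (relaxed trace norm).
        L = sum(fold(svd_threshold(unfold(X - S, m), tau), m, X.shape)
                for m in range(X.ndim)) / X.ndim
        # Sparse step: soft-threshold the residual (tensor l1-norm).
        S = soft_threshold(X - L, lam)
    return L, S

# Toy usage: a 32x32 "image" with 8 feature channels and one salient block.
X = np.random.randn(32, 32, 8) * 0.1
X[10:16, 10:16, :] += 2.0
L, S = tensor_rpca(X)
saliency_map = np.abs(S).sum(axis=2)  # aggregate sparse energy per pixel
```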
30 pages, 2154 KiB  
Review
Big Data and Business Analytics: Trends, Platforms, Success Factors and Applications
by Ifeyinwa Angela Ajah and Henry Friday Nweke
Big Data Cogn. Comput. 2019, 3(2), 32; https://doi.org/10.3390/bdcc3020032 - 10 Jun 2019
Cited by 85 | Viewed by 42067
Abstract
Big data and business analytics are trends that are positively impacting the business world. Past research shows that the data generated in the modern world are huge and growing exponentially. These include structured and unstructured data that flood organizations daily. Unstructured data constitute the majority of the world’s digital data and include text files, web and social media posts, emails, images, audio, movies, etc. Unstructured data cannot be managed in a traditional relational database management system (RDBMS). Therefore, data proliferation requires a rethinking of techniques for capturing, storing, and processing the data. This is the role big data has come to play. This paper therefore aims to draw the attention of organizations and researchers to the various applications and benefits of big data technology. The paper reviews and discusses the recent trends, opportunities, and pitfalls of big data and how it has enabled organizations to create successful business strategies and remain competitive, based on the available literature. Furthermore, the review presents the various applications of big data and business analytics, the data sources generated in these applications, and their key characteristics. Finally, the review not only outlines the challenges for the successful implementation of big data projects but also highlights the current open research directions of big data analytics that require further consideration. The reviewed areas of big data suggest that good management and manipulation of large data sets using big data techniques and tools can deliver actionable insights that create business value. Full article
20 pages, 5757 KiB  
Article
Weakly-Supervised Image Semantic Segmentation Based on Superpixel Region Merging
by Quanchun Jiang, Olamide Timothy Tawose, Songwen Pei, Xiaodong Chen, Linhua Jiang, Jiayao Wang and Dongfang Zhao
Big Data Cogn. Comput. 2019, 3(2), 31; https://doi.org/10.3390/bdcc3020031 - 10 Jun 2019
Cited by 3 | Viewed by 4671
Abstract
In this paper, we propose a semantic segmentation method based on superpixel region merging and a convolutional neural network (CNN), referred to as a regional merging neural network (RMNN). Image annotation has always played an important role in weakly-supervised semantic segmentation, and most methods rely on manual labeling. In this paper, after superpixel segmentation, superpixels with similar features are merged, using the relationships between pixels, into a number of larger superpixel blocks. Rough predictions are generated by fully convolutional networks (FCN) so that certain superpixel blocks are labeled, and other positive regions are then found iteratively from the labeled ones. Because superpixels are used, fewer feature vectors need to be extracted and the data dimension is reduced. The algorithm not only uses superpixel merging to narrow down the target’s range but also compensates for the lack of pixel-level labels in weakly-supervised semantic segmentation. In training the network, we use region merging to improve the accuracy of contour recognition. Our extensive experiments demonstrated the effectiveness of the proposed method on the PASCAL VOC 2012 dataset. In particular, evaluation results show that the mean intersection over union (mIoU) score of our method reaches 44.6%. Because dilated convolution replaces the pooled downsampling operation, it does not shrink the network’s receptive field, thereby ensuring the accuracy of image semantic segmentation. The findings of this work thus open the door to leveraging dilated convolution to improve the recognition accuracy of small objects. Full article
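A minimal sketch of the superpixel-merging front end only (not the full RMNN pipeline, whose FCN stage is beyond a short example): SLIC superpixels are merged into larger blocks via a region adjacency graph over mean colors. The threshold is illustrative; in scikit-image 0.20+ the `graph` module is top-level (it was `skimage.future.graph` before).

```python
from skimage import data, segmentation, graph

img = data.astronaut()                     # any RGB image stands in here
labels = segmentation.slic(img, n_segments=400, compactness=10, start_label=1)

# Merge superpixels whose mean colors are closer than a distance threshold.
rag = graph.rag_mean_color(img, labels)
merged = graph.cut_threshold(labels, rag, 29)

# `merged` holds the superpixel blocks; in the paper, rough FCN predictions
# label some blocks and further positives are found iteratively from them.
print(f"{labels.max()} superpixels merged into {merged.max() + 1} blocks")
```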
11 pages, 292 KiB  
Article
Mapping Distributional Semantics to Property Norms with Deep Neural Networks
by Dandan Li and Douglas Summers-Stay
Big Data Cogn. Comput. 2019, 3(2), 30; https://doi.org/10.3390/bdcc3020030 - 25 May 2019
Cited by 6 | Viewed by 5187
Abstract
Word embeddings have been very successful in many natural language processing tasks, but they characterize the meaning of a word/concept by uninterpretable “context signatures”. Such a representation can render results obtained using embeddings difficult to interpret. Neighboring word vectors may have similar meanings, but in what way are they similar? That similarity may represent a synonymy, metonymy, or even antonymy relation. In the cognitive psychology literature, in contrast, concepts are frequently represented by their relations with properties. These properties are produced by test subjects when asked to describe important features of concepts. As such, they form a natural, intuitive feature space. In this work, we present a neural-network-based method for mapping a distributional semantic space onto a human-built property space automatically. We evaluate our method on word embeddings learned with different types of contexts, and report state-of-the-art performances on the widely used McRae semantic feature production norms. Full article
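A minimal sketch of the mapping described above, not the authors' exact architecture: a feed-forward network regressed from word-embedding vectors to property-norm vectors. The shapes, hyperparameters, and random stand-in data are assumptions; in the paper the inputs are pretrained embeddings and the targets are McRae property vectors (roughly 2500 features over 541 concepts).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_concepts, emb_dim, n_properties = 500, 300, 50   # toy sizes, assumed
X = rng.normal(size=(n_concepts, emb_dim))         # stand-in for word2vec/GloVe
Y = rng.random(size=(n_concepts, n_properties))    # stand-in for property norms

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
# Multi-output regression: one hidden layer mapping context signatures
# to the interpretable property space.
model = MLPRegressor(hidden_layer_sizes=(500,), max_iter=200, random_state=0)
model.fit(X_tr, Y_tr)

# For a held-out concept, the top-scoring outputs are its predicted properties.
pred = model.predict(X_te[:1])[0]
top_properties = np.argsort(pred)[::-1][:10]
```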
14 pages, 1785 KiB  
Article
Assessment of Cognitive Aging Using an SSVEP-Based Brain–Computer Interface System
by Saraswati Sridhar and Vidya Manian
Big Data Cogn. Comput. 2019, 3(2), 29; https://doi.org/10.3390/bdcc3020029 - 24 May 2019
Cited by 7 | Viewed by 4737
Abstract
Cognitive deterioration caused by illness or aging often occurs before symptoms arise, and its timely diagnosis is crucial to reducing its medical, personal, and societal impacts. Brain–computer interfaces (BCIs) stimulate and analyze key cerebral rhythms, enabling reliable cognitive assessment that can accelerate diagnosis. The BCI system presented here analyzes steady-state visually evoked potentials (SSVEPs) elicited in subjects of varying age to detect cognitive aging, predict its magnitude, and identify its relationship with SSVEP features (band power and frequency detection accuracy), which were hypothesized to indicate cognitive decline due to aging. Rectangular stimuli flickering at theta, alpha, and beta frequencies were presented to the subjects, and frontal and occipital electroencephalographic (EEG) responses were recorded. These were processed to calculate the frequency detection accuracy and SSVEP band power for each subject. A neural network was trained using these features to predict cognitive age. The results showed potential cognitive deterioration through age-related variations in SSVEP features: frequency detection accuracy consistently declined after the 20–40 age group, falling from an average of 96.64% to 69.23%, and band power declined across all age groups. SSVEPs generated at theta and alpha frequencies, especially 7.5 Hz, were the best indicators of cognitive deterioration. The presented system can be used as an effective diagnostic tool for age-related cognitive decline. Full article
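An illustrative sketch of extracting the two SSVEP features named above from a single occipital EEG channel: band power around a flicker frequency via Welch's PSD, and frequency detection as the dominant stimulus frequency. The sampling rate, stimulus frequencies, and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                     # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# Toy EEG: a 7.5 Hz SSVEP component buried in noise.
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.random.randn(t.size)

def band_power(signal, fs, f0, half_width=0.5):
    """Average PSD in a narrow band around the flicker frequency f0."""
    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)
    band = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[band].mean()

stimulus_freqs = [5.0, 7.5, 10.0, 15.0]      # theta/alpha/beta flickers, assumed
powers = {f: band_power(eeg, fs, f) for f in stimulus_freqs}
detected = max(powers, key=powers.get)       # 7.5 Hz wins on this toy signal
print(detected, powers)
```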
22 pages, 2753 KiB  
Article
A Data-Driven Machine Learning Approach for Corrosion Risk Assessment—A Comparative Study
by Chinedu I. Ossai
Big Data Cogn. Comput. 2019, 3(2), 28; https://doi.org/10.3390/bdcc3020028 - 18 May 2019
Cited by 43 | Viewed by 7436
Abstract
Understanding the corrosion risk of a pipeline is vital for maintaining health, safety and the environment. This study implemented a data-driven machine learning approach that relied on Principal Component Analysis (PCA), Particle Swarm Optimization (PSO), a Feed-Forward Artificial Neural Network (FFANN), a Gradient Boosting Machine (GBM), Random Forest (RF) and a Deep Neural Network (DNN) to estimate the corrosion defect depth growth of aged pipelines. By modifying the hyperparameters of the FFANN algorithm with PSO and using PCA to transform the operating variables of the pipelines, different Machine Learning (ML) models were developed and tested for the X52 grade of pipeline. The computational accuracy of the estimated corrosion defect growth was compared between the PCA-transformed and non-transformed parametric values of the training data to determine the influence of the PCA transformation on model accuracy. The results showed that ML modelling with PCA-transformed data is 3.52 to 5.32 times more accurate than modelling without the PCA transformation. The PCA-transformed GBM model had the best accuracy among the tested algorithms; hence, it was used to compute the future corrosion defect depth growth of the pipelines, which in turn enabled corrosion risks to be computed from failure probabilities at different lifecycle phases of the asset. The results of this study indicate that this technique is vital for the prognostic health monitoring of pipelines because it provides information for maintenance and inspection planning. Full article
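A minimal sketch, with fabricated data and illustrative hyperparameters, of the winning configuration described above: PCA-transform the pipeline operating variables, then fit a Gradient Boosting Machine to corrosion defect depth growth.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Stand-in operating variables (e.g., pH, CO2 partial pressure, temperature).
X = rng.normal(size=(400, 8))
# Stand-in defect depth growth target with a little noise.
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                      # transform the operating variables
    GradientBoostingRegressor(n_estimators=300, random_state=42),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```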
18 pages, 1808 KiB  
Article
Automatic Human Brain Tumor Detection in MRI Image Using Template-Based K Means and Improved Fuzzy C Means Clustering Algorithm
by Md Shahariar Alam, Md Mahbubur Rahman, Mohammad Amazad Hossain, Md Khairul Islam, Kazi Mowdud Ahmed, Khandaker Takdir Ahmed, Bikash Chandra Singh and Md Sipon Miah
Big Data Cogn. Comput. 2019, 3(2), 27; https://doi.org/10.3390/bdcc3020027 - 13 May 2019
Cited by 103 | Viewed by 11924
Abstract
In recent decades, human brain tumor detection has become one of the most challenging issues in medical science. In this paper, we propose a model that combines template-based K-means and improved fuzzy C-means (TKFCM) clustering for detecting human brain tumors in magnetic resonance imaging (MRI) images. In the proposed algorithm, firstly, the template-based K-means algorithm is used to initialize the segmentation through careful selection of a template based on the gray-level intensity of the image; secondly, updated memberships are determined from the distances between cluster centroids and cluster data points using the fuzzy C-means (FCM) algorithm until it reaches its best result; and finally, the improved FCM clustering algorithm is used to detect the tumor position by updating the membership function, which is obtained from different features of the tumor image, including contrast, energy, dissimilarity, homogeneity, entropy, and correlation. Simulation results show that the proposed algorithm achieves better detection of abnormal and normal tissues in the human brain even when their gray-level intensities are only slightly separated. In addition, the algorithm detects human brain tumors within a very short time (seconds, compared to minutes with other algorithms). Full article
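A minimal sketch of the fuzzy C-means core used inside TKFCM (standard FCM, not the paper's full template-based variant): memberships are updated from the distances between cluster centroids and data points, exactly the step the abstract describes. The fuzzifier, cluster count, and toy intensities are assumptions.

```python
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features); m: fuzzifier. Returns memberships, centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # each row sums to 1
    for _ in range(n_iter):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-10
        # Standard FCM membership update from centroid-to-point distances.
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
    return U, C

# Toy usage on gray-level intensities of a flattened "image".
pixels = np.concatenate([np.random.normal(50, 5, 200),
                         np.random.normal(120, 5, 200),
                         np.random.normal(200, 5, 200)]).reshape(-1, 1)
U, C = fcm(pixels)
labels = U.argmax(axis=1)                          # crisp segmentation
```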
17 pages, 245 KiB  
Article
AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
by Brandon Perry and Risto Uuk
Big Data Cogn. Comput. 2019, 3(2), 26; https://doi.org/10.3390/bdcc3020026 - 8 May 2019
Cited by 20 | Viewed by 10813
Abstract
This essay argues that a new subfield of AI governance should be explored that examines the policymaking process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem: the space of possible policies is shaped by the politics and administrative mechanisms through which those policies are created and implemented. This creates a new set of key considerations for the field of AI governance and should influence the actions of future policymakers. The essay examines some theories of the policymaking process, compares them to current work in AI governance, draws out their implications for the field at large, and ends by identifying areas for future research. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
10 pages, 1252 KiB  
Communication
The Emerging Role of Blockchain Technology Applications in Routine Disease Surveillance Systems to Strengthen Global Health Security
by Vijay Kumar Chattu, Anjali Nanda, Soosanna Kumary Chattu, Syed Manzoor Kadri and Andy W Knight
Big Data Cogn. Comput. 2019, 3(2), 25; https://doi.org/10.3390/bdcc3020025 - 8 May 2019
Cited by 48 | Viewed by 10800
Abstract
Blockchain technology has enormous scope to revamp the healthcare system in many ways, as it improves the quality of healthcare through data sharing among all participants, selective privacy, and data safety. This paper explores the basics of blockchain and its applications, quality of experience, and advantages in disease surveillance over other widely used real-time and machine learning techniques. Other real-time surveillance systems lack scalability, security, and interoperability, making blockchain a strong choice for surveillance. Blockchain offers the capability of enhancing global health security and can also ensure the anonymity of patient data, thereby aiding healthcare research. The recent epidemics of re-emerging infections such as Ebola and Zika have raised many concerns about health security, which resulted in the strengthening of surveillance systems. We also discuss how blockchains can help in identifying threats early and reporting them to health authorities so that early preventive measures can be taken. The Global Health Security Agenda addresses global public health threats (both infectious and non-communicable diseases); strengthens the workforce and the systems; detects and responds rapidly and effectively to disease threats; and elevates global health security as a priority. Blockchain has enormous potential to disrupt many current practices in traditional disease surveillance and healthcare research. Full article
(This article belongs to the Special Issue Health Assessment in the Big Data Era)
21 pages, 4338 KiB  
Article
Analysis of Information Security News Content and Abnormal Returns of Enterprises
by Chia-Ching Hung
Big Data Cogn. Comput. 2019, 3(2), 24; https://doi.org/10.3390/bdcc3020024 - 27 Apr 2019
Cited by 6 | Viewed by 3719
Abstract
As information technologies and the Internet have rapidly evolved, businesses have begun to use them to improve communication efficiency within and outside the organization. However, applications of information technologies are accompanied by information delivery, personal data protection, and information security problems. There are potential risks inherent in any application of information technologies. Moreover, with the improvement of networking and computing capabilities, the impact of attacks from hackers and malicious software has also increased. A breach or leakage of important corporate data may not only damage the firm’s image but also sabotage the firm’s operation, resulting in financial losses. In this study, the content of information security news reports was analyzed in an attempt to clarify the association between information security news and corporate stock prices. Methods including decision trees, support vector machines (SVMs), and random forests were used to explore the associations of news-related variables with abnormal returns. Results indicate that the news source and the presence of negative words in the news have an impact on abnormal returns. Full article
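A toy sketch of the comparison described above, with fabricated features: decision tree, SVM, and random forest models relating news-derived variables (e.g., an encoded news source, a negative-word count) to the sign of abnormal returns. All features, labels, and thresholds are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([
    rng.integers(0, 3, n),        # news source (encoded), assumed feature
    rng.integers(0, 20, n),       # negative-word count, assumed feature
    rng.random(n),                # e.g., report length, assumed feature
])
# Toy label: sign of the abnormal return around the news event.
y = (X[:, 1] + rng.normal(0, 3, n) > 8).astype(int)

for name, clf in [("tree", DecisionTreeClassifier()),
                  ("svm", SVC()),
                  ("forest", RandomForestClassifier())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```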
6 pages, 189 KiB  
Opinion
The Supermoral Singularity—AI as a Fountain of Values
by Eleanor Nell Watson
Big Data Cogn. Comput. 2019, 3(2), 23; https://doi.org/10.3390/bdcc3020023 - 11 Apr 2019
Cited by 4 | Viewed by 6358
Abstract
This article looks at the problem of moral singularity in the development of artificial intelligence. We are now on the verge of major breakthroughs in machine technology where autonomous robots that can make their own decisions will become an integral part of our way of life. This article presents a qualitative, comparative approach, which considers the differences between humans and machines, especially in relation to morality, and is grounded in historical and contemporary examples. This argument suggests that it is difficult to apply models of human morality and evolution to machines and that the creation of super-intelligent robots that will be able to make moral decisions could have potentially serious consequences. A runaway moral singularity could result in machines seeking to confront human moral transgressions in a quest to eliminate all forms of evil. This might also culminate in an all-out war in which humanity might be defeated. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
20 pages, 3634 KiB  
Article
Pruning Fuzzy Neural Network Applied to the Construction of Expert Systems to Aid in the Diagnosis of the Treatment of Cryotherapy and Immunotherapy
by Augusto Junio Guimarães, Paulo Vitor de Campos Souza, Vinícius Jonathan Silva Araújo, Thiago Silva Rezende and Vanessa Souza Araújo
Big Data Cogn. Comput. 2019, 3(2), 22; https://doi.org/10.3390/bdcc3020022 - 9 Apr 2019
Cited by 18 | Viewed by 5211
Abstract
Human papillomavirus (HPV) infection is related to frequent cases of cervical cancer and genital condyloma in humans. Numerous methods now exist for the prevention and treatment of this disease. In this context, this paper aims to help predict patients’ responsiveness to treatment with cryotherapy and immunotherapy. Such predictions facilitate the choice between treatments, which can be painful and embarrassing for patients who have warts on intimate parts. However, intelligent models often generate efficient results without allowing a clear interpretation of those results. To solve this problem, we present a fuzzy neural network (FNN) method: a hybrid model capable of solving complex problems and extracting knowledge from the database is pruned through F-score techniques to perform pattern classification for wart treatment, and to produce an expert system based on if/then rules that reflect the experience captured in a database collected through medical research. Finally, binary pattern-classification tests performed with the FNN and compared with other models commonly used for classification tasks achieve greater accuracy than the current state of the art for this type of problem (84.32% for immunotherapy, and 88.64% for cryotherapy), while extracting fuzzy rules from the problem database. The hybrid approach based on neural networks and fuzzy systems can thus be an excellent tool to aid the prediction of cryotherapy and immunotherapy treatments. Full article
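A minimal sketch of the F-score feature ranking used to prune the fuzzy neural network's inputs (the FNN itself is beyond a short example): higher F-scores mark features that better separate the two treatment outcomes, and low-ranked inputs are the pruning candidates. The stand-in features and data are assumptions.

```python
import numpy as np

def f_score(X, y):
    """Chen-and-Lin style F-score for binary labels y in {0, 1}."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

# Toy usage: rank stand-in wart-treatment features (age, time, area, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(90, 6))
y = rng.integers(0, 2, size=90)
X[y == 1, 2] += 1.5                          # make feature 2 informative
ranking = np.argsort(f_score(X, y))[::-1]
print("features by importance:", ranking)    # low-ranked inputs get pruned
```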
(This article belongs to the Special Issue Health Assessment in the Big Data Era)
15 pages, 280 KiB  
Communication
Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence
by David Manheim
Big Data Cogn. Comput. 2019, 3(2), 21; https://doi.org/10.3390/bdcc3020021 - 5 Apr 2019
Cited by 11 | Viewed by 6605
Abstract
An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes. Full article
(This article belongs to the Special Issue Artificial Superintelligence: Coordination & Strategy)
14 pages, 2238 KiB  
Article
Mining Temporal Patterns to Discover Inter-Appliance Associations Using Smart Meter Data
by Sarah Osama, Marco Alfonse and Abdel-Badeeh M. Salem
Big Data Cogn. Comput. 2019, 3(2), 20; https://doi.org/10.3390/bdcc3020020 - 29 Mar 2019
Cited by 5 | Viewed by 3125
Abstract
With the emergence of the smart grid environment, smart meters are considered one of the main key enablers for developing energy management solutions on residential home premises. Power consumption in the residential sector is affected by how home residents use their appliances, and respecting such behavior and preferences is essential for developing demand response programs. The main contribution of this paper is to discover associations between appliances’ usage by mining temporal association rules, in addition to applying a temporal clustering technique to group appliances with similar usage at a particular time. The proposed method is applied to a time-series dataset, the United Kingdom Domestic Appliance-Level Electricity (UK-DALE) dataset, and the achieved results reveal appliance–appliance associations with similar usage patterns across the 24 hours of the day. Full article
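A minimal sketch, with fabricated usage data rather than UK-DALE, of mining hour-of-day appliance–appliance association rules: for each hour, the support and confidence of "appliance A on implies appliance B on" are computed directly with pandas. The appliances, thresholds, and records are assumptions.

```python
import pandas as pd
from itertools import permutations

# One row per (day, hour); True if the appliance drew power in that hour.
usage = pd.DataFrame({
    "hour":    [8, 8, 8, 20, 20, 20, 20],
    "kettle":  [True, True, True, False, False, True, False],
    "toaster": [True, True, False, False, False, False, False],
    "tv":      [False, False, False, True, True, True, True],
})

for hour, group in usage.groupby("hour"):
    items = group.drop(columns="hour")
    n = len(items)
    for a, b in permutations(items.columns, 2):
        if items[a].sum() == 0:
            continue
        support = (items[a] & items[b]).sum() / n
        confidence = (items[a] & items[b]).sum() / items[a].sum()
        if support >= 0.5 and confidence >= 0.6:
            print(f"hour {hour:2d}: {a} -> {b} "
                  f"(support={support:.2f}, confidence={confidence:.2f})")
```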