Advanced Machine Learning and Data Mining: A New Frontier in Artificial Intelligence Research

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 15824

Special Issue Editors


Dr. Nigel Houlden
Guest Editor
Department of Computing, Wrexham Glyndŵr University, Plas Coch Campus, Mold Road, Wrexham LL11 2AW, UK
Interests: cybersecurity; privacy; AI privacy

Prof. Dr. Vic Grout
Guest Editor
Department of Computing, Wrexham Glyndŵr University, Plas Coch Campus, Mold Road, Wrexham LL11 2AW, UK
Interests: futurology; AI; big data; IoT; automation; technology ethics

Special Issue Information

Dear Colleagues,

Without data, there is no machine learning (ML), so there is no doubt that big data and ML are inextricably linked. However, much research to date has tended to treat them as separate areas of development. As we confront today's difficult problems and the wealth of held data continues to grow, it is vital that new, innovative ways of examining, testing, and using big data to produce useful information are researched, developed, and integrated. Whether this be for the social good (health diagnostics, for example) or corporate gain (competitive advantage), given the exponential increase in both the volume of data and the velocity at which it is generated, the need to expand direct cooperation between big data mining and ML is long overdue. For this Special Issue, as the individual fields of advanced machine learning and advanced data mining are well established, the focus will be specifically on their intersection: the point, or points, at which one aids, needs, or enhances the other.

This new frontier is almost boundless, but it will eventually become the norm. Systems that learn and improve automatically from experience, without being explicitly programmed, offer great opportunities. The quality of the data being used, the speed of its acquisition, and the effectiveness of its processing are all of vital importance; if Microsoft's AI chatbot Tay taught us anything at all, it is certainly this.

But can ‘big data’ ever be too much data? Is ML only suited to small datasets, allowing more focused training? And is there a real concern for data privacy when we try to combine big data with ML? (Does this issue come into particularly sharp focus where social media is concerned, for example?) Many users have been lulled into a false sense of security by such systems, which offer a treasure trove of data. Or, to take an entirely different direction, is there space for big data and ML in the judiciary: could consistency of sentencing be achieved, for example? We cannot list all the potential application areas here, but the scope for exciting research at the boundary of big data and ML should be clear. And, of course, this new frontier in artificial intelligence research offers as many ethical questions as it does possibilities: could we, should we, and (how) will we?

This Special Issue solicits empirical, experimental, methodological, and theoretical research reporting original and unpublished results on big data and machine learning analysis and mining, covering topics in all realms of research along with applications to real-life situations.

Dr. Nigel Houlden
Prof. Dr. Vic Grout
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • big data
  • machine learning
  • future technologies
  • ethics

Published Papers (2 papers)


Research

28 pages, 1211 KiB  
Article
A Combined System Metrics Approach to Cloud Service Reliability Using Artificial Intelligence
by Tek Raj Chhetri, Chinmaya Kumar Dehury, Artjom Lind, Satish Narayana Srirama and Anna Fensel
Big Data Cogn. Comput. 2022, 6(1), 26; https://doi.org/10.3390/bdcc6010026 - 01 Mar 2022
Cited by 4 | Viewed by 4065
Abstract
Identifying and anticipating potential failures in the cloud is an effective method for increasing cloud reliability and proactive failure management. Many studies have been conducted to predict potential failure, but none have combined SMART (self-monitoring, analysis, and reporting technology) hard drive metrics with other system metrics, such as central processing unit (CPU) utilisation. Therefore, we propose a combined system metrics approach for failure prediction based on artificial intelligence to improve reliability. We tested over 100 cloud servers’ data and four artificial intelligence algorithms: random forest, gradient boosting, long short-term memory, and gated recurrent unit, and also performed correlation analysis. Our correlation analysis sheds light on the relationships that exist between system metrics and failure, and the experimental results demonstrate the advantages of combining system metrics, outperforming the state-of-the-art.
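
As a rough illustration of the kind of pipeline this abstract describes (not the authors' actual code or dataset), the sketch below joins hypothetical SMART and system-metric tables, inspects their correlation with a failure label, and fits a random forest classifier; the file names, column names, and parameters are assumptions made for illustration only.

    # Illustrative sketch: combine assumed SMART and system-metric tables,
    # run a correlation analysis against the failure label, and fit a
    # random forest. Column/file names are hypothetical, not the paper's.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Hypothetical per-server, per-interval metric tables.
    smart = pd.read_csv("smart_metrics.csv")     # e.g. smart_5_raw, smart_187_raw, ...
    system = pd.read_csv("system_metrics.csv")   # e.g. cpu_util, mem_util, failed (label)

    # Combine the two metric families into a single feature table.
    data = smart.merge(system, on=["server_id", "timestamp"])
    features = [c for c in data.columns if c not in ("server_id", "timestamp", "failed")]

    # Correlation analysis: how strongly each metric relates to failure.
    print(data[features + ["failed"]].corr()["failed"].sort_values(ascending=False))

    # Train and evaluate a random forest on the combined metrics.
    X_train, X_test, y_train, y_test = train_test_split(
        data[features], data["failed"], test_size=0.2,
        stratify=data["failed"], random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

The same feature table could be fed to the other models the paper evaluates (gradient boosting, LSTM, GRU); only the classifier line would change.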

14 pages, 884 KiB  
Article
Deep Automation Bias: How to Tackle a Wicked Problem of AI?
by Stefan Strauß
Big Data Cogn. Comput. 2021, 5(2), 18; https://doi.org/10.3390/bdcc5020018 - 20 Apr 2021
Cited by 17 | Viewed by 10649
Abstract
The increasing use of AI in different societal contexts intensified the debate on risks, ethical problems and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions that may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Despite various forms of bias, ultimately, risks result from eventual rule conflicts between the AI system behavior due to feature complexity and user practices with limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Based on former work it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims at raising problem awareness and training on the sociotechnical risks resulting from AI-based automation and contributes to improving the general explicability of AI systems beyond technical issues.
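
Purely as a toy illustration of what a heuristic assessment of automation-bias risk might look like (this is not the model developed in the paper), the sketch below scores a hypothetical system on a few assumed factors; the factor names, scales, and weights are invented for illustration.

    # Toy sketch: an additive scoring heuristic in the spirit of assessing
    # automation-bias risk. Factors and weights are hypothetical, not the
    # DAB model proposed in the paper.
    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        automation_level: float   # 0 (human decides) .. 1 (fully automated decision)
        output_opacity: float     # 0 (fully explicable) .. 1 (black box)
        scrutiny_options: float   # 0 (no user oversight) .. 1 (full audit/override)

    def dab_risk_score(p: SystemProfile) -> float:
        """Higher score = greater illustrative risk of deep automation bias."""
        return round(0.4 * p.automation_level + 0.4 * p.output_opacity
                     + 0.2 * (1.0 - p.scrutiny_options), 2)

    print(dab_risk_score(SystemProfile(automation_level=0.9,
                                       output_opacity=0.8,
                                       scrutiny_options=0.2)))  # -> 0.84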
