Women in Machine Learning 2018

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (31 October 2018) | Viewed by 32297

Special Issue Editor


Prof. Karin Verspoor
Guest Editor
1. School of Computing Technologies, RMIT University, Melbourne 3000, Australia
2. School of Computing and Information Systems, The University of Melbourne, Melbourne 3010, Australia
Interests: biomedical natural language processing; computational linguistics; text mining; health informatics; computational biology

Special Issue Information

Dear Colleagues,

The past year has been one in which many of the challenges of being a woman in technology, or a woman in STEMM more broadly, have risen sharply to the surface. Women in heavily male-majority disciplines may face unconscious or conscious bias on the path to having their work appreciated; among researchers, female authors are under-represented in high-profile publication venues (https://doi.org/10.1101/275362) – particularly journals, where reviewing is generally not blind to author names.

In this special issue, we aim to highlight the strength of the contributions that have been made by women in machine learning research and to give a special publication opportunity to these women. The key requirements for consideration for publication are:

  • A female-identifying first author -OR- a female-identifying senior author (e.g., group/laboratory head); both would be great, and an all-female author list even better.
  • Topics may include:
    • original machine learning methods, or
    • novel applications of machine learning methods.

All submissions will be rigorously peer-reviewed. The Article Processing Charge (APC) will be waived for accepted manuscripts submitted to this Special Issue.

We are looking forward to receiving your contribution.

Prof. Karin Verspoor
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • predictive modeling
  • data mining
  • supervised or unsupervised machine learning methods
  • machine learning applications
  • information extraction
  • knowledge discovery

Published Papers (2 papers)


Research

28 pages, 1802 KiB  
Article
Causal Discovery with Attention-Based Convolutional Neural Networks
by Meike Nauta, Doina Bucur and Christin Seifert
Mach. Learn. Knowl. Extr. 2019, 1(1), 312-340; https://doi.org/10.3390/make1010019 - 07 Jan 2019
Cited by 129 | Viewed by 27706
Abstract
Having insight into the causal associations in a complex system facilitates decision making, e.g., for medical treatments, urban infrastructure improvements or financial investments. The amount of observational data is growing, enabling the discovery of causal relationships between variables from observations of their behaviour over time. Existing methods for causal discovery from time series data do not yet exploit the representational power of deep learning. We therefore present the Temporal Causal Discovery Framework (TCDF), a deep learning framework that learns a causal graph structure by discovering causal relationships in observational time series data. TCDF uses attention-based convolutional neural networks combined with a causal validation step. By interpreting the internal parameters of the convolutional networks, TCDF can also discover the time delay between a cause and the occurrence of its effect. Our framework learns temporal causal graphs, which can include confounders and instantaneous effects. Experiments on financial and neuroscientific benchmarks show state-of-the-art performance of TCDF in discovering causal relationships in continuous time series data. Furthermore, we show that TCDF can circumstantially discover the presence of hidden confounders. Our broadly applicable framework can be used to gain novel insights into the causal dependencies in a complex system, which is important for reliable predictions, knowledge discovery and data-driven decision making.
(This article belongs to the Special Issue Women in Machine Learning 2018)
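The abstract describes reading candidate causal parents off the attention weights of convolutional networks. As a purely illustrative sketch of that thresholding idea (the scores, threshold, and function names below are hypothetical, not taken from the TCDF paper, which additionally applies a causal validation step):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def potential_causes(attention_scores, threshold=0.2):
    """Return indices of input series whose normalised attention weight
    exceeds the threshold, i.e. candidate causal parents of one target
    series. In a TCDF-style model the scores would be learned by the
    convolutional networks; here they are illustrative numbers only."""
    weights = softmax(attention_scores)
    return [i for i, w in enumerate(weights) if w >= threshold]

# Hypothetical attention scores of one target series over four inputs:
print(potential_causes([2.1, 0.3, 1.8, -0.5]))
```

In the full framework, a step like this would be followed by validation (e.g., permuting a candidate cause and checking the effect on prediction loss) before an edge is added to the causal graph.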

13 pages, 684 KiB  
Article
The Winning Solution to the IEEE CIG 2017 Game Data Mining Competition
by Anna Guitart, Pei Pei Chen and África Periáñez
Mach. Learn. Knowl. Extr. 2019, 1(1), 252-264; https://doi.org/10.3390/make1010016 - 20 Dec 2018
Cited by 10 | Viewed by 3876
Abstract
Machine learning competitions such as those organized by Kaggle or KDD represent a useful benchmark for data science research. In this work, we present our winning solution to the Game Data Mining competition hosted at the 2017 IEEE Conference on Computational Intelligence and Games (CIG 2017). The contest consisted of two tracks, and participants (more than 250, from both industry and academia) had to predict which players would stop playing the game, as well as their remaining lifetime. The data were provided by a major worldwide video game company, NCSoft, and came from their successful massively multiplayer online game Blade and Soul. Here, we describe the long short-term memory approach and conditional inference survival ensemble model that won both tracks of the contest for us, as well as the validation procedure that we followed to prevent overfitting. In particular, choosing a survival method able to deal with censored data was crucial to accurately predicting the moment at which each player would leave the game, as censoring is inherent in churn. The selected models proved to be robust against evolving conditions—since there was a change in the business model of the game (from subscription-based to free-to-play) between the two sample datasets provided—and efficient in terms of time cost. Thanks to these features and also to their ability to scale to large datasets, our models could be readily implemented in real business settings.
(This article belongs to the Special Issue Women in Machine Learning 2018)
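Censoring is the key difficulty the abstract highlights: players still active at the data cut-off have unknown remaining lifetimes, so their durations cannot be treated as observed churn times. The authors used a conditional inference survival ensemble; as a minimal, hypothetical illustration of how censored churn data enter a survival estimate, here is a hand-rolled Kaplan-Meier estimator (not the competition model):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve for possibly censored durations.

    durations[i] -- days until player i churned or was last seen
    observed[i]  -- True if churn was observed, False if censored
                    (the player was still active at the data cut-off)
    Returns a list of (time, survival probability) pairs."""
    at_risk = len(durations)
    surv = 1.0
    curve = []
    for t in sorted(set(durations)):
        churned = sum(1 for d, o in zip(durations, observed) if d == t and o)
        if churned:
            surv *= (at_risk - churned) / at_risk
        curve.append((t, surv))
        # Both churned and censored players leave the risk set at time t,
        # but only observed churns reduce the survival estimate.
        at_risk -= durations.count(t)
    return curve

# Hypothetical play histories: two observed churns, two censored players.
print(kaplan_meier([5, 5, 8, 12], [True, False, True, False]))
```

Note how the censored player who left the data at day 5 shrinks the risk set without counting as a churn event; ignoring censoring would bias lifetime predictions downward.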
