About the workshop
Machine learning, and in particular deep learning, for irregular time series is an essential research area that aims to develop models capable of effectively handling time series data that is unevenly spaced, irregularly sampled, noisy, multiresolution, anomalous, or incomplete. Research in this area is particularly important for real-world applications, where data is often collected irregularly. Time series data is prevalent in fields such as finance, healthcare, and environmental science, and improved analysis and modeling techniques can significantly enhance decision-making, enable accurate predictions, and deepen our understanding of complex systems. By advancing deep learning for irregular time series, we can make a positive impact on a variety of fields and contribute to the progress and well-being of society.
In many real-world applications, time series data is often collected at irregular intervals, making it challenging to model using traditional time series methods. Deep learning models, on the other hand, have shown promising results in handling irregular time series data due to their ability to learn complex temporal patterns from large datasets. Research in this area involves developing novel deep learning models that can effectively model irregular time series data, as well as new techniques for pre-processing and cleaning the data. Some specific research topics in this area include:
- recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) networks and gated recurrent units (GRUs), for modeling irregular time series data;
- attention mechanisms that can dynamically weigh the importance of different time steps in irregular time series;
- novel loss functions that can handle missing data and irregular sampling intervals;
- techniques for imputing missing data in irregular time series;
- anomaly detection.
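As a minimal illustration of the imputation topic above, the sketch below fills missing values in an irregularly sampled series by linear interpolation along the actual time axis (the function name `impute_linear` is our own; real workshop contributions would typically use learned models rather than interpolation):

```python
import numpy as np

def impute_linear(timestamps, values):
    """Fill NaNs in an irregularly sampled series by interpolating in time.

    Because the timestamps are unevenly spaced, interpolation must be done
    against the real time axis, not the array index.
    """
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(values, dtype=float)
    observed = ~np.isnan(v)
    # np.interp handles uneven spacing and holds the last observed value
    # when asked to extrapolate past the final observation
    return np.where(observed, v, np.interp(t, t[observed], v[observed]))

t = [0.0, 0.5, 2.0, 3.5]
v = [1.0, np.nan, 3.0, np.nan]
print(impute_linear(t, v))  # 0.5 -> 1.5 (interpolated); 3.5 -> 3.0 (held)
```

A simple baseline like this is often the point of comparison for the learned imputation methods solicited by the workshop.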
Overall, deep learning for irregular time series has the potential to improve the accuracy and robustness of time series modeling in a variety of fields, including finance, healthcare, and environmental science.
In many real-world applications, we face one of two scenarios: 1) the amount of available training data is limited, or 2) a huge amount of data is available but is scarcely labeled or entirely unlabeled due to the high cost of data collection and annotation. As a result, the future of Artificial Intelligence (AI) will be about “doing more with less”. There is a need to focus on modern AI techniques that can extract value from such challenging datasets. These considerations also speak to the increasing need to address the sustainability and privacy aspects of ML and AI. Furthermore, there is a need to overcome the limited availability of data and the scarcity of labels for (multivariate) time series modeling. In this context, heterogeneity of the data (e.g. non-stationarity, multi-resolution, irregular sampling) as well as noise pose further challenges.
The main scope of this workshop is to advance the state of the art in time series analysis for “irregular” time series. We define time series to be “irregular” if they fall under one or several of the following categories:
- Short: univariate and multivariate time series with a limited amount of data and history;
- Multiresolution: multivariate time series where each signal has a different granularity or resolution in terms of sampling frequency;
- Noisy: univariate/multivariate time series with some additional perturbation appearing in different forms. In this class, we also include time series with missing data;
- Heterogeneous: multivariate time series, usually collected by many physical systems, that exhibit different types of embedded statistical patterns and behaviours;
- Scarcely labeled and unlabeled: univariate/multivariate time series where only a small part of the data is labeled, or none of it is.
This workshop follows the successful ML4ITS2021 edition at ECML-PKDD 2021 and aims to offer the ideal context for dissemination and cross-pollination of novel ideas in designing machine learning models suited to irregular time series. Accordingly, topics of interest for the workshop include, but are not limited to:
- Generative models for Synthetic Data generation, including GANs, diffusion models and masked modeling in time series domain,
- Methods for Data Imputation and Denoising of time series data,
- Transfer Learning and Transformer architectures for Time Series forecasting and classification (e.g., using FNN, CNN, Recurrent NN, LSTM),
- Graph Neural Networks for Anomaly Detection and Failure Prediction in the time series domain,
- Quantification of uncertainties in time series analysis,
- Use of Deep Neural Networks (e.g., FNN, CNN, Recurrent NN, LSTMs) for Time Series modeling and forecasting,
- Unsupervised and Self-Supervised Learning for different Time Series related tasks,
- Few-Shot Learning and Time Series Classification in a low-data regime,
- Physics-informed Deep Neural Networks for Time Series Forecasting,
- (Deep) Reservoir Computing and Spiking Neural Networks for Time Series and Structured data analysis,
- Representation Learning for Time Series.
This workshop will concentrate on three specific areas: A) generative models for time series, including GANs, diffusion models, and masked modeling, B) self-supervised learning for time series, and C) global models.
Overall, generative models, self-supervised learning, and global models are all promising directions for further research in time series analysis, with the potential to significantly improve the accuracy and robustness of machine learning models for time series data. We encourage submissions that address these areas in the context of irregular time series.
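To make the masked-modeling focus area concrete, the toy sketch below shows the corruption step of that objective: a random fraction of time steps is hidden, and a model would be trained to reconstruct only the hidden positions (the helper `mask_series` and the sentinel value are illustrative choices, not a prescribed method):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_series(x, mask_ratio=0.25):
    """Randomly hide a fraction of time steps for a masked-modeling objective."""
    x = np.asarray(x, dtype=float)
    n_mask = int(round(mask_ratio * x.size))
    idx = rng.choice(x.size, size=n_mask, replace=False)
    mask = np.zeros(x.size, dtype=bool)
    mask[idx] = True
    corrupted = x.copy()
    corrupted[mask] = 0.0  # in practice, a learned mask token or sentinel
    return corrupted, mask

x = np.arange(8.0)
corrupted, mask = mask_series(x)
# a reconstruction loss would then be computed only on the masked positions:
# loss = ((model(corrupted) - x)[mask] ** 2).mean()
```

The self-supervised signal comes entirely from the data itself, which is what makes this family of objectives attractive in the scarcely labeled settings the workshop targets.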
- Massimiliano Ruocco (SINTEF Digital / Norwegian University of Science and Technology)
- Erlend Aune (Abelee / Norwegian University of Science and Technology / BI)
- Claudio Gallicchio (University of Pisa)
- Sara Malacarne (Telenor Research)
- Pierluigi Salvo Rossi (Norwegian University of Science and Technology)
- Alfredo Clemente (SINTEF Digital)
- Eliezer de Souza da Silva (Norwegian University of Science and Technology)
- Bjorn Magnus Mathisen (SINTEF Digital)
- Emil Stoltenberg (BI)
- Per Gunnar Auran (SINTEF Digital)
- Jo Eidsvik (Norwegian University of Science and Technology)
- Leif Anders Thorsrud (BI)
- Pablo Ortiz (Telenor Research)
- Vegard Larsen (BI)
- Juan-Pablo Ortega (Nanyang Technological University, Singapore)
- Azarakhsh Jalalvand (Princeton University, USA)
- Benjamin Paaßen (Bielefeld University, Germany)
- Petia Koprinkova-Hristova (Institute of Information and Communication Technologies, Bulgarian Academy of Sciences)
- Andrea Ceni (University of Pisa, Italy)
Papers must be written in English and formatted according to the Springer LNCS guidelines followed by the main conference. Regular and short papers presenting completed or in-progress work are invited. Regular papers are expected to provide original and innovative contributions and may be at most 14 pages, including references. Short papers, describing innovative ongoing research with relevant preliminary results, may be at most 6 pages. We also allow presentation-only contributions (no page restrictions, not included in the proceedings), which may cover work already published elsewhere or ongoing research that is relevant and may spark fruitful discussion at the workshop. Submissions should be made through the workshop's CMT submission page. After logging in, create a new submission in your author console and select the "ML4ITS2023" track.
Authors will be able to opt in or out of publication of their submitted papers in the joint post-workshop proceedings, published by Springer in the Communications in Computer and Information Science series, organised by focused scope and possibly indexed by WOS. Note that novelty is not essential for contributed papers that will not appear in the workshop proceedings: we invite papers that have already been presented or published elsewhere, with the aim of maximizing the dissemination and cross-pollination of ideas around the topics of the workshop.
At least one author of each accepted paper must have a full registration and present the paper in person. Papers without a full registration or an in-person presentation will not be included in the post-workshop Springer proceedings.
The following deadlines are in the AoE time zone (UTC−12).
- Paper submission deadline: June 23, 2023 (extended from June 12, 2023)
- Acceptance notification: July 21, 2023 (extended from July 12, 2023)
- Camera-ready deadline: August 7, 2023 (extended from July 19, 2023)
- Workshop date and location: September 22, 2023, Torino, Italy
Accepted Contributions (Posters)
- Learning Self-supervised User Representations Using Contextualized Mixture of Experts in Transformers - Surajit Chakrabarty (Amazon)*; Rajat Agarwal (Amazon); Agniva Som (Amazon)
- Representation of Irregularly Sampled Time Series with Generative Language Models for Classification and Transfer Learning : a Case Study in Activities of Daily Living Recognition - Damien Bouchabou (ENSTA-Paris)*; Sao Mai Nguyen (Ensta Paris); Christophe Lohr (IMT Atlantique); Benoit LeDuc (Delta Dore); Ioannis Kanellos (IMT Atlantique)
- Vector Quantized Time Series Generation with Enhanced Sampling Scheme - Daesoo Lee (Norwegian University of Science and Technology (NTNU))*; Erlend Aune (NTNU); Sara Malacarne (Telenor ASA)
- Persistence Initialization: A novel adaptation of the Transformer architecture for Time Series Forecasting - Espen Haugsdal (Norwegian University of Science and Technology)*; Erlend Aune (NTNU); Massimiliano Ruocco (Norwegian University of Science and Technology)
- Maximal Temperature forecasting under spatio-temporal interrelations using Machine Learning - Gerardo Rubino (INRIA)*; Gonzalo Marco (UDELAR); María Eugenia Miranda (UDELAR); Pablo Rodríguez-Bocca (UDELAR)
- Improving Time Series Forecasting with Mixup Data Augmentation - Yun Zhou (Amazon)*; Liwen You (Amazon); Wenzhen Zhu (Amazon); Panpan Xu (Amazon AWS)
- In the Fast Lane: Enhanced Cycling Performance Prediction with Contrastive Learning - Claus Martinsen (NTNU); Henrik Fauskanger (NTNU); Luca Oggiano (Nablaflow); Massimiliano Ruocco (Norwegian University of Science and Technology)*
- Efficient estimation of prediction intervals in demand forecasting using weighted asymmetric loss - Milo Grillo (Humboldt University of Berlin); Yunpeng Han (Mapal OS); Agnieszka Werpachowska (Mapal OS)*
- Residual Reservoir Computing - Andrea Ceni (University of Pisa); Claudio Gallicchio (University of Pisa)*
- Probabilistic Demand Forecasting with Graph Neural Networks - Nikita Kozodoi (Amazon Web Services)*; Elizaveta Zinovyeva (Amazon Web Services); Simon Valentin (University of Edinburgh); João Pereira (adidas); Rodrigo Agundez (adidas)
- Echoes of Conflict: Quantifying Redundancy in Casualty Time Series - Thomas Chadefaux (Trinity College Dublin); Thomas Schincariol (Trinity College Dublin)*