Publications

List of publications

Computer Vision Self-supervised Learning Methods on Time Series (2022)

arXiv

Abstract: Self-supervised learning (SSL) has had great success in both computer vision and natural language processing. These approaches often rely on cleverly crafted loss functions and training setups to avoid feature collapse. In this study, we evaluate the effectiveness of mainstream SSL frameworks from computer vision, along with several SSL frameworks designed for time series, on the UCR, UEA, and PTB-XL datasets, and show that computer vision SSL frameworks can be effective for time series.
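As a rough illustration of the evaluation setup, the sketch below shows the standard linear-evaluation protocol commonly used in such studies: the SSL encoder is frozen and a linear classifier is fit on its representations. The `encoder` callable and the data shapes are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def linear_evaluation(encoder, X_train, y_train, X_test, y_test):
    """Linear evaluation: freeze the SSL encoder, fit a linear classifier
    on its representations, and report test accuracy.

    `encoder` is assumed to be any callable mapping raw series of shape
    (N, T) to feature vectors of shape (N, D); it is never updated here.
    """
    Z_train = encoder(X_train)
    Z_test = encoder(X_test)
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    return accuracy_score(y_test, clf.predict(Z_test))
```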
Ensemble and Self-supervised Learning for Improved Classification of Seismic Signals from the Åknes Rockslope (2022)

Mathematical Geosciences

Abstract: A case study with seismic geophone data from the unstable Åknes rock slope in Norway is considered. This rock slope is monitored because there is a risk of severe flooding if the massive rock volume falls into the fjord. The geophone data are highly valuable because they are recorded at a 1000 Hz sampling rate and streamed to a web resource for real-time analysis. The focus here is on building a classifier for these data to distinguish different types of microseismic events, which are in turn indicative of the various processes occurring on the slope.
VNIbCReg: VICReg with Neighboring-Invariance and Better-Covariance Evaluated on Non-stationary Seismic Signal Time Series (2022)

arXiv

Abstract: This paper presents a novel sampling scheme for masked non-autoregressive generative modeling. We identify the limitations of TimeVQVAE, MaskGIT, and Token-Critic in their sampling processes, and propose Enhanced Sampling Scheme (ESS) to overcome these limitations. ESS explicitly ensures both sample diversity and fidelity, and consists of three stages: Naive Iterative Decoding, Critical Reverse Sampling, and Critical Resampling. ESS starts by sampling a token set using the naive iterative decoding as proposed in MaskGIT, ensuring sample diversity.
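For context, the sketch below shows a MaskGIT-style confidence-based iterative decoding loop, i.e., the kind of naive iterative decoding that ESS starts from. The `model` interface, the mask token, and the cosine masking schedule are assumptions for illustration; the Critical Reverse Sampling and Critical Resampling stages are not shown.

```python
import math
import torch

@torch.no_grad()
def iterative_decode(model, seq_len, mask_id, steps=10, device="cpu"):
    """MaskGIT-style naive iterative decoding (sketch of ESS stage 1).

    `model(tokens)` is assumed to return logits of shape
    (seq_len, codebook_size). Decoding starts from a fully masked
    sequence; at each step, all masked positions are sampled, then the
    lowest-confidence tokens are re-masked per a cosine schedule.
    """
    tokens = torch.full((seq_len,), mask_id, dtype=torch.long, device=device)
    for t in range(steps):
        logits = model(tokens)                       # (seq_len, codebook_size)
        probs = logits.softmax(dim=-1)
        sampled = torch.multinomial(probs, 1).squeeze(-1)
        conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)

        masked = tokens == mask_id
        sampled = torch.where(masked, sampled, tokens)
        # Already-decoded tokens are kept by giving them infinite confidence.
        conf = torch.where(masked, conf, torch.full_like(conf, float("inf")))

        # Cosine schedule: fraction of tokens still masked after this step.
        n_mask = int(seq_len * math.cos(math.pi / 2 * (t + 1) / steps))
        if n_mask == 0:
            tokens = sampled
            break
        # Re-mask the n_mask least-confident positions.
        remask = conf.topk(n_mask, largest=False).indices
        tokens = sampled
        tokens[remask] = mask_id
    return tokens
```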
VIbCReg: Variance-invariance-better-covariance regularization for self-supervised learning on time series (2021)

arXiv

Abstract: Self-supervised learning for image representations has recently had many breakthroughs with respect to linear evaluation and fine-tuning evaluation. These approaches rely on both cleverly crafted loss functions and training setups to avoid the feature collapse problem. In this paper, we improve on the recently proposed VICReg method, which introduced a loss function that does not rely on specialized training loops to converge to useful representations. Our method improves on the covariance term proposed in VICReg, and in addition we augment the head of the architecture with an IterNorm layer that greatly accelerates convergence of the model.
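A minimal PyTorch sketch of a VICReg-style loss with invariance, variance, and covariance terms is given below. The loss weights are illustrative, and the normalization of the covariance term here only gestures at VIbCReg's better-covariance idea rather than reproducing its exact form.

```python
import torch
import torch.nn.functional as F

def vicreg_style_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0):
    """VICReg-style loss: invariance + variance + covariance terms.

    z_a, z_b: (N, D) embeddings of two augmented views of a batch.
    The weights are illustrative, not the papers' exact settings.
    """
    N, D = z_a.shape

    # Invariance: pull the two views of each sample together.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance: keep each dimension's std above 1 (hinge), preventing collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + 1e-4)
    std_b = torch.sqrt(z_b.var(dim=0) + 1e-4)
    var_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance: decorrelate dimensions by penalizing off-diagonal covariance.
    # Dividing by D is a simple stand-in for VIbCReg's normalized covariance term.
    def cov_term(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (N - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return off_diag.pow(2).sum() / D

    cov_loss = cov_term(z_a) + cov_term(z_b)

    return sim_w * sim_loss + var_w * var_loss + cov_w * cov_loss
```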