Journal article
Auto-Attention Mechanism for Multi-View Deep Embedding Clustering
Publication Details
Author list: Bassoma Diallo, Jie Hu, Tianrui Li, Ghufran Ahmad Khan, Xinyan Liang, Hongjun Wang
Publication year: 2023
Volume number: 143
ISSN: 0031-3203
eISSN: 1873-5142
URL: https://doi.org/10.1016/j.patcog.2023.109764
Languages: English
Deep learning has achieved tremendous success in many fields, and multi-view learning is a practical approach for handling data collected from several sources. Together, they are strong candidates for clustering multi-view data. However, a persistent challenge is that current deep learning approaches must independently train separate neural networks for the different views. This brings two drawbacks. The first is that network parameters are computed separately for each view, so the computational cost grows considerably as the number of views rises. The second is that these methods fail to unite the various views effectively during training. This paper proposes a novel multi-view deep embedding clustering (MDEC) model that incorporates a triple fusion technique. The proposed model jointly learns the view-specific information of each individual view and the common information shared across the collection of views. The main goal of MDEC is to reduce the errors made both when learning the features of each view and when correlating information across the many views. To address the resulting optimization problem, the MDEC model adopts a suitable iterative updating scheme. In experiments against modern deep learning and non-deep-learning algorithms on small- and large-scale multi-view data, the MDEC model shows encouraging results. For multi-view clustering, this work demonstrates the advantage of deep learning-based approaches over non-deep ones. Future work will address a variety of open issues related to MDEC. © 2023, The Authors. All rights reserved.
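The abstract describes the general pipeline only at a high level: encode each view with its own network, fuse the per-view embeddings into a shared representation, then cluster in that space. The sketch below illustrates that pipeline in miniature, assuming concrete stand-ins not specified in the abstract: SVD projections in place of the trained per-view encoders, simple averaging in place of the paper's triple fusion, and plain k-means as the clustering step. All names and choices here are illustrative, not the authors' MDEC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, d = 60, 3, 2  # samples, clusters, embedding dimension
centers = rng.normal(0, 5, (k, d))
labels_true = np.repeat(np.arange(k), n // k)

# Two synthetic "views": different linear distortions of the same latent points.
latent = centers[labels_true] + rng.normal(0, 0.3, (n, d))
views = [latent @ rng.normal(0, 1, (d, 6)) for _ in range(2)]

def encode(x, dim):
    """Per-view encoder stand-in: project onto the top singular directions
    (a placeholder for the trained per-view networks in the abstract)."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:dim].T

embeddings = [encode(v, d) for v in views]

# Fusion: average the per-view embeddings into one shared representation
# (one simple choice; the paper's triple fusion is more elaborate).
z = np.mean(embeddings, axis=0)

# Lloyd's k-means on the fused embedding.
c = z[rng.choice(n, k, replace=False)]
for _ in range(20):
    assign = np.argmin(((z[:, None] - c[None]) ** 2).sum(-1), axis=1)
    # Keep the old center if a cluster becomes empty.
    c = np.array([z[assign == j].mean(axis=0) if np.any(assign == j) else c[j]
                  for j in range(k)])

print(assign.shape)  # (60,)
```

The fused embedding `z` is where the views interact: clustering it, rather than any single view's embedding, is what makes the procedure multi-view.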