Superior Parallel Big Data Clustering through Competitive Stochastic Sample Size Optimization in Big-means
March 28, 2024, 4:42 a.m. | Rustam Mussabayev, Ravil Mussabayev
cs.LG updates on arXiv.org
Abstract: This paper introduces a novel K-means clustering algorithm, an advancement on the conventional Big-means methodology. The proposed method efficiently integrates parallel processing, stochastic sampling, and competitive optimization to create a scalable variant designed for big data applications. It addresses scalability and computation time challenges typically faced with traditional techniques. The algorithm adjusts sample sizes dynamically for each worker during execution, optimizing performance. Data from these sample sizes are continually analyzed, facilitating the identification of the …
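The abstract describes workers that cluster random samples in parallel, compete on clustering quality, and adapt their sample sizes based on the winner. A minimal sketch of that idea is below; it is an illustrative reconstruction under assumed details (the inertia-based competition, the averaging rule for sample sizes, and all function names are this sketch's assumptions, not the paper's exact method), and workers are simulated sequentially rather than in true parallel.

```python
import numpy as np

def kmeans_step(X, centers):
    """One Lloyd iteration: assign points to nearest center, recompute centers.
    Returns the updated centers and the inertia of the *given* centers."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    new = np.array([X[labels == k].mean(0) if (labels == k).any() else centers[k]
                    for k in range(len(centers))])
    return new, d.min(1).sum()

def big_means_competitive(X, k, n_workers=4, rounds=10, iters=5, rng=None):
    """Hedged sketch of competitive stochastic sample-size optimization:
    each worker clusters its own random sample, the best worker's centers
    (judged by inertia on the full data) are kept, and losing workers'
    sample sizes drift toward the winning size. The adaptation rule is an
    assumption for illustration."""
    rng = np.random.default_rng(rng)
    n = len(X)
    sizes = rng.integers(n // 10, n // 2, size=n_workers)  # initial sample sizes
    best_centers, best_inertia = None, np.inf
    for _ in range(rounds):
        results = []
        for w in range(n_workers):
            sample = X[rng.choice(n, size=int(sizes[w]), replace=False)]
            centers = (best_centers if best_centers is not None
                       else sample[rng.choice(len(sample), k, replace=False)])
            for _ in range(iters):
                centers, _ = kmeans_step(sample, centers)
            # competition: evaluate every worker's centers on the full data
            _, full_inertia = kmeans_step(X, centers)
            results.append((full_inertia, centers, sizes[w]))
        results.sort(key=lambda r: r[0])
        if results[0][0] < best_inertia:
            best_inertia, best_centers = results[0][0], results[0][1]
        # losers move their sample size halfway toward the winner's size
        sizes = np.clip((sizes + results[0][2]) // 2, k, n)
    return best_centers, best_inertia
```

In a real big-data setting the inner loop over workers would run as actual parallel processes, each holding only its sample; the sketch keeps it sequential so the competition and size-adaptation logic stay visible.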