Using machine learning methods to improve the efficiency of the cosmetological services administration process

O.V. Zakharova, L.O. Spektorovska

Abstract


The article addresses the problem of maintaining the quality of cosmetological service provision during the rapid scaling of an applied system. Such scaling is accompanied by the creation of many new roles and service types and by a significant expansion of the client base and communication network; accordingly, the volume of information that requires processing also grows significantly. The aim of the study is to develop approaches that increase the efficiency of administering cosmetological services through the automated processing of incoming messages and their multi-criteria categorization. The criteria identified for categorization are message type, priority, assigned specialist, and type of service. The paper also reviews existing approaches in light of the formulated applied task, which leads to the conclusion that, to achieve the stated objective, it is advisable to combine text data preprocessing, feature extraction methods, and classical machine learning models.
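The pipeline the abstract describes (simple preprocessing, bag-of-words feature extraction, and a classical classifier) can be sketched minimally. The sketch below uses a multinomial Naive Bayes classifier, one of the classical models covered in the references; the categories and training messages are invented illustrations, not the authors' data.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Minimal preprocessing: lowercase and split on whitespace.
    return text.lower().split()

class MultinomialNB:
    """Bag-of-words multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, messages, labels):
        self.word_counts = defaultdict(Counter)   # per-class word counts
        self.class_counts = Counter(labels)       # per-class document counts
        vocab = set()
        for text, label in zip(messages, labels):
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            vocab.update(tokens)
        self.vocab_size = len(vocab)
        self.total_docs = len(messages)
        return self

    def predict(self, text):
        tokens = tokenize(text)
        best_label, best_score = None, -math.inf
        for label, doc_count in self.class_counts.items():
            # log prior + smoothed log likelihood of each token
            score = math.log(doc_count / self.total_docs)
            total_words = sum(self.word_counts[label].values())
            for token in tokens:
                count = self.word_counts[label][token]
                score += math.log((count + 1) / (total_words + self.vocab_size))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical training messages for one criterion (message type).
train = [
    ("book an appointment for manicure tomorrow", "booking"),
    ("schedule appointment haircut next week", "booking"),
    ("cancel my appointment please", "cancellation"),
    ("please cancel booking for facial", "cancellation"),
    ("complaint the service was bad", "complaint"),
    ("i want to file a complaint about staff", "complaint"),
]
model = MultinomialNB().fit([t for t, _ in train], [c for _, c in train])
print(model.predict("can you cancel my manicure appointment"))  # cancellation
```

Multi-criteria categorization as formulated in the article would run one such classifier per criterion (message type, priority, assigned specialist, service type), or a single multi-output model over shared features.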

Problems in Programming 2026; 1: 82–92


Keywords


machine learning; text classification; rule-based systems; hybrid systems; transformers; message categorization; routing of requests; prediction; training data

References


Dubrovik A. V., Volynec J. A. Automatic text classification. Proceedings of NaUKMA. Computer Science, vol. 8, 2025, pp. 102–107.

Moez Ali. Understanding Text Classification in Python. 2022.

SpamAssassin configuration.

Hutto, C. J., & Gilbert, E. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Proceedings of ICWSM. 2014.

Esuli, A., & Sebastiani, F. SentiWordNet: A Publicly Available Lexical Resource for Opinion Mining. Proceedings of LREC 2006.

Rasa Open Source Documentation. 2025.

Google Cloud Guide.

Shuo Xu, Yan Li, Zheng Wang. Bayesian Multinomial Naïve Bayes Classifier to Text Classification. Institute of Scientific and Technical Information of China. № 15. 2015.

Cortes, C., & Vapnik, V. Support-vector networks. Machine Learning, 1995, pp. 273–297.

Joachims, T. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. Proceedings of ECML 1998, pp. 137–142.

J. Kivinen, M. Warmuth, and P. Auer. The perceptron algorithm vs. winnow: Linear vs. logarithmic mistake bounds when few input variables are relevant. In Conference on Computational Learning Theory, 1995.

Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., & Lin, C.-J. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 2008.

Kim, Y. Convolutional Neural Networks for Sentence Classification. Proceedings of EMNLP 2014.

Hochreiter, S., & Schmidhuber, J. Long Short-Term Memory. Neural Computation, 1997.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 2019.

Liu, Y., et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. 2019.

Conneau, A., et al. Unsupervised Cross-lingual Representation Learning at Scale. 2020.

Salton, G., & Buckley, C. Term-weighting approaches in automatic text retrieval. 1988.

