Machine Learning (ML) algorithms have become the enabling technology behind many classification, optimization and regression tasks in ICT systems. In particular, they may be integrated into embedded systems for load balancing, image processing, error detection, intrusion detection or failure prediction, to name a few. These ML algorithms require data and time to learn their model, which must also be stored once deployed in a real system. However, the devices and sensors to be equipped with ML algorithms usually have tight constraints on the space and time that can be dedicated to such detection, prediction or optimization tasks, while still being expected to deliver highly accurate results. Therefore, this presentation shows how different supervised and unsupervised ML algorithms trade off accuracy, time and space complexity when performing different tasks. The discussion is carried out through plots, graphs and practical examples, illustrating the methodology used to derive the optimal ML algorithm to install on a device or sensor for classification, optimization or regression tasks.
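As a rough illustration of the kind of comparison the abstract describes, the sketch below benchmarks a few candidate classifiers by accuracy, training time, inference time and serialized model size. It is a minimal assumption-laden example (the digits dataset, the chosen models and the pickle-based size estimate are illustrative, not taken from the presentation itself):

```python
# Minimal sketch of comparing candidate classifiers on accuracy, time and space.
# Dataset, models and the size metric are illustrative assumptions, not the
# speaker's actual methodology.
import pickle
import time

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

candidates = {
    "decision_tree": DecisionTreeClassifier(max_depth=10),
    "random_forest": RandomForestClassifier(n_estimators=50),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "logreg": LogisticRegression(max_iter=1000),
}

for name, model in candidates.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)                # training time
    train_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    acc = model.score(X_test, y_test)          # accuracy on held-out data
    infer_s = time.perf_counter() - t0         # inference time over the test set

    size_kb = len(pickle.dumps(model)) / 1024  # rough proxy for model footprint

    print(f"{name:14s} acc={acc:.3f} train={train_s:.3f}s "
          f"infer={infer_s:.3f}s size={size_kb:.1f}KB")
```

The resulting table of metrics can then be weighed against the memory and latency budget of the target device or sensor to pick the most suitable model.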