Audio signals and artificial neural networks for classification of plastic resins for recycling

Received: 2 June 2022, Revised: 17 August 2022, Accepted: 21 November 2022, Available online: 21 December 2022, Version of Record: 21 December 2022

Letícia Tessarini, Ana Maria Frattini Fileti
Faculdade de Engenharia Química, Universidade Estadual de Campinas – UNICAMP, Campinas SP, Brazil

Abstract


Given the growing consumption of packaging and plastic products, which generates large amounts of municipal waste, efforts are needed to recycle these materials correctly. For the process to be accessible and environmentally friendly, it is crucial to separate the different types of resins before they enter the recycling chain. This classification is usually carried out manually by cooperative employees. However, the separated plastic pieces are often difficult to identify because the resin symbol is missing, and the plastic ends up with an inappropriate destination. Common techniques, such as density and burning tests, as well as more refined ones, such as Fourier transform infrared spectrometry (FTIR), magnetic resonance imaging (MRI), and image and color identification, require specific equipment: the standard methods may lack flexibility, and the sophisticated ones may be expensive. The present study proposes a methodology to separate the types of plastic through the sound they make when crushed, since the audio signals and sound waves differ from one resin to another. The audio signals from crumpled plastic samples of each category were recorded with a smartphone to create a database, and the characteristics of the audio data were extracted using the Mel-frequency cepstral coefficients (MFCC) technique. Two types of neural networks were used to classify the coefficients: a convolutional neural network (CNN) and a long short-term memory (LSTM) recurrent network. The performance of both networks was analyzed through accuracy, loss, the confusion matrix, and the insertion of new data not seen during training to check whether each sample was assigned to the correct class. The LSTM network performed better than the CNN, reaching an accuracy of 85%, owing to its ability to work with sequential data. However, when new samples crumpled more slowly than the training data were inserted, simulating an unfavorable condition that is very likely to occur in the day-to-day operation of cooperatives, the performance of the LSTM network dropped significantly, while the CNN still reached an acceptable percentage, proving to be better suited to these situations.
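Although the article does not publish its implementation, the pipeline it describes (smartphone recordings, MFCC feature extraction, CNN and LSTM classifiers) can be sketched in a few lines of Python. The snippet below is only an illustration of that workflow, not the authors' code: the libraries (librosa, TensorFlow/Keras), the number of coefficients and frames, the layer sizes, and the five-class resin label set are all assumptions.

import numpy as np
import librosa                                # audio loading and MFCC extraction
from tensorflow.keras import layers, models  # CNN and LSTM building blocks

N_MFCC = 13      # number of Mel-frequency cepstral coefficients per frame (assumed)
N_FRAMES = 130   # fixed number of time frames per recording (assumed)
N_CLASSES = 5    # hypothetical resin classes, e.g. PET, HDPE, PP, PS, LDPE

def extract_mfcc(path, n_mfcc=N_MFCC, n_frames=N_FRAMES):
    """Load one crumpling-sound recording and return an (n_frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(path, sr=None)                        # keep the original sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # time-major: (frames, n_mfcc)
    # Pad or truncate so every recording yields a matrix of the same shape for batching.
    if mfcc.shape[0] < n_frames:
        mfcc = np.pad(mfcc, ((0, n_frames - mfcc.shape[0]), (0, 0)))
    return mfcc[:n_frames]

def build_cnn(input_shape=(N_FRAMES, N_MFCC, 1), n_classes=N_CLASSES):
    """Small 2-D CNN that treats the MFCC matrix as a single-channel image."""
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def build_lstm(input_shape=(N_FRAMES, N_MFCC), n_classes=N_CLASSES):
    """LSTM that reads the MFCC frames as a sequence, which suits time-ordered audio data."""
    return models.Sequential([
        layers.LSTM(64, return_sequences=True, input_shape=input_shape),
        layers.LSTM(32),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Both models are trained the same way on labelled MFCC matrices, for example:
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)

Robustness to samples crumpled at a different speed, the condition that degraded the LSTM in the study, would then be checked by passing such recordings through extract_mfcc and comparing the predictions of the two models.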

Keywords
Plastic
Classification
Recycling
Artificial neural networks
Signal processing
Deep learning




DOI: 10.31763/DSJ.v5i1.1674


Conflict of interest


The authors state no conflict of interest.


Funding Information


This research received no external funding or grants.


Peer review:


Peer review under responsibility of Defence Science Journal


Ethics approval:


Not applicable.


Consent for publication:


Not applicable.


Acknowledgements:


None.