Please use this address to cite this document: http://dspace.univ-bouira.dz:8080/jspui/handle/123456789/12569
Title: Biometric Identification using Deep Ear Features
Author(s): Yacine Khaldi
Keywords: Biometrics, Ear recognition, Image-to-Image translation, Active learning, Region of Interest
Publication date: 9-May-2022
Publisher: Université AKLI MOHAND OULHADJ-Bouira
Abstract: The human ear is an important biometric modality for identifying individuals. It offers a unique advantage over other biometric modalities, such as the face or the eye: in some circumstances, only the ear can be employed. Unlike the fingerprint or the eye, the ear can be enrolled using a conventional camera; however, this brings a serious drawback, since ear detection algorithms must be applied before ear identification. In recent years, ear biometrics has gained considerable attention and has been addressed in various studies. Many steps of the ear biometric pipeline have been explored and solved, from ear detection, preprocessing, and feature extraction to verification and identification. Machine learning techniques have proved effective at solving different computer vision tasks such as image classification, object detection, and image segmentation. Recently, deep learning has become a leading artificial intelligence technique, receiving much attention for its strength in solving problems, especially computer vision tasks. State-of-the-art research on ear detection, identification, and verification has used deep learning and shown that it yields better performance than classic machine learning techniques. We therefore employed deep learning to tackle all the problems we identified during our research. We present a solid experimental study introducing new approaches to improve the ear identification process. The first issue we addressed was the loss of color information from test images, which can have a detrimental impact on a model's performance; we propose a novel system based on image-to-image translation that can restore the missing data. The second issue we worked on was removing non-ear pixels from photographs and creating a synthetic region of interest of the ear.
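The color-restoration idea can be illustrated with a deliberately minimal sketch. The thesis uses a deep image-to-image translation network; the toy stand-in below merely fits a linear map from grayscale luminance back to RGB by least squares on hypothetical paired data, to show the input/output shape of the task. All names and data here are illustrative assumptions, not the author's implementation.

```python
import numpy as np

# Toy image-to-image translation for color restoration: learn a
# per-pixel linear map gray -> RGB by least squares on paired samples.
# (The thesis uses a deep translation network; this is only a sketch.)
rng = np.random.default_rng(0)

# Hypothetical paired data: RGB "pixels" and their grayscale versions.
color = rng.random((100, 3))                     # RGB targets
gray = color @ np.array([0.299, 0.587, 0.114])   # luminance inputs

# Fit gray -> RGB with a bias term (design matrix [gray, 1]).
X = np.stack([gray, np.ones_like(gray)], axis=1)
W, *_ = np.linalg.lstsq(X, color, rcond=None)

def restore_color(g):
    """Map grayscale intensities to estimated RGB values."""
    g = np.asarray(g, dtype=float)
    return np.stack([g, np.ones_like(g)], axis=1) @ W

restored = restore_color(gray)
print(restored.shape)  # (100, 3)
```

A linear map cannot truly invert the many-to-one luminance projection; that is exactly why the thesis turns to a learned translation network for this step.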
Last, we propose a new ear identification method that uses active unsupervised learning: the classification model can learn new information during testing without manual direction, correction, or decision-making, so information encountered during the testing phase improves the model's performance. The results obtained show that our proposed approaches outperform many existing related works.
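The test-time learning idea can be sketched as a nearest-centroid classifier that absorbs confidently classified test samples into its class centroids, with no manual labeling. This is an assumed minimal illustration (the class name, threshold, and 2-D feature vectors standing in for deep ear embeddings are all hypothetical), not the thesis's actual method.

```python
import numpy as np

class ActiveCentroidClassifier:
    """Nearest-centroid classifier that updates itself at test time."""

    def __init__(self, centroids, threshold=0.5):
        self.centroids = np.asarray(centroids, dtype=float)
        self.counts = np.ones(len(self.centroids))
        self.threshold = threshold  # max distance to trust a prediction

    def predict(self, x, update=True):
        x = np.asarray(x, dtype=float)
        dists = np.linalg.norm(self.centroids - x, axis=1)
        label = int(np.argmin(dists))
        # Active step: fold confident test samples into the winning
        # centroid (running mean), with no human in the loop.
        if update and dists[label] < self.threshold:
            self.counts[label] += 1
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]
        return label

# Two enrolled identities, then unsupervised refinement during testing.
clf = ActiveCentroidClassifier([[0.0, 0.0], [4.0, 4.0]])
print(clf.predict([0.2, 0.1]))  # near class 0 -> 0
print(clf.predict([3.9, 4.2]))  # near class 1 -> 1
```

The distance threshold is the safeguard: only predictions the model is confident about feed back into the centroids, which is what lets it adapt without manual correction.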
URI/URL: http://dspace.univ-bouira.dz:8080/jspui/handle/123456789/12569
Collection(s): Master's theses (Mémoires Master)

File(s) in this document:
File | Description | Size | Format
Thesis Yacine Khaldi.pdf | | 14,73 MB | Adobe PDF


All documents in DSpace are protected by copyright, with all rights reserved.