Abstract Arabic Sign Language (ArSL) is the most widely used sign language in the Arab deaf community. An ArSL recognition system could therefore enable communication between deaf people and others. Deep learning, a contemporary machine-learning approach, is used here to recognize Arabic sign language in two learnable steps. After testing hybrid models under well-tuned training settings, we demonstrate that transfer learning combined with Recurrent Neural Networks can categorize sign language with promising results, and that these results improve further when augmented video datasets are used. Because no public ArSL video dataset was available, we created our own dataset of 20 Arabic sign language words, each performed several times on video by different signers, and then expanded it with a suitable combination of video-augmentation methods. Such methods have notable limitations, particularly in our case, where the videos had to follow precise guidelines. For ArSL recognition, this thesis applies recent deep-learning algorithms, namely transfer learning and Recurrent Neural Networks, to extract spatial and temporal features. Performance and accuracy were compared on the two datasets, with and without data augmentation; the results show that the hybrid models (transfer learning plus Recurrent Neural Networks) improve overall performance and accuracy. The methods exceeded 90% accuracy in several configurations, reaching 93.4% at best. We also tested our architecture on an Argentinian sign language dataset, where it achieved excellent recognition accuracy. Consequently, our architecture can capture semantic information in other video-based sign language datasets and, more generally, in video-based datasets, by extracting spatial and temporal features.
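The hybrid architecture described above (a pretrained CNN for per-frame spatial features, followed by a recurrent network for temporal modeling) can be sketched as below. This is a minimal illustration, not the thesis's actual implementation: the class name, layer sizes, and the small convolutional stack (which stands in for a real pretrained transfer-learning backbone such as a frozen ImageNet model) are all assumptions.

```python
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Hypothetical CNN+LSTM hybrid for video-based sign recognition:
    a per-frame spatial encoder (stand-in for a pretrained backbone)
    followed by an LSTM over the frame sequence."""
    def __init__(self, num_classes=20, feat_dim=64, hidden=128):
        super().__init__()
        # Small conv stack standing in for a frozen pretrained CNN.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal model over the sequence of per-frame features.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):  # video: (batch, frames, C, H, W)
        b, t, c, h, w = video.shape
        # Run the CNN on every frame, then restore the time axis.
        feats = self.cnn(video.reshape(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)  # last hidden state summarizes the clip
        return self.head(h_n[-1])      # (batch, num_classes)

# One dummy batch: 2 clips, 8 frames of 64x64 RGB
logits = SignClassifier()(torch.randn(2, 8, 3, 64, 64))
print(tuple(logits.shape))  # (2, 20) -- one score per sign-language word
```

In a real transfer-learning setup the convolutional stack would be replaced by a pretrained feature extractor with its weights frozen, so only the recurrent and classification layers are trained on the (augmented) sign-language videos.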