Title
Using digital processing of speech and image to support interaction between the deaf community and normal people /
Author
Ghoniem, Mohamed Mahmoud Mohamed El-Said.
Committee
Researcher / Mohamed Mahmoud Mohamed El-Said Ghoniem
Supervisor / عطا إبراهيم إمام الألفي
Supervisor / يسري الهلالي
Examiner / --
Supervisor / --
Subject
Signal processing. Digital communications. Disabled Persons.
Publication Date
2004.
Number of Pages
119 p.
Language
English
Degree
Master's
Specialization
Human-Computer Interaction
Degree Date
01/01/2014
Awarding Institution
Mansoura University - Faculty of Specific Education - Computer Teacher Preparation
Contents
Only 14 pages are available for public view


Abstract

In recent years, there has been a pressing need for the deaf community in the Arab world to communicate and integrate with the rest of society.
The deaf community has been accustomed to conducting most of its daily affairs in isolation, interacting only with people who understand sign language. This isolation deprives a sizable segment of society of proper socialization, education, and opportunities for career growth, and prevents the deaf community from deploying its talents and skills for the benefit of society at large. This thesis addresses the problem by proposing a user-independent Arabic sign language recognition technique that facilitates communication between the deaf community and the rest of society.
Sign language is the primary means of communication among the deaf and people with hearing difficulties. In many ways, the hearing-impaired community resembles an ethnic community within society, complete with its own culture and language (in this case, sign language). Unfortunately, very few hearing people have a good knowledge of sign language. Interpreters can help, but they are difficult to find in unforeseen emergencies where timely communication is critical (e.g., car accidents). Moreover, apart from being expensive, interpreters are inconvenient when privacy is required. Hence, communication between sign language users and hearing people poses many challenges. When it is necessary to communicate with ‘‘vocal’’ people (for example, when shopping), signers often have to resort to pantomimic gestures or written notes to convey their needs. However, many deaf people are uncomfortable using notes, especially if their writing skills are not strong. With the advances we are witnessing in technology, it is becoming crucial to develop robust Human-Machine Interface (HMI) systems that can help the hearing-impaired integrate with society. In particular, systems have been developed for translating signs into spoken words for a number of sign languages. However, there have been very limited attempts to automate the translation of Arabic Sign Language.
As a primary component of many sign languages, and of Arabic Sign Language (ArSL) in particular, hand gestures and finger-spelling play an important role in deaf education and communication. Sign language can therefore be viewed as a collection of gestures, movements, postures, and facial expressions corresponding to letters and words in natural languages. A sign language is a small subset of the possible forms of gesture communication. Sign languages are highly structured, and most of them are symbolic in nature, i.e. the meaning is not transparent from observing the corresponding gesture. Sign language communication involves manual and non-manual channels. In the manual channel, the hands express lexical meanings, while in the non-manual channel, signers use facial expressions as well as head and upper-body movements to express syntactic and semantic information. This thesis presents a proposed system to support communication between deaf people and hearing people. The system is based on digital processing of both video and speech. It has two main stages: the first translates sign language (gestures) into spoken Arabic, and the second translates spoken Arabic words into gestures.
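The two-stage structure described above can be illustrated with a minimal sketch. This is not the thesis implementation: the actual video gesture recognizer and Arabic speech recognizer are replaced here by hypothetical string labels, and the `GESTURE_TO_WORD` lookup table is an assumption introduced purely to show how the two translation directions mirror each other.

```python
# Minimal sketch of the two-stage translation pipeline (illustrative only).
# Real inputs would be video frames (stage 1) and audio (stage 2); here the
# recognizers are assumed to already emit symbolic labels.

# Hypothetical lookup: recognized gesture label -> Arabic word.
GESTURE_TO_WORD = {
    "gesture_hello": "مرحبا",
    "gesture_thanks": "شكرا",
}

# The reverse direction reuses the same table, inverted.
WORD_TO_GESTURE = {word: gesture for gesture, word in GESTURE_TO_WORD.items()}


def sign_to_speech(gesture_label: str) -> str:
    """Stage 1: map a recognized gesture label to a spoken Arabic word.

    In the full system this word would be handed to a text-to-speech engine.
    """
    return GESTURE_TO_WORD.get(gesture_label, "<unknown gesture>")


def speech_to_sign(arabic_word: str) -> str:
    """Stage 2: map a recognized spoken Arabic word to a gesture label.

    In the full system the label would select a gesture image or animation
    to display to the deaf user.
    """
    return WORD_TO_GESTURE.get(arabic_word, "<unknown word>")
```

The design choice worth noting is that both stages share a single vocabulary table, so extending the system to new signs only requires adding one entry per sign.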