{"id":2006,"date":"2021-09-28T07:31:41","date_gmt":"2021-09-28T07:31:41","guid":{"rendered":"https:\/\/achrafothman.net\/site\/?p=2006"},"modified":"2021-10-24T03:40:57","modified_gmt":"2021-10-24T03:40:57","slug":"qatar-sign-language-avatar-arabic","status":"publish","type":"post","link":"https:\/\/achrafothman.net\/site\/qatar-sign-language-avatar-arabic\/","title":{"rendered":"Meet the First Qatari Sign Language Avatar: A 3D Realistic Virtual Conversational Agent"},"content":{"rendered":"<p><span style=\"font-size: 10pt;\">Authors: Achraf Othman<\/span><\/p>\n<hr class=\"hrline\" \/>\n<p><span style=\"font-size: 10pt;\">Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 2 \u2022 September 2021 \u2022 Published: September 28, 2021 \u2022 <a href=\"https:\/\/achrafothman.net\/site\/wp-content\/uploads\/Qatar-Sign-Langauge-Avatar.pdf\" target=\"_blank\" rel=\"noopener\">PDF<\/a><\/span><\/p>\n<hr class=\"hrline\" \/>\n<div class=\"abstract\"><strong>Abstract- <\/strong><\/div>\n<p>When it comes to inventions and technological assistance for the hearing impaired, science has come a long way, there is absolutely no doubt about that. However, we live in a day and age where humans, by default, are genetically engineered to want more and to crave more, which is completely acceptable. With standards of living going up with each passing sense, it makes sense that individuals with any kind of impairments demand more than just hearing aids. On September 28, 2021, Mada Center launched the first of its kind in the world, a 3D realistic virtual conversational agent for Qatari Sign Language with the aim to enhance ICT Accessibility in the Arab region and beyond. 
This article will present a brief overview of the avatar technology for sign language and an introduction to \u201cBu Hamad\u201d, the Qatari Sign Language Interpreter.<\/p>\n<p><span style=\"font-size: 10pt;\"><strong>Keywords<\/strong>: Computational Sign Language Processing, Realistic Avatar, Qatari Sign Language<\/span><\/p>\n<p><strong>Introduction<\/strong><\/p>\n<p>Thankfully, with the immense leap forward of Artificial Intelligence today, more research is being undertaken to come up with better solutions. One such invention is the Avatar technology, or 3D interpretation technology. On September 28, 2021, Mada Center launched the first of its kind in the world, a 3D realistic virtual conversational agent for Qatari Sign Language, with the aim of enhancing ICT Accessibility in the Arab region and beyond. This article will present a brief overview of the avatar technology for sign language and an introduction to \u201cBu Hamad\u201d, the Qatari Sign Language Interpreter. The innovation proposed by Mada Center is considered a cutting-edge technology because it is based on the latest advances in Artificial Intelligence and Big Data. The data used to translate written Arabic text into Qatari Sign Language was captured from hundreds of wearable sensors. The reason behind all of this is to ensure that the avatar \u201cBu Hamad\u201d (Fig. 1) takes into consideration all components of Sign Language. In the work of (<a href=\"https:\/\/www.igi-global.com\/article\/designing-high-accuracy-statistical-machine-translation-for-sign-language-using-parallel-corpus\/224983\"><span style=\"color: #0000ff;\">Othman et al., 2019<\/span><\/a>), a list of the components of Sign Language is defined. 
This project was supported by the Mada Innovation Program (MIP) for the period 2019-2021 (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9144818\"><span style=\"color: #0000ff;\">Al Thani et al., 2019<\/span><\/a>).<\/p>\n<p><strong>What is the Avatar technology?<\/strong><\/p>\n<p>As the name suggests, the Avatar technology (<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-642-23974-8_13\"><span style=\"color: #0000ff;\">Kipp et al., 2011<\/span><\/a>) is basically the 3D translation of closed captions using a virtual conversational agent. The idea is that a certain interface, powered by artificial intelligence or otherwise, understands the written text and automatically renders it into sign language for the deaf individual to understand (<a href=\"https:\/\/www.igi-global.com\/article\/designing-high-accuracy-statistical-machine-translation-for-sign-language-using-parallel-corpus\/224983\"><span style=\"color: #0000ff;\">Othman et al., 2019<\/span><\/a>). This is called machine translation for sign language (<a href=\"https:\/\/repository.upenn.edu\/cis_reports\/346\/\" target=\"_blank\" rel=\"noopener\"><span style=\"color: #0000ff;\">Abeill\u00e9 et al., 1991<\/span><\/a>).<\/p>\n<div style=\"width: 770px;\" class=\"wp-video\"><!--[if lt IE 9]><script>document.createElement('video');<\/script><![endif]-->\n<video class=\"wp-video-shortcode\" id=\"video-2006-1\" width=\"770\" height=\"479\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/achrafothman.net\/site\/wp-content\/uploads\/20210928163713_FULLHD.mp4?_=1\" \/><a href=\"https:\/\/achrafothman.net\/site\/wp-content\/uploads\/20210928163713_FULLHD.mp4\">https:\/\/achrafothman.net\/site\/wp-content\/uploads\/20210928163713_FULLHD.mp4<\/a><\/video><\/div>\n<p>Video 1. 
Qatari Sign Language Avatar interpreting text on Mada Website (<span style=\"color: #0000ff;\">mada.org.qa<\/span>)<\/p>\n<p>In the literature, we may find several techniques for machine translation. For example, there are several works related to statistical machine translation (<a href=\"https:\/\/arxiv.org\/abs\/1112.0168\"><span style=\"color: #0000ff;\">Othman and Jemni, 2011<\/span><\/a>) and example-based machine translation (<a href=\"https:\/\/link.springer.com\/article\/10.1023\/A:1008109312730\"><span style=\"color: #0000ff;\">Somers, 1999<\/span><\/a>). This interpretation is constructed based on a dictionary of words and signs that are already fed to the device (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/6815292\"><span style=\"color: #0000ff;\">Jemni et al., 2013<\/span><\/a>) (<a href=\"https:\/\/d1wqtxts1xzle7.cloudfront.net\/45845433\/English-ASL_Gloss_Parallel_Corpus_2012_A20160521-3066-xga50m-with-cover-page-v2.pdf?Expires=1634529263&amp;Signature=VLLDtBiPirlJp9ApQe8I0DguU~yDrhwKbcrbAHy4xJ6dDCJmdg~qWdlNuAWOSh5jKSFfEzWm-opsw4gsJlXlVVTMlILSJ~B05uy~hG~04irUlBy8QcTK5lodzyGAU2w0c9dhewiafuHts9kBW9O-C48nNHklekGT6SEPNJb9moZkF4ec7sUBlKUL8KVOK~ZByf7xS9voKsv7CzcrgYyKnPJmbonImLECnhEfcTOrDkP2DyIKveNFNMXTyFXMOmsNn4NcS2bdIhzLYXu1nvIv1bg3N51JaCDhDPqW26DlFkLhstRVgp~G4l4fiXLNpd84wxC7ahaXbMbBqNjEaOzFHw__&amp;Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA\"><span style=\"color: #0000ff;\">Othman et al., 2012<\/span><\/a>). 
People can keep adding words to this dictionary, but only once those words are approved by a panel of linguists and human interpreters (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/6578458\"><span style=\"color: #0000ff;\">Tmar et al., 2013<\/span><\/a>).<\/p>\n<p><img data-attachment-id=\"2010\" data-permalink=\"https:\/\/achrafothman.net\/site\/qatar-sign-language-avatar-arabic\/buhamad\/\" data-orig-file=\"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?fit=2587%2C1348&amp;ssl=1\" data-orig-size=\"2587,1348\" data-comments-opened=\"1\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"buhamad avatar sign language\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?fit=300%2C156&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?fit=770%2C402&amp;ssl=1\" decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-2010\" src=\"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?resize=770%2C401&#038;ssl=1\" alt=\"avatar sign language\" width=\"770\" height=\"401\" srcset=\"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?w=2587&amp;ssl=1 2587w, https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?resize=300%2C156&amp;ssl=1 300w, https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?resize=1024%2C534&amp;ssl=1 1024w, 
https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?resize=768%2C400&amp;ssl=1 768w, https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?resize=1536%2C800&amp;ssl=1 1536w, https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?resize=2048%2C1067&amp;ssl=1 2048w, https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/buhamad.png?w=2310&amp;ssl=1 2310w\" sizes=\"(max-width: 770px) 100vw, 770px\" data-recalc-dims=\"1\" \/><\/p>\n<p>Fig. 1. \u201cBu Hamad\u201d, the first Qatari Virtual 3D Interpreter for Qatari Sign Language by Mada<\/p>\n<p><strong>Why would anyone need the Avatar technology?<\/strong><\/p>\n<p>You see, it is not exactly fair to expect every single hearing-impaired individual to learn, in addition to sign language, all the written languages that come along with functioning in an everyday environment and living an average life. Every sign language has its fair share of grammar, and it gets difficult to translate that grammar into visuals. Then, there is the issue of fundamental differences between the sign languages of various countries. For instance, American Sign Language (ASL) is different from German Sign Language (DGS). Therefore, a hearing-impaired German citizen cannot completely communicate with a hearing-impaired American citizen.<\/p>\n<p>Another issue is the lack of understanding of sign language on the part of the listener. In an everyday situation, it is not possible to expect that a hearing person will know and use sign language, which is a problem since signing must be a two-way form of communication. Therefore, the deaf individual will be left at a loss.<\/p>\n<p>At this point, the 3D technology would prove to be tremendously useful because, as the idea goes, the interface would have a large set of sign language gestures fed to it. This would help both individuals communicate without the barrier of language. 
At the same time, it would also help individuals who do not know sign language communicate with deaf individuals.<\/p>\n<p><strong>Signing through Virtual Reality<\/strong><\/p>\n<p>One of the ways in which 3D is being widely used to make sign language easier and more universal is through motion capture. The basic idea is that an individual signs while wearing motion capture gloves; the computer, equipped with motion tracking, then records those movements and translates them directly into text or visual language. The idea first saw the light back in 2002, when a teenager by the name of Ryan Patterson developed a glove that recognized the signs made by the person wearing it, generated written text, and sent it directly to a portable device.<\/p>\n<p>Of course, over the years, more complex and versatile approaches to such 3D capture have emerged to take the place of this kind of glove. For instance, there are neural networks, which take this to the next step by not only capturing the hand motion, but also detecting the slight changes or aberrations in it from person to person, and even identifying significant patterns between such hand gestures.<\/p>\n<p><strong>Why the need for 3D translations of sign language?<\/strong><\/p>\n<p>It is understandable that people may question the need to translate closed captions into Sign Language. Especially when it comes to written texts, deaf people can read, so why this technology? You see, a deaf person\u2019s first language is not the written language of their country, but their native sign language, and native sign languages differ from one country to another. For deaf individuals, who primarily learn their sign language, learning a written language is harder. For instance, for an average deaf American, learning English is harder than learning American Sign Language. 
This is the reason why a lot of deaf individuals have issues reading and writing.<\/p>\n<p>Thus, to make written materials more readily available, a lot of websites use videos where an interpreter signs the written text. However, the major problem is that such videos need to be re-recorded from scratch whenever the written text is edited. This costs time and money. This is where 3D signing comes in. The visually captured motions of sign language are first fed to the 3D avatar. These are then presented as sign language with motion blending and converted to a representation understandable by the machine (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/8336054\"><span style=\"color: #0000ff;\">Othman and Jemni, 2017<\/span><\/a>).<\/p>\n<p><strong>What is the state of the 3D Sign language translation today?<\/strong><\/p>\n<p>Unfortunately, not a lot of progress has been made when it comes to the Avatar technology, and more funding is required for research. The major cause of this is, of course, the lack of a common sign language: sign language is not universal. Since languages keep changing from region to region, it becomes extremely difficult to feed the signs of one word from every single language into the dictionary of the interface, not to mention that creating such a huge dictionary would cost a lot of money and would basically render the machine inaccessible to most people. Proposals are being put forward to set up a common community that decides on a universal sign language, which incorporates the signs and symbols from the major or most widely spoken sign languages. 
Moreover, several topics have not yet been elaborated in depth to understand the nature of Sign Language, such as the recognition of prosodic pauses in Sign Language (<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9144795\"><span style=\"color: #0000ff;\">Lagha and Othman, 2019<\/span><\/a>).<\/p>\n<p><strong>References<\/strong><\/p>\n<ol>\n<li>Abeill\u00e9, A., Schabes, Y. and Joshi, A.K., 1991. Using lexicalized tags for machine translation.<\/li>\n<li>Al Thani, D., Al Tamimi, A., Othman, A., Habib, A., Lahiri, A. and Ahmed, S., 2019, December. Mada Innovation Program: A go-to-market ecosystem for Arabic accessibility solutions. In 2019 7th International Conference on ICT &amp; Accessibility (ICTA) (pp. 1-3). IEEE.<\/li>\n<li>Jemni, M., Semreen, S., Othman, A., Tmar, Z. and Aouiti, N., 2013, October. Toward the creation of an Arab Gloss for Arabic Sign Language annotation. In Fourth International Conference on Information and Communication Technology and Accessibility (ICTA) (pp. 1-5). IEEE.<\/li>\n<li>Kipp, M., Heloir, A. and Nguyen, Q., 2011. Sign language avatars: Animation and comprehensibility. In International Workshop on Intelligent Virtual Agents (pp. 113-126).<\/li>\n<li>Lagha, I. and Othman, A., 2019, December. Understanding prosodic pauses in Sign Language from motion-capture and video-data. In 2019 7th International Conference on ICT &amp; Accessibility (ICTA) (pp. 1-4). IEEE.<\/li>\n<li>Othman, A. and Jemni, M., 2011. Statistical sign language machine translation: from English written text to American sign language gloss. arXiv preprint arXiv:1112.0168.<\/li>\n<li>Othman, A. and Jemni, M., 2012. English-ASL gloss parallel corpus 2012: ASLG-PC12. In 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC.<\/li>\n<li>Othman, A. and Jemni, M., 2017, December. An XML-gloss annotation system for sign language processing. 
In 2017 6th International Conference on Information and Communication Technology and Accessibility (ICTA) (pp. 1-7). IEEE.<\/li>\n<li>Othman, A. and Jemni, M., 2019. Designing high accuracy statistical machine translation for sign language using parallel corpus: Case study English and American Sign Language. Journal of Information Technology Research (JITR), 12(2), pp.134-158.<\/li>\n<li>Somers, H., 1999. Example-based machine translation. Machine Translation, 14(2), pp.113-157.<\/li>\n<li>Tmar, Z., Othman, A. and Jemni, M., 2013, March. A rule-based approach for building an artificial English-ASL corpus. In 2013 International Conference on Electrical Engineering and Software Applications (pp. 1-4). IEEE.<\/li>\n<\/ol>\n","protected":false},"author":1,"featured_media":2076,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":""},"categories":[6],"tags":[70,153,154,38,83]}