{"id":500,"date":"2020-05-17T03:57:52","date_gmt":"2020-05-17T03:57:52","guid":{"rendered":"http:\/\/achrafothman.net\/site\/?p=500"},"modified":"2020-05-30T09:40:02","modified_gmt":"2020-05-30T09:40:02","slug":"deployment-of-a-statistical-machine-translation-english-american-sign-language","status":"publish","type":"post","link":"https:\/\/achrafothman.net\/site\/deployment-of-a-statistical-machine-translation-english-american-sign-language\/","title":{"rendered":"Deployment of a Statistical Machine Translation (English &#038; American Sign Language)"},"content":{"rendered":"\n<p>Hello!<\/p>\n\n\n\n<p>In this tutorial, you will be able to deploy your statistical machine translation for the pair of language English and American Sign Language in written form. <\/p>\n\n\n\n<p>If you want to cite my work in your research papers, please refer to this publication:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\" style=\"font-size:14px !important;\"><p>Achraf Othman, Mohamed Jemni, \u201c<a rel=\"noreferrer noopener\" href=\"http:\/\/www.achrafothman.net\/aslsmt\/Designing-High-Accuracy-Statistical-Machine-Translation-for-Sign-Language-Using-Parallel-Corpus_-Case-Study-English-and-American-Sign-Language.pdf\" target=\"_blank\" class=\"aioseop-link\">Designing High Accuracy Statistical Machine Translation for Sign Language Using Parallel Corpus\u2014Case study English and American Sign Language<\/a> \u201c, Journal of Information Technology Research, Volume 12, Issue 2, 2019.<\/p><\/blockquote>\n\n\n\n<figure><iframe loading=\"lazy\" width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/7UjY0WeKWsU\" allowfullscreen=\"\"><\/iframe><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Here, we assume that you build statistical machine translation tools (check my previous tutorials). 
If that step fails, you can download pre-built binaries of Moses and the related tools as follows:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    mkdir smt\n    cd smt\n    wget https:\/\/www.achrafothman.net\/aslsmt\/tools\/smt-moses-ao-ubuntu-16.04.tgz\n    tar -xzvf smt-moses-ao-ubuntu-16.04.tgz\n    cd ubuntu-16.04\/\n    mv bin ..\/\n    mv scripts ..\/\n    mv training-tools ..\/\n    cd ..\n    rm -r ubuntu-16.04\n    rm -r smt-moses-ao-ubuntu-16.04.tgz<\/pre>\n\n\n\n<p>We create a folder named tools and copy the compiled GIZA++ binaries into it (<a aria-label=\"to see how to make the GIZA++ check my previous tutorial (opens in a new tab)\" href=\"http:\/\/achrafothman.net\/site\/how-to-install-moses-statistical-machine-translation-in-ubuntu\/\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"aioseop-link\">to see how to build GIZA++, check my previous tutorial<\/a>). To do that, we use the cp command to copy all the binary files. Assuming everything was compiled in a folder named smt2, we just need to run the following commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    mkdir tools\n    cd tools\n    cp ..\/..\/smt2\/tools\/GIZA++ .\n    cp ..\/..\/smt2\/tools\/mkcls .\n    cp ..\/..\/smt2\/tools\/snt2cooc.out .\n    cd ..<\/pre>\n\n\n\n<p>For this tutorial, I prepared a small test corpus for English and American Sign Language. You can use any language pair. The commands below download the files:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    mkdir corpus\n    cd corpus\n    wget http:\/\/www.achrafothman.net\/aslsmt\/corpus\/corpus-mini.asl\n    wget http:\/\/www.achrafothman.net\/aslsmt\/corpus\/corpus-mini.en\n    mv corpus-mini.asl corpus.asl\n    mv corpus-mini.en corpus.en\n    cd ..<\/pre>\n\n\n\n<p>Now, we run the tokenization step. Tokenization is essentially splitting a phrase, sentence, paragraph, or an entire text document into smaller units, such as individual words or terms. 
Each of these smaller units is called a token. Let\u2019s take an example. Consider the string \u201cThis is a cat.\u201d. After the tokenization step, we get the following: [\u2018This\u2019, \u2018is\u2019, \u2018a\u2019, \u2018cat\u2019, \u2018.\u2019]. For our corpus, we do:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    \/root\/smt\/scripts\/tokenizer\/tokenizer.perl -l en &lt; \/root\/smt\/corpus\/corpus.asl &gt; \/root\/smt\/corpus\/corpus.tok.asl\n    \/root\/smt\/scripts\/tokenizer\/tokenizer.perl -l en &lt; \/root\/smt\/corpus\/corpus.en &gt; \/root\/smt\/corpus\/corpus.tok.en\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"><strong># to check the output files<\/strong><\/span>\n    cd corpus\n    tail corpus.tok.en\n    cd ..<\/pre>\n\n\n\n<p>The next step is truecasing. Truecasing is the problem in natural language processing (NLP) of determining the proper capitalization of words where such information is unavailable. This commonly comes up because of the standard practice (in English and many other languages) of capitalizing the first word of a sentence. It can also arise in badly cased or non-cased text (for example, all-lowercase or all-uppercase text messages). 
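As a rough illustration of what truecasing does (a toy sketch only, not the Moses recaser; the hard-coded rule below stands in for what a trained model would learn from corpus statistics):

```shell
# Toy truecasing sketch: lowercase a sentence-initial "The", since a trained
# model would have learned that "the" is usually written lowercase, while a
# proper noun such as "Paris" keeps its capital letter.
echo "The cat lives in Paris ." | awk '{ if ($1 == "The") $1 = "the"; print }'
# prints: the cat lives in Paris .
```

The real train-truecaser.perl learns these casing statistics from the corpus instead of hard-coding them.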
To do that, we run the following commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    \/root\/smt\/scripts\/recaser\/train-truecaser.perl --model \/root\/smt\/corpus\/truecase-model.en --corpus \/root\/smt\/corpus\/corpus.tok.en\n    \/root\/smt\/scripts\/recaser\/train-truecaser.perl --model \/root\/smt\/corpus\/truecase-model.asl --corpus \/root\/smt\/corpus\/corpus.tok.asl\n    \/root\/smt\/scripts\/recaser\/truecase.perl --model \/root\/smt\/corpus\/truecase-model.en &lt; \/root\/smt\/corpus\/corpus.tok.en &gt; \/root\/smt\/corpus\/corpus.true.en\n    \/root\/smt\/scripts\/recaser\/truecase.perl --model \/root\/smt\/corpus\/truecase-model.asl &lt; \/root\/smt\/corpus\/corpus.tok.asl &gt; \/root\/smt\/corpus\/corpus.true.asl<\/pre>\n\n\n\n<p>For cleaning, we run the script clean-corpus-n.perl. It is a small script that cleans up a parallel corpus so that it works well with the training script: it removes empty lines and redundant space characters, and drops sentence pairs outside the given length range (here, 1 to 22 tokens).<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    \/root\/smt\/scripts\/training\/clean-corpus-n.perl \/root\/smt\/corpus\/corpus.true en asl \/root\/smt\/corpus\/corpus.clean 1 22<\/pre>\n\n\n\n<p>Now, it is time to build the language model (LM). It is used to ensure fluent output, so it is built on the target language (i.e., ASL in this case). The <a aria-label=\"KenLM  (opens in a new tab)\" href=\"https:\/\/github.com\/kpu\/kenlm\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"aioseop-link\">KenLM <\/a>documentation gives a full explanation of the command-line options, but the following will build an appropriate 3-gram language model. 
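To get an intuition for the raw statistic an n-gram model is estimated from, here is a toy counting sketch (plain shell, not KenLM; the two gloss lines are made up for the example):

```shell
# Count word trigrams in a tiny made-up gloss sample. An n-gram language
# model is estimated from counts like these; KenLM additionally applies
# smoothing and backoff so unseen trigrams still receive a probability.
printf 'NAME X-YOU WHAT\nNAME X-ME JOHN\n' \
  | awk '{ for (i = 1; i <= NF - 2; i++) print $i, $(i+1), $(i+2) }' \
  | sort | uniq -c
```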
Run:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    \/root\/smt\/bin\/lmplz -o 3 &lt; \/root\/smt\/corpus\/corpus.true.asl &gt;   \/root\/smt\/corpus\/corpus.arpa.asl\n<span class=\"has-inline-color has-vivid-green-cyan-color\">    # Then we should binarise (for faster loading) the *.arpa.asl file using KenLM:\n<\/span>    \/root\/smt\/bin\/build_binary \/root\/smt\/corpus\/corpus.arpa.asl \/root\/smt\/corpus\/corpus.blm.asl<\/pre>\n\n\n\n<p>We can test the language model by querying it as follows:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    echo \"NAME X-YOU WHAT\" | \/root\/smt\/bin\/query \/root\/smt\/corpus\/corpus.blm.asl<\/pre>\n\n\n\n<p>Finally, we proceed to training the translation model. To do this, we run word alignment (using GIZA++), phrase extraction and scoring, create the lexicalized reordering tables, and create the Moses configuration file, all with a single command. I recommend that you create an appropriate directory as follows, and then run the training command, catching the logs:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    mkdir working\n    cd working\n    nohup nice \/root\/smt\/scripts\/training\/train-model.perl -root-dir train -corpus \/root\/smt\/corpus\/corpus.clean -f en -e asl -alignment grow-diag-final-and -reordering msd-bidirectional-fe -lm 0:3:\/root\/smt\/corpus\/corpus.blm.asl:8 -external-bin-dir \/root\/smt\/tools &gt;&amp; training.out &amp;\n    tail -f training.out \n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># once the line starting with \"(9) create moses.ini @...\" appears, you can type CTRL+C to exit the tail mode.<\/span>\n    cd ..<\/pre>\n\n\n\n<p>The next step, &#8220;tuning&#8221;, is the slowest part of the process, so you might want to line up something to read while it is running. Tuning requires a small amount of parallel data, separate from the training data, so again we\u2019ll download some data. 
Run the following commands (from your home directory again) to download the data and put it in a sensible place.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    cd corpus\n    wget http:\/\/www.achrafothman.net\/aslsmt\/corpus\/corpus-tuning.asl\n    wget http:\/\/www.achrafothman.net\/aslsmt\/corpus\/corpus-tuning.en\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># Tokenization<\/span>\n    \/root\/smt\/scripts\/tokenizer\/tokenizer.perl -l en &lt; \/root\/smt\/corpus\/corpus-tuning.asl &gt; \/root\/smt\/corpus\/corpus-tuning.tok.asl\n    \/root\/smt\/scripts\/tokenizer\/tokenizer.perl -l en &lt; \/root\/smt\/corpus\/corpus-tuning.en &gt; \/root\/smt\/corpus\/corpus-tuning.tok.en\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># Truecasing<\/span>\n    \/root\/smt\/scripts\/recaser\/truecase.perl --model \/root\/smt\/corpus\/truecase-model.asl &lt; \/root\/smt\/corpus\/corpus-tuning.tok.asl &gt; \/root\/smt\/corpus\/corpus-tuning.true.asl\n    \/root\/smt\/scripts\/recaser\/truecase.perl --model \/root\/smt\/corpus\/truecase-model.en &lt; \/root\/smt\/corpus\/corpus-tuning.tok.en &gt; \/root\/smt\/corpus\/corpus-tuning.true.en\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># Tuning Process<\/span>\n    cd ..\n    cd working\n    nohup nice \/root\/smt\/scripts\/training\/mert-moses.pl \/root\/smt\/corpus\/corpus-tuning.true.en \/root\/smt\/corpus\/corpus-tuning.true.asl \/root\/smt\/bin\/moses \/root\/smt\/working\/train\/model\/moses.ini --mertdir \/root\/smt\/bin\/ &amp;&gt; mert.out &amp;\n    tail -f mert.out\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># once the line starting with \"Saving new config to: .\/moses.ini Saved: .\/moses.ini...\" appears, you can type CTRL+C to exit the tail mode.<\/span>\n    cd ..<\/pre>\n\n\n\n<p><strong>We can now run the Moses statistical machine translation decoder \ud83d\udcaa\ud83c\udffb with:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">  
  \/root\/smt\/bin\/moses -f \/root\/smt\/working\/mert-work\/moses.ini\n<span class=\"has-inline-color has-vivid-green-cyan-color\">    # and type in your favorite English sentence e.g., \"what is your name ?\" to see the results. \n    # To exit the moses mode, type CTRL+C.<\/span><\/pre>\n\n\n\n<p>We\u2019ll notice, though, that the decoder takes at least a couple of minutes to start up. To make it start quickly, we can binarise the phrase table and the lexicalized reordering model. To do this, create a suitable directory and binarise the models as follows:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    cd working\n    mkdir binarised-model\n    \/root\/smt\/bin\/processPhraseTableMin -in train\/model\/phrase-table.gz -nscores 4 -out binarised-model\/phrase-table\n    \/root\/smt\/bin\/processLexicalTableMin -in train\/model\/reordering-table.wbe-msd-bidirectional-fe.gz -out binarised-model\/reordering-table\n    cp \/root\/smt\/working\/mert-work\/moses.ini \/root\/smt\/working\/binarised-model\/\n    cd binarised-model\/\n    vim moses.ini\n        <span class=\"has-inline-color has-vivid-green-cyan-color\"># in the vim editor, type i to enter insert mode and make the following changes<\/span>\n        <span class=\"has-inline-color has-luminous-vivid-amber-color\">@1. Change <strong>PhraseDictionaryMemory<\/strong> to <strong>PhraseDictionaryCompact<\/strong>\n        @2. Set the path of the <strong>PhraseDictionaryCompact <\/strong>feature to point to: \/root\/smt\/working\/binarised-model\/phrase-table.minphr\n        @3. Set the path of the <strong>LexicalReordering <\/strong>feature to point to: \/root\/smt\/working\/binarised-model\/reordering-table\n        @4. Save moses.ini<\/span>\n        <span class=\"has-inline-color has-vivid-green-cyan-color\"># to save and quit, press ESC, type :wq and press ENTER.<\/span><\/pre>\n\n\n\n<p>With the binarised models, loading the decoder and running a translation are pretty fast. 
To test the statistical machine translation again:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    cd ..\n    cd ..\n    \/root\/smt\/bin\/moses -f \/root\/smt\/working\/binarised-model\/moses.ini\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># and type in your favorite English sentence e.g., \"what is your name ?\" to see the results.\n    # To exit the moses mode, type CTRL+C.<\/span><\/pre>\n\n\n\n<p>At this stage, we are probably wondering how good the translation system is. To measure this, we use another parallel data set (the test set), distinct from the ones we\u2019ve used so far. Let\u2019s download the manually created test corpus; as before, we first have to tokenize and truecase it. The model that we\u2019ve trained can then be filtered for this test set, meaning that we only retain the entries needed to translate it. This will make the translation a lot faster. We can test the decoder by first translating the test set, then running the BLEU script on it:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    cd corpus\n    wget http:\/\/www.achrafothman.net\/aslsmt\/corpus\/corpus-bleu.asl\n    wget http:\/\/www.achrafothman.net\/aslsmt\/corpus\/corpus-bleu.en\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># Tokenization<\/span>\n    \/root\/smt\/scripts\/tokenizer\/tokenizer.perl -l en &lt; \/root\/smt\/corpus\/corpus-bleu.asl &gt; \/root\/smt\/corpus\/corpus-bleu.tok.asl\n    \/root\/smt\/scripts\/tokenizer\/tokenizer.perl -l en &lt; \/root\/smt\/corpus\/corpus-bleu.en &gt; \/root\/smt\/corpus\/corpus-bleu.tok.en\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># Truecasing<\/span>\n    \/root\/smt\/scripts\/recaser\/truecase.perl --model \/root\/smt\/corpus\/truecase-model.asl &lt; \/root\/smt\/corpus\/corpus-bleu.tok.asl &gt; \/root\/smt\/corpus\/corpus-bleu.true.asl\n    \/root\/smt\/scripts\/recaser\/truecase.perl --model \/root\/smt\/corpus\/truecase-model.en &lt; 
\/root\/smt\/corpus\/corpus-bleu.tok.en &gt; \/root\/smt\/corpus\/corpus-bleu.true.en\n    cd ..\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># Filtering and translating the test set<\/span>\n    cd working\n    \/root\/smt\/scripts\/training\/filter-model-given-input.pl filtered-corpus-mini mert-work\/moses.ini \/root\/smt\/corpus\/corpus-bleu.true.en -Binarizer   \/root\/smt\/bin\/processPhraseTableMin\n    nohup nice \/root\/smt\/bin\/moses -f \/root\/smt\/working\/filtered-corpus-mini\/moses.ini &lt; \/root\/smt\/corpus\/corpus-bleu.true.en &gt;  \/root\/smt\/working\/corpus.translated.asl 2&gt; \/root\/smt\/working\/corpus.translated.out\n    <span class=\"has-inline-color has-vivid-green-cyan-color\"># See the log<\/span>\n    tail -f \/root\/smt\/working\/corpus.translated.out\n    cd ..<\/pre>\n\n\n\n<p>To calculate the BLEU score:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">    \/root\/smt\/scripts\/generic\/multi-bleu.perl -lc \/root\/smt\/corpus\/corpus-bleu.true.asl &lt; \/root\/smt\/working\/corpus.translated.asl<\/pre>\n\n\n\n<p>Thanks for following this long tutorial \ud83d\ude42 <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello! In this tutorial, you will be able to deploy your statistical machine translation for the pair of language English and American Sign Language in written form. 
If you want to cite my work in your research papers, please refer to this publication: Achraf Othman, Mohamed Jemni, \u201cDesigning High Accuracy Statistical Machine Translation for Sign<\/p>\n","protected":false},"author":1,"featured_media":502,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":""},"categories":[4],"tags":[],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=600%2C400&ssl=1","uagb_featured_image_src":{"full":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=600%2C400&ssl=1",600,400,false],"thumbnail":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=150%2C150&ssl=1",150,150,true],"medium":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=300%2C200&ssl=1",300,200,true],"medium_large":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=600%2C400&ssl=1",600,400,true],"large":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=600%2C400&ssl=1",600,400,true],"1536x1536":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=600%2C400&ssl=1",600,400,true],"2048x2048":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?fit=600%2C400&ssl=1",600,400,true],"post-thumbnail":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=270%2C180&ssl=1",270,180,true],"contentberg-main":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=600%2C400&ssl=1",600,400,true],"contentberg-main-full":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=600%2C400&ssl=1",600,400,true],"contentberg-slider-stylish":["https:\/\/i0.wp.com\/achrafothm
an.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=600%2C400&ssl=1",600,400,true],"contentberg-slider-carousel":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=370%2C370&ssl=1",370,370,true],"contentberg-slider-grid-b":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=554%2C400&ssl=1",554,400,true],"contentberg-slider-grid-b-sm":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=306%2C400&ssl=1",306,400,true],"contentberg-slider-bold-sm":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=150%2C150&ssl=1",150,150,true],"contentberg-grid":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=370%2C245&ssl=1",370,245,true],"contentberg-list":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=260%2C200&ssl=1",260,200,true],"contentberg-list-b":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=370%2C305&ssl=1",370,305,true],"contentberg-thumb":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=87%2C67&ssl=1",87,67,true],"contentberg-thumb-alt":["https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt-tutorial.png?resize=150%2C150&ssl=1",150,150,true]},"uagb_author_info":{"display_name":"Achraf Othman","author_link":"https:\/\/achrafothman.net\/site\/author\/achraf-othman\/"},"uagb_comment_info":4,"uagb_excerpt":"Hello! In this tutorial, you will be able to deploy your statistical machine translation for the pair of language English and American Sign Language in written form. 
If you want to cite my work in your research papers, please refer to this publication: Achraf Othman, Mohamed Jemni, \u201cDesigning High Accuracy Statistical Machine Translation for Sign","jetpack_shortlink":"https:\/\/wp.me\/p8KjJN-84","jetpack-related-posts":[{"id":31,"url":"https:\/\/achrafothman.net\/site\/statistical-sign-language-machine-translation-from-english-written-text-to-american-sign-language-gloss\/","url_meta":{"origin":500,"position":0},"title":"Statistical sign language machine translation: from english written text to american sign language gloss","date":"September 19, 2011","format":"image","excerpt":"This works aims to design a statistical machine translation from English text to American Sign Language (ASL). The system is based on Moses tool with some modifications and the results are synthesized through a 3D avatar for interpretation. First, we translate the input text to gloss, a written form of\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"Statistical Machine Translation for sign language","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/smt.png?fit=768%2C425&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":337,"url":"https:\/\/achrafothman.net\/site\/machine-translation-for-sign-language\/","url_meta":{"origin":500,"position":1},"title":"New Journal Publication: Designing High Accuracy Statistical Machine Translation for Sign Language","date":"March 12, 2019","format":"image","excerpt":"In this article, the authors deal with the machine translation of written English text to sign language. 
They study the existing systems and issues in order to propose an implantation of a statistical machine translation from written English text to American Sign Language (English\/ASL) taking care of several features of\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/dd.png?fit=750%2C438&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":118,"url":"https:\/\/achrafothman.net\/site\/phd-defense\/","url_meta":{"origin":500,"position":2},"title":"PhD Defense: \u201cMachine Translation for Sign Language based on Statistical Approach\u201c","date":"May 18, 2017","format":"image","excerpt":"I am happy to report that on March\u00a010, 2017, I had my doctoral dissertation defense, as part of the WebSign project, and that the committee found my research to be worthy. My dissertation was titled \u201cMachine Translation for Sign Language based on Statistical Approach\u201d and was based on translation between\u2026","rel":"","context":"In &quot;Blog&quot;","img":{"alt_text":"PhD Defence Dr. Achraf Othman","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/17157361_10211187722864810_4042370163602591668_o.jpg?fit=1200%2C900&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":235,"url":"https:\/\/achrafothman.net\/site\/how-to-install-moses-statistical-machine-translation-in-ubuntu\/","url_meta":{"origin":500,"position":3},"title":"How to install Moses (Statistical Machine Translation) on Ubuntu?","date":"November 27, 2017","format":"image","excerpt":"In this article, I will show you how to install and build Moses on Ubuntu, and how to use Moses to translate with some simple models (English and Sign Language Gloss). If you experience problems, then please contact me. 
If you\u2019re just writing about this work, please cite this paper\u2026","rel":"","context":"In &quot;Tutorials&quot;","img":{"alt_text":"How to install Moses on Ubuntu? Moses, Statistical Machine Translation","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/tuto-1.jpg?fit=640%2C400&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":942,"url":"https:\/\/achrafothman.net\/site\/overview-of-text-to-gloss-in-computational-sign-language-processing-slp\/","url_meta":{"origin":500,"position":4},"title":"Overview of Text-to-Gloss in Computational Sign Language Processing (SLP)","date":"August 9, 2021","format":false,"excerpt":"Authors: Achraf Othman Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 1 \u2022 August 2021 \u2022 Published: August 9, 2021 \u2022 PDF Abstract- Digital Accessibility to the content in web environments for people with hearing disabilities and with hearing impairment with a low level of literacy is becoming increasingly\u2026","rel":"","context":"In &quot;Research and Innovation Letters&quot;","img":{"alt_text":"Using Gloss in Sign Language","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/signlanguagegloss.png?fit=1024%2C683&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]},{"id":2006,"url":"https:\/\/achrafothman.net\/site\/qatar-sign-language-avatar-arabic\/","url_meta":{"origin":500,"position":5},"title":"Meet the First Qatari Sign Language Avatar: A 3D Realistic Virtual Conversational Agent","date":"September 28, 2021","format":false,"excerpt":"Authors: Achraf Othman Research and Innovation Letters \u2022 Volume 1 \u2022 Issue 2 \u2022 September 2021 \u2022 Published: September 28, 2021 \u2022 PDF Abstract- When it comes to inventions and technological assistance for the hearing impaired, science has come a long way, there is absolutely no doubt about that. 
However,\u2026","rel":"","context":"In &quot;Research and Innovation Letters&quot;","img":{"alt_text":"BuHamad Qatari Sign Language Avatar","src":"https:\/\/i0.wp.com\/achrafothman.net\/site\/wp-content\/uploads\/BuHamad-Gif.gif?fit=644%2C480&ssl=1&resize=350%2C200","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts\/500"}],"collection":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/comments?post=500"}],"version-history":[{"count":58,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts\/500\/revisions"}],"predecessor-version":[{"id":562,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/posts\/500\/revisions\/562"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/media\/502"}],"wp:attachment":[{"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/media?parent=500"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/categories?post=500"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/achrafothman.net\/site\/wp-json\/wp\/v2\/tags?post=500"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}