55 result(s) for "Turkish Sign Language"
Multi-Stream General and Graph-Based Deep Neural Networks for Skeleton-Based Sign Language Recognition
Sign language recognition (SLR) aims to bridge speech-impaired and general communities by recognizing signs from given videos. However, due to the complex background, light illumination, and subject structures in videos, researchers still face challenges in developing effective SLR systems. Many researchers have recently sought to develop skeleton-based sign language recognition systems to overcome the subject and background variation in hand gesture sign videos. However, skeleton-based SLR is still under exploration, mainly due to a lack of information and hand key point annotations. More recently, researchers have included body and face information along with hand gesture information for SLR; however, the obtained performance accuracy and generalizability properties remain unsatisfactory. In this paper, we propose a multi-stream graph-based deep neural network (SL-GDN) for a skeleton-based SLR system in order to overcome the above-mentioned problems. The main purpose of the proposed SL-GDN approach is to improve the generalizability and performance accuracy of the SLR system while maintaining a low computational cost based on the human body pose in the form of 2D landmark locations. We first construct a skeleton graph based on 27 whole-body key points selected among 67 key points to address the high computational cost problem. Then, we utilize the multi-stream SL-GDN to extract features from the whole-body skeleton graph considering four streams. Finally, we concatenate the four different features and apply a classification module to refine the features and recognize corresponding sign classes. Our data-driven graph construction method increases the system’s flexibility and brings high generalizability, allowing it to adapt to varied data. We use two large-scale benchmark SLR data sets to evaluate the proposed model: The Turkish Sign Language data set (AUTSL) and Chinese Sign Language (CSL). 
The reported accuracy results demonstrate the outstanding ability of the proposed model, and we believe it will be regarded as a significant innovation in the SLR domain.
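The four-stream construction the abstract describes (features extracted from several views of the skeleton graph and then concatenated) follows a common pattern in skeleton-based recognition. A minimal sketch, with made-up keypoint indices and a toy bone list rather than the paper's actual 27-point graph:

```python
import numpy as np

def build_streams(pose, bones):
    """Build the four common skeleton streams from a (T, 27, 2) landmark array:
    joints, bones (joint differences), joint motion, and bone motion."""
    joints = pose                                        # raw 2D landmark positions
    bone = np.stack([pose[:, j] - pose[:, i] for i, j in bones], axis=1)
    joint_motion = np.diff(joints, axis=0, prepend=joints[:1])  # frame-to-frame deltas
    bone_motion = np.diff(bone, axis=0, prepend=bone[:1])
    return joints, bone, joint_motion, bone_motion

# Toy example: 4 frames, 27 keypoints, 2D coordinates, a tiny illustrative bone list.
pose = np.random.rand(4, 27, 2)
bones = [(0, 1), (1, 2), (2, 3)]
streams = build_streams(pose, bones)
# In the paper each stream is processed by a graph network; here we simply
# flatten and concatenate to show where the four feature sets are merged.
concat = np.concatenate([s.reshape(4, -1) for s in streams], axis=1)
print(concat.shape)  # (4, 120)
```

The `bones` pairs and the flattening step are illustrative stand-ins for the graph-convolutional feature extractors; only the stream definitions themselves are standard.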
Real-time sign language recognition based on YOLO algorithm
This study focuses on real-time hand gesture recognition for a Turkish Sign Language detection system. YOLOv4-CSP, a state-of-the-art convolutional neural network (CNN) object detection algorithm, is used to provide real-time, high-performance detection. The YOLOv4-CSP algorithm is created by adding CSPNet to the neck of the original YOLOv4 to improve network performance. A new object detection model is proposed by optimizing the YOLOv4-CSP algorithm for more efficient detection in Turkish Sign Language. The model uses CSPNet throughout the network to increase its learning ability. In addition, the proposed YOLOv4-CSP incorporates the Mish activation function, the complete intersection over union (CIoU) loss function, and a transformer block. With transfer learning, the proposed YOLOv4-CSP learns faster than previous versions, allowing it to localize and recognize static hand signs simultaneously. To evaluate its speed and detection performance, the proposed YOLOv4-CSP model is compared with previous YOLO versions that also offer real-time detection. YOLOv3, YOLOv3-SPP, YOLOv4-CSP, and the proposed YOLOv4-CSP models are trained on a labeled dataset of numbers in Turkish Sign Language, and their performance on hand sign recognition is compared. The proposed method achieves 98.95% precision, 98.15% recall, a 98.55% F1 score, and 99.49% mAP in 9.8 ms. For detecting numbers in Turkish Sign Language, it outperforms the other algorithms in both real-time performance and hand sign prediction accuracy, regardless of background.
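Two of the components the abstract names, the Mish activation and the CIoU loss, are standard and can be sketched directly. A minimal reference implementation (not the paper's code) for axis-aligned boxes in `(x1, y1, x2, y2)` form:

```python
import math

def mish(x):
    # Mish activation: x * tanh(softplus(x))
    return x * math.tanh(math.log1p(math.exp(x)))

def ciou(box_a, box_b):
    """Complete IoU: plain IoU minus a normalized center-distance term
    and an aspect-ratio consistency term."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared center distance over squared diagonal of the enclosing box
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and its trade-off weight alpha
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)  # small epsilon avoids 0/0 for perfect overlap
    return iou - rho2 / c2 - alpha * v

print(round(mish(1.0), 4))               # 0.8651
print(ciou((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 1.0
```

The distance and aspect-ratio penalties are what distinguish CIoU from plain IoU: two boxes with the same overlap but misaligned centers score lower, which gives the regression loss a useful gradient even when boxes do not intersect.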
Duration as a prosodic cue in TİD: Focus realization in the extended domain
Prosodic prominence of focused units, reflected through a variety of cues, is well documented in all modalities. Yet the effect of focus in the extended domain is understudied in sign languages. This paper, building on an experimental study of Turkish Sign Language (TİD), investigates the prosodic strategies used in narrow-focus and broad-focus conditions within an extended prosodic domain. We measured the duration of signs and the proportion of accompanying nonmanuals in the pre-focal and post-focal domains, as well as in the focal domain. The results showed that (i) signs in narrow-focus conditions differ significantly from their counterparts in non-focused and broad-focus conditions, with an increase in duration serving as the focus strategy for signs; (ii) nonmanuals do not necessarily accompany focused signs; and (iii) narrowly focused signs yield a decrease in duration, a compression effect, in the pre-focal or post-focal domain. We argue that this compression effect cannot be analyzed as a result of givenness. Focus realization in TİD is a trade-off between boosting and deboosting strategies within the extended prosodic domain.
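The two measures this kind of study reports, sign duration and the proportion of a sign's time span overlapped by nonmanual markers, reduce to simple interval arithmetic over annotation tiers. A minimal sketch with made-up annotation intervals (seconds), not the study's data:

```python
def duration(interval):
    """Duration of an annotated (start, end) interval."""
    start, end = interval
    return end - start

def nonmanual_proportion(sign, nonmanuals):
    """Fraction of a sign's time span covered by nonmanual annotations.
    Assumes the nonmanual intervals do not overlap one another."""
    s0, s1 = sign
    covered = sum(max(0, min(s1, n1) - max(s0, n0)) for n0, n1 in nonmanuals)
    return covered / (s1 - s0)

sign = (1.00, 1.50)                        # a sign spanning 500 ms
nonmanuals = [(0.90, 1.20), (1.40, 1.60)]  # e.g. brow raise, head tilt
print(duration(sign))                          # 0.5
print(round(nonmanual_proportion(sign, nonmanuals), 2))  # 0.6
```

Comparing these values for a sign across non-focused, broad-focus, and narrow-focus conditions is what yields the boosting (longer focal signs) and compression (shorter pre-/post-focal signs) effects the paper describes.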
Prosody of focus in Turkish Sign Language
Prosodic realization of focus has been widely investigated across languages and modalities. When focus strategies occur simultaneously, it is intriguing to see how they interact in their functional and temporal alignment. We explored the multichannel (manual and nonmanual) realization of focus in Turkish Sign Language, eliciting data from 20 signers while varying focus type, syntactic role, and movement type. The results revealed that focus is encoded via increased duration in manual signs and that nonmanuals do not necessarily accompany focused signs. With their multichanneled structure, sign languages may use both available channels or opt for one to express focushood.
Türk Sağır Kültürünün Tarihsel Kökleri [The Historical Roots of Turkish Deaf Culture]
The United Nations Convention on the Rights of Persons with Disabilities introduced, at the start of the 21st century, new and important provisions ensuring the full and effective participation of persons with disabilities in society, the recognition of their individual and cultural characteristics, and the acceptance of essential means of communication such as the Braille alphabet for the blind and sign language for the Deaf and hard of hearing. Turkey signed this convention on 30 March 2007, and it entered into force on 3 May 2008. Following these regulations on the rights of persons with disabilities, the declaration of 2020 as the "Year of Accessibility" accelerated academic research in this field. Unlike other disability groups, the Deaf community takes pride in being Deaf; with identities shaped by the use of sign languages, Deaf communities are defined as local subcultures within hearing society. Many studies of Deaf communities have demonstrated, through the shared experiences of Deaf people and the structural similarities among sign languages, the existence of a transnational Deaf culture and a Deaf world. Deaf people living in Turkey constitute the Turkish Deaf community. Present in all regions of Turkey, this community is a very rich subculture that acts in solidarity, mostly within federations and associations of the D/deaf and hard of hearing; produces national and international artistic work specific to its own culture, such as theatre and film; and trains licensed athletes. The aim of this study is to illuminate the roots of Turkish Deaf culture in light of historical data, adopting the socio-cultural approach that treats deafness in terms of identity and belonging to a community. To understand how the present cultural structure of the Turkish Deaf community formed, it is important to examine the community's cultural characteristics over the course of history, along with its identities, which vary with degree of deafness. Information about Turkish Deaf culture before the Ottoman period is almost nonexistent.
No information about Turkish Deaf culture appears in the Orhon Inscriptions, the earliest source of the Turkish written language, or in the Chinese annals of the Göktürk Khaganate period. The first traces of this culture are found in the Dîvânu Lugâti't-Türk and the Kutadgu Bilig. More detailed information is available in sources on the lives of the Deaf officials employed at the palace from the rise of the Ottoman Empire onward. This study aims to reveal the roots of Turkish Deaf culture across the historical record, beginning with the Ottoman period, in light of data obtained from local and foreign historical sources. Drawing on these data, the study examines the important and privileged positions D/deaf and hard-of-hearing individuals held, especially at the palace, as well as their lifestyles, clothing, devotion to art and literature, identities, and place in society. The study is also important for giving the Turkish Deaf community knowledge of the roots and shared values of its own culture. Accordingly, it is recommended that the study's data be supported with visual content and made accessible through barrier-free translation formats such as sign language interpretation and detailed subtitling.
Building the first comprehensive machine-readable Turkish sign language resource: methods, challenges and solutions
This article describes the procedures employed during the development of the first comprehensive machine-readable Turkish Sign Language (TiD) resource: a bilingual lexical database and a parallel corpus between Turkish and TiD. In addition to sign-language-specific annotations (such as non-manual markers, classifiers, and buoys) following the recently introduced TiD knowledge representation (Eryiğit et al. 2016), the parallel corpus also contains annotations of dependency relations, which makes it the first parallel treebank between a sign language and an auditory-vocal language.
DOES SPACE STRUCTURE SPATIAL LANGUAGE? A COMPARISON OF SPATIAL EXPRESSION ACROSS SIGN LANGUAGES
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of FIGURE-GROUND (e.g. cup on table) and FIGURE-FIGURE (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.
A real-time approach to recognition of Turkish sign language by using convolutional neural networks
Sign language is a form of visual communication used by people with hearing impairments to express themselves. The main purpose of this study is to make life easier for these people. A data set was created from 3200 RGB images of 32 classes (32 static words) collected from three different people. Data augmentation methods were applied, increasing the number of images from 3200 to 19,200 (600 per class). A 10-layer convolutional neural network model was created to classify the signs, and the VGG16, Inception, and ResNet deep network architectures were applied using transfer learning. The signs were also classified with support vector machines and k-nearest neighbors, two traditional machine learning methods, using features obtained from the last layer of the convolutional neural network. The most successful method was determined by comparing the results in terms of time and performance. In addition, an interface was developed that translates the static words of Turkish Sign Language (TSL) into written language in real time. The real-time system was evaluated on its success in recognizing static TSL signs and printing its predictions on the computer screen.
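The hybrid step this abstract describes, feeding last-layer CNN activations into a traditional classifier, can be sketched with a tiny nearest-neighbor classifier. The feature vectors below are made up for illustration; they stand in for real network activations:

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=1):
    """Classify a feature vector by majority vote among its k nearest
    training features (Euclidean distance), as in k-NN on CNN features."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = np.bincount(train_labels[nearest])
    return int(np.argmax(votes))

# Toy "last-layer features" for two hypothetical sign classes (0 and 1).
train_feats = np.array([[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]])
train_labels = np.array([0, 0, 1, 1])
print(knn_predict(train_feats, train_labels, np.array([0.95, 0.9]), k=3))  # 1
```

The appeal of this design is that the CNN does the representation learning once, while the cheap distance-based classifier on top can be retrained or swapped (e.g. for an SVM) without touching the network.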
Expression of Aboutness Subject Topic Constructions in Turkish Sign Language (TİD) Narratives
In the visual-spatial modality, signers indicate old, new, or contrastive information using certain syntactic, prosodic, and morphological strategies. Even though information structure has been described extensively for many sign languages, the flow of information in the narrative discourse remains unexplored in Turkish Sign Language (TİD). This study aims to describe aboutness subject topic constructions in TİD narratives. We examined data from six adult native signers of TİD and found that TİD signers mainly used nominals for reintroduced aboutness subject topics. The optional and rare non-manual markers observed on reintroduced topics mainly included squint, brow raise, and backward head tilt. Maintained aboutness subject topics, which have higher referential accessibility, were often omitted and tracked with zero anaphora. Finally, we found that constructed action is more frequently present on the predicates of clauses with a maintained aboutness subject topic than with a reintroduced aboutness subject topic. Overall, these results indicate that the use of constructed action and nominals in aboutness subject topics correlates with referential accessibility in TİD. While the former has been observed more in maintained contexts, the latter has been observed mainly in reintroduced contexts. In addition to the syntactic and prosodic cues that may distinguish old information from new or contrastive information in narratives, we suggest that pragmatic cues such as referential accessibility may help account for the manual and non-manual articulation strategies for information structure in TİD narratives.
Real-Time Turkish Sign Language Recognition Using Cascade Voting Approach with Handcrafted Features
In this study, a machine-learning-based system that recognizes Turkish Sign Language in real time, independently of the person signing, was developed. A Leap Motion sensor was used to obtain raw data from individuals. Handcrafted features were then extracted by applying Euclidean distance to the raw data, comprising finger-to-finger, finger-to-palm, finger-to-wrist-bone, palm-to-palm, and wrist-to-wrist distances. LR, k-NN, RF, DNN, and ANN single classifiers were trained on these features. A cascade voting approach with two voting steps was applied: the first vote was taken within each classifier to obtain its final prediction, and the second vote, over the predictions of all classifiers at the final decision stage, improved the performance of the proposed system. The system was tested in real time by an individual whose hand data were not included in the training dataset. According to the results, the proposed system achieves 100% accuracy in classifying one-handed letters. Recognition accuracy is likewise 100% for two-handed letters, except for "J" and "H", which were recognized at 80% and 90%, respectively. Overall, the cascade voting approach delivered a high average classification accuracy of 98.97%. The proposed system enables real-time Turkish Sign Language recognition with high accuracy.
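The two core ideas in this abstract, pairwise Euclidean-distance features between hand landmarks and a final-stage majority vote over several classifiers, are simple to sketch. The landmark names and coordinates below are illustrative, not the authors' data:

```python
import math
from collections import Counter

def distance_features(landmarks, pairs):
    """Handcrafted features: Euclidean distances between selected landmark
    pairs (e.g. fingertip-to-palm, fingertip-to-wrist)."""
    return [math.dist(landmarks[a], landmarks[b]) for a, b in pairs]

def cascade_vote(predictions):
    """Final-stage vote: the class predicted by the most classifiers wins."""
    return Counter(predictions).most_common(1)[0][0]

# Toy 2D landmarks; a real Leap Motion frame provides 3D positions.
landmarks = {"thumb_tip": (0, 4), "index_tip": (3, 0), "palm": (0, 0)}
pairs = [("thumb_tip", "index_tip"), ("thumb_tip", "palm"), ("index_tip", "palm")]
print(distance_features(landmarks, pairs))      # [5.0, 4.0, 3.0]
print(cascade_vote(["A", "B", "A", "A", "C"]))  # A
```

Distance features of this kind are translation- and rotation-invariant by construction, which is one reason they transfer across signers better than raw coordinates; the second-stage vote then smooths over the errors of any single classifier.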