dc.description.abstract |
In the fast-growing field of machine learning, YOLO models are regarded as
among the most advanced tools for object detection. The newest YOLO model,
YOLOv5, was released in 2020, and its authors claim it achieves high accuracy
with little training time. Researchers have already applied object detection models
to tasks such as activity detection, face recognition, vehicle counting, and lately
even medical image analysis. It is easy to imagine that such models could help
fill the gap between spaces designed for able-bodied people and members of the Deaf
Community. There are many examples of tools, programs, TV channels, and even
websites that remain inaccessible to people with hearing loss. This paper proposes
using the YOLOv5 model to detect American Sign Language (ASL) alphabet signs,
in the hope of demonstrating that more advanced tools, such as ASL translators, can
be built to address the aforementioned needs. To this end, four experiments
were conducted on a dataset containing over 1700 images of ASL alphabet hand
gestures, in order to show how such tools could be created. The presented findings
show that it is possible to classify the gestures from images with an accuracy higher than
90%. On this basis, more advanced tools could be built through further research,
more advanced architectures, and the use of supplementary data. |
pl_PL |