Sign Language Recognition App

A deep learning application that detects and interprets sign language gestures.

This application uses deep learning to recognize and interpret sign language gestures in real time.

Key Contributions:

  • Video Classification Pipeline: Designed a deep learning-based video classification pipeline for ASL gesture recognition using structured frame extraction and spatiotemporal modeling.
  • Transfer Learning: Achieved high validation accuracy through transfer learning with a pretrained I3D model on a large-scale ASL dataset (~2000 gesture classes).
  • Real-time Inference: Structured the pipeline around fixed-length clips sampled from the video stream, so the trained model can classify gestures on the fly with real-time inference potential.
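The "structured frame extraction" step above can be sketched as uniform frame sampling: a variable-length video is reduced to the fixed number of frames an I3D-style model expects. This is a minimal illustrative sketch, not the repository's actual code; the function name `sample_frame_indices` and the centre-of-segment strategy are assumptions.

```python
# Hypothetical helper (not from the original repo): evenly sample a fixed
# number of frame indices from a clip, so variable-length videos map onto
# the fixed-size input expected by a spatiotemporal model such as I3D.
def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Return `num_samples` evenly spaced frame indices in [0, total_frames)."""
    if total_frames <= 0 or num_samples <= 0:
        return []
    step = total_frames / num_samples
    # Take the centre of each of `num_samples` equal segments of the video.
    return [min(int(i * step + step / 2), total_frames - 1)
            for i in range(num_samples)]
```

With OpenCV, each returned index would typically be seeked via `cap.set(cv2.CAP_PROP_POS_FRAMES, idx)` followed by `cap.read()` to collect the sampled frames.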

Built with Python, TensorFlow, and OpenCV, the app aims to bridge communication gaps for people with hearing and speech impairments.
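For the real-time side, one common pattern is a sliding window over the live camera feed: frames accumulate in a rolling buffer, and each time a full fixed-length clip is available it is handed to the model. The sketch below shows that buffering logic with the standard library only; the class name `FrameWindow` and its API are assumptions, not the repository's actual interface.

```python
from collections import deque

# Hypothetical sketch of a rolling clip buffer for live inference.
class FrameWindow:
    def __init__(self, clip_len: int):
        self.clip_len = clip_len
        # deque(maxlen=...) drops the oldest frame automatically, so the
        # window slides in O(1) as new frames arrive.
        self._frames = deque(maxlen=clip_len)

    def push(self, frame) -> bool:
        """Add one frame; return True once a full clip is ready for inference."""
        self._frames.append(frame)
        return len(self._frames) == self.clip_len

    def clip(self) -> list:
        """Current clip, oldest frame first."""
        return list(self._frames)
```

In a live loop, each frame read from `cv2.VideoCapture` would be pushed, and whenever `push()` returns True the clip is preprocessed and run through the classifier.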