Hearing-impaired communities around the world communicate via sign languages. The focus of this work is to develop a human-robot interaction system that could act as a sign language interpreter in public places. This paper presents ongoing work that aims to recognize fingerspelling gestures in real time. To this end, we utilize a deep learning method to classify the 33 gestures used for fingerspelling by the local deaf community. To train and test the recognition system, we use a previously collected dataset of RGB-D motion data for the 33 manual gestures. Applying the deep learning method offline, we achieved an average accuracy of 75% for recognition of the complete alphabet; in real time, accuracy dropped to 24.72%. In addition, we integrated a form of auto-correction to perform spell-checking on the recognized letters. Of 35 tested words, four were recognized correctly (11.4%). Finally, we conducted an exploratory study in which ten deaf individuals interacted with our sign language interpreting robotic system.