Description
This article presents the development of the Say2me application, which, given an image supplied by the user, queries a remote image-description service and returns the resulting description. The objective of this work is, by means of this model, to bring visually impaired users closer to the everyday social interaction of people in general. The application lets the user photograph an image or retrieve one already in the gallery and send it to be described. The prototype runs on devices using the iOS operating system and was implemented entirely in the Swift language. In the tests performed, the model answered description requests with a high accuracy rate, taking on average thirty-three seconds, a response time the users considered acceptable. Although some improvements remain to be made, the application fulfills the scope proposed by this work.
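The request flow described above (capture or select an image, upload it to the remote service, receive a textual description) could be sketched in Swift roughly as follows. This is a hypothetical illustration only: the endpoint URL, the function names, and the assumption that the service accepts raw image bytes and replies with plain text are not taken from the article.

```swift
import Foundation

// Build the upload request for the (assumed) description endpoint.
// Content type and body format are illustrative assumptions.
func makeDescriptionRequest(imageData: Data, endpoint: URL) -> URLRequest {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData
    return request
}

// Send the image and hand the decoded description to the caller.
// The completion handler runs asynchronously once the service replies.
func requestDescription(imageData: Data, endpoint: URL,
                        completion: @escaping (String?) -> Void) {
    let request = makeDescriptionRequest(imageData: imageData, endpoint: endpoint)
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data.flatMap { String(data: $0, encoding: .utf8) })
    }.resume()
}
```

In a pattern like this, the roughly thirty-three-second average response time reported in the tests would be spent inside the asynchronous data task, so the UI remains responsive while the user waits for the description.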