Standard Machine Learning approaches require large amounts of data, usually centralized in data centers, and a single device is responsible for the entire training process. Newer collaborative approaches, such as Federated Learning, allow a shared model to be trained across decentralized devices, each holding its own local data samples. In recent years, along with the blooming of Machine Learning based applications and services, ensuring data privacy and security has become a critical obligation. In this work, three training procedures based on Federated Learning were tested: FedAvg, FedADA, and LoADABoost, and their performance was compared against a traditional centralized training method. Using public data from written movie reviews, a neural network was implemented to predict whether a review is positive or negative. With the F1 score as the performance metric, the hypothesis was that the Federated Learning training methods perform comparably to traditional centralized training. After implementing the same neural network with the different training methodologies, no major differences in performance were observed, leading to the conclusion that Federated Learning is a comparable and viable training methodology.
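
The FedAvg procedure referenced above aggregates locally trained models by averaging their parameters, typically weighted by each client's dataset size. The following is a minimal illustrative sketch of that aggregation step, not the implementation used in this work; the function name, parameter shapes, and client counts are assumptions for illustration:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client parameter vectors by a weighted average.

    client_weights: list of 1-D NumPy arrays, one per client, all the
                    same length (the flattened model parameters).
    client_sizes:   number of local training samples on each client,
                    used as the averaging weights (n_k / n).
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)        # shape: (clients, params)
    coeffs = np.array(client_sizes) / total   # per-client weight n_k / n
    # Weighted sum over the client axis yields the new global parameters.
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two hypothetical clients with 2-parameter models; the second client
# has three times as much data, so its parameters dominate the average.
global_params = fed_avg(
    [np.array([1.0, 3.0]), np.array([3.0, 1.0])],
    [1, 3],
)
# global_params -> array([2.5, 1.5])
```

In a full Federated Learning round, the server would broadcast `global_params` back to the clients, each client would run a few local epochs of training, and the aggregation above would be repeated.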