Show simple item record

dc.contributor.advisor: Righi, Rodrigo da Rosa
dc.contributor.author: Rodrigues, Vinicius Facco
dc.date.accessioned: 2016-05-09T12:51:40Z
dc.date.accessioned: 2022-09-22T19:19:58Z
dc.date.available: 2016-05-09T12:51:40Z
dc.date.available: 2022-09-22T19:19:58Z
dc.date.issued: 2016-02-29
dc.identifier.uri: https://hdl.handle.net/20.500.12032/59636
dc.description.abstract: Elasticity is one of the key features of cloud computing. Using this functionality, we can increase or decrease the amount of computational resources of the cloud at any time, enabling applications to dynamically scale computing and storage resources and avoid over- and under-provisioning. In high performance computing (HPC), initiatives like bag-of-tasks or key-value applications use a load balancer and a loosely-coupled set of virtual machine (VM) instances. In this scenario, it is easy to add or remove virtual machines because the load balancer is in charge of distributing tasks among the active processes. However, iterative HPC applications are tightly-coupled and have difficulty taking advantage of elasticity, because in such applications the number of processes is fixed throughout the application runtime. In fact, simply adding new resources does not guarantee that the processes will use them. Moreover, removing a single process can compromise the entire execution of the application, because each process plays a key role in its execution cycle. Iterative HPC applications are commonly implemented with MPI (Message Passing Interface). At the intersection of MPI and tightly-coupled HPC applications, exploiting elasticity is a challenge, since the source code must be rewritten to address resource reorganization. Such a strategy requires prior knowledge of application behaviour and demands stop-reconfigure-and-go approaches when reorganizing resources. Even with MPI 2.0, in which the number of processes can be changed during execution, it remains unclear how to profit from this feature in the HPC scope, since the developer must manage the communication topology by hand. Moreover, the sudden consolidation of a VM, together with its process, can compromise the entire execution. To address these issues, we propose a PaaS-based elasticity model named AutoElastic. It acts as a middleware that allows iterative HPC applications to take advantage of the dynamic resource provisioning of cloud infrastructures without major modifications. AutoElastic offers elasticity automatically, so the user does not need to configure any resource management policy. This elastic mechanism supports fixed thresholds as well as a new approach that self-adjusts the threshold values during the application execution. AutoElastic also introduces a concept denoted here as asynchronous elasticity: a framework that allows applications to increase or decrease their computing resources without blocking the current execution. The feasibility of AutoElastic is demonstrated through a prototype that runs a CPU-bound numerical integration application on top of the OpenNebula middleware. Results with a parallel iterative application showed performance gains between 28.4% and 59% when comparing executions with and without the elasticity feature. In addition, tests with different parameters showed that, when using threshold-rule-based techniques with fixed thresholds, the upper threshold has a greater impact on performance and resource consumption than the lower threshold. [en]
dc.description.sponsorship: CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior [pt_BR]
dc.language: pt_BR [pt_BR]
dc.publisher: Universidade do Vale do Rio dos Sinos [pt_BR]
dc.rights: openAccess [pt_BR]
dc.subject: Computação em nuvem [pt_BR]
dc.subject: Cloud computing [en]
dc.title: Autoelastic: explorando a elasticidade de recursos de computação em nuvem para a execução de aplicações de alto desempenho iterativa [pt_BR] (English: AutoElastic: exploring cloud computing resource elasticity for the execution of iterative high-performance applications)
dc.type: Dissertação (Master's dissertation) [pt_BR]
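
The abstract above describes a threshold-rule elasticity mechanism: the load of the allocated VMs is monitored and compared against an upper and a lower threshold, and VMs are requested or consolidated asynchronously so the iterative application is never blocked. The sketch below illustrates that general decision loop in Python. All names (observe_cpu_load, request_vm_async, the threshold constants) are hypothetical and the cloud calls are stubbed; this is an illustration of the technique under stated assumptions, not the AutoElastic implementation.

```python
# Illustrative sketch of a threshold-rule elasticity loop (not the AutoElastic code).
# Cloud operations are stubbed so the example runs standalone.
import random
import time

UPPER_THRESHOLD = 0.80   # above this mean CPU load, request a new VM (scale out)
LOWER_THRESHOLD = 0.30   # below this mean CPU load, consolidate a VM (scale in)
MIN_VMS = 2              # never shrink below this many VMs

def observe_cpu_load(vm_id):
    """Stub for a monitoring probe; a real system would query the cloud middleware."""
    return random.uniform(0.0, 1.0)

def request_vm_async(vms):
    """Asynchronous elasticity: the request returns immediately, so the iterative
    application keeps running and only attaches the new VM once it is available."""
    new_id = f"vm-{len(vms)}"
    print(f"[scale out] requested {new_id} (non-blocking)")
    vms.append(new_id)

def consolidate_vm(vms):
    removed = vms.pop()
    print(f"[scale in] consolidated {removed}")

def elasticity_loop(vms, iterations=10, period=0.1):
    for _ in range(iterations):
        loads = [observe_cpu_load(vm) for vm in vms]
        mean_load = sum(loads) / len(loads)
        if mean_load > UPPER_THRESHOLD:
            request_vm_async(vms)
        elif mean_load < LOWER_THRESHOLD and len(vms) > MIN_VMS:
            consolidate_vm(vms)
        time.sleep(period)

if __name__ == "__main__":
    elasticity_loop(["vm-0", "vm-1"])
```

In the self-adjusting variant mentioned in the abstract, the two threshold constants would be recomputed at runtime from the observed load rather than remaining fixed.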


Files in this item

File: Vinicius Facco Rodrigues_.pdf
Size: 2.415 MB
Format: application/pdf
