Show simple item record

dc.contributor.advisor    Ruíz-Cruz, Riemann
dc.contributor.author    Díaz-Sánchez, Jorge A.
dc.date.accessioned    2025-01-27T20:39:10Z
dc.date.accessioned    2025-03-25T21:17:32Z
dc.date.available    2025-01-27T20:39:10Z
dc.date.available    2025-03-25T21:17:32Z
dc.date.issued    2025-01
dc.identifier.citation    Díaz-Sánchez, J. A. (2025). Levenberg-Marquardt Algorithm. Master's degree project (trabajo de obtención de grado), Maestría en Ciencia de Datos. Tlaquepaque, Jalisco: ITESO.
dc.identifier.uri    https://hdl.handle.net/20.500.12032/160059
dc.description.abstract    This research presents an efficient Levenberg-Marquardt implementation for neural network training in regression, classification, and transfer learning. While Levenberg-Marquardt offers fast convergence and precision in nonlinear least-squares problems, its high memory and computational demands limit its use in large models. This work optimizes Levenberg-Marquardt to improve its practicality across diverse architectures by addressing these constraints. A key contribution is integrating Levenberg-Marquardt into PyTorch, a widely used deep learning framework. This enables easier adoption, leveraging PyTorch's GPU acceleration and parallelization for improved efficiency. By minimizing redundant calculations in the Jacobian and Hessian approximations, this implementation significantly reduces memory usage and computational overhead. Instead of merely optimizing storage, it selectively applies Levenberg-Marquardt where needed, balancing second-order precision with resource constraints. Experiments validate Levenberg-Marquardt's efficiency on benchmark tasks, including MNIST classification and fine-tuning AlexNet. Comparisons with Adam and SGD show that Levenberg-Marquardt achieves competitive accuracy with fewer epochs, making it a viable alternative in high-precision scenarios. In transfer learning, limiting trainable parameters helps mitigate memory concerns. This research demonstrates that Levenberg-Marquardt can be an efficient neural network optimizer when resource management is prioritized. By refining its implementation, Levenberg-Marquardt becomes more practical for deep learning, particularly in tasks requiring fast convergence and high accuracy. Future work will explore further memory optimizations and extensions for high-dimensional datasets, broadening Levenberg-Marquardt's applicability in modern neural network training.
dc.language.iso    eng
dc.publisher    ITESO
dc.rights.uri    https://creativecommons.org/licenses/by-nc/4.0/deed.es
dc.subject    Levenberg-Marquardt
dc.subject    Optimization
dc.subject    Machine Learning
dc.subject    Neural Network
dc.subject    Algorithm Design and Analysis
dc.title    Levenberg-Marquardt Algorithm
dc.title.alternative    Levenberg-Marquardt Algorithm
dc.type    info:eu-repo/semantics/masterThesis
dc.type.version    info:eu-repo/semantics/acceptedVersion
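
The abstract describes Levenberg-Marquardt as a damped Gauss-Newton method: each step solves (J^T J + lambda * I) delta = -J^T r, where r is the residual vector and J its Jacobian with respect to the parameters. As a rough illustration only, and not the thesis's PyTorch implementation (the helper and parameter names below are hypothetical), a minimal single-step sketch could look like this:

import torch

def lm_step(params, residuals_fn, damping=1e-3):
    # Illustrative Levenberg-Marquardt step (hypothetical helper, not the thesis code).
    # Solves the damped normal equations (J^T J + damping * I) delta = -J^T r,
    # where r = residuals_fn(params) and J is the Jacobian of r w.r.t. params.
    r = residuals_fn(params)                                      # residual vector, shape (m,)
    J = torch.autograd.functional.jacobian(residuals_fn, params)  # Jacobian, shape (m, n)
    JtJ = J.T @ J                                                 # Gauss-Newton approximation of the Hessian
    g = J.T @ r                                                   # gradient of 0.5 * ||r||^2
    A = JtJ + damping * torch.eye(JtJ.shape[0])                   # damping blends Gauss-Newton with gradient descent
    delta = torch.linalg.solve(A, -g)                             # parameter update
    return params + delta

# Toy usage: fit y = a*x + b to synthetic data with a few LM steps.
x = torch.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + 0.01 * torch.randn(20)
theta = torch.zeros(2)
for _ in range(10):
    theta = lm_step(theta, lambda p: p[0] * x + p[1] - y)

Standard Levenberg-Marquardt implementations adapt the damping factor between iterations (raising it when a step fails to reduce the loss, lowering it when it succeeds); the sketch fixes it for brevity.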


Files in this item

Files    Size    Format    View
ITESO_MAF_MScThesis_AlejandroDiaz.pdf    2.721 MB    application/pdf    View/Open

This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc/4.0/deed.es
