A new framework for training a CNN with a hardware-software architecture

Illustrations, diagrams, color photographs

Authors:
Parra Prada, Dorfell Leonardo
Resource type:
Doctoral thesis
Publication date:
2023
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
eng
OAI Identifier:
oai:repositorio.unal.edu.co:unal/84550
Online access:
https://repositorio.unal.edu.co/handle/unal/84550
https://repositorio.unal.edu.co/
Keywords:
620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería
Computadores neuronales
Supercomputadores
Neural computers
Supercomputers
FER
CNN
FPGA
HNN
Rights
openAccess
License
Reconocimiento 4.0 Internacional
id UNACIONAL2_0210603af185805c3a3aad934e0097d3
oai_identifier_str oai:repositorio.unal.edu.co:unal/84550
network_acronym_str UNACIONAL2
network_name_str Universidad Nacional de Colombia
repository_id_str
dc.title.eng.fl_str_mv A new framework for training a CNN with a hardware-software architecture
dc.title.translated.spa.fl_str_mv Nuevo framework para el entrenamiento de CNN usando una arquitectura hardware-software
title A new framework for training a CNN with a hardware-software architecture
spellingShingle A new framework for training a CNN with a hardware-software architecture
620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería
Computadores neuronales
Supercomputadores
Neural computers
Supercomputers
FER
CNN
FPGA
HNN
title_short A new framework for training a CNN with a hardware-software architecture
title_full A new framework for training a CNN with a hardware-software architecture
title_fullStr A new framework for training a CNN with a hardware-software architecture
title_full_unstemmed A new framework for training a CNN with a hardware-software architecture
title_sort A new framework for training a CNN with a hardware-software architecture
dc.creator.fl_str_mv Parra Prada, Dorfell Leonardo
dc.contributor.advisor.none.fl_str_mv Camargo Bareño, Carlos Ivan
dc.contributor.author.none.fl_str_mv Parra Prada, Dorfell Leonardo
dc.contributor.researchgroup.spa.fl_str_mv Grupo de Física Nuclear de la Universidad Nacional
dc.subject.ddc.spa.fl_str_mv 620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería
topic 620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería
Computadores neuronales
Supercomputadores
Neural computers
Supercomputers
FER
CNN
FPGA
HNN
dc.subject.lemb.spa.fl_str_mv Computadores neuronales
Supercomputadores
dc.subject.lemb.eng.fl_str_mv Neural computers
Supercomputers
dc.subject.proposal.eng.fl_str_mv FER
CNN
FPGA
HNN
dc.subject.proposal.spa.fl_str_mv FER
CNN
FPGA
HNN
description ilustraciones, diagramas, fotografías a color
publishDate 2023
dc.date.accessioned.none.fl_str_mv 2023-08-14T15:43:03Z
dc.date.available.none.fl_str_mv 2023-08-14T15:43:03Z
dc.date.issued.none.fl_str_mv 2023-04
dc.type.spa.fl_str_mv Trabajo de grado - Doctorado
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/doctoralThesis
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.coar.spa.fl_str_mv http://purl.org/coar/resource_type/c_db06
dc.type.content.spa.fl_str_mv Text
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TD
format http://purl.org/coar/resource_type/c_db06
status_str acceptedVersion
dc.identifier.uri.none.fl_str_mv https://repositorio.unal.edu.co/handle/unal/84550
dc.identifier.instname.spa.fl_str_mv Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv https://repositorio.unal.edu.co/
url https://repositorio.unal.edu.co/handle/unal/84550
https://repositorio.unal.edu.co/
identifier_str_mv Universidad Nacional de Colombia
Repositorio Institucional Universidad Nacional de Colombia
dc.language.iso.spa.fl_str_mv eng
language eng
dc.relation.references.spa.fl_str_mv Y. Tian, T. Kanade, and J. Cohn, “Recognizing action units for facial expression analysis,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 97–115, 2001.
M. Lyons, M. Kamachi, and J. Gyoba, “Coding facial expressions with gabor wavelets (ivc special issue),” modified version of a conference article that was invited for publication in a special issue of Image and Vision Computing dedicated to a selection of articles from the IEEE Face and Gesture 1998 conference; the special issue never materialized. 2020. [Online]. Available: https://zenodo.org/record/4029680
K. Anas, “Facial expression recognition in jaffe database,” https://github.com/anas-899/facial-expression-recognition-Jaffe, last accessed 12 Dec 2021.
C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “300 faces in-the-wild challenge: The first facial landmark localization challenge,” Proceedings of IEEE Int’l Conf. on Computer Vision (ICCV-W), 2013. [Online]. Available: https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/
B. Yang, J. Cao, R. Ni, and Y. Zhang, “Facial expression recognition using weighted mixture deep neural network based on double-channel facial images,” IEEE Access, vol. 6, pp. 4630–4640, 2018.
J. Kim, B. Kim, P. Roy, and D. Jeong, “Efficient facial expression recognition algorithm based on hierarchical deep neural network structure,” IEEE Access, vol. 7, pp. 41 273–41 285, 2019.
Digilent, “Zybo z7,” https://digilent.com/reference/programmable-logic/zybo-z7/start, last accessed 28 Jan 2022.
——, “Zybo z7 reference manual,” https://digilent.com/reference/programmable-logic/zybo-z7/ reference-manual, last accessed 28 Jan 2022.
Xilinx, “7 series fpgas configurable logic block,” https://www.xilinx.com/support/documentation/user_guides/ug474_7Series_CLB.pdf, last accessed 28 Jan 2022.
——, “7 series fpgas memory resources,” https://www.xilinx.com/support/documentation/user_guides/ug473_7Series_Memory_Resources.pdf, last accessed 28 Jan 2022.
——, “7 series dsp48e1 slice,” https://www.xilinx.com/support/documentation/user_guides/ug479_7Series_DSP48E1.pdf, last accessed 28 Jan 2022.
AMD-Xilinx, “Pynq: Python productivity,” http://www.pynq.io/, last accessed 19 May 2022.
B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko, “Quantization and training of neural networks for efficient integer-arithmetic-only inference,” Google Inc., pp. 1–14, December 2017.
S. Maloney, “Survey: Implementing dense neural networks in hardware,” https://pdfs.semanticscholar.org/b709/459d8b52783f58f1c118619ec42f3b10e952.pdf, March 2013, last accessed 15 Feb 2018.
“Face image analysis with convolutional neural networks,” https://lmb.informatik.uni-freiburg.de/papers/download/du_diss.pdf, last accessed 15 Feb 2018.
Z. Saidane, Image and video text recognition using convolutional neural networks: Study of new CNNs architectures for binarization, segmentation and recognition of text images. LAP LAMBERT Academic Publishing, 2011.
J. Misra and I. Saha, “Artificial neural networks in hardware: A survey of two decades of progress,” Neurocomputing, vol. 74, no. 1–3, pp. 239–255, December 2010.
V. Bettadapura, “Face expression recognition and analysis: The state of the art,” CoRR, vol. abs/1203.6722, pp. 1–27, 2012.
M. Z. Uddin, W. Khaksar, and J. Torresen, “Facial expression recognition using salient features and convolutional neural network,” IEEE Access, vol. 5, pp. 26 146–26 161, 2017.
Xie, Siyue, and H. Hu, “Facial expression recognition using hierarchical features with deep comprehensive multipatches aggregation convolutional neural networks,” IEEE Transactions on Multimedia, vol. 21, no. 1, pp. 211–220, 2019.
C. Zhang, P. Wang, K. Chen, and J. Kamarainen, “Identity-aware convolutional neural networks for facial expression recognition,” Journal of Systems Engineering and Electronics, vol. 28, no. 4, pp. 784–792, 2017.
P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, pp. 94–101, 2010.
M. Lyons, M. Kamachi, and J. Gyoba, “The japanese female facial expression (jaffe) dataset,” https://doi.org/10.5281/zenodo.3451524, last accessed 06 Dec 2021.
P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, December 2004.
D. King, “Dlib-models,” https://github.com/davisking/dlib-models/, last accessed 12 Dec 2021.
D. E. King, “Dlib-ml: A machine learning toolkit,” Journal of Machine Learning Research, vol. 10, pp. 1755–1758, 2009.
Itseez, “Open source computer vision library,” https://github.com/itseez/opencv, last accessed 12 Dec 2021.
S. van der Walt, J. L. Schonberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, T. Yu, and the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ, vol. 2, p. e453, June 2014. [Online]. Available: https://doi.org/10.7717/peerj.453
skimage, “local_binary_pattern,” https://scikit-image.org/docs/stable/api/skimage.feature.html?highlight=local_binary_pattern#skimage.feature.local_binary_pattern, last accessed 17 Dec 2021.
A. Buslaev, V. I. Iglovikov, E. Khvedchenya, A. Parinov, M. Druzhinin, and A. A. Kalinin, “Albumentations: Fast and flexible image augmentations,” Information, vol. 11, no. 2, 2020. [Online]. Available: https://www.mdpi.com/2078-2489/11/2/125
“Tensorflow: An open-source software library for machine intelligence,” https://www.tensorflow.org/, last accessed 15 Feb 2018.
tensorflow, “Quantization aware training,” https://blog.tensorflow.org/2020/04/quantization-aware-training-with-tensorflow-model-optimization-toolkit.html, last accessed 28 Jan 2022.
——, “Tensorflow lite 8-bit quantization specification,” https://www.tensorflow.org/lite/performance/quantization_spec, last accessed 28 Jan 2022.
Xilinx, “Field programmable gate array (fpga),” https://www.xilinx.com/products/silicon-devices/fpga/what-is-an-fpga.html, last accessed 28 Jan 2022.
TensorFlow, “Transfer learning with tensorflow hub,” https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub, last accessed 17 May 2022.
Amazon, “Alexa,” https://www.amazon.com/b?node=21576558011, last accessed 17 May 2022.
Google, “Hey google,” https://assistant.google.com/, last accessed 17 May 2022.
Anki, “Vector by anki: A giant roll forward for robot kind,” https://www.kickstarter.com/projects/anki/vector-by-anki-a-giant-roll-forward-for-robot-kind, last accessed 17 May 2022.
D. D. Labs, “Vector 2.0,” https://www.digitaldreamlabs.com/products/vector-robot, last accessed 17 May 2022.
L. AI, “Emo: The coolest ai desktop pet with personality and ideas,” https://living.ai/emo/, last accessed 17 May 2022.
Google, “Coral,” https://coral.ai/, last accessed 17 May 2022.
Intel, “Intel neural compute stick 2 (intel ncs2),” https://www.intel.com/content/www/us/en/developer/tools/neural-compute-stick/overview.html, last accessed 17 May 2022.
AMD-Xilinx, “Vitis ai,” https://www.xilinx.com/products/design-tools/vitis/vitis-ai.html, last accessed 19 May 2022.
——, “Xilinx alveo,” https://www.xilinx.com/products/boards-and-kits/alveo.html, last accessed 19 May 2022.
——, “Board support package settings page,” https://docs.xilinx.com/r/en-US/ug1400-vitis-embedded/Board-Support-Package-Settings-Page, last accessed 29 May 2022.
——, “Dpu on pynq,” https://github.com/Xilinx/DPU-PYNQ, last accessed 29 May 2022.
Dorfell, “Pynq 2.7 for zybo-z7,” https://discuss.pynq.io/t/pynq-2-7-for-zybo-z7/4124, last accessed 02 Jun 2022.
AMD-Xilinx, “Retargeting to a different board,” https://pynq.readthedocs.io/en/latest/pynq_sd_card.html#retargeting-to-a-different-board, last accessed 31 May 2022.
Logictronix, “Installing tensorflow in pynq,” https://logictronix.com/wp-content/uploads/2019/04/TensorFlow_Installation_on_PYNQ_Nov_6_2018.pdf, last accessed 03 Jun 2022.
K. Hyodo, “Tensorflow-bin previous versions,” https://github.com/PINTO0309/Tensorflow-bin/tree/main/previous_versions, last accessed 03 Jun 2022.
Y. Shen, T. Ji, M. Ferdman, and P. Milder, “Argus: An end-to-end framework for accelerating cnns on fpgas,” IEEE Micro, vol. 39, no. 5, pp. 17–25, 2019.
S. Sabogal, A. George, and G. Crum, “Recon: A reconfigurable cnn acceleration framework for hybrid semantic segmentation on hybrid socs for space applications,” in 2019 IEEE Space Computing Conference (SCC), 2019, pp. 41–52.
S. Mouselinos, V. Leon, S. Xydis, D. Soudris, and K. Pekmestzi, “Tf2fpga: A framework for projecting and accelerating tensorflow cnns on fpga platforms,” in 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST), 2019, pp. 1–4.
J. Zhu, L. Wang, H. Liu, S. Tian, Q. Deng, and J. Li, “An efficient task assignment framework to accelerate dpu-based convolutional neural network inference on fpgas,” IEEE Access, vol. 8, pp. 83 224–83 237, 2020.
Y. Liang, L. Lu, and J. Xie, “Omni: A framework for integrating hardware and software optimizations for sparse cnns,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 40, no. 8, pp. 1648–1661, 2021.
Xilinx, “Zynq ultrascale+ mpsoc,” https://www.xilinx.com/products/silicon-devices/soc/zynq-ultrascale-mpsoc.html, last accessed 12 Sep 2022.
C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Optimizing fpga-based accelerator design for deep convolutional neural networks.” Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays - FPGA’15, February 2015, pp. 161–170.
S. I. Venieris and C. S. Bouganis, “Fpgaconvnet: A framework for mapping convolutional neural networks on fpgas.” Proceedings - 24th IEEE International Symposium on Field-Programmable Custom Computing Machines, FCCM 2016, May 2016, pp. 40–47.
A. Dundar, J. Jin, B. Martini, and E. Culurciello, “Embedded streaming deep neural networks accelerator with applications,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 7, pp. 1572– 1583, July 2017.
N. Li, S. Takaki, Y. Tomioka, and H. Kitazawa, “A multistage dataflow implementation of a deep convolutional neural network based on fpga for high-speed object recognition.” 2016 IEEE Southwest Symposium On Image Analysis and Interpretation (SSIAI), 2016, pp. 165–168.
“Caffe: Deep learning framework,” http://caffe.berkeleyvision.org/, last accessed 15 Feb 2018.
“Mathworks: Matlab,” https://www.mathworks.com/products/matlab.html, last accessed 15 Feb 2018.
“Microsoft cognitive toolkit,” https://www.microsoft.com/en-us/cognitive-toolkit/, last accessed 15 Feb 2018.
T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, “Diannao: a small-footprint high-throughput accelerator for ubiquitous machine-learning.” Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS’14, March 2014, pp. 269–284.
Y. Zhou and J. Jiang, “An fpga-based accelerator implementation for deep convolutional neural networks.” 4th International Conference on Computer Science and Network Technology (ICCSNT), December 2015, pp. 829–832.
Y. Murakami, “Fpga implementation of a simd-based array processor with torus interconnect.” 2015 International Conference on Field Programmable Technology, FPT 2015, May 2015, pp. 244–247.
B. Kitchenham, O. P. Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, “Systematic literature reviews in software engineering - a systematic literature review,” Information and Software Technology, vol. 51, no. 1, pp. 7–15, November 2008.
B. Kitchenham, R. Pretorius, D. Budgen, O. P. Brereton, M. Turner, M. Niazi, and S. Linkman, “Systematic literature reviews in software engineering-a tertiary study,” Information and Software Technology, August 2010.
A. Krizhevsky, “One weird trick for parallelizing convolutional neural networks,” https://arxiv.org/abs/1404.5997, April 2014, last accessed 15 Feb 2018.
S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cudnn: Efficient primitives for deep learning,” https://arxiv.org/abs/1410.0759, December 2014, last accessed 15 Feb 2018.
F. Ortega-Zamorano, J. M. Jerez, D. U. Munoz, R. M. Luque-Baena, and L. Franco, “Efficient implementation of the backpropagation algorithm in fpgas and microcontrollers,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 9, pp. 1840–1850, August 2016.
C. Farabet, B. Martini, B. Corda, P. Akselrod, E. Culurciello, and Y. Lecun, “Neuflow: A runtime reconfigurable dataflow processor for vision.” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 2011, pp. 109–116.
M. R. D. Abdu-Aljabar, “Design and implementation of neural network in fpga,” Journal of Engineering and Development, vol. 16, no. 3, September 2012.
G. H. Shakoory, “Fpga implementation of multilayer perceptron for speech recognition,” Journal of Engineering and Development, vol. 17, no. 6, December 2013.
E. Z. Mohammed and H. K. Ali, “Hardware implementation of artificial neural network using field programmable gate array,” International Journal of Computer Theory and Engineering, vol. 5, no. 5, October 2013.
S. Singh, S. Sanjeevi, S. V., and A. Talashi, “Fpga implementation of a trained neural network,” IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), vol. 10, no. 3, May-June 2015.
Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y. Chen, and O. Temam, “Shidiannao: Shifting vision processing closer to the sensor.” Proceedings of the 42nd Annual International Symposium on Computer Architecture-ISCA’15, June 2015, pp. 92–104.
M. Motamedi, P. Gysel, V. Akella, and S. Ghiasi, “Design space exploration of fpga-based deep convolutional neural networks.” 21st Asia and South Pacific Design Automation Conference, 2016, pp. 575–580.
L. B. Saldanha and C. Bobda, “Sparsely connected neural networks in fpga for handwritten digit recognition.” Proceedings - International Symposium on Quality Electronic Design (ISQED), May 2016, pp. 113–117.
Y. Wang, L. Xia, T. Tang, B. Li, S. Yao, M. Cheng, and H. Yang, “Low power convolutional neural networks on a chip,” no. 1. IEEE International Symposium on Computer Architecture, April 2016, pp. 129–132.
C. Kyrkou, C. S. Bouganis, T. Theocharides, and M. M. Polycarpou, “Embedded hardware-efficient real-time classification with cascade support vector machines,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 1, January 2016.
T. Luo, S. Liu, L. Li, Y. Wang, S. Zhang, T. Chen, Z. Xu, O. Temam, and Y. Chen, “Dadiannao: A neural network supercomputer,” IEEE Transactions on Computers, vol. 66, no. 1, pp. 73–88, January 2017.
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv Reconocimiento 4.0 Internacional
dc.rights.uri.spa.fl_str_mv http://creativecommons.org/licenses/by/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
rights_invalid_str_mv Reconocimiento 4.0 Internacional
http://creativecommons.org/licenses/by/4.0/
http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv openAccess
dc.format.extent.spa.fl_str_mv xiii, 100 páginas
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv Bogotá - Ingeniería - Doctorado en Ingeniería - Ingeniería Eléctrica
dc.publisher.faculty.spa.fl_str_mv Facultad de Ingeniería
dc.publisher.place.spa.fl_str_mv Bogotá, Colombia
dc.publisher.branch.spa.fl_str_mv Universidad Nacional de Colombia - Sede Bogotá
institution Universidad Nacional de Colombia
bitstream.url.fl_str_mv https://repositorio.unal.edu.co/bitstream/unal/84550/3/license.txt
https://repositorio.unal.edu.co/bitstream/unal/84550/4/1098679415.2023.pdf
https://repositorio.unal.edu.co/bitstream/unal/84550/5/1098679415.2023.pdf.jpg
bitstream.checksum.fl_str_mv eb34b1cf90b7e1103fc9dfd26be24b4a
38ef7ecc441ec1b21eed68e15f9400c0
5096912a7a4f7c64c4f3389853684a8a
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv repositorio_nal@unal.edu.co
_version_ 1814089726395678720
dc.description.abstract.eng.fl_str_mv Facial Expression Recognition (FER) systems classify emotions using geometrical approaches or Machine Learning (ML) algorithms such as Convolutional Neural Networks (CNNs). However, designing these systems can be challenging, as the outcome depends on the data set's quality and the designer's expertise. Moreover, CNN inference requires a large amount of memory and computational resources, making it unfeasible for low-cost embedded systems. Hence, although GPUs are expensive and have high power consumption, they are frequently employed because they considerably reduce inference time compared to CPUs. On the other hand, SoCs implemented on FPGAs consume less power and support pipelining, but floating-point representations may result in intricate, larger designs that are only suitable for high-end FPGAs. Therefore, custom hardware-software architectures that maintain acceptable performance while using simpler data representations are advantageous. To address these challenges, this work proposes a design methodology for CNN-based FER systems that covers preprocessing, the Local Binary Pattern (LBP), and data augmentation. In addition, several CNN models were trained with TensorFlow on the JAFFE data set to validate the methodology. Each test studied the relationship between parameters, layers, and performance, as well as the overfitting and underfitting scenarios. Furthermore, this work introduces the model M6, a single-channel CNN that reaches an accuracy of 94% in fewer than 30 epochs, with 306,182 parameters in 1.17 MB. The work also employs the quantization methodology of TensorFlow Lite (tflite) to compute the inference of a CNN using integer numbers. After quantization, M6's accuracy dropped from 94.44% to 83.33%, the number of parameters increased from 306,182 to 306,652, and the model size decreased almost 4x, from 1.17 MB to 0.3 MB. The work also presents a custom hardware-software architecture to accelerate CNNs, known as the FER SoC, which reproduces the main tflite operations in hardware. Because the integer values are fully mapped to hardware registers, the accelerator's results are similar to their software counterparts. The architecture was tested on a Zybo-Z7 development board with 1 GB of RAM and the Zynq-7000 device XC7Z020-CLG400. It achieved the same accuracy as a laptop equipped with a 16-thread AMD CPU, 16 GB of RAM, and an Nvidia GTX1660Ti GPU, but was 20% slower; it is therefore recommended to assess whether the trade-off between quantization and inference time is worth it for the target application. Lastly, another contribution is Resiliency, a framework for training CNNs on custom hardware-software architectures, which has been used to train and run the inference of the single-channel M6 model. Resiliency provides the required design files as well as the Pynq 2.7 image created for running ML frameworks such as TensorFlow and PyTorch. Although the training time was slow, the accuracy and loss were consistent with traditional approaches; the execution time could be improved by using larger FPGAs with MPSoCs such as the Zynq UltraScale family. (Text taken from the source)
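The preprocessing stage summarized in the abstract (face crops mapped to Local Binary Pattern images, then augmented) can be made concrete with a short sketch using scikit-image's local_binary_pattern and Albumentations, both cited in the references above. This is a minimal illustration: the LBP radius and neighbor count, the crop size, and the augmentation choices are assumptions, not the thesis' exact configuration.

import numpy as np
import albumentations as A
from skimage.feature import local_binary_pattern

def to_lbp(gray_face, n_points=8, radius=1):
    # Map a grayscale face crop to a uint8 LBP image (assumed settings).
    lbp = local_binary_pattern(gray_face, n_points, radius, method="uniform")
    # Rescale the LBP codes to the 0-255 range expected as CNN input.
    return (255 * lbp / lbp.max()).astype(np.uint8)

# Assumed label-preserving augmentations: flips, small rotations, lighting.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=10, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a JAFFE crop
sample = augment(image=to_lbp(face))["image"]
print(sample.shape, sample.dtype)  # (64, 64) uint8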
dc.description.degreelevel.spa.fl_str_mv Doctorado
dc.description.degreename.spa.fl_str_mv Doctor en Ingeniería
dc.description.researcharea.spa.fl_str_mv Diseño digital, sistemas embebidos
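The tflite quantization step described in the abstract above can likewise be sketched with TensorFlow's post-training full-integer quantization. The toy model and random representative data below are placeholders and do not reproduce the M6 architecture; only the converter settings illustrate the technique.

import numpy as np
import tensorflow as tf

# Placeholder model; M6's real layer stack is not reproduced here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 expression classes
])

def representative_data():
    # The converter calibrates activation ranges on a few sample inputs.
    for _ in range(32):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # integer-only inference, as in the FER SoC
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
print(f"quantized model size: {len(tflite_model) / 1024:.1f} KiB")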
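Finally, the integer-only arithmetic that the FER SoC maps to hardware registers follows the scheme of Jacob et al. (cited above): each floating-point scale is folded into a 32-bit fixed-point multiplier plus a right shift, so an int32 accumulator can be requantized to int8 without floating point. The sketch below is illustrative only and assumes a scale smaller than 1, so the shift is non-negative.

import math

def quantize_multiplier(scale):
    # Split a positive scale < 1 into a Q0.31 multiplier and a right shift.
    mantissa, exponent = math.frexp(scale)      # scale = mantissa * 2**exponent
    return round(mantissa * (1 << 31)), -exponent

def requantize(acc, multiplier, shift):
    # Rescale an int32 accumulator to int8 with rounding, as hardware would.
    prod = acc * multiplier                     # 64-bit product in hardware
    rounding = 1 << (31 + shift - 1)
    q = (prod + rounding) >> (31 + shift)
    return max(-128, min(127, q))               # saturate to int8

# Example: accumulator 12345 with an (assumed) effective scale of 0.0037.
m, s = quantize_multiplier(0.0037)
print(requantize(12345, m, s), round(12345 * 0.0037))  # both print 46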