Associative learning for collective decision-making in dynamic environments

English-language document of the doctoral thesis

Author:
Chica Pedraza, Gustavo Alonso
Resource type:
Doctoral thesis
Publication date:
2021
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
eng
OAI Identifier:
oai:repositorio.unal.edu.co:unal/80689
Online access:
https://repositorio.unal.edu.co/handle/unal/80689
https://repositorio.unal.edu.co/
Keywords:
620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería
Behavior, animal
Ethology
Etología
Animal Behavior
Distributed control
Entropy
Evolutionary Game Theory
Learning Rules
Population dynamics
Comportamiento animal
control distribuido
Entropía
Teoría de Juegos Evolutiva
Reglas de Aprendizaje
Dinámicas de Población
Rights
openAccess
License
Atribución-NoComercial-SinDerivadas 4.0 Internacional
id UNACIONAL2_cb6ae32cf66ecda1c62e5c096349cdb5
network_acronym_str UNACIONAL2
network_name_str Universidad Nacional de Colombia
dc.title.eng.fl_str_mv Associative learning for collective decision-making in dynamic environments
dc.title.translated.spa.fl_str_mv Aprendizaje asociativo para toma de decisiones colectivas en ambientes dinámicos
dc.creator.fl_str_mv Chica Pedraza, Gustavo Alonso
dc.contributor.advisor.none.fl_str_mv Mojica Nava, Eduardo Alirio
dc.contributor.author.none.fl_str_mv Chica Pedraza, Gustavo Alonso
dc.contributor.researchgroup.spa.fl_str_mv Programa de Investigacion sobre Adquisicion y Analisis de Señales Paas-Un
dc.subject.ddc.spa.fl_str_mv 620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingeniería
dc.subject.lemb.eng.fl_str_mv Behavior, animal
Ethology
dc.subject.lemb.spa.fl_str_mv Etología
dc.subject.proposal.eng.fl_str_mv Animal Behavior
Distributed control
Entropy
Evolutionary Game Theory
Learning Rules
Population dynamics
dc.subject.proposal.spa.fl_str_mv Comportamiento animal
control distribuido
Entropía
Teoría de Juegos Evolutiva
Reglas de Aprendizaje
Dinámicas de Población
description English-language document of the doctoral thesis
dc.date.accessioned.none.fl_str_mv 2021-11-16T21:03:48Z
dc.date.available.none.fl_str_mv 2021-11-16T21:03:48Z
dc.date.issued.none.fl_str_mv 2021-11-15
dc.type.spa.fl_str_mv Trabajo de grado - Doctorado
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/doctoralThesis
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.coar.spa.fl_str_mv http://purl.org/coar/resource_type/c_db06
dc.type.content.spa.fl_str_mv Text
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TD
dc.identifier.uri.none.fl_str_mv https://repositorio.unal.edu.co/handle/unal/80689
dc.identifier.instname.spa.fl_str_mv Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv https://repositorio.unal.edu.co/
dc.language.iso.spa.fl_str_mv eng
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv Atribución-NoComercial-SinDerivadas 4.0 Internacional
dc.rights.uri.spa.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.format.extent.spa.fl_str_mv xxii, 125 páginas
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv Bogotá - Ingeniería - Doctorado en Ingeniería - Ingeniería Eléctrica
dc.publisher.department.spa.fl_str_mv Departamento de Ingeniería Eléctrica y Electrónica
dc.publisher.faculty.spa.fl_str_mv Facultad de Ingeniería
dc.publisher.place.spa.fl_str_mv Bogotá, Colombia
dc.publisher.branch.spa.fl_str_mv Universidad Nacional de Colombia - Sede Bogotá
bitstream.url.fl_str_mv https://repositorio.unal.edu.co/bitstream/unal/80689/1/license.txt
https://repositorio.unal.edu.co/bitstream/unal/80689/2/80032454.2021.pdf
https://repositorio.unal.edu.co/bitstream/unal/80689/3/80032454.2021.pdf.jpg
bitstream.checksum.fl_str_mv 8153f7789df02f0a4c9e079953658ab2
95312fe0da6b6e6b98d64bd00cc52aa3
435547d26cb3a618fe4ffa856b4bf846
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv repositorio_nal@unal.edu.co
_version_ 1814089781985935360
Abstract:
In the past few decades, animal behavior has become one of the most attractive subjects of study in academia. This interest can be traced to its connection with evolutionary theories, which employ concepts of natural selection that allow organisms to adapt better to their environment, helping them survive and leave more offspring. Emerging fields such as artificial intelligence and evolutionary game theory (EGT) have focused on incorporating these evolutionary concepts into real-life applications. Modeling animal behavior requires an understanding of learning, since animals respond to stimuli; that is, they learn to associate actions with outcomes (rewards or punishments). Approaches in this field typically study the learning process by associating animals with agents, players, or populations. In this sense, traditional reinforcement learning (RL) is a useful tool in a single-agent framework. In a multi-agent system (MAS), however, this tool can fall short: the agents interfere with one another, so the feedback an agent receives depends not only on its own actions but also on those of all the other agents in the MAS.
Moreover, in a multi-agent framework the environment is non-stationary, and the optimization and convergence guarantees of the RL algorithm are lost. To deal with such scenarios, EGT is often used from a population-dynamics perspective, where applications involve the design of networked engineering approaches in which learning, control systems, stability, and information dependency are all relevant issues. Classic population dynamics (e.g., the replicator, Smith, and logit dynamics) require full information about the system to find the outcome that achieves the Nash equilibrium. This requirement stems from the underlying assumption that the population is well-mixed, which limits the fields where the classic theory can be applied. Recent advances have introduced the notion of non-well-mixed populations, which use a distributed structure able to cope with incomplete graphs (non-full information). This work aims to handle scenarios with high uncertainty through distributed modeling. The main objective is to develop a model that tackles the loss of information in complex and dynamic environments, where parallel computation can address the lack of information between agents and avoid the control problems of centralized schemes. For this purpose, a mathematical abstraction of the Q-learning dynamics equations is developed and complemented by a novel approximation from a population-game perspective. The resulting dynamics can be understood as entropy-based learning rules, and their behavior is demonstrated in applications involving classic games, optimization problems, smart grids, and demand-response systems.
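The classic, full-information population dynamics mentioned above can be simulated directly. A minimal sketch of the replicator dynamics on a textbook Hawk-Dove game (an illustrative example, not one of the thesis's case studies):

```python
def replicator(A, x, steps=5000, dt=0.01):
    """Euler integration of the replicator dynamics
    x_i' = x_i * (f_i(x) - f_bar(x)), with payoff vector f = A x.
    Each update uses the full population state x, i.e. it assumes a
    well-mixed population with complete information."""
    n = len(x)
    for _ in range(steps):
        f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        f_bar = sum(x[i] * f[i] for i in range(n))
        x = [x[i] + dt * x[i] * (f[i] - f_bar) for i in range(n)]
    return x

# Hawk-Dove game with value V = 2 and cost C = 4: the mixed Nash
# equilibrium puts probability V / C = 0.5 on Hawk.
A = [[-1.0, 2.0],
     [0.0, 1.0]]
x = replicator(A, [0.2, 0.8])
# x converges to [0.5, 0.5]; the total mass sum(x) stays at 1.
```

Note that each Euler step preserves the total mass exactly, which is the mass-conservation property the distributed dynamics of the thesis are also shown to retain.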
The results show an interesting interconnection between the selection-mutation mechanisms of evolutionary game theory and the exploration-exploitation structure of RL, which makes it possible to view the learning process in MAS from other perspectives, understand it better, and adapt it to more realistic scenarios. The results also show that, despite using only partial information, the obtained dynamics share strong similarities with the classic approaches, as evidenced by mass conservation and convergence to the Nash equilibrium. (Text taken from source.)
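The exploration-exploitation side of this interconnection is commonly captured by the Boltzmann (softmax) choice rule over Q-values, where the temperature plays a mutation-like role. The sketch below is this standard construction, not the thesis's exact derivation:

```python
import math

def boltzmann(q, tau):
    """Softmax (Boltzmann) distribution over Q-values: p_i ∝ exp(q_i / tau).
    The temperature tau controls exploration (high tau: near-uniform,
    mutation-like) versus exploitation (low tau: greedy, selection-like)."""
    m = max(q)                                   # shift for numerical stability
    w = [math.exp((qi - m) / tau) for qi in q]
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    """Shannon entropy of a choice distribution; the quantity behind the
    entropy-based reading of the learning rules."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

q = [1.0, 2.0, 0.5]
greedy = boltzmann(q, 0.1)    # almost all probability on the best action
mixed = boltzmann(q, 10.0)    # close to uniform, much higher entropy
# probabilities always sum to 1 (the mass-conservation analogue)
```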
Los resultados muestran una interesante interconexión entre los mecanismos de selección-mutación de la Teoría de Juegos Evolutivos y la estructura de exploración-explotación de RL, lo que permite ver el proceso de aprendizaje en MAS desde otras perspectivas para comprenderlo y ajustarlo a escenarios más realistas. Los resultados también muestran que a pesar de utilizar información parcial, las dinámicas obtenidas comparten fuertes similitudes con los enfoques clásicos, hecho que puede ser evidenciado por la conservación de masas y la convergencia del Equilibrio de Nash.ColcienciasColfuturoUniversidad Nacional de ColombiaUniversidad Santo TomásDoctoradoDoctor en Ingeniería.Redes Distribuidasxxii, 125 páginasapplication/pdfengUniversidad Nacional de ColombiaBogotá - Ingeniería - Doctorado en Ingeniería - Ingeniería EléctricaDepartamento de Ingeniería Eléctrica y ElectrónicaFacultad de IngenieríaBogotá, ColombiaUniversidad Nacional de Colombia - Sede Bogotá620 - Ingeniería y operaciones afines::629 - Otras ramas de la ingenieríaBehavior, animalEthologyEtologíaAnimal BehaviorDistributed controlEntropyEvolutionary Game TheoryLearning RulesPopulation dynamicsComportamiento animalcontrol distribuidoEntropíaTeoría de Juegos EvolutivaReglas de AprendizajeDinámicas de PoblaciónAssociative learning for collective decision-making in dynamic environmentsAprendizaje asociativo para toma de decisiones colectivas en ambientes dinámicosTrabajo de grado - Doctoradoinfo:eu-repo/semantics/doctoralThesisinfo:eu-repo/semantics/acceptedVersionhttp://purl.org/coar/resource_type/c_db06Texthttp://purl.org/redcol/resource_type/TDAlbert B Kao, Noam Miller, Colin Torney, Andrew Hartnett, and Iain D Couzin. Collective learning and optimal consensus decisions in social animal groups. PLoS computational biology, 10(8), 2014.Nicanor Quijano, Carlos Ocampo-Martinez, Julian Barreiro-Gomez, German Obando, Andres Pantoja, and Eduardo Mojica-Nava. 
The role of population games and evolutionary dynamics in distributed control systems: The advantages of evolutionary game theory. IEEE Control Systems Magazine, 37(1):70–97, 2017.Peter D Taylor and Leo B Jonker. Evolutionary stable strategies and game dynamics. Mathematical biosciences, 40(1-2):145–156, 1978.J¨orgen W Weibull. Evolutionary game theory. MIT press, 1997.Andres Pantoja, G Obando, and Nicanor Quijano. Distributed optimization with information-constrained population dynamics. Journal of the Franklin Institute, 356(1):209–236, 2019.SM Zafaruddin, Ilai Bistritz, Amir Leshem, and Dusit Niyato. Distributed learning for channel allocation over a shared spectrum. IEEE Journal on Selected Areas in Communications, 37(10):2337–2349, 2019.Udari Madhushani and Naomi Ehrich Leonard. Distributed learning: Sequential decision making in resource-constrained environments. arXiv preprint arXiv:2004.06171, 2020.Guillaume Sartoretti, William Paivine, Yunfei Shi, Yue Wu, and Howie Choset. Distributed learning of decentralized control policies for articulated mobile robots. IEEE Transactions on Robotics, 35(5):1109–1122, 2019.Josef Hofbauer, Karl Sigmund, et al. Evolutionary games and population dynamics. Cambridge university press, 1998.William H Sandholm. Population games and deterministic evolutionary dynamics. In Handbook of game theory with economic applications, volume 4, pages 703–778. Elsevier, 2015.Maojiao Ye and Guoqiang Hu. Distributed nash equilibrium seeking in multi-agent games with partially coupled payoff functions. In 2017 13th IEEE International Conference on Control & Automation (ICCA), pages 265–270. IEEE, 2017.Bahman Gharesifard and Jorge Cort´es. Distributed convergence to nash equilibria by adversarial networks with undirected topologies. In 2012 American Control Conference (ACC), pages 5881–5886. IEEE, 2012.Octave Boussaton, Johanne Cohen, Joanna Tomasik, and Dominique Barth. On the distributed learning of nash equilibria with minimal information. 
In 2012 6th International Conference on Network Games, Control and Optimization (NetGCooP), pages 30–37. IEEE, 2012.Andr´es Pantoja and Nicanor Quijano. Distributed optimization using population dynamics with a local replicator equation. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 3790–3795. IEEE, 2012.Lee Alan Dugatkin. Principles of animal behavior. University of Chicago Press, 2020.Sarah Krichbaum, Adam Davila, Lucia Lazarowski, and Jeffrey S Katz. Animal cognition. In Oxford Research Encyclopedia of Psychology. University of Oxford, 2020.Anthony M Zador. A critique of pure learning and what artificial neural networks can learn from animal brains. Nature communications, 10(1):1–7, 2019.William H Sandholm. Evolutionary game theory. Complex Social and Behavioral Systems: Game Theory and Agent-Based Models, pages 573–608, 2020.Vicky A Melfi, Nicole R Dorey, and Samantha J Ward. Zoo animal learning and training. Wiley Online Library, 2020.Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.Ronald L Akers and Wesley G Jennings. Social learning theory. Wiley Handbooks in Criminology and Criminal Justice, pages 230–240, 2016.Mark Haselgrove and IPL McLaren. The psychology of associative learning, 2019.G´erard Mani`ere and G´erard Coureaud. From stimulus to behavioral decision-making. Frontiers in Behavioral Neuroscience, 13:274, 2020.Howard Raiffa. Decision analysis: Introductory lectures on choices under uncertainty. Addison-Wesley, 1968.Nils Bulling. A survey of multi-agent decision making. KI-K¨unstliche Intelligenz, 28(3):147–158, 2014.Mariusz Flasi´nski. Introduction to artificial intelligence. Springer, 2016.Zeng Wei, Jun Xu, Yanyan Lan, Jiafeng Guo, and Xueqi Cheng. Reinforcement learning to rank with markov decision process. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 945–948, 2017.Didier Dubois and Henri Prade. 
Qualitative decision theory. In Proceedings IJCAI 95, pages 1924–1930, 2017.Bruce Edmonds. Towards a descriptive model of agent strategy search. Computational Economics, 18(1):111–133, 2001.Vladimir Osipov, Aleksei Posadskii, and Tatiana Sivakova. Methods of rational decision making in multi-agent systems for evaluating the effectiveness of innovations. In AIP Conference Proceedings, page 020031. AIP Publishing LLC, 2019.Ya’akov Gal, Barbara Grosz, Sarit Kraus, Avi Pfeffer, and Stuart Shieber. Agent decision-making in open mixed networks. Artificial Intelligence, 174(18):1460–1480, 2010.Sarit Kraus and Ronald C Arkin. Strategic negotiation in multiagent environments. MIT press, 2001.Davide Calvaresi, Kevin Appoggetti, Luca Lustrissimini, Mauro Marinoni, Paolo Sernani, Aldo Franco Dragoni, and Michael Schumacher. Multi-agent systems’ negotiation protocols for cyber-physical systems: Results from a systematic literature review. In ICAART (1), pages 224–235, 2018.Julian Barreiro-Gomez, Germ´an Obando, and Nicanor Quijano. Distributed population dynamics: Optimization and control applications. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(2):304–314, 2016.Michael Rinehart and Munther A Dahleh. The value of side information in shortest path optimization. IEEE transactions on automatic control, 56(9):2038–2049, 2011.David Knoke and Song Yang. Social network analysis, volume 154. SAGE Publications, Incorporated, 2019Olga Pacheco and Jos´e Carmo. A role based model for the normative specification of organized collective agency and agents interaction. Autonomous Agents and Multi- Agent Systems, 6(2):145–184, 2003.Yichuan Jiang, Jing Hu, and Donghui Lin. Decision making of networked multiagent systems for interaction structures. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 41(6):1107–1121, 2011.Sridevi V Sarma and Munther A Dahleh. Remote control over noisy communication channels: A first-order example. 
IEEE transactions on automatic control, 52(2):284– 289, 2007.Arthur Turrell. Agent-based models: understanding the economy from the bottom up. Bank of England Quarterly Bulletin, page Q4, 2016.Donghwan Lee, Niao He, Parameswaran Kamalaruban, and Volkan Cevher. Optimization for reinforcement learning: From a single agent to cooperative agents. IEEE Signal Processing Magazine, 37(3):123–135, 2020.Tiago C dos Santos and Denis F Wolf. Bargaining game approach for lane change maneuvers. In 2019 19th International Conference on Advanced Robotics (ICAR), pages 629–634. IEEE, 2019.Miguel A Lopez-Carmona, Ivan Marsa-Maestre, and Enrique de la Hoz. A cooperative framework for mediated group decision making. In Modern Approaches to Agent-based Complex Automated Negotiation, pages 35–50. Springer, 2017.Dmitrii Iarosh, G Reneva, A Kornilova, and Petr Konovalov. Multiagent system of mobile robots for robotic football. In 2019 26th Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), pages 1–3. IEEE, 2019.Aisha D Farooqui and Muaz A Niazi. Game theory models for communication between agents: a review. Complex Adaptive Systems Modeling, 4(1):13, 2016.Jianlei Zhang and Ming Cao. Strategy competition dynamics of multi-agent systems in the framework of evolutionary game theory. IEEE Transactions on Circuits and Systems II: Express Briefs, 2019.Sihua Chen, Qin He, and Hua Xiao. A study on cross-border e-commerce partner selection in b2b mode. Electronic Commerce Research, pages 1–21, 2020.Sulaiman A Alghunaim and Ali H Sayed. Distributed coupled multi-agent stochastic optimization. IEEE Transactions on Automatic Control, 2019.Sandip Roy, Kristin Herlugson, and Ali Saberi. A control-theoretic approach to distributed discrete-valued decision-making in networks of sensing agents. IEEE Transactions on Mobile Computing, 5(8):945–957, 2006.Liang Xu, Jianying Zheng, Nan Xiao, and Lihua Xie. 
Mean square consensus of multi-agent systems over fading networks with directed graphs. Automatica, 95:503– 510, 2018.Kwangwon Seo, Jinhyun Ahn, and Dong-Hyuk Im. Optimization of shortest-path search on rdbms-based graphs. ISPRS International Journal of Geo-Information, 8(12):550, 2019.Michael Rinehart and Munther A Dahleh. The value of sequential information in shortest path optimization. In Proceedings of the 2010 American Control Conference, pages 4084–4089. IEEE, 2010.Xiao-Wei Jiang, Bin Hu, Zhi-Hong Guan, Xian-He Zhang, and Li Yu. The minimal signal-to-noise ratio required for stability of control systems over a noisy channel in the presence of packet dropouts. Information Sciences, 372:579–590, 2016.Yang Song, Jie Yang, Min Zheng, and Chen Peng. Disturbance attenuation for markov jump linear system over an additive white gaussian noise channel. International Journal of Control, 89(12):2482–2491, 2016.Xiaohua Ge, Fuwen Yang, and Qing-Long Han. Distributed networked control systems: A brief overview. Information Sciences, 380:117–131, 2017.Mar´ıa Guinaldo, Jos´e S´anchez, and Sebasti´an Dormido. Control en red basado en eventos: De lo centralizado a lo distribuido. Revista Iberoamericana de Autom´atica e Inform´atica industrial, 14(1):16–30, 2017.Munther A Dahleh. Distributed decisions for networked systems. Technical report, Massachusetts Inst of Tech Cambridge Dept. of Electrical Engineering, 2012.Chicheng Huang, Huaqing Li, Dawen Xia, and Li Xiao. Quantized subgradient algorithm with limited bandwidth communications for solving distributed optimization over general directed multi-agent networks. Neurocomputing, 185:153–162, 2016.Jiahe Jiang and Yangyang Jiang. Leader-following consensus of linear time-varying multi-agent systems under fixed and switching topologies. Automatica, 113:108804, 2020.Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, and Satwik Kottur. On emergent communication in competitive multi-agent teams. 
arXiv preprint arXiv:2003.01848, 2020.Andreas Kasprzok, Beshah Ayalew, and Chad Lau. Decentralized traffic rerouting using minimalist communications. In 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), pages 1–7. IEEE, 2017.Rosaria Conte and Jaime Sim˜ao Sichman. Dependence graphs: Dependence within and between groups. Computational & Mathematical Organization Theory, 8(2):87– 112, 2002.S. K. Michael Wong and Cory J. Butz. Constructing the dependency structure of a multiagent probabilistic network. IEEE Transactions on Knowledge and Data Engineering, 13(3):395–415, 2001.Hamed Rezaee and Farzaneh Abdollahi. Discrete-time consensus strategy for a class of high-order linear multiagent systems under stochastic communication topologies. Journal of the Franklin Institute, 354(9):3690–3705, 2017.Wolfgang H¨onig, Scott Kiesel, Andrew Tinka, JosephWDurham, and Nora Ayanian. Conflict-based search with optimal task assignment. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 757–765. International Foundation for Autonomous Agents and Multiagent Systems, 2018.Felipe Leno Da Silva and Anna Helena Reali Costa. A survey on transfer learning for multiagent reinforcement learning systems. Journal of Artificial Intelligence Research, 64:645–703, 2019.Jan De Houwer, Sean Hughes, and Dermot Barnes-Holmes. Associative learning as higher order cognition: Learning in human and nonhuman animals from the perspective of propositional theories and relational frame theory. Journal of Comparative Psychology, 130(3):215, 2016.Geoff Hollis. Learning about things that never happened: A critique and refinement of the rescorla-wagner update rule when many outcomes are possible. Memory & cognition, 47(7):1415–1430, 2019Norman Stuart Sutherland and Nicholas John Mackintosh. Mechanisms of animal discrimination learning. 
Academic Press, 2016.Andr´e Luzardo, Eduardo Alonso, and Esther Mondrag´on. A rescorla-wagner drift-diffusion model of conditioning and timing. PLoS computational biology, 13(11):e1005796, 2017Gianluca Calcagni, Justin A Harris, and Ricardo Pell´on. Beyond rescorla-wagner: the ups and downs of learning. arXiv preprint arXiv:2004.05069, 2020.Sara C Keen, Ella F Cole, Michael J Sheehan, and Ben C Sheldon. Social learning of acoustic anti-predator cues occurs between wild bird species. Proceedings of the Royal Society B, 287(1920):20192513, 2020.JE Meyers-Manor. Learning and behavior: A contemporary synthesis., 2016.John M McNamara and Alasdair I Houston. Integrating function and mechanism. Trends in ecology & evolution, 24(12):670–675, 2009.Mark E Bouton. Learning and behavior: A contemporary synthesis. Sinauer Associates, 2007.Noam Y Miller and Sara J Shettleworth. Learning about environmental geometry: an associative model. Journal of Experimental Psychology: Animal Behavior Processes, 33(3):191, 2007.David JT Sumpter. Collective animal behavior. Princeton University Press, 2010.Dora Biro, David JT Sumpter, Jessica Meade, and Tim Guilford. From compromise to leadership in pigeon homing. Current biology, 16(21):2123–2128, 2006.Ryan Lukeman, Yue-Xian Li, and Leah Edelstein-Keshet. Inferring individual rules from collective behavior. Proceedings of the National Academy of Sciences, 107(28):12576–12580, 2010.Nicole Abaid and Maurizio Porfiri. Consensus over numerosity-constrained random networks. IEEE Transactions on Automatic Control, 56(3):649–654, 2010.Iain D Couzin, Christos C Ioannou, G¨uven Demirel, Thilo Gross, Colin J Torney, Andrew Hartnett, Larissa Conradt, Simon A Levin, and Naomi E Leonard. Uninformed individuals promote democratic consensus in animal groups. science, 334(6062):1578–1580, 2011.Shmuel Nitzan and Jacob Paroush. Optimal decision rules in uncertain dichotomous choice situations. 
International Economic Review, pages 289–297, 1982.Albert B Kao and Iain D Couzin. Decision accuracy in complex environments is often maximized by small group sizes. Proceedings of the Royal Society B: Biological Sciences, 281(1784):20133305, 2014.Constantinos Vrohidis, Charalampos P Bechlioulis, and Kostas J Kyriakopoulos. Decentralized reconfigurable multi-robot coordination from local connectivity and collision avoidance specifications. IFAC-PapersOnLine, 50(1):15798–15803, 2017.Soumya Banerjee and Joshua P Hecker. A multi-agent system approach to loadbalancing and resource allocation for distributed computing. In First Complex Systems Digital Campus World E-Conference 2015, pages 41–54. Springer, 2017.HSVS Kumar Nunna and Dipti Srinivasan. Multiagent-based transactive energy framework for distribution systems with smart microgrids. IEEE Transactions on Industrial Informatics, 13(5):2241–2250, 2017.Supriyo Ghosh, Sean Laguna, Shiau Hong Lim, LauraWynter, and Hasan Poonawala. A deep ensemble multi-agent reinforcement learning approach for air traffic control. arXiv preprint arXiv:2004.01387, 2020.Yiguang Hong, Guanrong Chen, and Linda Bushnell. Distributed observers design for leader-following control of multi-agent networks (extended version). arXiv preprint arXiv:1801.00258, 2017.Daan Bloembergen, Karl Tuyls, Daniel Hennes, and Michael Kaisers. Evolutionary dynamics of multi-agent learning: A survey. Journal of Artificial Intelligence Research, 53:659–697, 2015.Dashuang Chong and Na Sun. Explore emission reduction strategy and evolutionary mechanism under central environmental protection inspection system for multi-agent based on evolutionary game theory. Computer Communications, 2020.Tilman B¨orgers and Rajiv Sarin. Learning through reinforcement and replicator dynamics. Journal of economic theory, 77(1):1–14, 1997.Aram Galstyan. Continuous strategy replicator dynamics for multi-agent q-learning. 
Autonomous agents and multi-agent systems, 26(1):37–53, 2013.Ahmad Esmaeili, Zahra Ghorrati, and Eric Matson. Multi-agent cooperation using snow-drift evolutionary game model: Case study in foraging task. In 2018 Second IEEE International Conference on Robotic Computing (IRC), pages 308–312. IEEE, 2018.Martin L Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014Vikram Krishnamurthy. Partially observed Markov decision processes. Cambridge University Press, 2016.HowardMSchwartz. Multi-agent machine learning: A reinforcement approach. John Wiley & Sons, 2014.Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, pages 157–163. Elsevier, 1994Dongbin Zhao, Derong Liu, Frank L Lewis, Jose C Principe, and Stefano Squartini. Special issue on deep reinforcement learning and adaptive dynamic programming. IEEE transactions on neural networks and learning systems, 29(6):2038–2041, 2018Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. AAAI/IAAI, 1998(746-752):2, 1998.Ibrahim Althamary, Chih-Wei Huang, and Phone Lin. A survey on multi-agent reinforcement learning methods for vehicular networks. In 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), pages 1154–1159. IEEE, 2019.Ardeshir Kianercy and Aram Galstyan. Dynamics of boltzmann q learning in twoplayer two-action games. Physical Review E, 85(4):041145, 2012.Kaddour Najim and Alexander S Poznyak. Learning automata: theory and applications. Elsevier, 2014Hans Peters. Game theory: A Multi-leveled approach. Springer, 2015.Michael Bacharach. Economics and the Theory of Games. CRC Press, 2019.Samuel S Komorita. Social dilemmas. Routledge, 2019.Caleb A Cox, Arz´e Karam, and Ryan J Murphy. Social preferences and cooperation in simple social dilemma games. 
Journal of behavioral and experimental economics, 69:1–3, 2017.Tamer Bas¸ar and Georges Zaccour. Handbook of Dynamic Game Theory. Springer, 2018.J¨orgen W Weibull. Evolutionary game theory. MIT press, 1997.Karl Tuyls, Katja Verbeeck, and Tom Lenaerts. A selection-mutation model for qlearning in multi-agent systems. In Proceedings of the second international joint conference on Autonomous agents and multiagent systems, pages 693–700, 2003.Michael Kaisers and Karl Tuyls. Frequency adjusted multi-agent q-learning. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1, pages 309–316, 2010.Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. Cambridge, MA: MIT Press, 2011.Wolfgang Ertel. Reinforcement learning. In Introduction to Artificial Intelligence, pages 289–311. Springer, 2017.Jonathan Newton. Evolutionary game theory: A renaissance. Games, 9(2):31, 2018.Larry Samuelson. Evolutionary games and equilibrium selection, volume 1. MIT press, 1997.Agoston E Eiben and James E Smith. Introduction to Evolutionary Computing. Springer, 2015.Dietrich Stauffer. Life, love and death: Models of biological reproduction and aging. Institute for Theoretical physics, K¨oln, Euroland, 1999.Josef Hofbauer and William H Sandholm. Stable games and their dynamics. Journal of Economic theory, 144(4):1665–1693, 2009.Eduardo Mojica-Nava, Carlos Barreto, and Nicanor Quijano. Population games methods for distributed control of microgrids. IEEE Transactions on Smart Grid, 6(6):2586–2595, 2015.William H Sandholm. Population games and evolutionary dynamics. MIT press, 2010.Luc Moreau. Stability of multiagent systems with time-dependent communication links. IEEE Transactions on automatic control, 50(2):169–182, 2005.Wei Ren and Randal W Beard. Consensus seeking in multiagent systems under dynamically changing interaction topologies. 
IEEE Transactions on automatic control, 50(5):655–661, 2005.A Cagnano, E De Tuglie, and P Mancarella. Microgrids: Overview and guidelines for practical implementations and operation. Applied Energy, 258:114039, 2020.JA Pec¸as Lopes, CL Moreira, and AG Madureira. Defining control strategies for microgrids islanded operation. IEEE Transactions on power systems, 21(2):916–924, 2006.Toshihide Ibaraki and Naoki Katoh. Resource allocation problems: algorithmic approaches. MIT press, 1988.Seon-Ju Ahn and Seung-Il Moon. Economic scheduling of distributed generators in a microgrid considering various constraints. In 2009 IEEE Power & Energy Society General Meeting, pages 1–6. IEEE, 2009Goran Strbac. Demand side management: Benefits and challenges. Energy policy, 36(12):4419–4426, 2008.Daniel E Olivares, Claudio A Ca˜nizares, and Mehrdad Kazerani. A centralized optimal energy management system for microgrids. In 2011 IEEE Power and Energy Society General Meeting, pages 1–6. IEEE, 2011.Pablo Quintana-Barcia, Tomislav Dragicevic, Jorge Garcia, Javier Ribas, and JosepM Guerrero. A distributed control strategy for islanded single-phase microgrids with hybrid energy storage systems based on power line signaling. Energies, 12(1):85, 2019.Bonan Huang, Lining Liu, Huaguang Zhang, Yushuai Li, and Qiuye Sun. Distributed optimal economic dispatch for microgrids considering communication delays. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(8):1634–1642, 2019.Juan C Vasquez, Josep M Guerrero, Jaume Miret, Miguel Castilla, and Luis Garcia De Vicuna. Hierarchical control of intelligent microgrids. IEEE Industrial Electronics Magazine, 4(4):23–29, 2010.Gustavo Chica-Pedraza, Eduardo Mojica-Nava, and Ernesto Cadena-Mu˜noz. Boltzmann distributed replicator dynamics: Population games in a microgrid context. Games, 12(1):1–1, 2021.Wood Aj and BF Wollenberg. Power generation, operation and control. 
New York: John Wiley & Sons, page 592, 1996.Daniel P´erez Palomar and Mung Chiang. A tutorial on decomposition methods for network utility maximization. IEEE Journal on Selected Areas in Communications, 24(8):1439–1451, 2006.Andr´es Pantoja and Nicanor Quijano. A population dynamics approach for the dispatch of distributed generators. IEEE Transactions on Industrial Electronics, 58(10):4559–4567, 2011.Eduardo Mojica-Nava, Carlos Andr´es Macana, and Nicanor Quijano. Dynamic population games for optimal dispatch on hierarchical microgrid control. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(3):306–317, 2013.Nicholas F Britton. Essential mathematical biology. Springer Science & Business Media, 2012.H Peyton Young and Shmuel Zamir. Handbook of game theory with economic applications. Technical report, Elsevier, 2015.Gustavo Chica, Eduardo Mojica, and Ernesto Cadena. Boltzmann-based distributed replicator dynamics: A smart grid application. In 2020 Congreso Internacional de Innovaci´on y Tendencias en Ingenier´ıa (CONIITI), pages 1–6. IEEE, 2020.Andr´es Pantoja, Nicanor Quijano, and Kevin M Passino. Dispatch of distributed generators under local-information constraints. In 2014 American Control Conference, pages 2682–2687. IEEE, 2014.Carlos Barreto, Eduardo Mojica-Nava, and Nicanor Quijano. Design of mechanisms for demand response programs. In 52nd IEEE Conference on Decision and Control, pages 1828–1833. IEEE, 2013.Carlos Barreto, Eduardo Mojica-Nava, and Nicanor Quijano. Incentives-based mechanism for efficient demand response programs. arXiv preprint arXiv:1408.5366, 2014.M Hadi Amini, Saber Talari, Hamidreza Arasteh, Nadali Mahmoudi, Mostafa Kazemi, Amir Abdollahi, Vikram Bhattacharjee, Miadreza Shafie-Khah, Pierluigi Siano, and Jo˜ao PS Catal˜ao. Demand response in future power networks: panorama and state-of-the-art. In Sustainable interdependent networks II, pages 167–191. Springer, 2019.Ramesh Johari and John N Tsitsiklis. 
Efficiency of scalar-parameterized mechanisms. Operations Research, 57(4):823–839, 2009.Drew Fudenberg, Fudenberg Drew, David K Levine, and David K Levine. The theory of learning in games, volume 2. MIT press, 1998.Tim Roughgarden. Twenty lectures on algorithmic game theory. Cambridge University Press, 2016.Julian Barreiro-G´omez, Nicanor Quijano, and Carlos Ocampo-Martinez. Distributed control of drinking water networks using population dynamics: Barcelona case study. In 53rd IEEE Conference on Decision and Control, pages 3216–3221. IEEE, 2014.Yamin Wang, Shouxiang Wang, and Lei Wu. Distributed optimization approaches for emerging power systems operation: A review. Electric Power Systems Research, 144:127–135, 2017.Paul A Jensen and Jonathan F Bard. Operations research models and methods, volume 1. John Wiley & Sons Incorporated, 2003.Hamidou Tembine, Eitan Altman, Rachid El-Azouzi, and Yezekael Hayel. Evolutionary games in wireless networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 40(3):634–646, 2009.Jason R Marden. State based potential games. 
Automatica, 48(12):3075–3088, 2012.