HyTra: Hyperclass Transformer for WiFi Fingerprinting-based Indoor Localization

The emerging demand for a variety of novel Location-based Services (LBS) by consumers and industrial users is driven by the rapid and extensive proliferation of mobile smart devices. Sensors embedded in smart devices or machines provide wireless connectivity and Global Positioning System (GPS) capability and are co-utilized to acquire location-linked data, which are algorithmically transformed into reliable and accurate location estimates. GPS is a mature and reliable technology for outdoor localization, but indoor localization in a complex multi-storey building environment remains challenging due to fluctuations in wireless signal strength arising from multipath fading. Location-linked data from wireless access points (WAPs), such as received signal strength (RSS), are acquired as numerical sequences. By conceptualizing a fixed-order sequence of WAP measurements as a sentence in which the RSS from each WAP is a word, we may leverage recent advances in artificial intelligence for natural language processing (NLP) to enhance localization accuracy and improve robustness against signal fluctuations. We propose the hyper-class Transformer (HyTra), an encoder-only Transformer neural network which learns the relative positions of WAPs through multiple learnable embeddings. We propose a second network, HyTra-HF, which improves upon HyTra by applying a hierarchical relationship between location classes. We test our proposed networks on public and private datasets varying in size. HyTra-HF outperforms existing deep learning solutions, obtaining 96.7% accuracy for the floor classification task on the UJIIndoorLoc dataset. HyTra-HF is amenable to deep model compression and achieves 95.95% accuracy with an over ten-fold reduction in model size using Sparsity-Aware Orthogonal (SAO) initialization, with best-in-class accuracy for the sparse model.

Authors:
Nasir, Muneeb
Esguerra, Kiara
Faye, Ibrahima
Tang, Tong Boon
Yahya, Mazlaini
Tumian, Afidalina
Ho, Eric Tatt Wei
Resource type:
Journal article
Publication date:
2024
Institution:
Universidad Tecnológica de Bolívar
Repository:
Repositorio Institucional UTB
Language:
eng
OAI Identifier:
oai:repositorio.utb.edu.co:20.500.12585/13522
Online access:
https://hdl.handle.net/20.500.12585/13522
https://doi.org/10.32397/tesea.vol5.n1.542
Keywords:
Deep learning
Transformer Neural Networks
WiFi Fingerprinting
Indoor Localization
Rights
openAccess
License
https://creativecommons.org/licenses/by/4.0
id UTB2_aa5aa2df6472a38f0765747d373729a3
oai_identifier_str oai:repositorio.utb.edu.co:20.500.12585/13522
network_acronym_str UTB2
network_name_str Repositorio Institucional UTB
repository_id_str
dc.title.spa.fl_str_mv HyTra: Hyperclass Transformer for WiFi Fingerprinting-based Indoor Localization
dc.title.translated.spa.fl_str_mv HyTra: Hyperclass Transformer for WiFi Fingerprinting-based Indoor Localization
dc.contributor.author.eng.fl_str_mv Nasir, Muneeb
Esguerra, Kiara
Faye, Ibrahima
Tang, Tong Boon
Yahya, Mazlaini
Tumian, Afidalina
Ho, Eric Tatt Wei
dc.subject.eng.fl_str_mv Deep learning
Transformer Neural Networks
WiFi Fingerprinting
Indoor Localization
description The emerging demand for a variety of novel Location-based Services (LBS) by consumers and industrial users is driven by the rapid and extensive proliferation of mobile smart devices. Sensors embedded in smart devices or machines provide wireless connectivity and Global Positioning System (GPS) capability and are co-utilized to acquire location-linked data, which are algorithmically transformed into reliable and accurate location estimates. GPS is a mature and reliable technology for outdoor localization, but indoor localization in a complex multi-storey building environment remains challenging due to fluctuations in wireless signal strength arising from multipath fading. Location-linked data from wireless access points (WAPs), such as received signal strength (RSS), are acquired as numerical sequences. By conceptualizing a fixed-order sequence of WAP measurements as a sentence in which the RSS from each WAP is a word, we may leverage recent advances in artificial intelligence for natural language processing (NLP) to enhance localization accuracy and improve robustness against signal fluctuations. We propose the hyper-class Transformer (HyTra), an encoder-only Transformer neural network which learns the relative positions of WAPs through multiple learnable embeddings. We propose a second network, HyTra-HF, which improves upon HyTra by applying a hierarchical relationship between location classes. We test our proposed networks on public and private datasets varying in size. HyTra-HF outperforms existing deep learning solutions, obtaining 96.7% accuracy for the floor classification task on the UJIIndoorLoc dataset. HyTra-HF is amenable to deep model compression and achieves 95.95% accuracy with an over ten-fold reduction in model size using Sparsity-Aware Orthogonal (SAO) initialization, with best-in-class accuracy for the sparse model.
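The abstract's central idea — treating a fixed-order sequence of WAP readings as a "sentence" whose "words" are the RSS values, each paired with a learnable per-WAP embedding — can be sketched in a few lines. This is a minimal illustrative numpy sketch, not the authors' implementation: `N_WAPS`, `D_MODEL`, the scalar value projection, and the [0, 1] scaling are assumptions made for the example; the value 100 for an undetected WAP follows the UJIIndoorLoc dataset convention.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WAPS = 8    # illustrative; UJIIndoorLoc records 520 WAPs
D_MODEL = 16  # embedding width, chosen arbitrarily for the sketch

# One fingerprint: RSS in dBm from each WAP, in a fixed order.
# 100 marks "WAP not detected" (UJIIndoorLoc convention).
rss = np.array([-60, -75, 100, -80, 100, -55, -90, 100], dtype=float)

# Scale detected RSS to [0, 1]; undetected WAPs map to 0.
rss_scaled = np.where(rss == 100, 0.0, (rss + 100) / 100)

# Each WAP position in the "sentence" gets its own learnable embedding
# (randomly initialized here, trained in the real network).
wap_embedding = rng.normal(size=(N_WAPS, D_MODEL))

# Project each scalar RSS "word" into the model dimension and add the
# per-WAP embedding, yielding the token sequence an encoder-only
# Transformer would attend over.
value_proj = rng.normal(size=(1, D_MODEL))
tokens = rss_scaled[:, None] @ value_proj + wap_embedding

print(tokens.shape)  # (8, 16): one d_model-sized token per WAP
```

Because the WAP order is fixed, the per-WAP embeddings play the role of position embeddings in NLP, which is how such a network can learn the relative arrangement of access points from data alone.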
publishDate 2024
dc.date.accessioned.none.fl_str_mv 2024-06-30 11:55:40
2025-05-21T19:15:48Z
dc.date.available.none.fl_str_mv 2024-06-30 11:55:40
dc.date.issued.none.fl_str_mv 2024-06-30
dc.type.spa.fl_str_mv Artículo de revista
dc.type.coar.fl_str_mv http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.driver.eng.fl_str_mv info:eu-repo/semantics/article
dc.type.coar.eng.fl_str_mv http://purl.org/coar/resource_type/c_6501
dc.type.local.eng.fl_str_mv Journal article
dc.type.content.eng.fl_str_mv Text
dc.type.version.eng.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.coarversion.eng.fl_str_mv http://purl.org/coar/version/c_970fb48d4fbd8a85
format http://purl.org/coar/resource_type/c_6501
status_str publishedVersion
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12585/13522
dc.identifier.url.none.fl_str_mv https://doi.org/10.32397/tesea.vol5.n1.542
dc.identifier.doi.none.fl_str_mv 10.32397/tesea.vol5.n1.542
dc.identifier.eissn.none.fl_str_mv 2745-0120
dc.language.iso.eng.fl_str_mv eng
language eng
dc.relation.ispartofjournal.eng.fl_str_mv Transactions on Energy Systems and Engineering Applications
dc.relation.citationvolume.eng.fl_str_mv 5
dc.relation.citationstartpage.none.fl_str_mv 1
dc.relation.citationendpage.none.fl_str_mv 24
dc.relation.bitstream.none.fl_str_mv https://revistas.utb.edu.co/tesea/article/download/542/390
dc.relation.citationedition.eng.fl_str_mv No. 1, 2024: Transactions on Energy Systems and Engineering Applications
dc.relation.citationissue.eng.fl_str_mv 1
dc.rights.uri.eng.fl_str_mv https://creativecommons.org/licenses/by/4.0
dc.rights.accessrights.eng.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.creativecommons.eng.fl_str_mv This work is licensed under a Creative Commons Attribution 4.0 International License.
dc.rights.coar.eng.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.format.mimetype.eng.fl_str_mv application/pdf
dc.publisher.eng.fl_str_mv Universidad Tecnológica de Bolívar
dc.source.eng.fl_str_mv https://revistas.utb.edu.co/tesea/article/view/542
institution Universidad Tecnológica de Bolívar
repository.name.fl_str_mv Repositorio Digital Universidad Tecnológica de Bolívar
repository.mail.fl_str_mv bdigital@metabiblioteca.com
_version_ 1858228436242268160