Fast and precise: Parallel processing of vehicle traffic videos using big data analytics
Cities worldwide use camera systems that collect and store large amounts of images, which are used to study vehicle traffic conditions, facilitating traffic management authorities’ decision-making. Typically, the inspection of those images is performed manually, which prevents extracting relevant...
- Authors:
-
Perafán Villota, Juan Carlos
Mondragon, Oscar H
Mayor Toro, Walter M.
- Resource type:
- Journal article
- Publication date:
- 2021
- Institution:
- Universidad Autónoma de Occidente
- Repository:
- RED: Repositorio Educativo Digital UAO
- Language:
- eng
- OAI Identifier:
- oai:red.uao.edu.co:10614/13900
- Online access:
- https://hdl.handle.net/10614/13900
https://red.uao.edu.co/
- Keywords:
- Redes neurales (Computadores)
Accidentes de tránsito
Big data
Neural networks (Computer science)
Traffic accidents
Accident detection
Big data
Convolutional neural network
Fast processing
Hadoop
Intersection over Union (IoU)
Kalman filter
Multi-tracking
Smart cities
Spark
You Only Look Once (YOLO)
- Rights
- openAccess
- License
- Derechos reservados - IEEE, 2021
id |
REPOUAO2_7bb50af38f59354170804ecc1fa39552 |
oai_identifier_str |
oai:red.uao.edu.co:10614/13900 |
network_acronym_str |
REPOUAO2 |
network_name_str |
RED: Repositorio Educativo Digital UAO |
repository_id_str |
|
dc.title.eng.fl_str_mv |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
title |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
spellingShingle |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics Redes neurales (Computadores) Accidentes de tránsito Big data Neural networks (Computer science) Traffic accidents Accident detection Big data Convolutional neural network Fast processing Hadoop Intersection over Union (IoU) Kalman filter Multi-tracking Smart cities Spark You Only Look Once (YOLO) |
title_short |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
title_full |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
title_fullStr |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
title_full_unstemmed |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
title_sort |
Fast and precise: Parallel processing of vehicle traffic videos using big data analytics |
dc.creator.fl_str_mv |
Perafán Villota, Juan Carlos Mondragon, Oscar H Mayor Toro, Walter M. |
dc.contributor.author.none.fl_str_mv |
Perafán Villota, Juan Carlos Mondragon, Oscar H Mayor Toro, Walter M. |
dc.subject.armarc.spa.fl_str_mv |
Redes neurales (Computadores) Accidentes de tránsito Big data |
topic |
Redes neurales (Computadores) Accidentes de tránsito Big data Neural networks (Computer science) Traffic accidents Accident detection Big data Convolutional neural network Fast processing Hadoop Intersection over Union (IoU) Kalman filter Multi-tracking Smart cities Spark You Only Look Once (YOLO) |
dc.subject.armarc.eng.fl_str_mv |
Neural networks (Computer science) Traffic accidents |
dc.subject.proposal.eng.fl_str_mv |
Accident detection Big data Convolutional neural network Fast processing Hadoop Intersection over Union (IoU) Kalman filter Multi-tracking Smart cities Spark You Only Look Once (YOLO) |
description |
Cities worldwide use camera systems that collect and store large amounts of images, which are used to study vehicle traffic conditions, facilitating traffic management authorities’ decision-making. Typically, the inspection of those images is performed manually, which prevents extracting relevant information in a timely manner. There is a lack of platforms to collect and analyze key data from traffic videos in an automatic and speedy way. Computer vision can be used in combination with parallel distributed systems to provide city authorities tools for automatic and fast processing of stored videos to determine the most significant driving patterns that cause traffic accidents while allowing to measure the traffic density. We use a Convolutional Neural Network (CNN) to detect vehicles captured by traffic cameras, which are then tracked using an algorithm that we designed, based on multi-tracking Kalman filters. To speed up analysis, we propose a low-cost distributed infrastructure based on Hadoop and Spark frameworks for data processing: videos are equally divided and distributed to multicore CPU nodes for analysis. However, splitting up videos could generate inaccuracies in vehicle counting, which were avoided through the use of an algorithm that we present in this work. We found that it is possible to rapidly determine traffic densities, identify dangerous driving maneuvers, and detect accidents with high accuracy by using low-cost commodity cluster computing. There is a lack of computing platforms to collect and analyze key data from traffic videos in an automatic and speedy way. Computer vision can be used in combination with parallel distributed systems to provide city authorities tools for automatic and fast processing of stored videos to determine the most significant driving patterns that cause traffic accidents while allowing to measure the traffic density. This study explores the integration of different tools such as parallel data processing, deep learning, and probabilistic models. We present an approach based on Convolutional Neural Network (CNN) and Kalman filters to detect and track vehicles captured by traffic cameras. To speed up analysis, we propose and evaluate a low-cost distributed infrastructure based on Hadoop and Spark frameworks and comprised of multicore CPU nodes for data processing. Finally, we present an algorithm to allow vehicle counting while avoiding inaccuracies generated when videos are split to be distributed for analysis. We found that it is possible to rapidly determine traffic densities, identify dangerous driving maneuvers, and detect accidents with high accuracy by using low-cost commodity cluster computing. |
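The abstract above describes tracking-by-detection: per-frame CNN (YOLO) detections are associated with boxes predicted by multi-tracking Kalman filters using Intersection over Union (IoU) and the Hungarian method. As a hedged illustration only (not the authors' published code), the Python sketch below shows one common way such an association step can be implemented with NumPy and SciPy; the function names, the box format, and the IoU threshold are assumptions made for this example.

```python
# Hedged sketch (assumed helpers, not the paper's implementation): IoU-based
# association of Kalman-predicted track boxes with per-frame CNN detections,
# solved with the Hungarian method via scipy's linear_sum_assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(predicted, detected, iou_min=0.3):
    """Return (matches, unmatched_tracks, unmatched_detections).

    `predicted` holds the boxes each track's Kalman filter predicts for the
    current frame; `detected` holds the CNN detections for that frame.
    """
    if not predicted or not detected:
        return [], list(range(len(predicted))), list(range(len(detected)))
    # Cost = 1 - IoU, so minimizing total cost maximizes total overlap.
    cost = np.array([[1.0 - iou(p, d) for d in detected] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_min]
    matched_r = {r for r, _ in matches}
    matched_c = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(predicted)) if i not in matched_r]
    unmatched_dets = [j for j in range(len(detected)) if j not in matched_c]
    return matches, unmatched_tracks, unmatched_dets
```

A complete multi-tracker of the kind the abstract describes would run each track's Kalman predict step before calling `associate`, update matched tracks with their assigned detections, and spawn new tracks from unmatched detections; vehicles that appear near the boundaries of split video segments are where the counting corrections mentioned in the abstract would presumably apply.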
publishDate |
2021 |
dc.date.issued.none.fl_str_mv |
2021-09 |
dc.date.accessioned.none.fl_str_mv |
2022-05-20T16:33:09Z |
dc.date.available.none.fl_str_mv |
2022-05-20T16:33:09Z |
dc.type.spa.fl_str_mv |
Artículo de revista |
dc.type.coar.fl_str_mv |
http://purl.org/coar/resource_type/c_2df8fbb1 |
dc.type.coarversion.fl_str_mv |
http://purl.org/coar/version/c_970fb48d4fbd8a85 |
dc.type.coar.eng.fl_str_mv |
http://purl.org/coar/resource_type/c_6501 |
dc.type.content.eng.fl_str_mv |
Text |
dc.type.driver.eng.fl_str_mv |
info:eu-repo/semantics/article |
dc.type.redcol.eng.fl_str_mv |
http://purl.org/redcol/resource_type/ART |
dc.type.version.eng.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
format |
http://purl.org/coar/resource_type/c_6501 |
status_str |
publishedVersion |
dc.identifier.issn.spa.fl_str_mv |
1524-9050 |
dc.identifier.uri.none.fl_str_mv |
https://hdl.handle.net/10614/13900 |
dc.identifier.instname.spa.fl_str_mv |
Universidad Autónoma de Occidente |
dc.identifier.reponame.spa.fl_str_mv |
Repositorio Educativo Digital |
dc.identifier.repourl.spa.fl_str_mv |
https://red.uao.edu.co/ |
identifier_str_mv |
1524-9050 Universidad Autónoma de Occidente Repositorio Educativo Digital |
url |
https://hdl.handle.net/10614/13900 https://red.uao.edu.co/ |
dc.language.iso.eng.fl_str_mv |
eng |
language |
eng |
dc.relation.citationendpage.spa.fl_str_mv |
10 |
dc.relation.citationstartpage.spa.fl_str_mv |
1 |
dc.relation.cites.eng.fl_str_mv |
Perafan Villota, J. C., Mondragón Martínez, O. H., Mayor Toro, W. M., (2021). Fast and Precise: Parallel Processing of Vehicle Traffic Videos Using Big Data Analytics. IEEE Transactions on Intelligent Transportation Systems, pp. 1-10. https://ieeexplore.ieee.org/document/9531568 |
dc.relation.ispartofjournal.eng.fl_str_mv |
IEEE Transactions on Intelligent Transportation Systems |
dc.relation.references.none.fl_str_mv |
[1] S. C. Freire et al., “Atlas of the human planet 2019,” Tech. Rep. EUR 30010 EN, 2019.
[2] Osborne Clarke International. Smart Cities in Europe. Accessed: May 21, 2021. [Online]. Available: http://smartcities.osborneclarke.com
[3] M. N. Smith. The Number of Cars Will Double Worldwide by 2040. Accessed: May 21, 2021. [Online]. Available: https://www.weforum.org/agenda/2016/04/the-number-of-cars-worldwide-is-set-to-double-by-2040
[4] Y. Wei, N. Song, L. Ke, M.-C. Chang, and S. Lyu, “Street object detection/tracking for AI city traffic analysis,” in Proc. IEEE SmartWorld, Ubiquitous Intell. Comput., Adv. Trusted Comput., Scalable Comput. Commun., Cloud Big Data Comput., Internet People Smart City Innov. (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Aug. 2017, pp. 1–5.
[5] A. Mohan, K. Gauen, Y.-H. Lu, W. W. Li, and X. Chen, “Internet of video things in 2030: A world with many cameras,” in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2017, pp. 1–4.
[6] Y. K. Ki and D. Y. Lee, “A traffic accident recording and reporting model at intersections,” IEEE Trans. Intell. Transp. Syst., vol. 8, no. 2, pp. 188–194, Jun. 2007.
[7] H.-S. Song, S.-N. Lu, X. Ma, Y. Yang, X.-Q. Liu, and P. Zhang, “Vehicle behavior analysis using target motion trajectories,” IEEE Trans. Veh. Technol., vol. 63, no. 8, pp. 3580–3591, Oct. 2014.
[8] C.-P. Lin, J.-C. Tai, and K.-T. Song, “Traffic monitoring based on realtime image tracking,” in Proc. IEEE Int. Conf. Robot. Automat., vol. 2, Sep. 2003, pp. 2091–2096.
[9] N. K. Kanhere and S. T. Birchfield, “Real-time incremental segmentation and tracking of vehicles at low camera angles using stable features,” IEEE Trans. Intell. Transp. Syst., vol. 9, no. 1, pp. 148–160, Mar. 2008.
[10] W. Hu, X. Xiao, D. Xie, T. Tan, and S. Maybank, “Traffic accident prediction using 3-D model-based vehicle tracking,” IEEE Trans. Veh. Technol., vol. 53, no. 3, pp. 677–694, May 2004.
[11] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards realtime object detection with region proposal networks,” in Proc. Adv. Neural Inf. Process. Syst., 2015, pp. 91–99.
[12] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 779–788.
[13] W. Liu et al., “SSD: Single shot MultiBox detector,” in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: Springer, 2016, pp. 21–37.
[14] Ö. Aköz and M. E. Karsligil, “Traffic event classification at intersections based on the severity of abnormality,” Mach. Vis. Appl., vol. 25, no. 3, pp. 613–632, Dec. 2014.
[15] G. Ning et al., “Spatially supervised recurrent convolutional neural networks for visual object tracking,” in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), May 2017, pp. 1–4.
[16] N. Wojke, A. Bewley, and D. Paulus, “Simple online and realtime tracking with a deep association metric,” in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2017, pp. 3645–3649.
[17] H. Veeraraghavan, O. Masoud, and N. P. Papanikolopoulos, “Computer vision algorithms for intersection monitoring,” IEEE Trans. Intell. Transp. Syst., vol. 4, no. 2, pp. 78–89, Jun. 2003.
[18] S. Srivastava, A. V. Divekar, C. Anilkumar, I. Naik, V. Kulkarni, and V. Pattabiraman, “Comparative analysis of deep learning image detection algorithms,” J. Big Data, vol. 8, no. 1, pp. 1–27, Dec. 2021.
[19] J. Qiu, Q. Wu, G. Ding, Y. Xu, and S. Feng, “A survey of machine learning for big data processing,” EURASIP J. Adv. Signal Process., vol. 2016, no. 1, p. 67, 2016.
[20] M. M. Rathore, H. Son, A. Ahmad, and A. Paul, “Real-time video processing for traffic control in smart city using Hadoop ecosystem with GPUs,” Soft Comput., vol. 22, no. 5, pp. 1533–1544, Mar. 2018.
[21] I. Triguero, G. P. Figueredo, M. Mesgarpour, J. M. Garibaldi, and R. I. John, “Vehicle incident hot spots identification: An approach for big data,” in Proc. IEEE Trustcom/BigDataSE/ICESS, Aug. 2017, pp. 901–908.
[22] S. Amini, I. Gerostathopoulos, and C. Prehofer, “Big data analytics architecture for real-time traffic control,” in Proc. 5th IEEE Int. Conf. Models Technol. Intell. Transp. Syst. (MT-ITS), Jun. 2017, pp. 710–715.
[23] W. Zhang, B. Xue, J. Zhou, X. Liu, and H. Lv, “A scalable and efficient multi-label CNN-based license plate recognition on spark,” in Proc. IEEE SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI, Oct. 2018, pp. 1738–1744.
[24] A. Sundareswaran and L. K., “Real-time vehicle traffic prediction in apache spark using ensemble learning for deep neural networks,” Int. J. Intell. Inf. Technol., vol. 16, no. 4, pp. 19–36, Oct. 2020.
[25] Z. Solarte, J. D. Gonzalez, L. Peña, and O. H. Mondragon, “Microservices-based architecture for resilient cities applications,” in Proc. Int. Conf. Adv. Eng. Theory Appl. Cham, Switzerland: Springer, 2019, pp. 423–432.
[26] M. Zaharia et al., “Apache spark: A unified engine for big data processing,” Commun. ACM, vol. 59, no. 11, pp. 56–65, 2016.
[27] K. Shvachko, H. Kuang, S. Radia, and R. Chansler, “The Hadoop distributed file system,” in Proc. MSST, vol. 10, 2010, pp. 1–10.
[28] X.-W. Chen and X. Lin, “Big data deep learning: Challenges and perspectives,” IEEE Access, vol. 2, pp. 514–525, 2014.
[29] S. Nowozin, “Optimal decisions from probabilistic models: The intersection-over-union case,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 548–555.
[30] H. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Res. Logistics Quart., vol. 2, nos. 1–2, pp. 83–97, Mar. 1955.
[31] Federal Highway Administration. Intersection Safety Issue Briefs. Accessed: May 21, 2021. [Online]. Available: https://rosap.ntl.bts.gov/view/dot/49962
[32] National Highway Traffic Safety Administration. Crash Factors in Intersection-Related Crashes: An On-Scene Perspective. Accessed: May 2021. [Online]. Available: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/811366
[33] W. M. M. Toro, J. C. P. Villota, O. H. Mondragon, and J. S. O. Ceron, “Divide and conquer: An accurate machine learning algorithm to process split videos on a parallel processing infrastructure,” 2019, arXiv:1912.09601. [Online]. Available: http://arxiv.org/abs/1912.09601 |
dc.rights.spa.fl_str_mv |
Derechos reservados - IEEE, 2021 |
dc.rights.coar.fl_str_mv |
http://purl.org/coar/access_right/c_abf2 |
dc.rights.uri.eng.fl_str_mv |
https://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.rights.accessrights.eng.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.rights.creativecommons.spa.fl_str_mv |
Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0) |
rights_invalid_str_mv |
Derechos reservados - IEEE, 2021 https://creativecommons.org/licenses/by-nc-nd/4.0/ Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0) http://purl.org/coar/access_right/c_abf2 |
eu_rights_str_mv |
openAccess |
dc.format.extent.spa.fl_str_mv |
10 páginas |
dc.format.mimetype.eng.fl_str_mv |
application/pdf |
dc.publisher.spa.fl_str_mv |
IEEE |
dc.source.eng.fl_str_mv |
https://ieeexplore.ieee.org/document/9531568 |
institution |
Universidad Autónoma de Occidente |
bitstream.url.fl_str_mv |
https://red.uao.edu.co/bitstreams/35cd72d3-86ef-44e6-b25e-b6cb1fcad52b/download https://red.uao.edu.co/bitstreams/a634af0e-3ae2-49aa-b2b1-92eb2ec9f2ad/download https://red.uao.edu.co/bitstreams/38d4ef90-dded-4dad-8d74-13a868e798cf/download https://red.uao.edu.co/bitstreams/8648ea1c-934d-4ab4-a1bb-bce76b50c1f3/download |
bitstream.checksum.fl_str_mv |
20b5ba22b1117f71589c7318baa2c560 02a9b07c338540ea66ef9acc33dd7398 4782e995baaf8d23c472719c6622a3d8 9304befe75e26a698fa34fb32bd829fe |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositorio Digital Universidad Autonoma de Occidente |
repository.mail.fl_str_mv |
repositorio@uao.edu.co |
_version_ |
1814260150862610432 |