Study of the current state and prospects of artificial intelligence in cybersecurity

Authors

Golikov Y.

DOI:

https://doi.org/10.30837/rt.2025.1.220.03

Keywords:

artificial intelligence, cybersecurity, machine learning, SIEM, IDS, AI-based antivirus

Abstract

In the modern world, with the development of new technologies, artificial intelligence (AI) has become an integral component of cybersecurity. Studying its advantages, risks, and potential use cases is therefore a highly relevant research topic. In the current digital environment, where cyber threats are becoming increasingly sophisticated, the implementation of AI technologies significantly enhances the effectiveness of security systems by enabling automated threat detection and response.

This study examines the main applications of AI in cybersecurity, including threat detection, malware analysis, cryptographic security enhancement, phishing protection, and attack prediction. One of the key aspects is the integration of AI into Security Information and Event Management (SIEM) systems, which analyze vast amounts of data and help detect anomalies. Such systems reduce the workload on security teams and improve the accuracy and speed of threat response. Special attention is given to the analysis of modern AI-powered antivirus solutions, particularly Microsoft Defender for Endpoint and Darktrace. These solutions are based on behavioral analysis algorithms and machine learning, allowing for more effective detection of complex threats and prevention of incidents. Microsoft Defender for Endpoint provides a high level of endpoint protection, while Darktrace utilizes self-learning models to analyze network traffic, enabling the detection of zero-day threats and internal risks within organizations.

The study also examines the major risks associated with the use of AI in cybercrime. AI is increasingly leveraged by malicious actors to automate attacks, significantly increasing their effectiveness and making detection more challenging. The primary AI-based cyber threats discussed include data poisoning attacks, evasion attacks, prompt injection attacks, and AI-based social engineering. To mitigate these risks, the development of robust AI models resistant to adversarial attacks, increased algorithm transparency, and the implementation of international AI regulation standards, such as those published by NIST, are recommended. Additionally, raising awareness among users and cybersecurity specialists is crucial, as the human factor remains one of the most significant vulnerabilities in security systems.

The study concludes that AI is a key factor in the advancement of cybersecurity, offering significant improvements in protecting information and critical systems. However, without proper regulation and protective measures, AI can become a powerful tool for cybercriminals, posing new security challenges in the digital age. Striking a balance between innovation, ethical standards, and security will be essential in shaping the future strategy for the effective use of AI.
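
To make two of the techniques summarized above concrete, the sketches below are purely illustrative and are not taken from the article. The first applies an unsupervised anomaly detector to synthetic SIEM-style event features; the choice of scikit-learn's IsolationForest and all feature names and numbers are assumptions, since the article does not prescribe a specific algorithm.

```python
# A minimal sketch of AI-assisted anomaly detection in a SIEM pipeline,
# assuming scikit-learn and synthetic per-window event features
# (failed-login count, outbound data volume). Illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal activity: moderate failed-login counts and data volumes.
normal = rng.normal(loc=[20, 5], scale=[4, 1], size=(500, 2))
# Injected anomalies: login bursts combined with large transfers.
anomalies = rng.normal(loc=[80, 40], scale=[5, 5], size=(10, 2))
events = np.vstack([normal, anomalies])

# Isolation Forest flags points that are easy to isolate as outliers;
# flagged windows would be escalated to the security team for review.
model = IsolationForest(contamination=0.02, random_state=42)
labels = model.fit_predict(events)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(events)} event windows")
```

The second sketch illustrates one simple form of data poisoning, a label-flipping attack: an attacker who can corrupt a fraction of the training labels degrades the accuracy of the retrained model. The dataset, classifier, and poisoning rate are all hypothetical.

```python
# A minimal sketch of a label-flipping data-poisoning attack against a
# simple classifier. All data and numbers are synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips 30% of the training labels before retraining.
y_poisoned = y_tr.copy()
n_flip = int(0.3 * len(y_poisoned))
y_poisoned[:n_flip] = 1 - y_poisoned[:n_flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.3f}")
```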

References

NIST standardization process "Post-Quantum Cryptography: Digital Signature Schemes". Access mode: https://csrc.nist.gov/Projects/pqc-dig-sig/round-1-additional-signatures.

Tao F., Akhtar M. S., Jiayuan Z. The future of artificial intelligence in cybersecurity: A comprehensive survey // EAI Endorsed Transactions on Creative Technologies. 2021. 8(28): e3. Access mode: https://doi.org/10.4108/eai.7-7-2021.170285.

Leung B. K. Security Information and Event Management (SIEM) Evaluation Report // ScholarWorks. May 2021. Access mode: https://scholarworks.calstate.edu/downloads/41687p49q.

González-Granadillo G., González-Zarzosa S., Diaz R. Security Information and Event Management (SIEM): Analysis, Trends, and Usage in Critical Infrastructures // Sensors. 2021. 21(14). Access mode: https://doi.org/10.3390/s21144759.

Muhammad S. et al. Effective Security Monitoring Using Efficient SIEM Architecture // Human-centric Computing and Information Sciences. 2023. 13. Access mode: https://doi.org/10.22967/HCIS.2023.13.017.

What is SIEM? Security Information and Event Management Tools. Imperva. Access mode: https://www.imperva.com/learn/application-security/siem/.

IBM Security QRadar. What is security information and event management (SIEM)? Access mode: https://www.ibm.com/think/topics/siem.

Splunk. The Splunk SIEM. Access mode: https://www.splunk.com/en_us/products/enterprise-security.html.

Stellar Cyber. AI SIEM: The 6 Components of AI-Based SIEM. Access mode: https://stellarcyber.ai/learn/ai-driven-siem/.

ISO/IEC 27001:2022. Information technology - Security techniques - Information security management systems - Requirements. International standard. 3rd edition.

Microsoft Defender for Endpoint. 2024. Access mode: https://learn.microsoft.com/uk-ua/defender-endpoint/microsoft-defender-endpoint.

Darktrace. Official website. 2025. Access mode: https://darktrace.com/.

Mauri L., Damiani E. Modeling Threats to AI-ML Systems Using STRIDE // Sensors. 2022. 22(17): 6662. Access mode: https://doi.org/10.3390/s22176662.

The near-term impact of AI on the cyber threat. NCSC. Access mode: https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat.

Hassan N. What is data poisoning (AI poisoning) and how does it work? Search Enterprise AI, TechTarget, 2024. Access mode: https://www.techtarget.com/searchenterpriseai/definition/data-poisoning-AI-poisoning.

Krantz T., Jonker A. What is data poisoning? IBM. Access mode: https://www.ibm.com/think/topics/data-poisoning.

NIST Trustworthy and Responsible AI, NIST AI 100-5. A Plan for Global Engagement on AI Standards. Access mode: https://doi.org/10.6028/NIST.AI.100-5.

Vassilev A., Oprea A., Fordyce A., Anderson H. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. National Institute of Standards and Technology, Gaithersburg, MD. NIST Artificial Intelligence (AI) Report, NIST Trustworthy and Responsible AI, NIST AI 100-2e2023. 2024. Access mode: https://doi.org/10.6028/NIST.AI.100-2e2023.

Perdisci R., Dagon D., Lee W., Fogla P., Sharif M. Misleading worm signature generators using deliberate noise injection // 2006 IEEE Symposium on Security and Privacy (S&P'06), Berkeley/Oakland, CA, 2006.

Nelson B., Barreno M., Chi F. J., Joseph A. D., Rubinstein B. I. P., Saini U., Sutton C., Xia K. Exploiting machine learning to subvert your spam filter // First USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET 08), San Francisco, CA, April 2008. USENIX Association.

Szegedy C., Zaremba W., Sutskever I., Bruna J., Erhan D., Goodfellow I., Fergus R. Intriguing properties of neural networks // International Conference on Learning Representations, 2014.

Chernikova A., Oprea A. FENCE: Feasible evasion attacks on neural networks in constrained environments // ACM Transactions on Privacy and Security (TOPS). 2022.

Sheatsley R., Hoak B., Pauley E., Beugin Y., Weisman M. J., McDaniel P. On the robustness of domain constraints // Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21), New York, NY, USA, 2021. P. 495–515. Association for Computing Machinery.

Pierazzi F., Pendlebury F., Cortellazzi J., Cavallaro L. Intriguing properties of adversarial ML attacks in the problem space // 2020 IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2020. P. 1308–1325.

Kang D., Li X., Stoica I., Guestrin C., Zaharia M., Hashimoto T. Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks // arXiv preprint arXiv:2302.05733, 2023.

Greshake K., Abdelnabi S., Mishra S., Endres C., Holz T., Fritz M. Not what you signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection // arXiv preprint arXiv:2302.12173, 2023.

Published

2025-04-10

How to Cite

Golikov, Y. (2025). Study of the current state and prospects of artificial intelligence in cybersecurity. Radiotekhnika, (220), 40–49. https://doi.org/10.30837/rt.2025.1.220.03

Section

Articles