
Building Security, Trust in the Age of Artificial Intelligence

(MENAFN) Cybersecurity is no longer limited to protecting devices or digital systems; it has expanded into a broad, interconnected field that affects public services, critical infrastructure, data integrity, institutional decision-making, public trust, and even national strategic independence.

According to a report published by the Turkish National Intelligence Academy titled “Cybersecurity in the Age of Artificial Intelligence and Türkiye's Strategic Priorities,” artificial intelligence should be understood not just as a technological development, but as a powerful force that reshapes how attacks are carried out, how defenses are organized, and how decisions and regulations are formed.

The report highlights that AI is fundamentally transforming the cyber threat environment by increasing both the scale and complexity of risks. Traditional cybersecurity focused mainly on protecting hardware, software, and databases. In contrast, AI-driven systems present a far broader attack surface, spanning datasets, machine learning models, training pipelines, prompts, plugins, cloud systems, and automated decision-making tools. This shift means cybersecurity is no longer purely a technical issue, but one that also involves governance, law, procurement, human resources, and strategic policy planning.

Another major concern is that AI-enabled cyberattacks are becoming faster, more cost-efficient, and increasingly difficult to detect. Techniques such as AI-generated phishing messages, deepfake audio and video content, synthetic identities, impersonated executive communications, and automated system scanning are significantly enhancing the capabilities of malicious actors. These developments can undermine trust not only in digital systems but also in institutions and social interactions, potentially affecting individuals, organizations, and critical national infrastructure such as energy, finance, defense supply chains, and communications networks.

The report also draws attention to risks associated with large language models and autonomous AI systems. While these technologies can improve efficiency in both public and private sectors, they also introduce serious governance challenges if not properly managed.

Uncertainty about what data is used, how outputs influence decisions, and the extent of human oversight can turn efficiency gains into systemic vulnerabilities. Risks such as data manipulation, prompt injection, unauthorized access, information leakage, and excessive reliance on automated outputs are described not only as technical problems but also as failures of accountability and institutional control.

Additionally, the report stresses that the ethical frameworks and embedded assumptions within AI systems remain contested and not fully understood, making it difficult to predict their full impact, especially in sensitive or high-stakes environments.


