Author: Yogi Muthuswamy (February, 2026)
According to McKinsey's 2024 survey, 78% of organizations use AI in at least one business function, and this is likely an understatement. According to IBM, the average cost of a data breach was $4.5M in 2025, rising to $4.92M for malicious insider incidents.
To mitigate the risks and damages of sensitive data leakage, insider threats, and improper data use, project leaders and managers must address the following:
● Ensure proper use of authorized AI tools and data controls. This covers third-party AI tools (ChatGPT, Copilot, Claude, etc.), enterprise AI tools, and the data these tools consume, including data fed in as prompts, in order to avoid sensitive data leakage and other damage.
● Leverage available data security tools to improve the monitoring and visibility of AI use, applying technology-driven policy enforcement (e.g., DSPM and DLP tools) and improving team awareness. This can include data discovery and classification through governance tools such as Microsoft Purview.
● Implement all enterprise policies for data labeling, classification, and governance. This protects sensitive data from leakage and malicious insider risks, and is especially important for managing regulatory compliance risk.
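To make the first point concrete, here is a minimal sketch of DLP-style redaction applied to a prompt before it reaches a third-party AI tool. The pattern names and regexes are illustrative assumptions, not a production rule set; a real deployment would use an enterprise DLP or DSPM product's classifiers rather than hand-written patterns.

```python
import re

# Illustrative sensitive-data patterns (assumptions for this sketch only).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with labeled placeholders before the
    prompt is sent to an external AI tool. Returns the redacted text
    and the list of rule names that fired (useful for audit logging)."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Contact jane.doe@example.com, SSN 123-45-6789.")
# clean no longer contains the email address or SSN; hits records
# which rules matched, which can feed monitoring and visibility tooling.
```

In practice this kind of check would sit in an API gateway or browser-extension layer in front of approved AI tools, so that policy enforcement happens automatically rather than relying on user discipline.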
Shown below are techniques used to secure AI use and to mitigate the various sources of data security risk. For a more in-depth treatment, see the Addressing AI Security Risks blog by Sentra IO. The IBM article 10 AI Dangers and How to Manage Them provides a good higher-level view.

