Why is data privacy important?
As we mark Data Privacy Week 2025, the growing complexity of data-driven technologies and the escalating risk of cyber-threats underscore the need for businesses to rethink their privacy strategies.
Our online activity creates a treasure trove of data, from interests and hobbies to purchases and online behaviours. It can even include information about your physical self, such as health data — think about how an app on your phone might track your steps.
That data is valuable to businesses looking to capitalise on it and personalise every interaction, but are they keeping it safe?
With insights from industry leaders, TI explores how businesses can navigate data privacy challenges while turning privacy into a competitive advantage.
Data privacy as a market differentiator
The financial and reputational repercussions of data breaches are escalating. According to IBM’s 2024 Cost of a Data Breach Report, the global average cost of a data breach has reached $4.88 million, marking a 10% increase from the previous year and the highest figure on record.
Meanwhile, a 2024 Cisco Consumer Privacy Survey found that 75% of consumers will not purchase from organisations they don’t trust with their data. These figures highlight that data privacy is no longer just a compliance requirement but critical to customer trust and business longevity.
Akhil Mittal, senior security consulting manager at Black Duck, emphasises the increasing importance of data privacy, not just as a compliance requirement but as a fundamental part of building trust and market differentiation.
“High-profile breaches and stricter regulations like GDPR, CCPA, and emerging AI-related privacy laws are pushing companies to make data privacy a fundamental part of their operations,” says Mittal.
He adds that adopting a “privacy by design” approach — integrating privacy measures from the start of development — can mitigate risks, particularly in cloud-native and distributed systems.
Mittal also highlights the role of privacy-enhancing technologies (PETs), such as data anonymisation and AI-based data protection, in addressing modern challenges. These tools reduce risk and strengthen customer trust.
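To make the PETs Mittal mentions concrete, here is a minimal, illustrative sketch of two common anonymisation techniques: pseudonymisation (salted hashing of direct identifiers) and generalisation (bucketing quasi-identifiers such as age). The salt handling and field names are assumptions for the example, not a production design.

```python
import hashlib

# Assumption for the sketch: the salt lives outside the dataset,
# e.g. in a secrets manager, and is rotated on a schedule.
SALT = b"rotate-this-secret-regularly"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, irreversible token."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

def generalise_age(age: int) -> str:
    """Coarsen an exact age into a 10-year band to reduce re-identification risk."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

record = {"email": "jane@example.com", "age": 34, "purchase": "fitness tracker"}
anonymised = {
    "user_token": pseudonymise(record["email"]),  # no raw email leaves the system
    "age_band": generalise_age(record["age"]),    # "30-39" instead of 34
    "purchase": record["purchase"],
}
```

Real deployments add k-anonymity checks and differential privacy on top; this only shows the basic transformations.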
Building security by design: the shift to zero trust and encryption
The rapid adoption of zero trust architecture (ZTA) is transforming how businesses approach security. Carlos Aguilar Melchor, chief scientist for cybersecurity at SandboxAQ, advocates for ZTA as a cornerstone of modern data privacy strategies.
“Zero trust underscores the ‘never trust, always verify’ principle, enhancing resilience against cyber threats,” he explains.
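The "never trust, always verify" principle can be illustrated with a toy request check in which every call is re-authenticated and re-authorised, with no implicit trust for callers already "inside" the network. The HMAC token format and ACL shape below are assumptions for the sketch, not a real protocol.

```python
import hashlib
import hmac
import time

KEY = b"per-service-secret"  # assumption: each service holds its own key

def issue_token(user: str, ttl: int = 300) -> str:
    """Issue a short-lived, signed token for one user."""
    expiry = str(int(time.time()) + ttl)
    sig = hmac.new(KEY, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{expiry}:{sig}"

def verify_every_request(token: str, resource: str, acl: dict) -> bool:
    """Re-verify identity, freshness, and least-privilege access on each call."""
    try:
        user, expiry, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(KEY, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # identity not proven
    if int(expiry) < time.time():
        return False  # stale credential, no grandfathered access
    return resource in acl.get(user, set())  # least-privilege check

acl = {"alice": {"/reports"}}
tok = issue_token("alice")
ok = verify_every_request(tok, "/reports", acl)      # permitted resource
denied = verify_every_request(tok, "/admin", acl)    # denied by least privilege
```

Production ZTA layers on device posture, mutual TLS, and continuous risk scoring; the point here is simply that no request skips verification.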
Aguilar also emphasises the importance of Post-Quantum Cryptography (PQC) to future-proof encryption against emerging quantum computing threats.
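A common first step towards PQC readiness is "crypto-agility": routing all key establishment through a named algorithm registry so that a post-quantum scheme can later be registered without rewriting call sites. The sketch below is hypothetical; the demo entry derives a shared secret with a hash-based stand-in rather than real cryptography, and a real migration would register a PQC KEM such as ML-KEM from a vetted library here.

```python
import hashlib
import os
from typing import Callable, Dict, Tuple

# A KEM-like callable returns (public_material, shared_secret).
KemFn = Callable[[], Tuple[bytes, bytes]]

KEM_REGISTRY: Dict[str, KemFn] = {}

def register(name: str):
    """Decorator: make an algorithm selectable by name."""
    def wrap(fn: KemFn) -> KemFn:
        KEM_REGISTRY[name] = fn
        return fn
    return wrap

@register("demo-classical")
def demo_classical() -> Tuple[bytes, bytes]:
    # Stand-in only, NOT real cryptography: hashes a random seed.
    seed = os.urandom(32)
    return seed, hashlib.sha256(seed).digest()

def establish_secret(algorithm: str) -> bytes:
    """Call sites pick an algorithm by name, so swapping in a
    post-quantum entry means one new registration, not a rewrite."""
    _, secret = KEM_REGISTRY[algorithm]()
    return secret

secret = establish_secret("demo-classical")
```

The design choice is the indirection itself: when standards or threats change, only the registry changes.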
Read more during Data Privacy Week 2025: Most businesses unprepared for post-quantum world, study finds
Similarly, Boris Cipot, senior security engineer at Black Duck, adds that security must be embedded into every stage of the Software Development Lifecycle (SDLC).
“Implementing technologies like Static Application Security Testing (SAST) and Software Composition Analysis (SCA) is a must. SAST tools will help discover and mitigate vulnerabilities in your code, while SCA tools will help organisations identify the open-source components used in their development and mitigate their vulnerability and licence compliance risks.

“Additionally, Dynamic Application Security Testing (DAST) and Interactive Application Security Testing (IAST) help organisations uncover vulnerabilities in code, configurations, and dangerous application behaviour,” he explains.
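To illustrate the kind of check a SAST rule performs, here is a deliberately tiny example that parses Python source into an abstract syntax tree and flags string literals assigned to credential-like names. Real SAST products apply thousands of such rules plus data-flow analysis; the suspect-name list is an assumption for the sketch.

```python
import ast

# Hypothetical rule: names that often indicate hardcoded credentials.
SUSPECT_NAMES = {"password", "secret", "api_key", "token"}

def find_hardcoded_secrets(source: str):
    """Return (line_number, variable_name) for each suspicious assignment."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and target.id.lower() in SUSPECT_NAMES
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, target.id))
    return findings

sample = 'api_key = "sk-live-123"\nlimit = 10\n'
print(find_hardcoded_secrets(sample))  # [(1, 'api_key')]
```

The same static approach, scaled up, is how SAST tools catch issues before code ever runs.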
AI in data privacy: threats and solutions
AI is reshaping data privacy risks and protective strategies. On one hand, AI-powered cyber threats — such as deepfake phishing scams and automated hacking tools — pose significant risks. On the other hand, AI-driven PETs, including data anonymisation and federated learning, are helping businesses strengthen security.
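Federated learning, one of the PETs mentioned above, trains a shared model while raw data never leaves each participant. A minimal sketch of the federated-averaging idea, with plain lists standing in for model weights and a toy "training" step (the update rule and learning rate are assumptions for illustration):

```python
def local_update(weights, data):
    """Toy local training: nudge each weight towards the local data mean.
    Only the updated weights are shared, never the raw records."""
    mean = sum(data) / len(data)
    return [w + 0.1 * (mean - w) for w in weights]

def federated_average(client_weights):
    """Server step: average the clients' weight vectors element-wise."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0, 0.0]
clients = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]  # private local datasets

updates = [local_update(global_model, d) for d in clients]
global_model = federated_average(updates)  # improves without pooling the data
```

Production systems use ML frameworks and add secure aggregation so the server cannot inspect individual updates; the privacy property shown here is simply that data stays local.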
As Paul Bischoff, consumer privacy advocate at Comparitech, notes, “AI programs scrape as much data as they can from public sources to train their algorithms. As a result, personal info can be included in an AI’s response to a prompt, either intentionally or unintentionally.”
“AI significantly lowers the barriers to finding and collecting personal data, making it easier for criminals to exploit. I recommend disabling search engines from scraping social media and using data removal services like Incogni or PrivacyBee to get your data out of the hands of data brokers.”
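For site owners, one practical (if advisory-only) counterpart to Bischoff's advice is blocking known AI training crawlers in robots.txt. GPTBot, CCBot, and Google-Extended are publicly documented crawler user-agents; compliance is voluntary, so this limits well-behaved scrapers rather than determined ones.

```text
# robots.txt — honoured only by well-behaved crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```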
Sunil Agrawal, CISO at Glean, stresses the need for robust governance in AI systems. “Data privacy isn’t a box to check — it’s a day-zero imperative,” he states. Agrawal recommends integrating real-time detection and correction mechanisms to address access anomalies as they arise, ensuring responsible AI practices.
“A strong commitment to data privacy and responsible AI use isn’t just ethical; it’s essential for safeguarding sensitive information, protecting innovation, and ensuring sustainable growth in the workplace,” he adds.
Transparency as a competitive edge
Dr Andrew Bolster, senior R&D manager at Black Duck, points out that open-source AI models such as DeepSeek underscore the importance of transparency in advancing privacy and innovation. However, he cautions against neglecting security measures when leveraging open-source platforms.
“DeepSeek’s rumoured use of OpenAI Chain of Thought data for its initial training highlights the importance of transparency and shared resources in advancing AI. In the context of ‘Open-Source AI,’ it’s crucial that the underlying training and evaluation data are open, as well as the initial architecture and the resultant model weights,” he says.

He adds: “Open-source AI, with its transparency and collective development, often outpaces closed-source alternatives in terms of adaptability and trust. As more organisations recognise these benefits, we could see a significant shift towards open-source AI, driving a new era of technological advancement.”
Read more during Data Privacy Week 2025: DeepSeek R1: Five key takeaways from GenAI’s “Sputnik moment”
High-profile breaches: lessons learned
Recent high-profile breaches demonstrate the urgent need for stronger data privacy measures.

The 2023 MOVEit breach, which impacted almost 100 million individuals through a third-party vulnerability; the T-Mobile API leak that exposed the data of 37 million customers; and the 2023 Samsung AI data leak, in which employees unintentionally shared sensitive source code by pasting it into ChatGPT, all emphasise the importance of securing the software supply chain.
Besnik Vrellaku, CEO and founder of Salesflow.io, a Go-To-Market (GTM) software provider, explains: “One of the major challenges we face is ensuring data privacy while using third-party generative AI tools. Data leakage concerns are significant, especially considering that larger corporations have experienced breaches.”
“To mitigate this risk, we conduct thorough due diligence on legal fronts and continuously update our terms and conditions, something many businesses tend to overlook.”
Vrellaku adds that despite such challenges, Salesflow.io builds trust with clients by implementing end-to-end encryption and maintaining transparency with third-party providers.

He stresses the importance of staying “updated with AI legal developments and industry best practices” to effectively navigate the complexities of AI integration and data privacy.
What’s coming next?
Governments worldwide are tightening data privacy regulations. AI privacy laws such as the EU AI Act and various US AI regulations have introduced new transparency requirements for AI-driven data processing.
Data localisation laws in countries like India and China are enforcing stricter data sovereignty rules, requiring businesses to store data within national borders.
On post-quantum security standards, the US National Institute of Standards and Technology (NIST) is finalising post-quantum encryption requirements to safeguard against future quantum computing risks.
Chris Linnell, associate director of data privacy at Bridewell, notes that losing consumer trust is now of more concern than regulatory fines. He stresses that transparency and compliance with privacy laws are key to maintaining customer relationships.
“Often, we hear regulatory fines discussed as the main reason to achieve compliance, but what we’re seeing is that losing trust from consumers is one of the biggest impacts of poor data privacy practice, and subsequently one of the biggest drivers for our clients who can demonstrate proactive compliance,” he says.
According to experts, the next five years will bring major shifts in data privacy, including stronger AI governance, with stricter global regulations on AI-driven data processing expected as AI adoption grows.
Businesses will need to adopt post-quantum cryptography to safeguard sensitive data. As consumers become more privacy-conscious, companies will need to prioritise transparency and ethical data practices.
Mittal concludes: “Data Privacy Week serves as a reminder that protecting consumer data isn’t just a regulatory requirement — it’s a moral duty and a business advantage.” Organisations that lead in data security today will build trust, reduce risks, and stay ahead of regulatory scrutiny in the future.