

A coffee with… Bronwyn Boyle, CISO, PPRO
Bronwyn Boyle didn’t begin her career in computer science, cryptography or coding bootcamps. She studied classics and philosophy, delving into ancient texts, formal logic, and even the propaganda of Cleopatra and Mark Antony.
Yet those early fascinations — with rhetoric, ethics, and the ways ideas shape societies — have proved to be surprisingly relevant to a world now dominated by misinformation, AI, and cyber risk.
Over the past two decades, Boyle has built a career at the sharp end of cybersecurity. She’s held senior roles across global banking and fintech, helping highly regulated institutions transform their security strategies while grappling with the promises and perils of artificial intelligence.
Now, as chief information security officer at payments platform PPRO, she sits at the crossroads of finance, technology, and trust — a vantage point that gives her a clear view of both the innovation opportunities and the human toll of today’s fast-moving threat environment.
You studied classics and philosophy. How did you end up in technology and then cybersecurity?
UCD had just introduced a HDip postgrad in computer science. So for want of a better thing to do, more than anything else, I did that course, and it was really interesting. I had been a big fan of formal logic in philosophy, and I had studied maths as well in first year in uni. So actually, I had a lot of foundations for software engineering without even knowing it.
And how does that classics background help in cybersecurity now?
I did my master’s thesis on religious propaganda used by Mark Antony and Cleopatra in ancient Roman Egypt. And it’s all the stuff that we’re seeing with misinformation on social media. There are huge parallels. I also did an awful lot of work on the ethics of technology, biomedical ethics. All of that, again, plays very naturally into the AI space. More than ever, critical thinking skills are so important.
Before your current role you were consulting on security transformation and AI enablement in banking. What’s happening in that space?
I think particularly for regulated organisations, many of them are in the same boat. They’re keen to test how to adopt AI safely, very aware that there’s regulation coming through, and there’s a need to be cautious and careful about how it’s being implemented. And I think there’s a bit of scrutinising the hype cycle. The business case is out for assessment.
The basics are actually remarkably congruent across most organisations. You want to find the good opportunities and use cases that are proven, where you know you can realise value. You want to be able to use the technology safely, ensure that it’s ethically deployed, and monitor for bias. I think there are principles here that apply more generically across any kind of use of AI. And again, the specifics will vary, but I think we’re converging now as an industry.
How is AI changing the threat environment?
It’s hotting up. The barrier to entry is being completely demolished. You’ve got the opportunity to really scale attacks, and targeted attacks, that once took investment, took manual effort, took a bit of research. And on the flipside, we can’t defend with the same pace. So there’s a widening asymmetry between what offensive use of AI can achieve versus defence.
And with the use of agentic AI, you’ve got nondeterministic systems operating in a security paradigm that was built on deterministic assumptions.
How are you working around that?
Well, we’re all running very fast, me and the rest of the world. There’s certainly a lot of great innovations that are coming through around deploying AI in your security stack, and we’re looking to embrace that where there is proven success. I think the other piece is just working across the business on improving AI literacy, and making sure that anything that is changing in terms of the operational piece is on the radar and that we can build those controls in.
All this accelerated change that sounds like a recipe for burnout…
It can really lead to significant burnout. [As an industry] we’re seeing this in terms of people suddenly having mental health issues or struggling. The level of sick leave is going up. We’re losing talent from the pipeline as well. When we’ve got such a big skills gap, we can’t afford to lose really talented professionals.
You’re involved with cybersecurity burnout non-profit Cybermindz. What can cybersecurity leaders do to tackle this?
Cybermindz is really leaning into this in terms of having a benchmark of team health. It’s setting up surveys and ways of actually quantitatively measuring how well teams are doing.
And it’s also really important, from more of a qualitative perspective, just to be having the conversations, opening space for people to talk about how they’re feeling, which is still quite difficult. People don’t always feel comfortable opening up about it, but I think creating that space is one of the first things you can do.
OK, if we’re in a safe space, how do you have your coffee?
With a tea bag and a teacup. I’m sorry, I’m an Irish stereotype. I would like a nice cup of Barry’s, a good drop of milk, right? Not that strong.