Managing Data Risk in an AI-Driven World
Anna Hannem’s job is to get to ‘yes’ as responsibly as possible
The ascent of big data and artificial intelligence has opened businesses up to previously unimaginable opportunities, unlocking everything from marketing innovations to operational efficiencies. But the current information age can also expose companies to very real risks, including AI hallucinations, biases, privacy violations, and cybersecurity threats. This helps to explain the rapid—and pronounced—rise of data risk as a profession.

Anna Hannem (MMAI'25)
Title: Vice-President, Data Risk Management, TD
More and more organizations that deal in data (and that’s pretty much everyone) are appointing savvy people to lead the safe and prudent use of the information that feeds leading-edge technologies like AI.
"Organizations need people whose job it is to ensure AI and data tools are integrated responsibly—maximizing efficiency and profitability while minimizing risk,” explains Karen Jackson-Cox, executive director of the Smith Career Advancement Centre
People like Anna Hannem.
Hannem works as Vice-President of Data Risk Management at TD, a role she started just a few months before completing her Smith Master of Management in Artificial Intelligence degree. Every day, she draws on nearly 20 years of progressive and diverse experience in financial services to ensure the bank is handling its vast stores of data as soundly as possible. It’s challenging and fulfilling work—and she’s used to people not understanding what she does: “It’s still such a new field,” she explains. “I get asked all the time, ‘What does that mean? How do you provide oversight around data?’”
Below, Hannem explains just that, and provides some insightful—and sometimes surprising—intel about what a career in data risk actually involves.
How did you come to work in data risk?
To be honest, it’s the result of some serendipity. I grew up wanting to pursue a path in psychology. But as sometimes happens in life, things change. While I was in school, I got a part-time job in customer service at TD, at the branch level. Then an opportunity came for me to move into corporate—specifically in project management. That’s when I realized I really liked the business world. I liked the way it was structured and organized, and I liked working towards an end result.
I spent nearly eight years in project management. One of our projects ended up being very technology focused, involving data. Originally that scared me, because I didn’t have a technology background. But I’m one of those people who likes to tackle things that make me uncomfortable head-on. I don’t shy away from it. And with this project, I realized that technology wasn’t as scary as I had thought. In fact, it was logical and easy to understand—especially if you approach it analytically. So I pivoted into data management and then data governance, both at the bank, and then into analytics at a different company. That opened more opportunities and brought me a role in data and analytics at Scotiabank. From there, I had the chance to set up a responsible AI function, which did not yet exist at any of the banks. There was no regulation mandating it, but my leaders at that point thought it was a prudent thing to start, so they tapped me on the shoulder and asked if I’d do it.
That was one of the best moves I made. We got the responsible AI function up and running at Scotiabank, and it went on to win several awards. And then ChatGPT blew up and I was asked to oversee AI risk, which was another brand-new function. I did that for a few years, and then TD called me to come back to lead data risk there.
My career has been about going down paths that didn’t seem obvious at the beginning—paths that, in fact, sometimes didn’t even exist yet—without fear. And that’s what’s led me to a good space today.
How do you explain your job to others?
My job is to help innovation happen safely. While some people may think that risk’s job is to say “no” to things, it is in fact the opposite. With the pace of AI and innovation, people really want to go fast, but they don’t always have a full understanding of what that means. We want to make it easier to get to “yes,” but within certain boundaries and guardrails. And we have the expertise to provide the advice and guidance to get there.
What are your key responsibilities?
I need to stay at the forefront of understanding the big-picture implications, and—with that knowledge—to educate stakeholders about how they can prudently go about doing what they want to do, in a way that is safe for everyone. It’s also my responsibility to make sure everyone is comfortable. I want the customer to feel comfortable because they know they can trust whatever it is they are interacting with, and the company to feel comfortable because they know they have the appropriate guardrails in place. To me, that’s the core responsibility of the risk function.
What about your job tends to surprise people?
That it’s not all about doom and gloom. When I told my husband I was moving into this type of work, he said, “You’re too positive for risk.” I countered with, “No, I think that’s what risk needs.” As we move into new frontiers like AI, there’s a lot of fear about it, but there’s also so much good in its potential. In my experience you have to weigh both.
What do you love most about what you do?
I love the people aspect of my work, whether I’m dealing with the team I’m leading or the stakeholders I work with. Whatever challenge we’re approaching together, I always strive to focus on the people involved. What are their needs? Yes, we have to have difficult conversations sometimes, but in my experience, when you focus on the human aspect you’re more likely to land in a good spot, which is always the goal. And I find that very satisfying.
What skills are most valuable in your work?
People skills are essential. My job is to make sure people are doing things prudently. That sometimes involves telling people to slow down or pull back on things they’ve spent a lot of time or energy moving forward—things that can impact revenue, speed to market and more. It can be very charged. You don’t want to be seen as a blocker, or a naysayer, or a negative entity that people want to avoid. You want to be seen as a partner: You want to impart, “Yes, I am totally on board with your enthusiasm, but let’s see how we can actually make it a reality.” You need empathy and a high EQ if you want to do that while also maintaining positive and productive relationships.
Communication skills are also very valuable. Risk can be ambiguous and difficult to understand. If you can translate complicated concepts in a way that makes them easier to understand, it will help you thrive in this career. In any career really.
Finally, I find curiosity is helpful. So is positivity. When you can combine the two? That’s really powerful.
What excites you about the future of your profession?
The fact that there are still so many unknowns. This job didn’t exist 10 years ago, and 10 years from now there will be jobs in this space that don’t exist now—especially with the way new technology is evolving. There will be all sorts of opportunities to make data and AI safer for everyone. To me, that’s so exciting.
I’m also encouraged to see more women getting into the data risk space. It’s good to have diversity in this field and to incorporate as many perspectives as possible, especially with AI—whether we’re talking about ethics, or working with agents, or whatever the case may be.
What’s your best advice for people considering a career in data risk?
The path is never going to go exactly as you think it will. In this space, things can change, sometimes very fast. You need to be able to pivot and go along with it. That has been a differentiator for me.