AT: What brought you into the world of fraud, risk, and compliance after working as a banker?
DE: I began my career on the credit side of banking as a credit analyst, which gave me a front-row seat to how banks make lending decisions and how much they rely on accurate information and trust. Through that work, I started noticing patterns that didn’t always add up — files that raised questions about identity, intent, or authenticity. That exposure sparked a deeper interest in understanding how banks protect themselves and their customers from illicit activity. When an opportunity opened to move into a fraud analyst role at the same institution, it felt like a natural next step. It allowed me to build on my credit expertise while diving into the mechanics of fraud detection, risk management, and compliance in a challenging field requiring constant adaptation, creativity, and strategic thinking.

Moving from credit to fraud was also a shift from static decisioning to dynamic problem-solving. Credit analysis typically follows a defined process with established inputs and models, while fraud and compliance require interpreting signals that are constantly changing. I was suddenly working in a world where yesterday’s patterns didn’t always predict tomorrow’s attacks, and where understanding human behavior mattered as much as understanding numbers. These same dynamics exist well beyond banking. Whether it’s verifying supplier information in manufacturing, safeguarding patient records in healthcare, or protecting e-commerce platforms from fake accounts, every industry faces the challenge of distinguishing legitimate activity from bad actors. The underlying principles — risk assessment, trust building, and real-time decisioning — are universal.

What also appealed to me was the mission-driven nature of the work. Fraud and financial crime aren’t victimless; there are real people and communities affected. Early on, I remember working on a case where a vulnerable customer was nearly defrauded through an elaborate romance scam.
Being part of the team that detected and stopped it before the funds left the account made a lasting impression. It underscored that this work is about more than compliance checklists; it’s about protecting people, trust,
and the integrity of the financial system.
AT: You have been in the field for about 12 years. What’s kept you so committed to it?
DE: Two things: impact and evolution. The impact is tangible; you’re helping to prevent real harm, whether that’s protecting customers from fraud or ensuring a company stays aligned with regulatory expectations. And the field is constantly evolving. Threats, technology, regulations, and business models are always shifting, which means the work is never static. I’ve had the chance to build programs from the ground up, modernize legacy systems, and lead through major regulatory changes, which keeps the work challenging and meaningful. Another big part of sustaining that commitment has been what happens outside of work. This field can be intense; threats are evolving daily, regulatory expectations are high, and the stakes are real. To stay sharp and avoid burnout, I’ve learned to build consistency in other areas of my life. For me, that means getting to the gym regularly and making time for walks or hikes whenever possible. Those moments away from the screen help me reset, clear my head, and maintain energy over the long term. I’ve found that some of my best strategic thinking actually happens during those times — whether it’s working through a complex problem on a run or just giving my brain the space to recharge. The discipline of showing up consistently outside of work mirrors the commitment needed in this profession. Just like fraud and compliance require constant attention and adaptation, maintaining personal routines provides structure and resilience that directly carry over into my professional life.
AT: You’ve worked in large, well-established companies as well as in startups where you had to build the program from scratch. First, how do you turn management’s willingness to invest in compliance into a commitment to the program?
DE: The key is connecting compliance to business outcomes. At the leadership level, willingness often exists, but commitment comes when executives see how a strong compliance framework enables growth, protects the brand, and positions the company with regulators, partners, and customers. I focus on framing compliance not as a cost center but as a strategic enabler, showing clear return on investment through metrics like faster onboarding, reduced fraud losses, improved regulatory relationships, or expanded product opportunities. For example, in a logistics company, this might mean smoother vendor onboarding and fewer counterfeit shipments; in a software as a service (SaaS) company, it could translate to faster customer acquisition and reduced abuse.

In practice, this often means speaking the language of the business. Instead of emphasizing regulatory citations, I talk about how compliance can accelerate partnerships or reduce operational drag. For example, when a financial technology company wants to launch a new product or expand into a new market, a strong compliance posture can shorten partner bank diligence timelines, reduce legal review cycles, and accelerate market entry.

I’ve also learned that early “quick wins” are crucial. When building programs from scratch, I pick one or two areas where a well-designed control can demonstrate tangible impact quickly. For example, improving vendor vetting accuracy in procurement, reducing customer onboarding friction in SaaS, or minimizing manual document checks in human resources or supply chain. Once leadership sees measurable results, it’s much easier to get sustained investment and prioritization.
AT: What other advice would you give to a compliance officer looking to build a program in a fast-moving environment?
DE: Start with clarity and scalability. Don’t try to build the “perfect” program on day one or take on more than is possible or required. Instead, establish a clear risk assessment, define core controls that address the highest risks, and build flexible frameworks that can evolve as the business scales. From a technical perspective, embed compliance into product and engineering early — don’t bolt it on later. And cultivate strong cross-functional relationships. In a fast-moving environment, success often depends on influence and partnership as much as policy.

One practical tip is to document decisions and risk tradeoffs as you go. In a startup environment, things move fast, and it’s easy for institutional knowledge to get lost. By clearly recording why certain controls were prioritized or deferred, you create a roadmap that can scale as the company matures.

Additionally, don’t underestimate the value of layered detection — behavioral analytics, identity graphing, and network analysis — in catching anomalies that individual rules might miss. For example, a retailer might detect unusual order patterns, a manufacturer might spot anomalies in supplier documentation flows, or a corporate human resources team could flag inconsistencies in job applicant histories. Agentic AI can take this a step further by autonomously orchestrating signals across these layers, adapting detection strategies in real time as threats evolve. Real-time decision platforms that bring multiple data points together are critical, and when powered by agentic AI, they can continuously refine risk assessments without relying on manual intervention. Just as important is cross-institutional information sharing. Fraud doesn’t respect organizational boundaries, so our defenses can’t either. A layered approach should also include feedback loops that continuously learn from confirmed fraud and false positives. Too many programs are static; once a rule is set, it stays unchanged for months.
Modern AI-driven systems can adjust thresholds dynamically, detect emerging attack patterns earlier, and redistribute analyst resources to truly suspicious activity.
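The feedback-loop idea described above can be sketched in a few lines of Python. This is an illustrative toy, not a production control: the function name, the precision target, and the step size are all assumptions made for the example.

```python
# Hypothetical sketch: a feedback loop that nudges an alert threshold
# based on analyst outcomes (confirmed fraud vs. false positives).
# All names and numbers here are illustrative assumptions.

def adjust_threshold(threshold, outcomes, step=0.01,
                     target_precision=0.5,
                     floor=0.1, ceiling=0.9):
    """Raise the threshold when too many alerts are false positives,
    lower it when precision is comfortably above target.

    outcomes: list of booleans, True = confirmed fraud, False = false positive.
    """
    if not outcomes:
        return threshold
    precision = sum(outcomes) / len(outcomes)
    if precision < target_precision:
        threshold = min(ceiling, threshold + step)   # alert less often
    else:
        threshold = max(floor, threshold - step)     # cast a wider net
    return threshold

# Example: a noisy review queue (mostly false positives) pushes the
# threshold up, so fewer borderline cases generate alerts next cycle.
t = 0.50
t = adjust_threshold(t, [False, False, True, False])
print(round(t, 2))  # 0.51
```

A real system would adjust many thresholds at once and weight recent outcomes more heavily, but the core loop — measure precision, then move the dial — is the same.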
AT: How do we help the workforce better understand what they need to look out for?
DE: Training must be ongoing, relevant, and scenario-based. Instead of one-off annual modules, use short, frequent touchpoints that highlight real emerging threats, such as deepfake examples, phishing attempts, and fraud patterns relevant to the business. Encourage a “see something, say something” culture and make it easy for employees to escalate concerns. When people understand not just what to look for but why it matters, vigilance goes up significantly.
AT: A lot of these threats are driven by AI, which also has great opportunity for compliance programs. First, what’s your advice for someone who finds AI a bit alien? What’s the best way to start?
DE: Start small and focus on use cases — not the technology itself. You don’t need to be a data scientist to understand how AI can support compliance. Identify one or two high-impact pain points — such as alert triage or transaction monitoring — and explore AI tools that can augment those processes. Partner with your data or engineering teams, ask questions, and focus on explainability. Familiarity grows through practical exposure. Think of AI as a new teammate, not a mystery box. By starting with one contained workflow, like automating low-risk alert dispositions or enhancing fuzzy name matching for sanctions, teams can build confidence and see tangible benefits quickly. Over time, this incremental approach creates organizational comfort and literacy around AI, making larger deployments smoother.
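To make the fuzzy-name-matching example concrete, here is a minimal sketch using only Python’s standard library. The watchlist entries and the 0.85 threshold are invented for illustration; a real screening program would use a vetted sanctions list and a tuned, audited matcher.

```python
# Minimal sketch of fuzzy name matching for sanctions screening.
# Watchlist names and the threshold are hypothetical examples.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Global Trade Holdings Ltd"]  # invented entries

def normalize(name: str) -> str:
    # Lowercase and collapse whitespace so formatting doesn't hide a match.
    return " ".join(name.lower().split())

def screen(name: str, threshold: float = 0.85):
    """Return watchlist entries whose similarity ratio meets the threshold."""
    hits = []
    for entry in WATCHLIST:
        ratio = SequenceMatcher(None, normalize(name), normalize(entry)).ratio()
        if ratio >= threshold:
            hits.append((entry, round(ratio, 2)))
    return hits

print(screen("ivan  petrov"))  # [('Ivan Petrov', 1.0)] - exact after normalization
print(screen("Ivan Petrof"))   # [('Ivan Petrov', 0.91)] - near miss still flagged
```

The point of a contained workflow like this is that the behavior is easy to explain to auditors: one normalization step, one similarity score, one threshold.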
AT: How can AI be used to help with anti-money laundering (AML) compliance?
DE: AI can significantly enhance both efficiency and effectiveness. For example, machine learning models can prioritize alerts based on risk, reducing false positives and allowing analysts to focus on the highest-risk cases. This approach applies well beyond AML. In healthcare, AI can triage data privacy alerts; in retail, it can prioritize chargeback investigations; in manufacturing, it can flag suspicious vendor behavior. Modern large language models can help analyze unstructured data — like adverse media or customer communications — to identify potential red flags. AI can also support better customer risk segmentation, improving ongoing monitoring and enhanced due diligence processes. The opportunity to reimagine and transform how we do AML with AI was a major reason I joined a software company, where I can focus on driving this revolution. AI can also detect complex layering patterns across multiple counterparties, which might be invisible to linear rule sets. Similarly, in supply chains, AI can uncover hidden relationships between shell vendors, fraudulent intermediaries, or coordinated bidding behavior.
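The “hidden relationships” idea above can be illustrated with a small graph sketch: link entities that share an attribute (an address, a phone number) and walk the graph to surface clusters that no single rule would flag. This is a toy with fabricated vendor data, not the method of any particular AML product.

```python
# Illustrative sketch (not a production AML engine): clustering
# counterparties that share attributes such as an address or phone,
# via breadth-first search. All entity data below is fabricated.
from collections import defaultdict, deque

entities = {  # entity -> shared attributes (hypothetical)
    "VendorA": {"addr:12 High St", "phone:555-0101"},
    "VendorB": {"addr:12 High St"},
    "VendorC": {"phone:555-0101"},
    "VendorD": {"addr:9 Elm Rd"},
}

# Invert: attribute -> entities that carry it.
by_attr = defaultdict(set)
for name, attrs in entities.items():
    for a in attrs:
        by_attr[a].add(name)

# Build edges between entities that share any attribute.
graph = defaultdict(set)
for members in by_attr.values():
    for m in members:
        graph[m] |= members - {m}

def cluster(start):
    """All entities reachable from `start` through shared attributes."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

print(sorted(cluster("VendorA")))  # ['VendorA', 'VendorB', 'VendorC']
```

Here VendorA, VendorB, and VendorC form one cluster through a shared address and phone number, while VendorD stands alone — the kind of structural signal that linear rule sets tend to miss.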
AT: What about other areas of compliance, such as third-party risk?
DE: AI can streamline due diligence by quickly analyzing large volumes of public data, news, and filings to flag potential concerns with vendors or partners. It can also help with continuous monitoring: detecting changes in a third party’s risk profile over time rather than relying on static, periodic reviews. This proactive monitoring can prevent downstream exposure from vendor misconduct, sanctions violations, or fraud. It’s not just about reducing manual work; with
AI, we can actively improve our risk posture. We have never had better tools to understand customer data. For example, AI can surface reputational risks early by monitoring multilingual media sources, regulatory announcements, or litigation databases that a human team might not have the capacity to review continuously. This gives compliance teams a “heads up” before issues escalate. Moreover, by integrating AI-driven insights with procurement and legal workflows, organizations can embed risk awareness directly into onboarding processes, making third-party risk management a living, breathing
process rather than a periodic box-checking exercise.
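The shift from periodic review to continuous monitoring can be sketched as a simple snapshot diff: compare a third party’s risk profile between two points in time and flag what moved. The field names and values below are assumptions for illustration only.

```python
# Hedged sketch: comparing periodic risk "snapshots" of a third party
# to flag profile changes between reviews. Fields are hypothetical.

def profile_changes(previous: dict, current: dict) -> dict:
    """Return fields that changed, appeared, or disappeared between snapshots."""
    changes = {}
    for field in sorted(previous.keys() | current.keys()):
        old, new = previous.get(field), current.get(field)
        if old != new:
            changes[field] = (old, new)
    return changes

jan = {"sanctions_hits": 0, "adverse_media": 1, "country_risk": "medium"}
jun = {"sanctions_hits": 1, "adverse_media": 4, "country_risk": "medium"}

print(profile_changes(jan, jun))
# {'adverse_media': (1, 4), 'sanctions_hits': (0, 1)}
```

In practice the “snapshots” would be fed by media monitoring and sanctions screening rather than typed in by hand, but the control logic — alert on the delta, not just the level — is the point.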
AT: Implementing any new technology inevitably comes with challenges. What should compliance teams expect when deploying AI in their programs?
DE: Expect challenges around data quality, model explainability, and change management. AI is only as good as the data it’s used with, so cleansing and structuring data is a big lift. Regulators and auditors will expect clear explanations of
how models make decisions, so governance and documentation are critical. And internally, there’s often cultural resistance; people worry AI will replace them or “black box” decisions will be made without oversight. Another frequent challenge is integration with legacy systems. Many compliance teams operate on fragmented data infrastructures, making it difficult to get the clean, centralized datasets AI systems need. Planning for data architecture improvements upfront can save significant headaches down the line. It’s also common to underestimate the human element. Rolling out AI is not just
a technical deployment; it’s an organizational transformation. Clear communication, training, and change champions are
essential to avoid skepticism and resistance.
AT: What have you found to be the best ways to overcome them?
DE: Transparency and collaboration are key. Engage compliance, engineering, legal, and audit early to build shared
understanding and clear governance. Provide training so staff understand how the AI works and how it fits into workflows. Start with pilot programs to demonstrate value, and ensure you have strong model governance and validation frameworks to build regulator and internal trust. Another effective tactic is to establish a cross-functional AI working group early on. This group can align priorities, address ethical considerations, and create shared accountability. By involving skeptics in the design and testing phases, you turn potential blockers into advocates. Pilots should also be scoped thoughtfully — not just to prove the technology works, but to demonstrate business value and regulatory soundness simultaneously.
AT: There’s been a lot of talk about AI governance for the enterprise as a whole, but not so much about governance within
the compliance program itself. Should we be talking about it more? If so, what should we be addressing?
DE: Yes, absolutely. Compliance teams need their own AI governance frameworks that align with enterprise standards while being tailored to regulatory expectations. That includes clear accountability for model performance, documentation of
how decisions are made, periodic validation, bias testing, and audit trails. We also need to define escalation paths when models behave unexpectedly. Strong governance isn’t just good practice; it’s what will allow compliance teams to harness AI confidently and defensibly. Compliance-specific AI governance should also include model lifecycle management, ensuring that controls evolve alongside regulatory expectations and business changes. This involves version control, retraining protocols, and clear sign-offs for model updates. Finally, ethics and fairness testing must be integral. AI models that inadvertently introduce bias can create compliance risk themselves — especially in areas like the Know Your Customer process or transaction monitoring, where decisions affect customers directly. Governance must ensure that models are not just accurate, but also fair and explainable.
AT: Thanks, Dustin, for sharing with our readers!
