Shaping the Future of Trustworthy AI Governance


Fontaine Chan, Executive Director and Actuary, AI and Data Science Model Governance, Reinsurance Group of America, Incorporated

Within Model Risk Management, Fontaine Chan leads the development of the model governance strategy and framework for Reinsurance Group of America. Fontaine collaborated closely with Karin Wu, who supported the technical aspects of model testing. Karin joins this interview to provide a comprehensive discussion.

Journey to Executive Director and Actuary at RGA

My 10-year tenure at KPMG was transformative, shaping my core competencies and skills. Being called upon to solve complex client challenges taught me to think quickly under pressure while maintaining analytical rigor. This experience highlighted the importance of model governance as a strategic partner with model owners to ensure transparent and reliable model projections.

After KPMG, I sought to understand reinsurance from multiple perspectives. My four years in valuation and financial reporting at Munich Re, followed by five years in pricing at RGA, provided me with exposure to key modeling areas that drive our industry. This cross-functional experience helped me strike a balance between innovation and risk management in governance frameworks.

My first experience with AI was on RGA's pricing team, where I leveraged machine learning to forecast mortality curves. Seeing algorithms learn from vast datasets and improve decision-making revealed AI's strategic potential in setting actuarial assumptions.

When RGA formed the AI and Data Science Model Governance team, it was the perfect convergence of my skills: governance expertise, innovation mindset, and problem-solving. RGA's commitment to innovation offered the ideal platform to shape AI adoption while ensuring reliability and regulatory compliance.

Experience as Senior Data Scientist, AI and Data Science Model Governance

I have always been passionate about data science and deriving insights from data. Over the past decade, I’ve focused on building and deploying machine learning models in the financial industry, with an emphasis on AI model governance.

My work involves not only developing models but also ensuring that each phase meets standards for fairness, safety, and bias mitigation. Responsible AI governance ensures models perform well while complying with regulatory and ethical guidelines, protecting both the company and its clients.

The AI landscape is evolving rapidly, introducing new tools, methodologies, and associated risks. This keeps the work exciting and intellectually stimulating. Robust governance in this dynamic environment is critical for delivering trustworthy and impactful AI solutions in insurance.

Integrating AI Technologies with a Human-Centered Leadership Approach

My approach centers on empowering people while establishing governance frameworks that enable safe innovation. Building relationships with model owners as a trusted partner is key. While my team tracks regulations and best practices, we also provide guidance on modeling practices across the model lifecycle.

High standards in documentation and governance are enablers of sustainable innovation. Independent model validations are conducted enterprise-wide using a risk-based approach, with rigor and frequency aligned to each model’s risk level. This ensures proper allocation of resources while maintaining solid governance.

Ensuring AI Models Are Transparent, Trustworthy, and Compliant

Transparency, trustworthiness, and compliance necessitate a comprehensive governance framework founded on NIST principles. Four pillars underpin successful AI governance:

• Comprehensive model lifecycle management

• Human supervision to review and override AI decisions

• Regular performance monitoring

• Precise documentation for transparency and regulatory compliance

Developing Robust Model Testing for AI and Data Science Models

Six key areas support robust AI and data science model testing:

1. Data Quality and Representativeness – Use clean, unbiased datasets that promote generalizability.

2. Testing Methodology – Apply adversarial, unit, and coverage testing to address edge cases and ensure robustness.

3. Fairness and Bias Mitigation – Leverage tools to detect and mitigate bias and evaluate outputs for discriminatory impacts.

4. Transparency – Utilize explainability techniques, such as SHAP or LIME, to clarify model reasoning.

5. Continuous Monitoring – Track model performance post-deployment, monitoring for data drift or degradation.

6. Infrastructure and Automation – Build scalable, automated pipelines and adopt MLOps practices for efficient evaluation and deployment.

These areas ensure models are high-performing, ethical, reliable, and robust.
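As a minimal illustration of area 5 above (continuous monitoring), a simple data-drift check can compare a feature's live distribution against its training-time baseline with a two-sample Kolmogorov-Smirnov test. This is a generic sketch, not RGA's actual monitoring stack; the feature values, sample sizes, and 0.05 significance threshold are illustrative assumptions.

```python
# Sketch of post-deployment data-drift detection via a two-sample KS test.
# Illustrative only: feature data and the alpha threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_col: np.ndarray, live_col: np.ndarray,
                 alpha: float = 0.05) -> bool:
    """Flag drift when the KS test rejects 'same distribution' at level alpha."""
    _stat, p_value = ks_2samp(train_col, live_col)
    return bool(p_value < alpha)

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
shifted = rng.normal(loc=0.5, scale=1.0, size=5_000)   # live feature, mean-shifted

print(detect_drift(baseline, shifted))  # shift of 0.5 sd at n=5,000: drift flagged
```

In practice such a check would run on a schedule per feature, with alerts routed to the model owner when drift is flagged, complementing performance metrics such as error rates on labeled outcomes.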

Addressing Industry Challenges Through Emerging Technologies

A significant challenge is the computational inefficiency of conventional actuarial software, which struggles with large datasets and complex projections. Valuation regulations require financial projections across 1,000 scenarios, leading to days-long runtimes and operational bottlenecks.

Machine learning and neural networks offer a solution, enabling systems to:

• Handle massive datasets without traditional software constraints

• Reduce runtime

• Improve accuracy through pattern recognition

• Scale efficiently with growing data volumes

Research and testing in the industry have shown promising results in areas such as mortality forecasting and underwriting.
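One common way neural networks deliver the runtime reduction described above is as a surrogate model: train a network to approximate an expensive projection function, then evaluate the required scenarios through the fast surrogate. The sketch below is illustrative under assumed inputs; the stand-in projection function, scenario shape, and scikit-learn `MLPRegressor` choice are not from the interview.

```python
# Hedged sketch: a neural-network surrogate for a slow actuarial projection.
# slow_projection and the 20-dimensional scenario format are stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_projection(scenario: np.ndarray) -> float:
    """Stand-in for an expensive per-scenario cash-flow projection."""
    rates, lapses = scenario[:10], scenario[10:]
    return float(np.sum(np.exp(-rates.cumsum()) * (1 - lapses)))

rng = np.random.default_rng(42)
X_train = rng.uniform(0.0, 0.1, size=(2_000, 20))           # sampled scenarios
y_train = np.array([slow_projection(s) for s in X_train])   # "slow" targets

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64),
                         max_iter=2_000, random_state=0)
surrogate.fit(X_train, y_train)

X_new = rng.uniform(0.0, 0.1, size=(1_000, 20))  # the 1,000 required scenarios
fast_estimates = surrogate.predict(X_new)        # near-instant vs. batch reruns
print(fast_estimates.shape)
```

The one-time training cost is amortized across every subsequent valuation run; validation against the full projection on held-out scenarios remains essential before the surrogate is used in production.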

Innovations Shaping the Future of Insurance and Reinsurance

AI adoption will accelerate as insurers recognize its advantages. LLMs are already improving underwriting efficiency through faster document processing. AI assistants such as Copilot, Claude, and ChatGPT enhance report generation, financial analysis, and code development.

Large Quantitative Models (LQMs) will transform financial industries, including reinsurance. LQMs can process unstructured data, analyze mega datasets, and perform sophisticated projections at unprecedented speed.

The convergence of AI technologies will create a new paradigm in which data-driven insights, automation, and enhanced decision-making become the standard. Companies embracing this transformation will lead the industry in efficiency and innovation.

Skills and Mindsets for Future Leaders in AI and Model Governance

Aspiring leaders should blend traditional actuarial expertise with knowledge of cutting-edge AI technologies. Balancing innovation with risk management and fostering continuous learning will distinguish the next generation of leaders.

Essential skills and mindsets include:

• Agility and Adaptability – Ability to react quickly to changes in a rapidly evolving field.

• Continuous Learning Mindset – Openness to new ideas and technologies, staying current with advances and regulations.

• Risk-Aware Innovation – Understanding emerging technologies while maintaining governance. Human review, regular performance monitoring, and comprehensive documentation ensure transparency and effective oversight.

