A deep dive into the PredicTri framework
Published on July 4, 2025
In the world of P&C reserving, the quest for accuracy, transparency, and efficiency is perpetual. I recently came across a compelling framework called PredicTri, which aims to modernize this space using Bayesian methods and machine learning. This post is an interactive analysis of their approach, exploring the methodology, the case study results, and what it means for the future of actuarial science.
The framework was developed by the impressive team of Yulia Yulish-Nechay and Ben Zickel, who combined deep actuarial experience with expertise in algorithm development.
The Core Problem: Why Traditional Methods Fall Short
Traditional reserving tools often rely on manual, opaque, and inconsistent processes. This creates several key challenges that PredicTri is designed to solve.
Inconsistent & Simplistic Models
Disparate methods make objective comparison difficult, while simple models fail to capture the complexity of modern insurance portfolios.
Costly & Opaque Processes
Manual data handling is expensive and error-prone. "Black box" models make it hard to explain results to stakeholders, hindering trust.
The PredicTri Framework: An Interactive Look
The framework is built on a flexible Bayesian engine with a suite of configurable features. The key modeling factors are described below.
Joint paid and incurred modeling: enables the simultaneous modeling of both paid and incurred loss triangles. This leverages the informational content of both data sources to produce a more coherent estimate of ultimate losses.
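To make the intuition concrete, here is a minimal sketch of how two data sources can be pooled into one more coherent estimate. It uses simple inverse-variance (precision) weighting, which is an illustrative stand-in, not PredicTri's actual joint Bayesian model; all numbers are made up.

```python
# Illustrative only: precision-weighted blend of paid- and incurred-based
# ultimate-loss estimates. This is NOT PredicTri's joint Bayesian model,
# just a sketch of why using both triangles beats using either alone.

def blend_ultimates(paid_est, paid_var, incurred_est, incurred_var):
    """Inverse-variance weighted blend of two ultimate-loss estimates."""
    w_paid = 1.0 / paid_var
    w_inc = 1.0 / incurred_var
    blended = (w_paid * paid_est + w_inc * incurred_est) / (w_paid + w_inc)
    blended_var = 1.0 / (w_paid + w_inc)  # tighter than either input variance
    return blended, blended_var

# Hypothetical estimates (in $M) from the paid and incurred triangles:
ult, var = blend_ultimates(paid_est=10.2, paid_var=0.8,
                           incurred_est=9.6, incurred_var=0.4)
```

Note the blended variance is smaller than either input's, which is the payoff of letting both triangles inform a single estimate.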
Accident-year trends: accounts for horizontal trends across accident years. This is critical for capturing changes in the underlying business, such as shifts in product mix, underwriting standards, or claims handling processes over time.
Calendar-year trends: models diagonal trends in the loss triangle, which typically correspond to inflationary pressures. This separates development patterns from systemic economic effects.
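The two trend directions can be sketched in a toy multiplicative triangle model: an accident-year level (horizontal trend), a development pattern, and a calendar-year inflation index applied along the diagonal, where calendar period = accident year + development lag. This decomposition is a common textbook structure, not PredicTri's published specification, and every number below is invented.

```python
import numpy as np

# Sketch of a multiplicative triangle decomposition (illustrative only):
#   E[loss(i, j)] = ay_level[i] * dev_pattern[j] * cy_inflation ** (i + j)
# where i indexes accident years, j development lags, and i + j the diagonal.
dev_pattern = np.array([0.5, 0.3, 0.15, 0.05])     # share paid at each lag
ay_level = np.array([100.0, 105.0, 112.0, 118.0])  # accident-year levels
cy_inflation = 1.03                                # assumed 3% per calendar year

n = len(ay_level)
expected = np.zeros((n, n))
for i in range(n):           # accident year (row)
    for j in range(n):       # development lag (column)
        # The inflation index acts on the diagonal i + j, so the same
        # calendar-year effect touches every cell settled in that period.
        expected[i, j] = ay_level[i] * dev_pattern[j] * cy_inflation ** (i + j)
```

Separating the two directions matters because a calendar-year (inflation) effect hits every open accident year at once, while an accident-year effect is confined to its own row.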
Residual treatment: defines how the model treats deviations from expected values. A cumulative approach assumes an early deviation will persist, while a non-cumulative approach treats deviations more independently.
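The difference between the two treatments can be shown with a single toy deviation. Suppose the first cumulative observation comes in 10% above expectation: under a persistent (cumulative) treatment the ultimate scales up with it, while under an independent (non-cumulative) treatment later increments revert to their expected values. The figures are invented for illustration.

```python
# Sketch of the two deviation treatments (illustrative numbers only).
expected_cum = [50.0, 80.0, 95.0, 100.0]  # expected cumulative losses by lag
observed_first = 55.0                      # first lag comes in 10% high

# Cumulative treatment: the early deviation persists, scaling all later
# expected values by the same ratio.
ratio = observed_first / expected_cum[0]
persistent = [observed_first] + [ratio * e for e in expected_cum[1:]]

# Non-cumulative treatment: only the first increment deviates; subsequent
# increments revert to their expected sizes.
independent = [observed_first]
for prev_e, e in zip(expected_cum, expected_cum[1:]):
    independent.append(independent[-1] + (e - prev_e))
```

Here the persistent treatment projects an ultimate of 110.0 versus 105.0 for the independent one, so the choice directly moves the reserve estimate.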
Case Study: Unpacking the Results
ULR Model Comparison
The chart below compares the Ultimate Loss Ratio (ULR) from PredicTri's objectively selected 'Best Model' against the Milliman Arius benchmark. Note the divergence in recent years.
Unlocking the Black Box
Why was the 2018 projection so much higher? PredicTri uses Shapley values to explain it. The table below is the transparent "receipt" for the prediction, breaking each period's total change into per-factor contributions.
| Period | Total Change | Joint Factor | Non-cum Res Factor |
|---|---|---|---|
| 2017 | 3.8% | 1.7% | 2.1% |
| 2018 | 9.6% | -3.3% | 12.9% |
| 2019 | 5.9% | -1.7% | 7.6% |
| 2020 | 6.0% | -2.2% | 8.3% |
Quantifying Uncertainty
Instead of a single point estimate, the framework provides a full probability distribution of outcomes. This chart shows the Cumulative Distribution Function for the 2020 ULR, enabling a precise assessment of tail risk.
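Once posterior samples of the ULR are in hand, the empirical CDF and tail metrics fall out directly. The sketch below simulates placeholder samples from a lognormal (the real samples would come from the fitted Bayesian model, and the distribution parameters here are made up).

```python
import numpy as np

# Sketch: tail-risk readings from posterior ULR samples. The lognormal below
# is a placeholder for the model's actual posterior; parameters are assumed.
rng = np.random.default_rng(42)
ulr_samples = rng.lognormal(mean=np.log(0.70), sigma=0.10, size=100_000)

def empirical_cdf(samples, x):
    """Estimate P(ULR <= x) as the fraction of posterior samples at or below x."""
    return float(np.mean(samples <= x))

p_above_85 = 1.0 - empirical_cdf(ulr_samples, 0.85)  # P(ULR > 85%)
ulr_p99 = float(np.quantile(ulr_samples, 0.99))      # 99th-percentile ULR
```

A single point estimate cannot answer "how likely is the ULR to breach 85%?"; the full distribution makes that a one-line query.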
Conclusion: The AI-Augmented Actuary
PredicTri transforms reserving into a consistent, automated, and explainable process. By empowering an "AI-augmented actuary," it frees professionals to focus on strategic analysis, interpretation, and communication of risk, delivering greater value to their organizations.
Note: This is a living analysis. I'll be adding more thoughts, and potentially a deeper dive into the PredicTri project's cross-validation techniques, in the future, so check back soon!

