Developer Spotlight: Ellen Savoye, Lead Pricing Developer at Markel

Jonathan Bowden
Apr 16, 2026

In the first edition of our Community Developer Spotlight series, Jonathan Bowden sits down with Ellen Savoye, Lead Pricing Developer at Markel, to hear about her journey into technical pricing, how her team is approaching their pricing transformation, and what she's most excited about on the hx platform.
Hi Ellen, tell us a bit about your role and the kind of work you do.
My official title is Lead Pricing Developer, though I tend to refer to myself internally as "the hx person". Essentially, most things regarding the hx platform at Markel sit within my purview. That covers model building, data schema, UI, algorithm development: the full stack of what goes into building a pricing model on hx.
We're currently building over 20 models on the platform. We're really putting the platform through its paces across all lines: inland marine, railroad, primary casualty GL, D&O, cyber, and more. The goal is to have technical pricing for every line and move them to the hx platform, which is an ambitious target, but one we're firmly working towards.
How did you first get into modelling, pricing and technical development?
I've been programming for around 10-11 years now. I started out in SQL and R in a traditional pricing actuarial role; I'm not a credentialed actuary, but my background is in statistics and data science, sitting on top of actuarial work and a few exams, which turned out to be a really useful combination.
The real turning point came at my previous company, where we were evaluating platforms for a pricing transformation. I was the only one on the team with Python experience at the time, so I naturally ended up being the one to test the platform and build the proof of concept. That platform happened to be hx.
I loved it.
It was everything I had wanted in a model-building environment, and because I already had the Python background, the transition from R to Python on hx felt like a natural evolution rather than a leap.
From there it snowballed. We wanted to own the pricing models within our team rather than rely on external ownership, and my Python experience put me in the right seat to make that happen.
What does a typical week look like for you?
Varied, to put it mildly.
I have regular touchpoints with the IT team across a variety of model builds: currently twice a week on casualty and twice a week on professional lines, plus three meetings a week with the consultants working alongside my team. Beyond that, I'm often meeting with our filings team as admitted models move into UAT, working through integration questions with the IT team, or syncing with underwriting leads on backlog items.
The rest of the time I'm in the models themselves. Some days I'll be writing code all day; last Monday I spent the entire day building a PDF output file for one of our admitted lines. Other days I'm jumping between models doing reviews, applying updates to align with our core schema, or working through feedback from the data team. There was one day recently where I was across five different models, which was, frankly, a lot. I won't be repeating that in a hurry.
The mental context-switching is genuinely one of the hardest parts of the role. Pivoting between different lines of business mid-thought is its own kind of challenge; I've definitely given someone a confident answer about how a model works, only to realise I was thinking about a different model entirely.
In your experience, what does a good pricing implementation look like?
The most important thing I've learned is that flexibility is everything. You need to allow the nimbleness of hx to work with your policy administration system (PAS), and if everything is too rigidly tied together, you create your own problems.
At Markel, we have different levels of implementation depending on the product line, and I think that's been key to our success. For example, we have two lines that weren't originally integrated with our PAS, so rather than blocking them while we worked through that side of the integration, we're just pressing on to get the models onto the hx platform. The team still gets the technical pricing and all the benefits that go with it; they're just not going to see the full PAS integration for now.
Rigid integration requirements that prioritize process over outcomes can produce frustration, delays, and declining morale. Pricing transformations fail when you're so focused on perfection that you stop making progress.
We actually had a model recently where we were chasing perfect for about four months, and our timeline kept slipping. We eventually did a retrospective, hit pause, and asked ourselves honestly: can we get this to 80% and ship it, with the remaining 20% handled through consistent model updates? The answer was yes, and that was absolutely the right call.
Have you developed any approaches, workflows, or practices that work particularly well for your team?
A few things have made a big difference for us. The first is our core schema. When I joined Markel, a core schema was already in place, and having that shared foundation, structured around high-level groupings like submission, coverages, layers, and metrics, has been transformative for cross-team alignment. It levels the playing field between the model development team, the data team, and IT, because everyone knows what they're getting.
It's also forced some really useful conversations about definitions: what counts as a coverage, versus a sub-coverage, versus a coverage extension, for instance.
The second thing is consistency in how we structure model builds. I strongly recommend having a general approach that anyone can navigate; if a line has five coverages, all coverage-related algorithms should be in a coverages folder. UIs should be deconstructed, with each page in its own JS file. And we've introduced the concept of a "core rating" file, which sits alongside the rating file and holds reusable functions. If a rating calculation is shared across multiple coverages, it lives in core rating and gets called from there, keeping each individual file clean and focused.
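The "core rating" idea described above can be sketched roughly as follows. This is an illustrative layout only, not hx's own API: the function names, the ILF curve, and the numbers are all made up for the example.

```python
# Hypothetical "core rating" module: shared rating helpers live in one
# place and are called from each coverage's own rating file, keeping the
# per-coverage files clean and focused. Names are illustrative, not hx APIs.

def apply_ilf(base_premium: float, limit: float,
              base_limit: float, ilf_curve: dict) -> float:
    """Scale a base premium to a different limit via increased-limit factors."""
    factor = ilf_curve.get(limit, 1.0) / ilf_curve.get(base_limit, 1.0)
    return base_premium * factor


def minimum_premium(premium: float, floor: float) -> float:
    """Enforce a per-coverage minimum premium."""
    return max(premium, floor)


# In a coverage-specific rating file, the shared helpers get called:
ilf_curve = {1_000_000: 1.0, 2_000_000: 1.35}
premium = apply_ilf(10_000, 2_000_000, 1_000_000, ilf_curve)
premium = minimum_premium(premium, 2_500)
```

The payoff is that when a shared calculation changes, it changes once in the core file rather than in five coverage files.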
The third thing is just regular, structured communication with the IT and data teams. I have four touchpoints a week with IT alone. It sounds like a lot, but staying in alignment throughout a build, rather than surfacing mismatches at integration, saves a huge amount of time downstream.
What advice would you give to someone just starting to work with the platform?
Learn object-oriented programming (OOP). This is my biggest piece of advice, and I'll give it to anyone who'll listen.
The initial hx training doesn't cover OOP, but for complex model builds, using classes makes an enormous difference. I use a class per coverage, which means I define parameters once, I can pass data to and from it cleanly, and I can contain everything relevant to that coverage in one place, whether it ends up in the UI or the data output.
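A minimal sketch of the class-per-coverage idea might look like this. Everything here (the class name, the toy rating formula, the output shape) is hypothetical and for illustration only; it is not code from Markel's models or from hx itself.

```python
# Illustrative "class per coverage" pattern: parameters are defined once,
# the rating logic and the output shape live together, and the same object
# can feed either the UI or the data output. All names are hypothetical.

class Coverage:
    def __init__(self, name: str, limit: float,
                 deductible: float, base_rate: float):
        self.name = name
        self.limit = limit
        self.deductible = deductible
        self.base_rate = base_rate

    def premium(self, exposure: float) -> float:
        """A deliberately toy rating calculation for illustration."""
        return exposure * self.base_rate

    def to_output(self) -> dict:
        """One place to shape this coverage for the UI or the data output."""
        return {
            "coverage": self.name,
            "limit": self.limit,
            "deductible": self.deductible,
        }


gl = Coverage("General Liability",
              limit=1_000_000, deductible=10_000, base_rate=0.002)
```

With five coverages on a line, five such objects keep each coverage's state and behaviour contained rather than scattered across loose variables.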
The objection I hear is that hx is already well-structured and you don't need classes on top. And I agree; you don't use classes for everything. Rating, tasks, all of that operates the same way it always did. Classes are a tool to enhance the programming, not replace the native hx approach. But used in the right places, they open up a lot of possibilities.
The other piece of advice is: don't think like Excel. When you're setting up parameters, ask yourself how you'd make it more efficient; not how you'd replicate a spreadsheet layout. That mental shift takes time, but it's worth making early.
Looking ahead, what are you most excited about?
Two things, really. The first is the upcoming Xpression UI; I got a preview, and I think underwriters are going to love it. Anything that makes the platform more intuitive is a genuine win.
The second is the support agent. I've been using hx since late 2020, so I know the documentation reasonably well, but a lot of my colleagues find the docs difficult to navigate. The support agent is already being used by people on my team, and they're getting genuinely helpful guidance from it, including being able to query the API documentation at the same time.
I've had to give my fair share of "have you read the docs?" responses over the years, so having something that helps bridge that gap is something I'm really enthusiastic about.
More broadly, I'm excited about the underwriter agent use cases. Depending on how that's built out, AI assistance inside hx could meaningfully change how underwriters interact with technical pricing. That's a big deal; it's not just about making things easier for developers, it's about enabling everyone who touches the platform.
Final (fun) question: should data schema nodes be defined on one line, or spread across multiple lines for readability?
I've had both, and I'm genuinely split. One of my early mentors was very committed to multi-line formatting, and it was certainly visually pleasing. But I also found myself constantly trying to collapse sections just to see the full picture of a file, which drove me a bit mad.
My honest position is: one line, in most cases. The only exception is when you're using every possible attribute: async input, async output, label, options, linked options, grouping, the works. At that point a single line becomes genuinely unreadable and I have to concede the point.
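To make the trade-off concrete, here is a toy stand-in. `Node` below is not the real hx schema API; it is a hypothetical dataclass invented purely to show why a simple node reads fine on one line while a fully loaded one earns its line breaks.

```python
# Purely illustrative: `Node` is a made-up stand-in for a schema-node
# definition, not hx's API. The point is the formatting trade-off.
from dataclasses import dataclass, field


@dataclass
class Node:
    label: str
    options: list = field(default_factory=list)
    linked_options: dict = field(default_factory=dict)
    grouping: str = ""


# A simple node: one line stays perfectly readable.
turnover = Node(label="Turnover")

# A node using many attributes: spreading it out earns its keep.
industry = Node(
    label="Industry",
    options=["Marine", "Rail", "Cyber"],
    linked_options={"Marine": ["Hull", "Cargo"]},
    grouping="submission",
)
```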
What I'd actually love is a word wrap option. That would solve the whole debate neatly. Short lines stay short. Long lines wrap rather than forcing a choice between horizontal scrolling and vertical sprawl.
What I will say without any ambiguity: your algorithms should be properly formatted. I have a colleague who keeps everything on one massive line. My response is to go into his models and apply Black formatting, ruthlessly, every single time.
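For anyone who hasn't watched Black at work, a toy before-and-after (the function name and numbers are invented for the example; Black reflows code but never changes its behaviour):

```python
# The one-massive-line version, as Black would receive it:
#   def rate_layer(e, r, f): return max(e * r * f, 2500) if e > 0 else 0
#
# What Black produces: the body moves to its own line with consistent
# spacing. Nothing is renamed and the result of every call is unchanged.
def rate_layer(e, r, f):
    return max(e * r * f, 2500) if e > 0 else 0
```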



