How I led Haven Technologies' human underwriting platform from a two-person proof of concept to a production system serving 70+ users — while weaving machine learning into one of insurance's most tradition-bound processes, and co-inventing a patented data system along the way.
Underwriting, the process of risk assessment and classification, is the beating heart of any life insurer. Every policy begins with an underwriter — a deeply experienced professional who evaluates risk, reviews applicant data, and ultimately decides whether to issue coverage and at what price. Get it wrong, and the financial consequences are severe. Get it right consistently, and you build a healthy, sustainable book of business.
But for all its importance, the underwriting process in 2015 looked much as it had for decades. Underwriters worked from PDFs and paper files. Data from multiple third-party sources — medical records, motor vehicle reports, prescription history, MIB codes — arrived in fragmented, incompatible formats and had to be manually reconciled. Cases sat in queues, decisions took days, and errors happened. Not because underwriters weren't skilled — they were extraordinarily skilled — but because the tooling was failing them.
The industry had largely accepted this as an immutable reality. Underwriting was too complex, too regulated, too human to be meaningfully automated. Haven Technologies, a MassMutual venture, was built on the conviction that this was wrong.
"The goal wasn't to replace underwriters. It was to give them a platform worthy of their expertise — and to prove that human judgment and machine intelligence could grow stronger together."
I joined Haven Technologies as Product Owner for the underwriting platform in November 2015. I owned the full product lifecycle — from initial discovery and requirements through delivery, iteration, and long-term roadmap strategy. My stakeholders spanned underwriters, data scientists, actuaries, engineers, legal, compliance, risk, and governance teams.
This wasn't a role where I could hand off a spec and wait. Haven was a startup operating inside a 170-year-old mutual insurance company. That meant navigating the speed and ambiguity of a startup environment while respecting the risk controls and regulatory obligations of one of the largest life insurers in the United States. Every decision carried weight.
Growing this platform from two pilot users to 70+ active underwriters required solving four interconnected challenges simultaneously — and getting any one of them wrong would have collapsed the others.
Early in the platform's life, I ran weekly calls with our two to four pilot users. These weren't passive demos or status updates — they were working sessions where underwriters reviewed real cases alongside the product, surfaced friction, and helped shape what came next. Every piece of feedback fed directly into the backlog.
As the user base grew, these sessions evolved into what became known internally as "Ram's Townhalls" — structured forums that ranged from full sprint reviews to deep dives into specific underwriting scenarios the team was encountering in production. Underwriters weren't just end users; they were design partners, QA testers, and strategic advisors.
This approach did something critical: it made underwriters feel invested. When a feature shipped that they had requested, or when we worked through a real case they had flagged, the platform stopped being something done *to* them and became something built *with* them. Adoption grew not because we mandated it, but because users wanted to use a tool they'd helped shape.
"What started as a weekly call with two people became a standing institution inside the organization — a community of underwriters who took genuine ownership of the platform's evolution."
On the technical side, I worked closely with data science and engineering teams to integrate machine learning models and rules engines directly into the human underwriting workflow. Rather than positioning algorithmic outputs as a replacement for human judgment, we surfaced them as inputs — a risk score here, a flagged anomaly there — that underwriters could interrogate, override, and learn from.
This bridging of two previously siloed worlds — accelerated underwriting and human underwriting — became one of Haven's most significant product differentiators. The platform didn't just digitize the existing process; it created a new one in which human expertise and machine intelligence compounded each other over time.
The central design challenge was deceptively simple to state and genuinely hard to solve: how do you take data from a dozen incompatible third-party sources — medical records, prescription history, motor vehicle reports, MIB codes, algorithmic model outputs — and present it to a human underwriter in a way that's coherent, actionable, and fast?
The answer lay in the platform's core design. Each case was broken down into discrete issue cards: one per topic, categorized by life underwriting guidelines and flagged by our rules engine, each with a real-time status indicator of New, In Progress, or Done. When an underwriter opened a card, they saw the relevant data, pulled and standardized from its source, alongside ML model scores that provided predictive context. They could review, annotate, flag, request additional information, or mark the card complete, all in one place, without switching between systems or reconciling conflicting file formats.
Critically, the platform was built for collaboration. When one underwriter worked a card, its status updated in real time for every other analyst on the case — preventing duplicated effort and surfacing a shared, living picture of where each case stood. This wasn't just good UX. It became the foundation of the patented system we'd eventually file.
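The card-and-status model described above can be sketched in a few lines. This is an illustrative sketch only: the class names, card topics, and callback shape are my assumptions for exposition, not Haven's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List

class Status(Enum):
    NEW = "New"
    IN_PROGRESS = "In Progress"
    DONE = "Done"

@dataclass
class IssueCard:
    topic: str                      # e.g. "MVR", "Rx History" (hypothetical topics)
    status: Status = Status.NEW
    notes: List[str] = field(default_factory=list)

class Case:
    """Aggregates a case's issue cards and broadcasts every status
    change to all subscribed analyst views, so no two underwriters
    unknowingly work the same card."""
    def __init__(self, case_id: str, cards: List[IssueCard]):
        self.case_id = case_id
        self.cards = {c.topic: c for c in cards}
        self._subscribers: List[Callable] = []

    def subscribe(self, callback: Callable) -> None:
        self._subscribers.append(callback)

    def update_status(self, topic: str, status: Status, analyst: str) -> None:
        self.cards[topic].status = status
        for notify in self._subscribers:    # push the change to every open view
            notify(self.case_id, topic, status, analyst)

# Usage: a second analyst's view sees the first analyst's progress immediately.
seen = []
case = Case("A-1001", [IssueCard("MVR"), IssueCard("Rx History")])
case.subscribe(lambda cid, topic, status, analyst: seen.append((topic, status.value, analyst)))
case.update_status("MVR", Status.IN_PROGRESS, analyst="pat")
```

The observer-style broadcast stands in for whatever real-time transport the production system used; the point is that card state lives in one place and every analyst view is a subscriber to it.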
One of the core technical challenges we faced was the fragmented, inconsistent nature of underwriting data. Information about a single applicant might arrive from a dozen different sources — in different formats, with different naming conventions, with duplicates, gaps, and conflicts that an underwriter had to manually resolve before they could even begin their analysis.
Working closely with engineering and co-inventor Mark Sayre, I helped conceive and develop a system that fundamentally changed how this data was handled. The patented invention describes a server-based system that retrieves raw data records from multiple sources, standardizes and reconciles them into a unified data record, identifies gaps and flags them for analyst review, and updates analyst devices in real time as any underwriter works a case — preventing duplicated effort across the team.
The system also generates dynamic electronic documents — underwriting outputs that update automatically as underlying data changes, rather than requiring manual regeneration. It was a meaningful technical advancement in how multi-source, multi-format insurance data could be processed and presented to human analysts at scale.
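The retrieve, standardize, reconcile, and flag-gaps flow the patent describes can be illustrated with a minimal sketch. The source names, field mappings, required-field set, and "first source wins" precedence rule here are all hypothetical, chosen only to show the shape of the pipeline:

```python
# Per-source field names mapped to one canonical schema (illustrative).
FIELD_MAP = {
    "rx_history": {"pat_name": "name", "dob": "date_of_birth"},
    "mvr":        {"driver_name": "name", "birth_date": "date_of_birth",
                   "violations": "mvr_violations"},
}

# Fields the unified record must contain before analysis can begin.
REQUIRED = {"name", "date_of_birth", "mvr_violations", "rx_fills"}

def standardize(source: str, record: dict) -> dict:
    """Rename a raw source record's fields to the canonical schema."""
    mapping = FIELD_MAP[source]
    return {mapping.get(key, key): value for key, value in record.items()}

def reconcile(raw_records: dict) -> tuple:
    """Merge standardized records into one unified record and return
    the set of required fields no source supplied (the gaps to flag
    for analyst review)."""
    unified = {}
    for source, record in raw_records.items():
        for key, value in standardize(source, record).items():
            # First source wins here; a real system would apply
            # source-precedence and conflict-resolution rules instead.
            unified.setdefault(key, value)
    gaps = REQUIRED - unified.keys()
    return unified, gaps

unified, gaps = reconcile({
    "rx_history": {"pat_name": "J. Doe", "dob": "1980-02-14", "rx_fills": 3},
    "mvr": {"driver_name": "J. Doe", "birth_date": "1980-02-14", "violations": 0},
})
# With both sources present, no gaps remain; drop a source and its
# required fields surface as gaps for the underwriter to resolve.
```

Dynamic document generation follows naturally from this shape: because the unified record is recomputed from the raw sources, any output rendered from it can be regenerated automatically whenever an underlying record changes.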
More broadly, this work helped reshape how Haven — and by extension MassMutual — thought about the relationship between human expertise and algorithmic tools. The platform demonstrated that underwriters didn't have to choose between their instincts and the data. With the right product, those two things could make each other stronger.
I'm currently open to product opportunities in FinTech and InsurTech.
Get in touch