Healthcare IT Today Podcast: 4 Opportunities to Ease the Tension Between Payers & Providers

When it comes to providers and payers, there’s no avoiding the tension that exists between the two. Ultimately, one’s revenue is the other’s cost. There’s also the fact that providers and payers are optimizing for different things. Providers want to ensure patients get the best care: individual clinicians are incentivized to provide more care to serve the patient and avoid malpractice suits, and at the institutional level, more procedures typically mean more revenue. Payers, on the other hand, face their own competitive dynamics as they sell to employers and individuals who want low premiums above all else.

You might be surprised to learn that despite the inevitability of this tension, there also lies plenty of opportunity in the space between providers and payers. In a recent episode of Healthcare IT Today Interviews, Steve Rowe, Healthcare Industry Lead at 3Pillar, and host John Lynn discuss why this tension exists and what can be done about it. We’ve captured the four biggest opportunities below.

1. RCM and Claims

The first opportunity is around Revenue Cycle Management (RCM) and claims. Payers have all sorts of different rules around what they will approve and what they’ll deny to balance the tension between keeping premiums low and paying for medical coverage. These rules are sometimes even group-specific. 

The challenge? Providers don’t know what those rules are, which creates difficulties for the member. It’s not easy to understand in the moment what will be approved and what will be denied. That means patients may end up unhappy when a proposed treatment is denied or not paid in full (and they are balance billed).

The opportunity here is for payers to expose that logic to health systems—essentially pre-adjudicating payment (instead of doing it after the fact). The business rationale: make it easy for in-network providers to get paid in exchange for more competitive rates. Some companies are already doing this: “Glen Tullman is doing it with Transcarent; he’s essentially trying to intermediate the payer to create a new network. His whole premise to providers is, ‘Join our network because we will pay you the same day you do service,’” notes Steve. “That’s how he’s building his network with top health systems and doctors.”

2. RCM Complexity

In Steve’s experience building an RCM startup and working with a regional urgent care chain, he observed that the expertise and institutional knowledge around claims processing lived largely in the heads of the medical billing coders.

He highlights two main forms of complexity in RCM:

  1. Submitting the correct eligibility information (e.g., specific formatting of member ID numbers)
  2. Matching the right diagnosis codes to the appropriate CPT codes, which can form a large and complex matrix (see the sketch below)
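
To make that second form of complexity concrete, here is a minimal sketch of the matching logic in Python. The CPT and ICD-10 codes and the allowed-pairs table are purely illustrative, not any payer’s actual policy:

```python
# Each CPT (procedure) code is reimbursable only when paired with certain
# ICD-10 diagnosis codes; real payer matrices hold thousands of such pairs.
ALLOWED_PAIRS = {
    "99213": {"J06.9", "J02.9"},   # office visit <-> upper respiratory infections
    "87880": {"J02.0", "J02.9"},   # strep test  <-> pharyngitis diagnoses
}

def flag_mismatches(claim_lines):
    """Return the (CPT, diagnosis) pairs likely to be denied for a code mismatch."""
    return [
        (cpt, dx)
        for cpt, dx in claim_lines
        if dx not in ALLOWED_PAIRS.get(cpt, set())
    ]

print(flag_mismatches([("99213", "J06.9"), ("87880", "M54.5")]))
# -> [('87880', 'M54.5')]: a back-pain diagnosis paired with a strep test is flagged
```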

The risk here is that this institutional knowledge will be lost when these experienced medical billers retire. The processes are very manual, with reimbursements not keeping up with labor inflation. 3Pillar is leveraging AI and data mining to reverse engineer each payer’s algorithm for approvals and denials. The goal is to systematize this knowledge and flag issues proactively, rather than relying on the institutional knowledge of the billing staff.
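
The podcast doesn’t detail 3Pillar’s implementation, but the general pattern might look something like this sketch: tally approval and denial outcomes per (payer, procedure, diagnosis) combination from historical remittances, then flag new claims whose combination has a high observed denial rate. All names and numbers here are invented:

```python
from collections import defaultdict

def build_denial_rates(history):
    """history: iterable of (payer, cpt, dx, was_denied) tuples from past remittances."""
    counts = defaultdict(lambda: [0, 0])  # (payer, cpt, dx) -> [denied, total]
    for payer, cpt, dx, was_denied in history:
        stats = counts[(payer, cpt, dx)]
        stats[0] += int(was_denied)
        stats[1] += 1
    return {key: denied / total for key, (denied, total) in counts.items()}

def risky(claim, rates, threshold=0.5):
    """Flag a claim whose combination has historically been denied often."""
    return rates.get(claim, 0.0) >= threshold

rates = build_denial_rates([
    ("AcmeHealth", "87880", "M54.5", True),
    ("AcmeHealth", "87880", "M54.5", True),
    ("AcmeHealth", "99213", "J06.9", False),
])
print(risky(("AcmeHealth", "87880", "M54.5"), rates))  # -> True
```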

The vision is to integrate this RCM intelligence engine with clinical documentation tools. That way, providers are alerted in real time during the care planning process about treatments or codes that are likely to be denied by the payer. This will improve the financial experience for providers and patients alike.

3. The Need for Data Transformation

There is a significant opportunity for data transformation, as regional payers have data that lives in separate systems that don’t talk to each other. The pipes to connect these systems haven’t been built, and the data isn’t defined in the same way. Regional payers are often at a technological disadvantage compared to national payers because they still have on-premises servers and haven’t moved to the cloud. The IT departments for these payers are swamped putting out fires. They simply don’t have the resources to take on the work associated with major technology modernization projects.

And here’s the rub: Self-insured employers want highly customized insurance products and plans that require flexible and configurable technology platforms. National payers have invested in modern tech stacks that can support this level of customization. However, regional payers struggle to match this capability.

So, there’s a real need for regional payers to create a unified data platform and operating system that can integrate data from various systems (e.g., claims, population health, PBM, etc.). This would result in a simplified member experience while enabling seamless workflows for call center representatives, who often have to navigate multiple disparate systems. This is an area where working with a partner who specializes in this capability would be beneficial. 
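
To illustrate one small piece of that unification work, here’s a hedged sketch in Python of reconciling member records that are keyed differently across systems. The system names, ID formats, and normalization rules are all invented; real member matching is considerably more involved:

```python
def canonical_id(raw_id: str) -> str:
    """Normalize member IDs that vary by system: upper-case, strip prefixes and dashes."""
    return raw_id.upper().removeprefix("MBR-").replace("-", "")

def unify(claims, pbm, pop_health):
    """Merge records from three source systems into one profile per member."""
    members = {}
    for source, records in (("claims", claims), ("pbm", pbm), ("pop_health", pop_health)):
        for rec in records:
            key = canonical_id(rec["member_id"])
            members.setdefault(key, {})[source] = rec
    return members

profiles = unify(
    claims=[{"member_id": "mbr-001-23", "last_visit": "2024-11-02"}],
    pbm=[{"member_id": "00123", "active_rx": 2}],
    pop_health=[{"member_id": "MBR-00123", "risk_tier": "B"}],
)
print(profiles["00123"].keys())  # all three systems resolve to one member
```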

4. Real-Time Answers to Member Questions

Speaking of member experience, it’s now the number one concern of Vice Presidents of Benefits at self-insured employers thanks to a tight labor market. Top-tier benefits are necessary to attract and retain talent. There’s no doubt that there’s plenty of room to improve. The experience is often fragmented and frustrating as members struggle to get accurate information about coverage, costs, and provider networks. 

There’s an opportunity for payers to make their medical policies and coverage algorithms more transparent and accessible to members at the point of care. Steve explains, “I’m excited about this opportunity because we’ve all been there where it’s like, ‘I just want to know if this particular provider for urgent care who is still open at 10 p.m. is in network. I can’t figure that out on the app. There’s not a search function and the call line doesn’t open until 8 a.m. tomorrow.’”

What if patients could get real-time answers to their questions? 3Pillar is making that vision a reality through chatbots powered by AI and knowledge graphs. By combining data from disparate systems, these chatbots can give members accurate, up-to-date information at any time, from anywhere.

These chatbots could also help address the challenge of call center representatives needing to navigate multiple systems to piece together an answer for a member. Steve points out one key consideration: the chatbots must be fed accurate data and avoid hallucinations, which requires careful design and integration with the underlying data sources.
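
As a simplified illustration of that grounding pattern, here’s a sketch of a chatbot answer path that responds only from structured network data and refuses when the data isn’t there. The provider names, hours, and data model are all invented:

```python
from datetime import datetime

# Would be populated from the payer's network directory and provider hours feed.
PROVIDERS = {
    "QuickCare Urgent Care": {"in_network": True, "closes": "23:00"},
    "Downtown Walk-In Clinic": {"in_network": False, "closes": "21:00"},
}

def answer_network_question(provider_name: str, now: datetime) -> str:
    facts = PROVIDERS.get(provider_name)
    if facts is None:
        # Refuse rather than guess when the underlying data has no answer.
        return f"I don't have network data for {provider_name}; please contact member services."
    open_now = now.strftime("%H:%M") < facts["closes"]
    status = "in network" if facts["in_network"] else "out of network"
    state = "open" if open_now else "closed"
    return f"{provider_name} is {status} and currently {state}."

print(answer_network_question("QuickCare Urgent Care", datetime(2025, 1, 3, 22, 0)))
# -> QuickCare Urgent Care is in network and currently open.
```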

While none of these opportunities have “easy buttons” to press, they all offer ways for payers to differentiate themselves and better serve patients and providers. You can discover even more areas for payers and providers to win in the full podcast episode.


The Human Behind the Machine: Why Synthetic Audiences Can’t Replace Authentic User Interviews

The allure of synthetic audiences is undeniable. Imagine a world where you could conduct a hundred “user interviews” in a single afternoon, receiving perfectly transcribed, neatly summarized insights in exactly the order you wanted, without the logistical headaches of recruitment, scheduling, or no-shows. There’s an old adage that says “you can have it done fast, you can have it done cheap, or you can have it done right, but you only ever get two of those things.” The fast-paced world of tech and product development is no different, and this promise of speed, scalability, and cost-effectiveness is a siren song.

Generative AI can create dynamic, data-driven personas that simulate user behavior with remarkable precision, giving us a powerful new tool in the UX research arsenal. But as with any powerful tool, there are hidden dangers, particularly when wielded as a panacea. While synthetic audiences offer a tempting shortcut, a sole reliance on them risks creating a profound chasm between your product and the very people it’s meant to serve. 

The most significant danger isn’t that this technology is flawed, but that we might forget what it was always meant to do: augment, not replace, the messy, complex, and deeply human truth found in an authentic conversation.

The peril of the perfect persona

Relying exclusively on synthetic audiences can be like building a house based on a blueprint that’s missing a few critical details. On paper, everything is there: the structure looks sound, the dimensions are correct, and everything appears to be in its right place. But when a real person tries to live in it, they discover the doors are too narrow for furniture, the windows don’t open all the way, and the floors are uncomfortably slanted.

The problem lies in the data. Synthetic users are, by definition, a reflection of the data they were trained on. If that data is incomplete, biased, or outdated, the resulting personas will be, too. This isn’t just a hypothetical problem; it’s a very real threat to inclusivity and a perfect breeding ground for confirmation bias. 

If your training data is skewed towards a single demographic—say, young, tech-savvy professionals—your synthetic users will only ever reflect that narrow slice of the world. The struggles of an older user trying to navigate a new interface, the unique needs of someone with a disability, the perspective of a customer from an industry outside your usual base, or the cultural nuances of a non-Western audience will be completely lost. The AI, in its earnest attempt to please you, might even generate responses that are overly agreeable or simplistic, praising every concept without offering the critical feedback that leads to true innovation.

The worst-case scenario isn’t a minor design flaw; it’s a product that fails spectacularly because it was built for a user who doesn’t actually exist. It’s a customer experience (CX) that feels cold, impersonal, and frustrating because it was optimized for a machine, not a human.

Another hidden danger in this approach lies in the permanence of synthetic users. They never change their approaches, feelings, emotions, or opinions unless you take the time to change the data points they have been trained on. Feeding the model data from last year will give you a clear interpretation of what users from last year may have wanted, but any number of changes (personal opinions, competitive offerings, new features in unrelated products) could be shaping what they want and need now, and your dataset is likely not capturing that. You need to talk to real customers to find out what is on the front edge of their experiential needs and what drives their engagement for the future.

The unseen power of a real conversation

This is where the quiet, understated power of the authentic user interview comes into its own. While synthetic audiences are fantastic for speed and scale, they cannot replicate the visceral experience of sitting down with a real person. An interview is more than just a Q&A session; it’s a dance of empathy, a space where you can uncover the “why” behind the “what.”

When you’re face-to-face with a user, you get so much more than words on a screen. You see the frustrated frown when they struggle to find a button. You hear the exasperation in their voice as they recount a previous bad experience. You notice the subtle hand gestures as they try to describe a mental model that doesn’t quite fit your product’s architecture. These are the microexpressions, intonations, and body language that no AI can yet fully replicate. This rich, qualitative data is the bedrock of empathy-driven design. It’s how you discover not just a user’s pain points, but their hopes, their fears, and their motivations. It’s how you uncover the latent needs they didn’t even know they had.

A real user might surprise you by using a feature in an unexpected way, revealing an unmet need or a brilliant workaround you never considered. A synthetic user, by contrast, is likely to stick to the script it was given. Authentic interviews lead to serendipitous insights that can change the entire trajectory of a product for the better. They provide the context, depth, and emotional understanding that truly defines an exceptional customer experience.

The moral of the story

In the end, this isn’t an “either/or” situation. The most successful teams won’t choose between synthetic audiences and authentic interviews; they’ll use them as powerful complements. Synthetic audiences can be invaluable for early-stage ideation, for testing high-volume scenarios, and for quickly validating assumptions. But once you have that initial data, you must do the work of grounding it in reality. Use the speed of AI to inform where you focus your deep, qualitative research. Let the AI identify the “what,” and then use a real, human conversation to uncover the “why” and the “how.”

The true measure of a great customer experience is not how quickly you can develop it, but how well it serves the people who actually use it. By prioritizing the human element and treating authentic interviews not as a chore but as a privilege, we ensure that our products are not just functional, but genuinely empathetic. After all, you can simulate a person’s click, but you can’t truly simulate their heart.

The future of product innovation isn’t human or machine — it’s both.

Let’s talk about how to integrate authentic user insight into your AI-driven product strategy.

About the author

Claude “CJ” Jordy

Principal User Experience Designer


Building Smarter: How AI and Data Are Transforming Construction

The global construction market, poised to reach $16.45 trillion by the end of 2025, is under growing pressure to deliver projects faster, safer, and at lower cost. Construction has historically lagged behind other industries in technology adoption, but that is changing—and fast. 

AI and cognitive computing are no longer nice-to-haves. They are must-have tools for construction firms that want to boost productivity, reduce risks, and thrive in today’s era of tight budgets and labor shortages. However, even the most advanced AI systems cannot succeed on a shaky foundation. Case in point: as this LinkedIn post notes, many enterprises are eager for “AI agents,” but their data is “spread across 37 different systems” with no clear source of truth. Before you can leverage predictive analytics, digital twins, or machine learning at scale, you must first solve for data readiness and governance. In other words, you need to build on rock, not sand.

Why AI matters for the construction industry

AI brings unprecedented speed, accuracy, and foresight to the jobsite. The ability to unify data from design models, equipment logs, real-time sensor inputs, and more allows construction teams to streamline tasks and minimize risk. As Allen Emerick, Senior Sales Executive at 3Pillar, puts it, “The holy grail is if you can prevent something bad from happening—either you’re over budget, you’re off schedule, or God forbid a safety incident. The data can help you see that far enough down the road.”

AI is truly reshaping how (and how quickly) projects get built through the following applications.

Predictive analytics

Leveraging data from project histories, materials usage, and even weather forecasts, AI-powered predictive models can help project owners and contractors anticipate cost overruns and schedule slippage. Instead of reacting when issues arise, teams can take proactive steps to avoid them.
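
As a toy illustration of the idea, here’s a sketch using scikit-learn with invented numbers; a production model would draw on far richer history (materials usage, weather records, crew data, and so on):

```python
from sklearn.linear_model import LinearRegression

# Invented training data from past projects.
# features: [planned_duration_days, change_orders, rain_days_forecast]
X = [[120, 2, 8], [200, 5, 15], [90, 1, 4], [150, 4, 10]]
y = [6, 21, 2, 12]  # observed schedule slippage, in days

model = LinearRegression().fit(X, y)

# Forecast slippage for an upcoming project so the team can act early.
predicted_slip = model.predict([[140, 3, 9]])[0]
print(f"Forecast slippage: {predicted_slip:.1f} days")
```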

AI-driven project scheduling

Manual scheduling is often a mix of spreadsheets, whiteboard sessions, and guesswork. AI scheduling tools can ingest vast amounts of data, from subcontractor timelines to supply chain logs, and dynamically update schedules to reflect real-time changes.
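
For a sense of the mechanics, here’s a minimal sketch of the “dynamically update” step: a forward pass over task dependencies that re-derives every start date when a duration changes. Real scheduling tools layer optimization and live data feeds on top of something like this; the tasks below are invented:

```python
def recompute_schedule(tasks):
    """tasks: {name: (duration_days, [dependencies])}, listed so dependencies come first.
    Returns {name: (start_day, finish_day)}."""
    schedule = {}
    for name, (duration, deps) in tasks.items():
        # A task starts when its latest dependency finishes.
        start = max((schedule[d][1] for d in deps), default=0)
        schedule[name] = (start, start + duration)
    return schedule

tasks = {
    "foundation": (10, []),
    "framing": (15, ["foundation"]),
    "roofing": (7, ["framing"]),
}
print(recompute_schedule(tasks))

# If the foundation slips to 14 days, one call re-derives every downstream date.
tasks["foundation"] = (14, [])
print(recompute_schedule(tasks))
```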

Machine learning for resource optimization

With the construction industry facing an ongoing labor shortage, it’s critical to ensure the right people and equipment are in the right place at the right time. ML algorithms can sift through mountains of data to streamline logistics and minimize idle time.
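
One classic way to frame this is as an assignment problem. The sketch below uses SciPy’s Hungarian-algorithm solver with an invented cost matrix; in practice the costs would come from logistics and telematics data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = estimated idle + travel hours if crew i is sent to site j
cost = np.array([
    [4, 9, 3],
    [7, 2, 8],
    [5, 6, 1],
])

# Find the crew-to-site assignment that minimizes total cost.
crews, sites = linear_sum_assignment(cost)
for c, s in zip(crews, sites):
    print(f"Crew {c} -> Site {s} (cost {cost[c, s]})")
print("Total cost:", cost[crews, sites].sum())
```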

Computer vision & IoT

AI-enabled cameras and sensors can improve on-site safety and compliance, track inventory and equipment usage, and ensure real-time visibility into the jobsite. Meanwhile, digital twins (virtual, real-time models of physical sites) can help teams spot design clashes and simulate potential outcomes before ever laying down concrete.

These applications of AI in construction are certainly compelling, but there’s one question to answer first: “Is your data ready for AI?”

Construction’s data reality: Fragmented systems & siloed insights

As construction projects grow in scope and scale, so does the data that powers them. Multiple systems often store information in various, incompatible formats, which creates barriers to collaboration. Mergers and acquisitions introduce even more layers of complexity, while security and governance needs expand with every new subcontractor, supplier, and agency added to the network. 

Capitalizing on AI and digital transformation hinges on a critical first step: bringing clarity and cohesion to the fragmented data landscape. Lance Mohring, Field CTO, explains, “To get great AI, you need great data. So let’s talk about the state of your data… Where are you at here? Do you have it centralized? Do you have it standardized?”

Here’s a closer look at some of the common challenges construction firms face on the road to AI and innovation:

  • Multiple disconnected platforms: Each separate platform (whether for project management, CRM or opportunity tracking) holds data in different structures, often with mismatched naming conventions. Identifying one client consistently across all systems can be incredibly challenging.
  • Mergers and acquisitions: Many construction firms have grown through acquisitions. While this expands reach and capabilities, it also leaves behind a web of legacy platforms and data silos. Standardizing these into a unified architecture is a major undertaking.
  • Security and governance complexities: Construction projects involve a network of stakeholders including subcontractors, suppliers, local agencies and client organizations. Each has different data needs, raising important questions about access permissions, compliance, and cybersecurity. AI can exacerbate these concerns if data is not already well-governed.

Lessons from a hyperscale data center (and what construction can learn)

At 3Pillar, we recognize the complexities of unifying massive, siloed datasets and transforming them into actionable insights. As Allen puts it, “We tackle tough problems. Construction is a doozy… but this industry is ripe for transformation.” 

Take for example, a project our team recently completed with a leading global hyperscale data center provider. This organization faced challenges with large-scale, disparate data – a common scenario in construction. The provider needed real-time dashboards, predictive models, and anomaly detection to manage everything from power usage to equipment maintenance.

Here’s how our team made it happen:

  • By building a unified, scalable architecture: We integrated diverse data sources (from sensors to billing systems) into a time-series database and custom dashboards. Construction firms need strong pipelines and flexible data environments to consolidate BIM, project schedules, financial data, etc.
  • By applying AI for anomaly detection and forecasting: Predictive models flagged unexpected power spikes and potential system failures in near-real-time. This parallels construction’s need for proactive alerts about schedule delays, supply chain disruptions, or equipment malfunctions (a simplified sketch of this kind of anomaly detection follows this list).
  • Through driving tangible ROI: The data center provider saved 25 times the cost of the engagement with advanced analytics. Construction firms can achieve similar wins through reduced rework, fewer delays, and stronger margins by applying predictive analytics to complex project data.
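
The engagement’s actual models aren’t described in detail here, but the basic anomaly-detection pattern can be sketched with a rolling baseline and a z-score threshold. The readings and thresholds below are illustrative:

```python
import statistics

def flag_spikes(readings, window=12, z_threshold=3.0):
    """Flag readings that deviate sharply from the rolling baseline before them."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard against a flat baseline
        if abs(readings[i] - mean) / stdev > z_threshold:
            anomalies.append((i, readings[i]))
    return anomalies

power_kw = [410, 412, 409, 415, 411, 408, 413, 414, 410, 412, 409, 411, 585]
print(flag_spikes(power_kw))  # -> [(12, 585)]: the spike is flagged
```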

We understand that it can be difficult for construction firms to operationalize AI due to scattered data, weak security protocols, or a lack of specialized data engineers. Our team at 3Pillar bridges these gaps with:

  • Strategic data engineering and AI readiness: We tackle data fragmentation by implementing modern cloud-based solutions, establishing data governance best practices, and ensuring near real-time visibility across the enterprise.
  • Holistic construction solutions: We can help select and implement the right AI use cases, no matter your current maturity. Whether you’re brand-new to AI or looking to scale advanced machine learning across dozens of worksites, our team will customize solutions to meet you where you are.
  • A track record of innovation: We have deep experience with complex, high-stakes data environments. Our product engineering mindset pairs strategic alignment with rapid prototyping to deliver measurable ROI.

Ultimately, construction is undergoing rapid change, driven by data and AI. From predictive analytics that prevent delays to computer vision that improves safety compliance, cognitive computing is unlocking new possibilities and opportunities. This transformation hinges on a single truth: building effective AI solutions depends on a solid data foundation.

The path forward is clear: centralize and clean your data, layer in strong governance and security, and then bring in AI to make data-driven decisions. 3Pillar partners with construction firms to provide the technical expertise and strategic guidance to embed AI across the organization. We’re here to help you do more than just keep pace: we’ll help you set new benchmarks across productivity, profitability, and safety.
