Using Nash Bargaining and LLMs to Automate Fairness

Toward a universal protocol for fair agreements

January 11, 2026 by Ian Clarke

📝 DRAFT ARTICLE This is a working draft — feedback welcome.

In 2018, when my wife and I worked with a mediator to create a prenup, I noticed that despite productive conversations, the process felt more like talk therapy than a principled system. It got me thinking about how often people must negotiate, and the many ways negotiations can go wrong, with serious consequences.

Over time, I realized this was a very deep problem: Negotiation often rewards assertiveness and strategic bluffing—skills that not everyone has or wants to use. Research shows that more agreeable individuals, who tend to avoid conflict, consistently earn less over their lifetimes. Not because they’re less competent, but because they’re structurally disadvantaged in systems that reward brinkmanship.

Impact of Agreeableness on Lifetime Income

This pointed to a clear problem worth solving.

I’d long been interested in negotiation, and had read the usual classics like Getting to Yes. But the real breakthrough came when I discovered the Nash Bargaining Solution, which John Nash published in 1950.

Nash, best known from A Beautiful Mind, proposed a rigorous method for resolving conflicts of interest between two parties. His solution assumes that each participant has a utility function—a way of scoring how good different outcomes are for them—and identifies the agreement that maximizes the product of their gains over fallback options.
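To make "maximizes the product of their gains over fallback options" concrete, here is a minimal sketch. The candidate agreements, utility functions, and fallback values are hypothetical illustrations, not Mediator.ai's actual model:

```python
# Minimal sketch: pick the agreement maximizing the Nash product.
# Utilities, fallbacks, and candidates are made-up illustrations.

def nash_solution(candidates, u_a, u_b, fallback_a, fallback_b):
    """Return the candidate maximizing (u_a(x) - d_a) * (u_b(x) - d_b),
    considering only candidates where both parties beat their fallback."""
    best, best_product = None, float("-inf")
    for x in candidates:
        gain_a = u_a(x) - fallback_a
        gain_b = u_b(x) - fallback_b
        if gain_a <= 0 or gain_b <= 0:
            continue  # neither party should do worse than walking away
        product = gain_a * gain_b
        if product > best_product:
            best, best_product = x, product
    return best

# Example: splitting a $100 surplus in $1 increments, where party A's
# fallback is worth $20 and party B's is worth $10.
splits = [(a, 100 - a) for a in range(101)]
deal = nash_solution(splits,
                     u_a=lambda x: x[0], u_b=lambda x: x[1],
                     fallback_a=20, fallback_b=10)
print(deal)  # -> (55, 45): the surplus above the fallbacks is split evenly
```

Note how the solution gives each party their fallback plus an equal share of the remaining surplus, which is the "balanced gains" property in action.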

The Nash Bargaining Solution satisfies several key properties:

  • Pareto optimality: No party can be made better off without making another worse off.
  • Symmetry: If both parties are in identical positions, the solution treats them identically.
  • Invariance to scaling: Only relative preferences matter, not how they’re measured.
  • Balanced gains: It fairly allocates the surplus from cooperation based on each party’s improvement over their baseline.

These properties make it a compelling foundation for fair agreements—if you can define each party’s utility function.

That’s the catch. Even for seemingly simple negotiations—splitting chores, selling a house, writing a prenup—capturing your true preferences in a formal utility function is beyond what most people can (or want to) do. And without that, the Nash solution remains a mathematical ideal, not a practical tool.

The Interface Layer: LLMs

This is where Large Language Models solve the bottleneck.

LLMs aren’t perfect—they can struggle with nuance and consistency when asked to make absolute judgments. But they excel at relative comparisons: “Which option is better, A or B?” is a much more reliable task for an inference engine than “Rate this option on a scale of 1 to 100.”

Mediator.ai uses LLMs to bridge the gap between human intention and structured input. Instead of asking users to explicitly score outcomes, it interviews them in plain language. Through conversation, the system builds a “priorities statement” that captures what matters most to each party.

To translate that into something precise enough for Nash Bargaining, we simulate thousands of pairwise comparisons between hypothetical agreements. This allows us to construct an approximate utility function not by asking users to do math, but by observing their implied preferences over tradeoffs.
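One standard way to turn pairwise comparisons into scores is a Bradley–Terry-style fit. The sketch below is illustrative: the option names, the simulated comparison counts, and the simple stochastic update rule are all assumptions, not Mediator.ai's actual estimator.

```python
# Illustrative sketch: recover approximate utilities from pairwise
# "which do you prefer?" answers via a Bradley-Terry-style fit.
# Option names and the update rule are hypothetical.
import math
import random

def fit_utilities(options, comparisons, steps=2000, lr=0.05):
    """comparisons: list of (winner, loser) pairs. Returns a dict of
    latent scores where higher means 'preferred more often'."""
    u = {o: 0.0 for o in options}
    for _ in range(steps):
        w, l = random.choice(comparisons)
        # Probability the current scores assign to the observed outcome.
        p = 1.0 / (1.0 + math.exp(u[l] - u[w]))
        # Gradient step on the log-likelihood of (w beats l).
        u[w] += lr * (1.0 - p)
        u[l] -= lr * (1.0 - p)
    return u

random.seed(0)
options = ["keep_house", "split_savings", "sell_car"]
# Simulated answers to repeated preference queries:
comparisons = ([("keep_house", "split_savings")] * 8
               + [("split_savings", "sell_car")] * 7
               + [("keep_house", "sell_car")] * 9
               + [("sell_car", "keep_house")] * 1)
u = fit_utilities(options, comparisons)
ranked = sorted(options, key=u.get, reverse=True)
print(ranked)  # keep_house should rank first
```

The fitted scores only capture relative preference strength, which is all the Nash solution needs given its invariance to scaling.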

Crucially, this approach approximates incentive compatibility. We cannot claim formal incentive compatibility in the strict Myerson mechanism-design sense (LLMs introduce stochasticity), but the system makes deception costly: because it cross-references thousands of relative preference queries, maintaining a consistent lie is far harder than stating the truth. Inconsistent bluffing results in a lower fidelity score and, ultimately, a suboptimal Nash product for the deceiver.
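One plausible way to measure that consistency is to count intransitive triples (A preferred to B, B to C, yet C to A) among a party's answers. This is an illustrative stand-in for a fidelity measure, not Mediator.ai's published metric:

```python
# Illustrative fidelity measure: the fraction of option triples whose
# pairwise answers form no preference cycle. Option names are made up.
from itertools import combinations

def fidelity_score(prefers, options):
    """prefers: set of (x, y) pairs meaning 'x preferred over y'.
    Returns the fraction of triples with no intransitive cycle."""
    triples = list(combinations(options, 3))
    consistent = 0
    for a, b, c in triples:
        # A cycle exists if the three pairwise answers chase each other.
        cycle = ({(a, b), (b, c), (c, a)} <= prefers or
                 {(b, a), (c, b), (a, c)} <= prefers)
        consistent += not cycle
    return consistent / len(triples)

options = ["A", "B", "C", "D"]
honest = {("A", "B"), ("B", "C"), ("A", "C"),
          ("A", "D"), ("B", "D"), ("C", "D")}   # a clean total order
print(fidelity_score(honest, options))  # -> 1.0

bluffed = (honest - {("A", "C")}) | {("C", "A")}  # one strategic lie
print(fidelity_score(bluffed, options))  # -> 0.75
```

The point of the example: a single strategic lie immediately contradicts other answers, and the contradictions multiply as the number of queries grows.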

The Execution Layer: Genetic Algorithms & Lua

Once the preferences are mapped, we need to solve for the optimal deal. This is where we leave the probabilistic world of LLMs and return to deterministic code.

Mediator.ai uses a genetic algorithm to generate and evolve draft agreements based on the extracted utility functions. Each mutation is applied by a “mutator”—a small Lua script representing a strategy for adjusting terms. For example, a mutator might swap clauses, alter a deadline, or rebalance an exchange.

We chose Lua for its lightweight footprint and ease of sandboxing. Reinforcement learning selects the most effective mutators over time based on which ones lead to higher-scoring agreements.
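The evolutionary loop described above can be sketched as follows. In the real system the mutators are sandboxed Lua scripts; here plain Python functions stand in for them, and the agreement encoding (a dict of terms), the term names, and the utility functions are made-up illustrations:

```python
# Sketch of the genetic-algorithm loop. Python functions stand in for
# the Lua mutators; the agreement terms and utilities are hypothetical.
import random

def score(agreement, u_a, u_b):
    """Fitness = Nash product of gains (fallbacks taken as 0 here)."""
    return max(u_a(agreement), 0) * max(u_b(agreement), 0)

# Each "mutator" is one small strategy for adjusting terms.
def shift_deadline(a):
    a = dict(a)
    a["deadline_days"] = min(90, max(7, a["deadline_days"]
                                     + random.choice([-7, 7])))
    return a

def rebalance_payment(a):
    a = dict(a)
    a["payment_split"] = min(1.0, max(0.0, a["payment_split"]
                                      + random.uniform(-0.1, 0.1)))
    return a

MUTATORS = [shift_deadline, rebalance_payment]

def evolve(seed_agreement, u_a, u_b, generations=200, pop=20):
    population = [seed_agreement] * pop
    for _ in range(generations):
        children = [random.choice(MUTATORS)(random.choice(population))
                    for _ in range(pop)]
        # Keep the highest-scoring agreements (elitist selection).
        population = sorted(population + children,
                            key=lambda a: score(a, u_a, u_b),
                            reverse=True)[:pop]
    return population[0]

random.seed(1)
# Hypothetical preferences: A wants a larger share and a shorter
# deadline; B wants the opposite.
u_a = lambda a: a["payment_split"] + (90 - a["deadline_days"]) / 100
u_b = lambda a: (1 - a["payment_split"]) + a["deadline_days"] / 100
seed_deal = {"payment_split": 0.5, "deadline_days": 30}
best = evolve(seed_deal, u_a, u_b)
print(best)
```

In the full system, reinforcement learning would adjust which mutators get sampled, weighting the strategies that historically produced higher-scoring agreements.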

Mediator.ai System Architecture

In practice, the user experience is fairly simple. You interact with a chat-based assistant that asks questions, helps clarify your priorities, and models your preferences in the background. Each assistant runs in a sandboxed environment to ensure privacy, and represents your interests during negotiation with the other party’s assistant.

Starting Small

We’re starting with low-stakes domains: roommate agreements, recurring household chores, shared parenting plans. These are the kinds of negotiations that don’t usually justify hiring a mediator or lawyer, but still benefit from structured fairness. Most negotiations cost just a few dollars—about the price of a coffee.

A necessary disclaimer: This isn’t a substitute for legal advice. If you’re drafting a legally binding agreement, you should still consult an attorney. In fact, this might eventually serve as an intake tool for lawyers—handing them clients who already have a well-articulated draft agreement based on their actual priorities.

Longer term, the same approach could scale to higher-stakes agreements—like prenups, business partnerships, or even multiparty deals. But for now, the goal is to build something that works well for ordinary people making everyday decisions.

We are trying to turn negotiation from a test of social dominance into a solvable math problem. If you’re interested in the mechanism design behind that—or just want to poke holes in our Lua sandboxing—I’d love to hear your thoughts.