There's a debate happening in science right now that anyone who has ever done a deal needs to hear.
It goes like this: Are humans fundamentally cooperative creatures who build beautiful things together? Or are we calculating opportunists who cooperate only when someone's watching?
If you've ever been on either side of a term sheet, you already know the answer. It's both. Always both.
A researcher named Jonathan Goodman just published a book called Invisible Rivals, and it's basically a scientific explanation for every deal that's ever gone sideways at the eleventh hour. His argument is elegant and, frankly, a little devastating: we didn't evolve to cooperate or compete. We evolved with the capacity for both — and with the intelligence to hide the competition when it suits us.
Read that again. Let it marinate. Then think about the last three partnerships you evaluated.
Here's the fun part.
Scientists have been running economic experiments on people for decades. One of the classics is the ultimatum game. You give one person a pot of money and let them decide how to split it with a stranger. The stranger can accept the offer or reject it — and if they reject it, nobody gets anything.
In theory, rational self-interest says the first person should offer almost nothing, and the second person should accept whatever crumbs they get. Something is better than nothing, right?
In practice, people usually offer around 40-50% of the pot. Scientists pointed to this and said, "See! Humans are wired for fairness! We're inequity averse! We're basically golden retrievers with spreadsheets!"
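The game's payoff logic is simple enough to sketch in a few lines of Python. Everything below is illustrative: the rejection thresholds and offer sizes are assumptions for the demo, not figures from any study.

```python
import random

def ultimatum_round(pot, offer_fraction, min_acceptable_fraction):
    """One round of the ultimatum game.

    The proposer offers a share of the pot; the responder accepts only
    if the offer clears their private fairness threshold. A rejection
    leaves both players with nothing.
    """
    offer = pot * offer_fraction
    if offer_fraction >= min_acceptable_fraction:
        return pot - offer, offer   # deal: proposer keeps the remainder
    return 0.0, 0.0                 # rejected: nobody gets anything

# A "rationally" stingy proposer vs. one making the roughly 40-50%
# offers people actually make. Responder thresholds drawn from an
# assumed 0.1-0.4 range.
random.seed(1)
lowball = [ultimatum_round(100, 0.05, random.uniform(0.1, 0.4))
           for _ in range(1000)]
fair    = [ultimatum_round(100, 0.45, random.uniform(0.1, 0.4))
           for _ in range(1000)]

avg = lambda rounds: sum(p for p, _ in rounds) / len(rounds)
print(f"lowball proposer averages {avg(lowball):.1f}")
print(f"fair proposer averages    {avg(fair):.1f}")
```

Under these assumed thresholds, the stingy proposer gets rejected constantly and walks away with less than the "irrationally" fair one — which is why lowballing loses even before you invoke virtue.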
And then an anthropologist named Polly Wiessner ran the same experiment with one critical adjustment. She told participants, very clearly, that their identities would be completely anonymous and there would be zero consequences for any choice they made.
You can guess what happened.
People started quietly sliding more coins to their own side of the table. Some even paused to double-check: "You're sure no one will know?"
I have never read a more accurate description of a founder renegotiating terms after a handshake deal.
Goodman calls this the problem of "invisible rivals." The idea is simple but brutal: in any group, there are people who follow the rules, signal all the right values, and project pure team-player energy — right up until the moment they have enough leverage or cover to stop.
They're not defectors. They're not cooperators. They're context-dependent strategists. They cooperate when cooperation is the optimal play, and they defect the second the cost of defection drops below the benefit.
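That decision rule fits in a few lines. The weights below (sanctions, reputation, detection odds) are made-up numbers purely to show the mechanism: the same person flips from cooperator to defector when the expected cost of getting caught collapses.

```python
def chooses_cooperation(benefit_of_defection, cost_of_defection):
    """A context-dependent strategist in one line:
    cooperate exactly when defection doesn't pay."""
    return cost_of_defection >= benefit_of_defection

def expected_cost_of_defection(reputation_damage, sanction,
                               repeat_interactions, detection_probability):
    # You only pay the formal sanction and the reputational hit if
    # you're caught, and reputation compounds over future rounds.
    return detection_probability * (
        sanction + reputation_damage * repeat_interactions)

# Same person, two contexts (all parameter values are illustrative).
watched   = expected_cost_of_defection(reputation_damage=10, sanction=50,
                                       repeat_interactions=20,
                                       detection_probability=0.9)
anonymous = expected_cost_of_defection(reputation_damage=10, sanction=50,
                                       repeat_interactions=20,
                                       detection_probability=0.0)

print(chooses_cooperation(benefit_of_defection=100, cost_of_defection=watched))
print(chooses_cooperation(benefit_of_defection=100, cost_of_defection=anonymous))
```

Nothing about the person changed between the two calls. Only `detection_probability` did — which is the whole point of Wiessner's anonymity tweak.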
If you work in finance and this doesn't sound familiar, I'd like to know what utopian corner of the market you've been hiding in.
There's a concept in behavioral science called "moral credentialing" that should be required reading before anyone signs a partnership agreement.
It works like this: if I've done enough good things recently — donated to charity, mentored someone, served on a nonprofit board — my brain gives me a psychological permission slip to act selfishly later. I've banked enough moral capital. Time to make a withdrawal.
Researchers found that businesses that voluntarily signed a public pledge committing to create value for all stakeholders — not just shareholders — were actually more likely to violate environmental and labor laws afterward.
Let me say that differently. The companies that made the biggest show of caring about everyone were the ones most likely to cut corners when no one was looking.
If that doesn't remind you of at least three companies in your portfolio, your portfolio is too small.
Here's my favorite finding, though, because it involves AI and it's perfect.
Researchers set up a die-rolling game where higher numbers meant more money. When people reported their own rolls, they were broadly honest. Not saints, but honest enough.
Then the researchers let participants delegate the reporting to an AI agent. And they gave them the option to instruct the AI with vague directions like "maximize profits."
Honesty cratered. Less than 20% of rolls were reported accurately.
People didn't lie. They just outsourced the lying and gave themselves plausible deniability. Which, if I'm being honest, is basically how half of corporate governance works.
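You can see the delegation effect in a toy simulation. The 15% self-reporting fudge rate is an assumption for illustration (not the study's measured figure), and the AI agent here is just a stand-in that takes "maximize profits" at face value.

```python
import random

def self_report(roll, lie_probability=0.15):
    """People reporting their own rolls fudge only occasionally
    (assumed 15% of the time here)."""
    return 6 if random.random() < lie_probability else roll

def delegated_report(roll):
    """An agent instructed only to 'maximize profits' has one
    obvious move: always report the highest-paying roll."""
    return 6

random.seed(7)
rolls = [random.randint(1, 6) for _ in range(10_000)]

honest_self = sum(self_report(r) == r for r in rolls) / len(rolls)
honest_ai   = sum(delegated_report(r) == r for r in rolls) / len(rolls)

print(f"accurate self-reports: {honest_self:.0%}")
print(f"accurate AI reports:   {honest_ai:.0%}")
```

The AI's reports are accurate only when the true roll happened to be a six — roughly one time in six — while the self-reports stay mostly honest. Nobody in this simulation ever typed a lie.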
Now here's where I'm supposed to tell you this is all hopeless and you should never trust anyone and store gold under your mattress.
I'm not going to do that.
Because the actual takeaway from this research isn't that people are bad. It's that cooperation is a design problem, not a character assessment. The science is clear: people cooperate when the architecture of the relationship makes cooperation the rational choice. Transparency, accountability, reputation, repeat interactions — these aren't nice-to-haves. They're load-bearing walls.
Elinor Ostrom won a Nobel Prize for demonstrating that communities manage shared resources well when they build the right local norms and institutions. Not because the people in those communities are inherently more virtuous than anyone else. But because they made defection expensive and cooperation rewarding.
That's the whole game. In evolutionary biology. In community governance. And absolutely in deal-making.
So what does this mean if you're structuring a partnership, evaluating a co-investor, or deciding whether to trust the person across the table?
Stop asking, "Is this person trustworthy?" That's the wrong question. Everyone is trustworthy in the right conditions and untrustworthy in the wrong ones.
Start asking, "Have we built a structure where cooperation is the obvious play?" Are the incentives aligned? Is there transparency? Are there meaningful consequences for defection? Will we interact again?
If the answer to those questions is yes, you probably have a good deal. Not because your partner is a good person — they might be — but because you've made being a good partner the rational choice.
If the answer is no, it doesn't matter how many dinners you've had together, how firm the handshake was, or how many times they said "we're aligned."
You've just handed someone an anonymous die-rolling game and told the AI to maximize profits.
Ninth Square Capital is an advisory firm that thinks about deal structure, human nature, and the uncomfortable overlap between the two. We write about M&A, capital markets, AI, and, occasionally, evolutionary biology.
If you're looking for a partner who builds cooperation into the architecture rather than hoping for it, we should talk. If you're looking for someone who'll just trust your handshake — well, Polly Wiessner has some coins she'd like to show you.