“Cooperation”, the ability of a large group of actors to work together for the common good, is one of the most powerful forces in the universe. It is the difference between a king comfortably ruling a country as an oppressive dictatorship and the people rising up to overthrow him. It is the difference between letting global temperatures rise 3–5°C and working together to prevent them from rising much at all. Cooperation is key to the functioning of companies, countries, and any social organization of meaningful size.
Cooperation can be improved in many ways: faster spread of information; better norms that identify which behaviors count as cheating, together with more effective punishments; stronger and more powerful organizations; tools like smart contracts that allow interaction with low levels of trust; governance technologies (voting, shares, decision markets…); and more. Indeed, we make progress on cooperation every decade.
But cooperation also has a dark side that is philosophically quite counterintuitive: although “everyone cooperating with everyone” is far better than “everyone for themselves”, that does not mean each incremental step toward more cooperation is necessarily beneficial. If cooperation is improved in an unbalanced way, the results can easily be harmful.
We can depict the problem on a map, though in reality the map has many “dimensions” rather than the two drawn here.
The bottom-left corner, “everyone for themselves”, is where we do not want to be. The top-right corner, “total cooperation”, is ideal but probably unachievable. But the vast landscape in the middle is far from a smooth upward slope: it contains many reasonably safe and productive places where we might want to settle down, and many deep, dark pits to avoid.
Note: Hobbesianism holds that human behavior is fundamentally selfish, and that society, left unrestrained, is a selfish and brutal competition of all against all. The name comes from “Leviathan”, by the 17th-century English political philosopher Thomas Hobbes.
So what are the dangerous forms of “partial cooperation”, where someone cooperates with a particular group while defecting against everyone else, that can drag us into one of those pits? Examples illustrate this best:
- Citizens of a nation sacrifice themselves heroically for its benefit in a war… when that nation is WWII-era Germany or Japan
- A lobbyist bribes a politician in exchange for that politician adopting the lobbyist’s preferred policies
- Someone sells their vote in an election
- All the sellers of a product in a market collude to raise their prices at the same time
- Large blockchain miners collude to launch a 51% attack
In all of these cases, we see a group of people coming together and cooperating with each other, but to the great detriment of those outside the circle of cooperation, and thus to the net detriment of the world as a whole. In the first case, the victims are all the people who suffered the aggression of the nations in question; they were outside the circle of cooperation and bore enormous losses as a result. In the second and third cases, it is the people affected by the decisions that the corrupted voters and politicians make; in the fourth case, it is the customers; and in the fifth, it is the non-participating miners and the users of the blockchain. This is not an individual defecting against a group; it is a group defecting against a broader group, often the world as a whole.
This kind of partial cooperation is often called “collusion”, but it is worth noting that the range of behavior we are talking about is quite broad. In ordinary usage, the word tends to describe relatively symmetric relationships, but many of the cases above have strongly asymmetric features. Even an extortionate relationship (“vote for the policy I like, or I will publicly expose your affair”) is collusion in this sense. In the rest of this article, we will use “collusion” to refer to “unwanted cooperation” of this kind in general.
Evaluate intent, not action (!)
An important property of the milder cases of collusion is that one cannot determine whether an action is part of an unwanted collusion just by looking at the action itself. The reason is that the actions a person takes are the product of that person’s internal knowledge, goals, and preferences combined with the incentives imposed on them from outside. Hence, the actions people take when colluding frequently overlap with the actions they take voluntarily (or when cooperating in benign ways).
For example, consider collusion between sellers (a type of antitrust violation). Operating independently, three sellers might each set the price of a product somewhere between $5 and $10, with differences within that range reflecting hard-to-see factors such as each seller’s internal costs, differing willingness to work at different wages, or supply-chain issues. But if the sellers collude, they might set prices between $8 and $13, the range again reflecting the same hard-to-see factors. If you see someone selling the product for $8.75, are they doing something wrong? Without knowing whether they are coordinating with the other sellers, you cannot tell! Passing a law decreeing that the product must not be sold for more than $8 would be a bad idea; perhaps there are legitimate reasons why prices must be high right now. But passing a law against collusion, and successfully enforcing it, gives the ideal outcome: you get the $8.75 price if the price really must be that high to cover sellers’ costs, but you do not get it if the factors driving prices up are naturally low.
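As a toy illustration of why a single observed price cannot distinguish the two regimes, here is a small simulation; all numbers (cost ranges, markups) are hypothetical and chosen only to reproduce the $5–$10 and $8–$13 ranges from the example above:

```python
import random

def independent_price(cost):
    # Competitive seller: price reflects internal cost plus a modest,
    # seller-specific margin (a stand-in for hard-to-see factors).
    return cost + random.uniform(0, 2)

def colluding_price(cost):
    # Colluding sellers: the same pricing logic, plus a fixed
    # cartel markup agreed on by all sellers.
    return cost + random.uniform(0, 2) + 3

random.seed(0)
costs = [random.uniform(5, 8) for _ in range(1000)]
independent = [independent_price(c) for c in costs]
colluding = [colluding_price(c) for c in costs]

# An observed price of $8.75 lies inside BOTH ranges, so the
# observation alone cannot distinguish collusion from honest pricing.
print(min(independent), max(independent))  # roughly the $5-$10 range
print(min(colluding), max(colluding))      # roughly the $8-$13 range
```

The overlap between the two output ranges is exactly the regulator’s problem: a rule keyed to the price level alone must either forbid some honest prices or permit some collusive ones.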
The same point applies to bribery and vote-selling: it may well be that some people vote for the “Orange Party” entirely legitimately, while others vote for the “Orange Party” because they were paid to. From the point of view of whoever designs the rules of the voting mechanism, they do not know ahead of time whether the Orange Party is good or bad. What they do know is that a vote in which people vote according to their honest inner feelings works reasonably well, while a vote in which voters can freely buy and sell their votes works terribly. This is because vote selling is a “tragedy of the commons”: each voter captures only a small fraction of the benefit of voting correctly, but would capture the entire bribe if they vote the way the briber wants. Hence, the bribe required to lure each individual voter is far smaller than a bribe that actually compensates the population for the cost of whatever policy the briber wants. Voting systems that allow vote selling therefore quickly collapse into plutocracy.
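The tragedy-of-the-commons arithmetic behind this can be made concrete with a toy model; every number here (population size, policy harms, the crude stand-in for pivotality) is hypothetical:

```python
# Toy model of why buying votes is cheap relative to the policy's value.
n_voters = 1_000_000
harm_of_bad_policy = 100_000_000   # total social cost if the bad policy wins
benefit_to_briber = 50_000_000     # what the briber gains if it wins

# Each voter internalizes only 1/n of the harm, and a single vote is
# pivotal only with tiny probability (crudely modeled as 1/n here),
# so the expected personal cost of selling one's vote is minuscule.
harm_per_voter = harm_of_bad_policy / n_voters        # 100 per voter
p_pivotal = 1 / n_voters                              # crude stand-in
expected_cost_of_selling = harm_per_voter * p_pivotal # 0.0001

# Buying a bare majority, even at 10x each voter's expected cost,
# is vastly cheaper than the value the briber extracts.
votes_needed = n_voters // 2 + 1
bribe_budget = votes_needed * expected_cost_of_selling * 10
print(bribe_budget)  # on the order of hundreds, vs. a 50,000,000 prize
```

However rough the pivotality model, the orders of magnitude are the point: the per-voter price of a vote scales like the harm divided by the population, so a briber’s budget is microscopic next to the damage done.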
Understanding game theory
We can go further and look at this issue through the lens of game theory. In the version of game theory that focuses on individual choice, that is, the version that assumes each participant decides independently and does not allow for groups of agents working as one for their mutual benefit, there are mathematical proofs that at least one stable Nash equilibrium must exist in any game, and mechanism designers have very wide latitude to design games to achieve specific outcomes. But in the version of game theory that allows coalitions to cooperate (i.e. to “collude”), called cooperative game theory, we can prove that there are large classes of games that have no stable outcome (called a “core”). In such games, whatever the current state of affairs, there is always some coalition that can profitably deviate from it.
Note: This result is known as the Bondareva–Shapley theorem.
One important part of that set of inherently unstable games is the majority game. A majority game is formally described as a game of N agents in which any subset of more than half of them can capture a fixed reward and split it among themselves, a setup that is eerily similar to corporate governance, politics, and many other situations in human life. That is, if there is a fixed pool of resources and some currently established mechanism for distributing those resources, then it is unavoidable that 51% of the participants can conspire to seize control of the resources: no matter what the current configuration is, there always exists some conspiracy that is profitable for its participants. However, that conspiracy would in turn be vulnerable to potential new conspiracies, possibly including a combination of previous conspirators and victims… and so on.
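The instability can be verified mechanically for a tiny instance. The sketch below (a toy illustration, not part of the formal argument) searches for a “blocking coalition” in a 3-agent majority game: a majority that would strictly profit by seizing the whole prize. It finds one for every allocation, which is exactly the empty-core property:

```python
from itertools import combinations

def blocking_coalition(allocation, prize=1.0):
    """In a majority game, n agents split a prize, and any coalition of
    more than n/2 agents can seize the whole prize for itself. Return a
    coalition that strictly profits by deviating from `allocation`, or
    None if the allocation is stable (i.e. lies in the core)."""
    n = len(allocation)
    majority = n // 2 + 1
    for coalition in combinations(range(n), majority):
        current = sum(allocation[i] for i in coalition)
        if current < prize:  # splitting the full prize beats the status quo
            return coalition
    return None

# With 3 agents, any allocation sums to the prize, so every 2-agent
# majority coalition holds less than the whole prize... except the one
# that already holds everything, and then the OTHER majorities block.
print(blocking_coalition([1/3, 1/3, 1/3]))   # (0, 1)
print(blocking_coalition([0.5, 0.5, 0.0]))   # (0, 2)
print(blocking_coalition([1.0, 0.0, 0.0]))   # (1, 2)
```

No matter how the prize is divided, `blocking_coalition` never returns None: every configuration invites a profitable conspiracy, which then invites the next one.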
This fact, the instability of majority games under cooperative game theory, is arguably severely underrated as a simplified general mathematical model of why there may well be no “end of history” in politics, and no system that proves fully satisfactory; I personally believe it is far more useful than the more famous Arrow’s theorem, for example.
Note: Arrow’s theorem, also known as Arrow’s paradox, states that no ideal election mechanism can simultaneously satisfy a set of basic fairness criteria: Pareto efficiency, non-dictatorship, and independence of irrelevant alternatives.
Note once again that the core dichotomy here is not “the individual versus the group”; for a mechanism designer, “individual versus group” is surprisingly easy to handle. It is “the group versus a broader group” that presents the challenge.
Decentralization as anti-collusion
But there is another, brighter and more actionable conclusion from this line of thinking: if we want to create stable mechanisms, then we know that one important ingredient is finding ways to make collusion, especially large-scale collusion, more difficult to happen and to maintain. In the case of voting, we have the secret ballot, which ensures voters have no way to prove to a third party how they voted, even if they want to (MACI is a project attempting to use cryptography to extend secret-ballot principles to an online environment). This disrupts trust between voters and bribers and sharply limits the unwanted collusions that can happen. In the case of antitrust and other corporate malfeasance, we often rely on whistleblowers, and even reward them, explicitly incentivizing participants in a harmful collusion to defect. And for broader public infrastructure, we have that all-important concept: decentralization.
One naive view of why decentralization is valuable is that it reduces the risk of single points of technical failure. In traditional “enterprise” distributed systems this is often actually true, but in many other cases we know it is not sufficient to explain what is going on. It is instructive to look at blockchains here. A large mining pool publicly showing how it has internally distributed its nodes and network dependencies does nothing to calm community members’ fear of mining centralization. And pictures like the one below, showing 90% of the Bitcoin hashpower at the time sitting on the same conference panel, do quite a bit to scare people:
But why is this picture scary? From a “decentralization as fault tolerance” perspective, large miners being able to talk to each other causes no harm. But if we view “decentralization” as a barrier against harmful collusion, then the picture becomes quite frightening, because it shows that those barriers are not nearly as strong as we thought. Now, in reality, the barriers are far from zero. Those miners can easily coordinate technically, and may well all be in the same WeChat group, but this does not mean that Bitcoin is “in practice little better than a centralized company”.
So what are the remaining barriers to collusion? Some major ones include:
- Moral barriers: In “Liars and Outliers”, Bruce Schneier reminds us that many “security systems” (locks on doors, warning signs reminding people of punishments…) also serve a moral function, reminding potential wrongdoers that they are about to cross a serious line, and that if they want to be good people, they should not. Decentralization arguably serves this function as well.
- Internal negotiation failure: individual companies may start demanding concessions in exchange for participating in a conspiracy, and negotiations may stall outright (see “holdup problems” in economics).
- Counter-coordination: the fact that a system is decentralized makes it easy for participants who are not part of the conspiracy to fork the system, strip out the colluding attackers, and continue running it from there. Barriers for users to join the fork are low, and the intent behind decentralization creates moral pressure in favor of joining the fork.
- Risk of defection: it is much harder for five companies to join together for a widely-considered-bad purpose than for an uncontroversial or benign one. The five companies do not know each other that well, so there is a risk that one of them refuses to participate and blows the whistle quickly, and participants have a hard time judging that risk. Individual employees within the companies may blow the whistle too.
Taken together, these barriers are substantial indeed, often substantial enough to stop potential attacks, even when those same five companies are perfectly capable of quickly coordinating to do something legitimate. Ethereum miners, for example, are perfectly capable of coordinating to raise the gas limit, but that does not mean they could just as easily collude to attack the chain.
The blockchain experience shows that designing protocols as institutionally decentralized architectures is often very valuable, even when it is well known in advance that the bulk of the activity will be dominated by a few companies. This idea is not limited to blockchains; it can be applied in other contexts as well (for example, see its application to antitrust).
Forking as counter-coordination
But we cannot always succeed at preventing harmful collusions from taking place. To handle the cases where a harmful collusion does happen, it is better to make systems more robust against them: more expensive for the colluders, and easier for the system to recover from.
We can achieve this through two core operating principles: (1) supporting counter-coordination, and (2) skin in the game. The idea behind counter-coordination is this: we know we cannot design systems to be passively robust to collusion, in large part because there is an extremely large number of ways to organize a collusion and no passive mechanism can detect them all. But what we can do is respond to collusions actively and strike back.
Note: The phrase “skin in the game” is said to come from horse racing: the owner of a horse has “skin” in the race, and thus a direct stake in its outcome.
In digital systems such as blockchains (this can also be applied to more mainstream systems, e.g. DNS), a major and crucially important form of counter-coordination is forking.
If a system gets taken over by a harmful coalition, the dissidents can come together and create an alternative version of the system, one with (mostly) the same rules, except that it removes the attacking coalition’s power to control the system. Forking is very easy in an open-source software context; the main challenge in creating a successful fork is usually gathering the required legitimacy (a kind of game-theoretic “common knowledge”) to get everyone who disagrees with the main coalition’s direction to follow you.
This is not just theory; it has been accomplished successfully, most notably in the Steem community’s resistance to a hostile takeover attempt, which led to a new blockchain called Hive in which the original hostile actors have no power.
Markets and skin in the game
Another class of collusion-resistance strategies involves the concept of skin in the game. Skin in the game, in this context, basically means any mechanism that holds the individual contributors to a decision individually accountable for their contributions. If a group makes a bad decision, those who approved the decision must suffer more than those who tried to dissent. This avoids the “tragedy of the commons” inherent in voting systems.
Forking is a powerful form of counter-coordination precisely because it introduces skin in the game. In Hive, the community fork of Steem that threw off the hostile takeover attempt, the coins that had been used to vote in favor of the takeover were largely deleted in the new fork. The key individuals who took part in the attack suffered accordingly.
Markets are in general very powerful tools precisely because they maximize skin in the game. Decision markets (prediction markets used to guide decisions; also called futarchy) are an attempt to extend this benefit of markets to organizational decision-making. That said, decision markets can only solve some problems; in particular, they cannot tell us which variables we should be optimizing for in the first place.
Note: Futarchy is a form of government proposed by economist Robin Hanson. Elected officials define measures of welfare, while the public bets on competing policies through speculative markets, thereby selecting the most effective ones. See V. Buterin’s article “On Collusion”.
All of this gives us an interesting perspective on what the people who build social systems are doing. One of the goals of building an effective social system is, to a large extent, determining the structure of coordination: which groups of people, and in which configurations, can come together to advance their group goals, and which groups cannot?
Different coordination structures, different outcomes
Sometimes more coordination is good: it is better when people can work together to collectively solve their problems. Other times more coordination is dangerous: a subset of participants may coordinate to disenfranchise everyone else. And still other times more coordination is necessary for a different reason: to enable the broader community to “strike back” against a collusion attacking the system.
In all three of these cases, different mechanisms can be used to achieve these ends. Of course, it is very difficult to prevent communication outright, and it is very difficult to make coordination perfectly effective; but there are many options in between that can have powerful effects.
Here are several possible coordination-structuring techniques:
- Technologies and norms that protect privacy.
- Technological means that make it difficult to prove how you behaved (secret ballots, MACI, and similar tech).
- Deliberate decentralization, distributing control of a mechanism across a wide group of people who are known not to coordinate well with each other.
- Decentralization in physical space, separating different functions (or different shares of the same function) into different locations (see, for example, Samo Burja on the connection between urban decentralization and political decentralization).
- Decentralization between role-based constituencies, separating different functions (or different shares of the same function) between different types of actors (for example, in a blockchain: “core developers”, “miners”, “coin holders”, “application developers”, “users”).
- Schelling points, allowing large groups of people to quickly coordinate around a single path forward. Complex Schelling points could potentially even be implemented in code (for example, recovery from 51% attacks).
- Speaking a common language (or, alternatively, splitting control between multiple constituencies that speak different languages).
- Using per-person voting instead of per-(coin/share) voting, to greatly increase the number of people who would need to collude to influence a decision.
- Encouraging and relying on defectors to alert the public to an impending collusion.
Note: The concept of a Schelling point was introduced by the American economist Thomas Schelling in “The Strategy of Conflict”. When people know that others are trying to do the same thing as them but cannot communicate, their actions tend to converge on a conspicuous focal point. For example, two people who must meet in New York without communicating beforehand will very likely choose Grand Central Station, which makes it a natural Schelling point.
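The per-person-voting point in the list above can be made concrete with a small sketch; the token distribution below is hypothetical, chosen only to show how concentrated holdings shrink the size of a winning cartel:

```python
# Hypothetical token distribution: a few whales hold most of the supply.
holdings = [40, 25, 10] + [1] * 25   # 28 holders, 100 tokens in total

def min_colluders_coin_vote(holdings, threshold=0.5):
    """Smallest number of holders whose combined tokens exceed the
    threshold share of total supply (greedy: biggest holders first)."""
    total = sum(holdings)
    acc, count = 0, 0
    for h in sorted(holdings, reverse=True):
        acc += h
        count += 1
        if acc > threshold * total:
            return count
    return count

def min_colluders_per_person(holdings):
    """One person, one vote: a strict majority of people is required."""
    return len(holdings) // 2 + 1

print(min_colluders_coin_vote(holdings))   # 2: just the two largest whales
print(min_colluders_per_person(holdings))  # 15: a majority of all 28 holders
```

Under coin voting, a conspiracy of two suffices; under per-person voting, the same attack needs fifteen conspirators, with all the defection and whistleblowing risk that a larger cartel entails.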
None of these strategies is perfect, but they can be used in various contexts with varying degrees of success. In addition, these techniques can and should be combined with mechanism design that tries to make harmful collusions as unprofitable and risky as possible; skin in the game is a very powerful tool here. Which combination works best ultimately depends on your specific use case.