Social Firewalls
I'm interested in how people can cooperate despite large disagreements. Part of this is because I believe such cooperation may be necessary for tackling issues in AI safety (e.g. some of the challenges I wrote about regarding regulatory approaches), but I also think that cooperating across large divides is important in other areas as well, like managing increasing political polarization, and even in interpersonal contexts. Disagreements create a lot of problems, and when people feel very strongly, trying to find some type of compromise position or to change minds can be a difficult proposition. In general, I think it would be good to have more methods of working together without needing to swim against the tide by attempting to get people to agree or compromise. This post is about an idea which I find helpful for thinking about cooperation under strong disagreements, which I'll call the social firewall.
What is a social firewall?
A social firewall is a norm that restricts how one entity interacts with other entities. Things like conflict-of-interest rules and policies are a type of formal social firewall, but social firewalls can also be informal. For example, let's say you're a company that has a "privacy-friendly" brand; maybe you have all sorts of social media posts about how you "don't sell customer data" and stuff like that. Such a brand can serve as an informal social firewall. You're signalling to your customers that there are things other companies do that you aren't going to do.
Politics also has social firewalls. In fact, I'd argue that political parties partially serve as a social firewall. Political parties are important not just because they define who is on your team, but also because they define who is not on your team, and place implicit social restrictions on working with people outside the party. Take the Hastert rule. Per Wikipedia[1]:
The Hastert Rule, also known as the "majority of the majority" rule, is an informal governing principle used in the United States by Republican Speakers of the House of Representatives since the mid-1990s to maintain their speakerships and limit the power of the minority party to bring bills up for a vote on the floor of the House. Under the doctrine, the Speaker will not allow a floor vote on a bill unless a majority of the majority party supports the bill.
In theory, a bill could win support in the House with a lot of votes from the minority party and only a few from the majority. Knowing such a situation might occur, a Speaker[2] might endorse the Hastert rule to address fears within the majority party that the speaker would allow such bills to come up for a vote. The rule is a way of operationalizing and enforcing social firewalls around a political party, by limiting party members' ability to cooperate with the other party to pass bills that aren't supported by the majority.
Social firewalls and cooperation
Now, so far, I really haven't distinguished the idea of social firewalls from norms in general. Why am I trying to invent some new term for something that is already discussed extensively under an existing name? It's true that social firewalls are just a type of norm, but the reason I want to call them out specifically is that I believe they create a special dynamic when it comes to cooperation.
Take the Hastert rule. It's literally a limitation on when members of a party can cooperate with members of another party. So, if I'm saying it's used to enforce a social firewall, I must be saying social firewalls reduce cooperation, right? The thing I find interesting about social firewalls is that even though they restrict cooperation, they are also sometimes necessary for cooperation.
Looking at it from a certain perspective, the Hastert rule seems silly. Its function is to shut down bills that have majority support in the House. Politicians are supposed to represent the people, but this rule looks like a selfish attempt by the majority party to block popular policies just because they don't agree with them. From the viewpoint of the majority party, though, there is a very good reason for such a rule: it addresses the very real concern that the speaker might bring up bills for a vote that the majority doesn't like. For the majority party, avoiding certain bills is more important than cooperating with the minority party. It's critical to understand the issue of perspective here. The Hastert rule blocks things that have majority support, but that's a feature from the perspective of majority party members. The rule is a backstop against bills they oppose.
I think this dynamic is important to keep in mind in the presence of strong disagreements. When people strongly disagree, there are going to be limits to cooperation. We have to accept this and work around it, rather than expecting people to simply go against their own beliefs. A majority party doesn't want to cooperate with a minority party, because they disagree with the minority party and think the things it wants to do are bad! Looking at things that way, of course they don't want to cooperate. Cooperation between people with strong disagreements creates a risk for everyone: the people you disagree with may get to do things you don't want. That framing is inherently anti-cooperative, but it also is just the reality of things. For cooperation to be possible, people need to feel that they're not going to help "the other side" do bad things.
This is how social firewalls can paradoxically enable cooperation. They can create a situation where people who disagree are willing to risk cooperating, by helping to limit and manage that risk. Let's say a member of the majority party hears the speaker is discussing a bill with the minority party leadership. Majority members might attempt to shut down the discussions, or refuse to help their own party, because they're worried the speaker will bring the bill up for a vote. Something like the Hastert rule allows members of the majority party to feel safer engaging in discussions with the minority party, because there are some guardrails around what can happen as a result of those discussions.
The cooperation ecosystem
The world is a big, scary place that contains a lot of people who don't necessarily share our views or have our best interests at heart. This creates a lot of uncertainty. Social firewalls are a way of managing that uncertainty, both with people who we consider allies, and with those we consider adversaries.
People and groups may develop social firewalls for the same reasons that organizations use network firewalls. Within the firewall, you can feel safer having interactions, because the firewall helps ensure that only trusted people can interact through those channels. The firewall helps people within the organization cooperate with each other, because they can place more trust in interactions on the organization's network. The firewall is a barrier to interactions with people outside the organization, but without the firewall, those interactions would probably be viewed as unsafe anyway. The firewall cuts off certain channels with external people and organizations, but the ones that remain carry a higher degree of trust. That trust is essential for those channels to actually function as a method of communication. Without the firewall, people might refuse to engage in external communication altogether.
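To make the analogy a bit more concrete, here's a minimal sketch of the trade-off a network firewall makes. All names here are hypothetical and purely illustrative (not any particular firewall's API): most channels are blocked outright, and whatever arrives over an allowed channel can be handled with a higher baseline of trust.

```python
# Minimal, illustrative sketch of the firewall trade-off described above.
# Channel names and functions are hypothetical, not a real firewall API.

ALLOWED_CHANNELS = {"vpn", "partner-api"}  # the few channels the firewall leaves open


def handle_message(channel: str, message: str) -> str:
    """Block everything outside the allowlist; trust what remains more."""
    if channel not in ALLOWED_CHANNELS:
        # The firewall cuts this interaction off entirely.
        return "dropped"
    # Only vetted channels get this far, so the message can be handled
    # with a higher baseline of trust.
    return f"accepted: {message}"


print(handle_message("random-port", "hello"))   # dropped
print(handle_message("partner-api", "hello"))   # accepted: hello
```

The point of the sketch is just that cutting off most channels is what lets the remaining ones carry more trust, which is the same trade-off I'm claiming social firewalls make.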
In the same way, social firewalls allow people to feel a high level of safety cooperating with allies while still maintaining some level of cooperation with potential adversaries. I find it helpful to keep in mind that cooperation isn't just about a single interaction, but is in fact an entire ecosystem of potentially cooperative interactions. As a result, sometimes people might end up doing things that don't seem to make a lot of sense in the context of an individual interaction, because they are maintaining the web of complicated relationships they find themselves in.
I think a lot of the things politicians do that really annoy people are a result of this, but it applies outside of politics as well. For example, let's say you and a coworker begin to realize you have feelings for each other. You start dating, and everything seems to be going well. Best practice according to the employee handbook is to declare the relationship, in which case one of you will need to move to another team. But why should that be necessary? Sure, in theory there could be problems with coworker relationships, but in your case everything is going well. You and your SO talked it all out, and you're sure you can handle the situation professionally. Your close relationship probably even makes the two of you more productive! Why should you create more problems simply for the sake of following policy?
In any individual case it's entirely possible that following such a policy is more trouble than it's worth. But looking at the situation as an ecosystem suggests why a company might have such a policy. The policy establishes a set of firewalls between all the entities involved: the two members of the couple and the company. Enforcing these firewalls creates a barrier between the company and the two employees, but having the firewalls allows the company to remain on good terms with other entities like its other employees, government agencies, and other companies it does business with. The couple needs to put up some barriers with each other by moving to different teams, but this allows them both to continue working for the company. The key is that the availability of "move to different teams" as a social firewall is what makes the relationship possible in the first place. Without a firewall, either the romantic relationship or the employment relationship might have to end. Placing restrictions on the relationship allows it to exist within the ecosystem. Social firewalls can be good or bad for cooperation overall, but their existence creates the possibility of cooperation where it might otherwise seem unlikely.
Can a more fractured society actually be more cooperative?
When we think of how to cooperate, we implicitly think about "bringing people together": making people see things from each other's point of view, getting them to compromise, perhaps to moderate their own views to be closer to those of others. This is great when you can pull it off, but I'm worried that it can't scale. It's harder to bring people together when you can't rely on personal relationships. When people already have disagreements, and those disagreements are getting more intense and filled with animosity, it's questionable whether you can rely on shared culture or values to bring people together. But what if there is a type of cooperation that relies on pushing people apart in a certain way? That's something I find interesting about this idea of social firewalls.
In my post about regulation of ML systems, a lot of the challenges I identified are about getting people to cooperate. Getting tech companies and regulators to work together, for example, seems super hard. There just seems to be a massive divide and not a lot of interests in common. That makes the "bring people together" approach difficult. But what if there were another approach, one that looks more like "build an ecosystem" where a cooperative relationship between tech and government could exist? Part of that might involve putting up certain walls instead of breaking them down.
Likewise for political polarization: when people talk about addressing it, I think there is sometimes a background assumption that people need to moderate or agree more. But what if there were some way to manage polarization that allowed people to continue to disagree while still making progress on areas of agreement? What if we need to build the firewalls? Possible examples of policies that might achieve this would be expansions of federalism, or reforms that allow more than two parties to be viable, which would let states or parties serve as social firewalls. These particular policies would also have other consequences, so I don't know whether they would actually be good ideas, but they are the kinds of things I could imagine working within the framework I'm proposing. I'd be interested to see what ideas can come out of a framework that focuses on social firewalls as a tool of cooperation.
[1] Internal links and references omitted.
[2] The Wikipedia article contains lots of discussion on views and actions of past speakers, including Dennis Hastert, after whom the rule is named. I'm not taking a position on the extent to which any of these people followed or should have followed this rule. My interest is in the rule as an example of how certain social and political dynamics interact. I think the fact that the rule exists and has been invoked by politicians is sufficient for this to be a reasonable thing to do, but my discussion here isn't intended as commentary on any particular politician or decision.