What Causes Disagreements
I find it hard to believe that people's values differ enough to cause serious disagreement. Rather than people genuinely enjoying different things, what people appreciate usually differs only at the margins: most of us agree that things like music and laughter are good, while strongly disagreeing (for reasons other than values!) about which music is good or what is funny. (Which is a shame. Anything that makes you laugh is funny, and you don't get extra points for liking a particular kind of humor.) Case four on your list of seven should, in my opinion, be downplayed. More likely, when an apparent difference in values shows up, it is the result of either missing or false factual beliefs, or of a simple long-standing misunderstanding.

A disagreement stands before you. How do you handle it?

Causes of false disagreements: is the disagreement genuine?

The trivial case is an apparent disagreement that arises over a noisy or low-bandwidth communication channel. Internet chat is especially prone to failing in this way, given its lack of tone of voice, body language, and cues from relative position. People may also appear to disagree by using the same words with different denotations and connotations. Fortunately, this cause of disagreement rarely leads to trouble once it is recognized; the thing at stake is rarely the definitions themselves.
For game-theoretic reasons, agents may also put on the appearance of a disagreement while privately agreeing. Agents could likewise disagree because they are the victims of a man-in-the-middle attack, in which someone intercepts and modifies the messages passing between the two parties. Finally, agents may disagree simply because they are in different contexts. Is the sun yellow, I ask? Yes, you say. No, say the aliens near Eta Carinae.

Causes of disagreements over predictions: evidence

When the disagreement is genuine, what drives it? Most often, a disagreement is about the facts that determine our actions. To deal with this, we must first examine our relationship to the other party and to how they think (consider superrationality); observations reported by others may not deserve the weight we would give those observations had we made them ourselves [1]. Having considered this, we need to merge their evidence with ours in a controlled way. With people, this gets tricky. Rarely do people hand us information we can process in a cleanly Bayesian fashion (à la Aumann's agreement theorem). Instead, we have to merge our explicit evidence with vague, abstract probabilistic intuitions that are half speculation and half partially forgotten memories. If we still disagree after pooling the evidence, what then? The agents could have "started" in different places: with different priors, or where the regress of justification bottoms out for them [2].
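The evidence-pooling step above can be sketched in code. This is a minimal toy model, not anything from the post: the function name, the uniform "trust" discount on reported evidence, and the use of log-odds are all my own assumptions about how one might formalize "merging their evidence with ours in a controlled way."

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

def merge_evidence(prior, my_lr, their_reported_lr, trust=1.0):
    """Combine a prior with our own evidence and a discounted report.

    prior:             P(H) before any evidence.
    my_lr:             likelihood ratio P(E|H)/P(E|~H) of our own observation.
    their_reported_lr: the likelihood ratio the other party reports.
    trust:             0..1 weight on their report (1 = as if we saw it).
    """
    log_odds = logit(prior)
    log_odds += math.log(my_lr)                       # our own evidence
    log_odds += trust * math.log(their_reported_lr)   # discounted testimony
    return sigmoid(log_odds)
```

With `prior=0.5` and both likelihood ratios equal to 4, full trust gives posterior odds of 16 (probability 16/17), while zero trust leaves only our own evidence (probability 0.8) — illustrating how the weight we place on others' reports moves the pooled answer.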
While it is true that a person's "starting point" can be conflated with the evidence they have seen, it is also possible that two people genuinely started in different places.

Resource constraints

Disagreement can also be caused by resource constraints and implementation details. Cognition may depend sensitively on initial conditions. For example, when answering the question "Is it red?", slight fluctuations in lighting conditions can lead people to answer differently in borderline cases. This illustrates both the sensitive dependence on initial conditions and the fact that certain kinds of information (exactly what you have seen) simply cannot be communicated effectively. Our mental processes are also inherently noisy, which produces differing errors in the processing of evidence and increases the value of rehashing an argument several times. We labor under constraints of computational space and time that force us to use approximations. We learn these approximations slowly and in differing situations, and so we may disagree with someone even when we share the evidence relevant to a prediction: the other "evidence" used to develop our approximations can differ and inadvertently leak into our answers. Our approximation methods themselves may differ. Finally, integrating all the available evidence takes time, and people differ in how much time and how many resources they have for doing so.
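The "Is it red?" example can be made concrete with a toy sketch. The thresholds and hue values below are invented for illustration; the point is only that judges calibrated under slightly different conditions agree on clear cases and split on borderline ones.

```python
def make_classifier(threshold):
    """A toy 'is it red?' judge: hues below `threshold` count as red."""
    return lambda hue: hue < threshold

# Two observers whose judgment formed under slightly different lighting
# end up with slightly different thresholds (hypothetical numbers).
alice = make_classifier(30.0)
bob = make_classifier(32.0)

print(alice(10.0), bob(10.0))  # clear case: both say True
print(alice(31.0), bob(31.0))  # borderline case: False vs. True
```

The borderline hue 31.0 falls between the two thresholds, so the two classifiers return different answers there while agreeing everywhere else.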
Systematic errors

Finally, it is also possible that either party simply has a deeply flawed prediction system. They may make systematic errors, and the corrective feedback loops may be broken or missing. They may have perverse feedback loops that push predictions away from the truth. Their prediction methods may vary invalidly with the subject under consideration: their thoughts may flinch away from topics such as death, or illness, or flaws in their favorite theory, while their minds are drawn toward what they will do after winning the lottery. Irrationality and bias; emotion and the failure to abstract. Or worse: how can you resolve a disagreement with someone who disagrees with himself and holds inconsistent views?

Other causes of disagreement: goals

I say dogs are interesting, you say they are boring, and yet we agree on all of our predictions. How is that possible? This kind of disagreement is a disagreement over which utility function to use, and it is directly unresolvable between goal-preserving expected-utility maximizers; indirect means, however, such as trading away annoying dogs for interesting cats, work most of the time. Moreover, we are not utility maximizers (consider circular preferences); perhaps strategies for resolving conflicts of this form are available to us that are not available to utility maximizers?

Other causes of disagreement: epiphenomena

It is possible for agents to agree on all observable predictions and yet disagree on unobservable ones. Predictions without consequences are hardly predictions at all; how could this happen? If the disagreement persists after both sides recognize that there are no observable consequences, look elsewhere for the cause; it is not here. Why argue over something that matters for nothing? The disagreement must be caused by something; don't look here.
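The dogs-for-cats trade can be illustrated with invented numbers. The utility functions below are hypothetical; the point is that a value disagreement can be sidestepped by a swap that leaves both parties better off by their own lights, without either party changing their values.

```python
# Hypothetical utility functions: we agree on every prediction about the
# animals and differ only in how much we value them.
def my_utility(dogs, cats):
    return 3 * dogs + 1 * cats   # I find dogs interesting

def your_utility(dogs, cats):
    return 1 * dogs + 3 * cats   # you find cats interesting

# I start with 2 cats, you start with 2 dogs; then we swap.
print(my_utility(0, 2), your_utility(2, 0))   # before the trade: 2 2
print(my_utility(2, 0), your_utility(0, 2))   # after the trade:  6 6
```

Both utilities rise from 2 to 6: the disagreement over values is untouched, but the indirect means mentioned above resolves the conflict in practice.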
Using this taxonomy

I have tried to list the sections above in the order in which you should check for each type of cause if you were using them as a decision tree (ease of checking and fixing, amenability to correction, probability of occurrence). The taxonomy is symmetric between the opposing parties, and many of the sections naturally lend themselves to looping: merge evidence piece by piece, refine computations iteration by iteration, and so on.
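One way to render the decision-tree idea as a literal checklist is sketched below. The field names and predicates are entirely invented for illustration; the only thing taken from the post is the ordering of the checks.

```python
# Invented field names; each entry pairs a cause with a cheap test,
# walked in roughly the order the sections above suggest.
CHECKS = [
    ("noisy channel",         lambda d: d.get("channel_noisy", False)),
    ("differing definitions", lambda d: d.get("definitions_differ", False)),
    ("strategic posturing",   lambda d: d.get("strategic", False)),
    ("different contexts",    lambda d: d.get("contexts_differ", False)),
    ("unmerged evidence",     lambda d: not d.get("evidence_merged", True)),
    ("different priors",      lambda d: d.get("priors_differ", False)),
    ("resource constraints",  lambda d: d.get("resources_differ", False)),
    ("systematic errors",     lambda d: d.get("flawed_predictor", False)),
    ("conflicting goals",     lambda d: d.get("goals_differ", False)),
]

def diagnose(dispute):
    """Return the first applicable cause, or a cue to recurse."""
    for cause, applies in CHECKS:
        if applies(dispute):
            return cause
    return "no cause found: recurse on the meta-disagreement"

print(diagnose({"definitions_differ": True}))  # -> differing definitions
print(diagnose({"goals_differ": True}))        # -> conflicting goals
```

Returning the first matching cause mirrors the "check cheap causes first" ordering; the fall-through return mirrors the recursive application described next.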
This taxonomy can also be applied recursively, to meta-disagreements and to disagreements uncovered while analyzing the original one. What are the termination conditions for the analysis of a disagreement? They come in five forms: full agreement, agreement that is good enough, judging the disagreement unresolvable, recognizing a conflict of goals, and dissolving the question. Being a third party to a disagreement changes the analysis only in that you no longer perform the symmetric self-analysis, instead viewing the disagreement with the extra distance that standing outside it brings.

Many thanks to Eliezer Yudkowsky, Robin Hanson, and the LessWrong community for lots of exciting material. (ps: This is my first post, and I would appreciate any comments: what I did well, what I did badly, and what I could do to improve.)

Links:
1. lesswrong.com/lw/z/information_cascades/
2. lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/
A better heuristic may be to judge whether your confidence in your position is above or below your average level of confidence across the issues on which you find yourself in disagreement. You can expect less bias on the question of whether you are more confident than usual. Then, if everyone adopted the policy of believing themselves when they are unusually confident, and believing the other party when they are less confident than usual, average accuracy would increase.
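This claim can be checked in a toy Monte Carlo simulation. Everything about the model is assumed for illustration: each party's confidence is drawn uniformly from [0.5, 1.0], and each party's answer is correct with probability equal to their confidence.

```python
import random

def simulate(trials=100_000, seed=0):
    """Compare 'always believe yourself' against the confidence heuristic
    under a toy model where confidence ~ Uniform(0.5, 1.0) and each answer
    is correct with probability equal to its holder's confidence."""
    rng = random.Random(seed)
    usual_conf = 0.75  # each party's long-run average confidence
    self_right = 0
    heuristic_right = 0
    for _ in range(trials):
        my_conf = rng.uniform(0.5, 1.0)
        your_conf = rng.uniform(0.5, 1.0)
        i_am_right = rng.random() < my_conf
        you_are_right = rng.random() < your_conf
        self_right += i_am_right              # policy 1: always believe yourself
        if my_conf > usual_conf:              # policy 2: the heuristic
            heuristic_right += i_am_right     # unusually confident: trust yourself
        else:
            heuristic_right += you_are_right  # otherwise: defer to the other party
    return self_right / trials, heuristic_right / trials
```

Under these assumptions the always-believe-yourself policy is right about 75% of the time, while the heuristic is right about 81% of the time (analytically, 0.875 on the high-confidence half and 0.75 when deferring), supporting the claim that average accuracy increases.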