What are we doing here? Are we making up a new rule to try and get rid of someone who is unpopular?
No, I've raised this because I've noticed a persistent pattern of behaviour from someone, and realised that we do not have any way of policing disinformation.
I recognise the inherent difficulties of this task, which is why I didn't propose a solution right away. But if I were to try constructing a rule for this, I would say there should not be a blanket rule on disinformation, but one that applies if there is either 1) specific intent to target a group of people based on generally recognised protected characteristics (such as race, religion, gender, etc.); or 2) potential for harm resulting from the disinformation.

Something that might belong to the first category would be, to use an example from real-life politics that we've recently seen, the disinformation about Haitian immigrants in the US killing and eating pets. Something that belongs in the second category would be, for example, the classic "vaccines cause autism" conspiracy theories - the potential for harm being that people exposed to the disinformation would choose not to vaccinate, thus weakening herd immunity and making deadly disease outbreaks more likely.

The way the information is presented should also be considered when determining whether it falls afoul of the rule. For example, if someone says "I heard somewhere that Haitian immigrants are eating pets, is this true?", I would err on the side of leniency, as it's difficult to establish a specific intent to target Haitians as opposed to simply seeking clarification. But if someone says "Haitian immigrants are eating other people's pets, and Democrats still think letting these people in the country is a good idea", then I would say the intent is clear.
As for the standard of establishing whether something is actually disinformation, as opposed to merely biased opinion or embellishment, I would say we can limit ourselves to cases where there is a clear preponderance of evidence that the claims being presented are false. The two examples I've cited above both satisfy this standard, I believe.
I will also give an example of the inverse - something that I believe doesn't quite satisfy all of the guidelines I set out above. 9/11 conspiracy theories generally do not target a specific group with protected characteristics, instead usually alleging US government involvement; neither do these conspiracy theories carry much potential for harm, as most people are unlikely to act on them even if they believe them. They would likely meet the preponderance-of-evidence standard, as all available evidence points to the commonly accepted explanations and very little supports any of the conspiracy theories - but failing the first two criteria, they would not fall under the rule.
Obviously there are many disinformation topics out there and many different ways to spread disinformation, and most of them are not as clear-cut as the examples I have provided here. But moderation inherently requires a degree of human judgment, and I hope that with sufficient guidelines in place, moderators will be able to exercise that judgment wisely. I hope this post clarifies my position on the issue and provides some starting points for a workable solution.