• 3 Posts
  • 31 Comments
Joined 8 months ago
Cake day: January 26th, 2024

  • Reasonable. I wasn’t trying to jump down your throat about it. I was a little annoyed at the comments positing some fantasy scenario where the bot is useful but people hate it for irrational reasons. Yours was a reasonable question, though, particularly because, for at least one account, it looks like exactly what you described is happening.



  • They have not. I just did some analysis of it, and there is one person whose account has downvoted almost every comment the bot has left. They have around a thousand other votes, so it’s unlikely to be a single-issue votebot account, but they also have no posts or comments, which is suspect. It seems plausible that something mechanical is going on, which might be concerning. On the other hand, it’s only one person. There is one other account whose volume of downvotes against the bot is also suspiciously high.

    Aside from those two accounts, it all looks like real downvotes. There are accounts which have given hundreds of downvotes to the bot, but they’re all recognizable as highly active real accounts, so it makes sense that they would give mass downvotes to the bot.
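
    For the curious, the shape of that check is roughly the following. This is a simplified sketch only: it assumes vote rows shaped like Lemmy’s comment_like table (person_id, comment_id, score) have already been pulled from the database, and every name and threshold here is illustrative rather than the actual analysis code.

    ```python
    from collections import Counter

    def flag_suspicious_downvoters(votes, bot_comment_ids, activity_by_person,
                                   coverage_threshold=0.9):
        """Flag accounts that downvoted nearly every one of the bot's comments
        while having no posts or comments of their own."""
        downvotes = Counter()
        for person_id, comment_id, score in votes:
            if score < 0 and comment_id in bot_comment_ids:
                downvotes[person_id] += 1

        flagged = []
        for person_id, count in downvotes.items():
            coverage = count / len(bot_comment_ids)
            posts, comments = activity_by_person.get(person_id, (0, 0))
            # "Downvoted almost everything, posted nothing" is the pattern
            # described above.
            if coverage >= coverage_threshold and posts == 0 and comments == 0:
                flagged.append((person_id, coverage))
        return flagged
    ```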

    People just don’t like the bot. Have you considered listening to the pretty extensive explanations they’ve given in this comments section as to why?


  • I’m saying that the bot is incorrect. Look up any pro-Palestinian or pro-Arab source on it, and you’ll find a pretty bald-faced statement that it is factually suspect because its viewpoint is anti-Israel. Then look up the New York Times, which regularly reports factually untrue things, including one that caused a major journalistic scandal near the beginning of the war in Gaza, and check its factual rating.

    Every report of bias is from somebody’s point of view. That part I have no issue with. Pretending that a source is or isn’t factual depending on whether it matches your particular bias is something different entirely.



  • It also has links to ground.news baked into it, despite that site being pretty useless from what I can tell. I get strong sponsorship vibes.

    It all just suddenly clicked into place for me.

    I think there’s a strong possibility that you’re right. It would explain all the tortured explanations for why the bot is necessary, coupled with the absolute determination to keep it regardless of how much negative feedback it’s getting. Looking at it as a little ad included in every comments section makes the whole thing make sense in a way that, taken at face value, it doesn’t.


  • Most people don’t want the bot to be there, because they don’t agree with its opinion about what is “biased.” It claims factually solid sources are non-factual if they don’t agree with the author’s biases, and it overlooks significant editing of the truth in sources that agree with the author’s biases.

    In addition, one level up the meta, opposition to the bot has become a fashionable way to rebel against the moderation, which is always a crowd-pleaser. The politics moderators keep condescendingly explaining that they’re just looking out for the best interests of the community, that the bot is obviously a good thing, and that the majority of the community that doesn’t want it is just getting its pretty little head confused. That instigates a lot of people to smash the downvote button reflexively whenever they see its posts.




  • I would suggest that what is to you “correcting misinformation” can easily be received as just being cantankerous or offensive.

    If you accept that the other person has a choice whether or not to agree with what you are saying, and show respect both for their ability to make up their own mind and for the possibility that you might be the one who’s wrong, I think you will be more successful at correcting the misinformation. As it is, I think you’re gathering a lot of downvotes because you’re airing deliberately combative opinions in places they aren’t welcome, and often not much more than that.

    I think a better solution would be to find a way to present your opinions that still preserves the health of the community, as you say, and stay, rather than to either hold on to your current way or else go. I didn’t read your entire profile, just the parts of it that the bot took issue with, but even in those, I agreed with your unpopular opinions a lot of the time. I do think the bot has a point, though, that you’re creating your own unwelcome reception by the way you’re presenting them.


  • Sure. Here are some offending comments that it picked out:

    They’re unpopular in a way that motivates a ban decision, with the last two being severely unpopular.

    I think I agree with your first two comments, so it irritates me a lot that they’re motivating a ban. That’s exactly the kind of silencing of an unpopular viewpoint that I don’t want it to do. Your last comment is different. I’ll just say that a lot of what I want to do is look at the type of discussion that particular comments cause, and in this case that last comment definitely caused a lot of yelling and not a lot of evidence-based reasoned discussion from either side.

    The reason the determination changed is that I retuned the bot so that it’s much easier to get banned if comments like those above are all, or most, of what you post. And that does look like it applies to you. It’s not that any one comment is a deal-breaker; it’s that most of what you post is like the above, so you come across primarily as a rabble-rouser, and only occasionally as a political participant calmly speaking your mind.
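
    Roughly, the retuned rule amounts to something like the sketch below. It’s simplified on purpose: the real inputs and thresholds are different (and deliberately not public), and “judgement” here just stands in for whatever per-comment score the bot assigns.

    ```python
    def should_ban(judgements, bad_fraction=0.6, min_history=10):
        """Ban when negatively-judged comments are most of what an account
        posts, rather than because of any single deal-breaker comment."""
        if len(judgements) < min_history:
            return False  # not enough history to judge fairly
        bad = sum(1 for score in judgements if score < 0)
        return bad / len(judgements) >= bad_fraction
    ```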

    I’m not sure how to feel about it. When I looked over the whole history, I did see quite a lot of controversy, which usually isn’t good. But it’s hard for me to say that I agree with the bot’s determination in this case, especially because a good bit of what you say, I agree with.



  • I think I see it the opposite way. There’s a population that posts normal stuff and sometimes crosses a line into inflammatory stuff. And there’s a population that has no interest in being normal or civil in their conversation, which the moderators can sometimes keep in line to some degree, and which sometimes gets removed when they can’t.

    The theory behind this moderation is that it’s better to leave the first population alone but outright remove the whole second population, while still giving them the option of coming back if they’re willing to change how they interact on a longer-term timescale. My guess is that this beats the alternative of removing comments every now and then and otherwise not intervening unless certain lines are crossed, because under that approach they can keep making postings the community doesn’t want while skirting the moderators’ lines of acceptable offensiveness.

    Whether that theory is accurate remains to be seen, of course.



  • I agree. As soon as I started talking to people about it, it was blatantly obvious that no one would trust it if I was trying to keep how it worked a secret. I would have loved to inhabit the future where everyone assumed it was an LLM and spent time trying to trick the nonexistent AI, but it’s not to be.

    I agree with you about the bad state of the political discourse here. That’s why I want to do this. It looks really painful to take part in, and I thought for a long time about what could be a way to make it better. This may or may not work, but it’s what I could come up with.

    I do think there is a significant advantage to the bot being totally outside of human judgement, because that means it can be a lot more aggressive with moderation than a human could be: it’s not personal. The solution I want to try for the muck you are talking about is setting a high bar, but it’s absurd to have a human go through comments sorting them into “high enough quality” and “not positive enough, engage better,” because that will always be based on personal emotion and judgement. If it’s a bot, then the bot can be a demanding jerk, and it’s okay.

    I think a lot of the intervention element you’re talking about can come from good transparency and giving people guidance and insight into how the bot works. The bans aren’t permanent. If someone wants to engage in a bot-protected community, they can, if they’re amenable to changing the way they are posting so that the bot likes them again. Which also means being real with people about what the bot is getting wrong when it inevitably does that, of course.


  • I really did worry about this a lot. I wasn’t kidding that your user is a perfect test case. A human moderator could look at one of those comments and say you’re just stirring up trouble, but in my opinion it’s a valid viewpoint that someone should still be allowed to express, even if I don’t agree with it.

    By the same token, if that’s all someone wants to post, then it’s a way different story and they should probably be banned. I very much like the property that you can effectively earn the right to say what you want by posting productive things. So, if you want to make a troll account, you have to post a bunch of productive content to shield your disruptive content from moderation… at which point the result is a bunch of productive content and a handful of disruptive comments, which is probably a net victory for the community anyway.
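
    In sketch form, that property looks something like the following. The weights and thresholds are made up for illustration; the real scoring is different.

    ```python
    def net_rank(judgements, negative_weight=2.0):
        """Sum per-comment judgements, counting negative ones more heavily."""
        return sum(s * negative_weight if s < 0 else s for s in judgements)

    def allowed_to_post(judgements, ban_threshold=-5.0):
        # Even with negatives weighted double, enough productive comments
        # keep an account above the line -- the "shield" described above.
        return net_rank(judgements) > ban_threshold
    ```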


  • Yeah, the 99.5% of users who are allowed to post are really going to produce a weirdly artificial monoculture without the vital counterweight of the other 0.5%.

    In seriousness, I did worry about this. Your user is, as a matter of fact, a great test case for deciding whether it’s banning people based on unpopular opinions alone. The bot doesn’t have a problem with you, despite you posting radically unpopular opinions that it judges negatively (one, two), because you participate in discussion other than that and have enough “positive rank” to outweigh saying some things that aren’t popular.

    You’re not wrong to worry about this; I worried about it too. Part of why I want to watch it, and why I want people to speak up if they think its decisions are unfair, is that I made a hard, concerted effort to distinguish between banning real trolls and banning people who are just speaking their mind, and to do the first without doing the second.




  • I don’t want to go into any detail on how it works. Your message did inspire me, though, to offer to explain and demonstrate it for one of the admins so there isn’t this air of secrecy. The point is that I don’t want the details to be public and make it easier to develop ways around it, not that I’m the only one who is allowed to know what it is doing.

    I’ll say that it draws all its data from the live database of a normal instance, so it’s not fetching or storing any data other than what every other Lemmy instance does anyway. It doesn’t even keep its own data aside from a little stored scratch pad of its judgements, and it doesn’t feed comment data to any public APIs in a way that would give users’ comments over to be used as training data by God knows who.
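
    In sketch form, the data flow is something like the following. This is illustrative only: the table and column names loosely follow Lemmy’s schema and may not match any particular version, and the connection details are placeholders.

    ```python
    import sqlite3

    import psycopg2  # assumes the bot runs next to the instance's own database

    def refresh_scratch_pad():
        pg = psycopg2.connect("dbname=lemmy")           # the instance's live DB
        scratch = sqlite3.connect("judgements.sqlite")  # the bot's only storage
        scratch.execute(
            "CREATE TABLE IF NOT EXISTS judgement "
            "(person_id INTEGER PRIMARY KEY, score REAL)"
        )
        with pg.cursor() as cur:
            # Nothing here that every Lemmy instance doesn't already store:
            # per-comment votes, joined to each comment's author.
            cur.execute(
                "SELECT c.creator_id, SUM(cl.score) "
                "FROM comment_like cl JOIN comment c ON c.id = cl.comment_id "
                "GROUP BY c.creator_id"
            )
            for person_id, total in cur.fetchall():
                scratch.execute(
                    "INSERT INTO judgement (person_id, score) VALUES (?, ?) "
                    "ON CONFLICT(person_id) DO UPDATE SET score = excluded.score",
                    (person_id, total),
                )
        scratch.commit()
        scratch.close()
        pg.close()
    ```

    No outbound network calls and no extra data collection: it just reads from the database the instance already keeps, plus the little scratch pad of judgements.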


  • Other things that have occurred to me in the meantime:

    1. I’m fine with explaining how it works to one of the slrpnk admins in confidence. We can get on Matrix, I can show the code with some explanation, and depending on how it goes I might even be fine with giving access to the same introspection tools I use, to examine in detail going forward why it made a particular decision and whether it’s on the right track. The point is not that I’m the only one who’s allowed to understand it, just that I don’t want it to become common knowledge.
    2. I’m not excited to be a “full time” moderator, for reasons of time investment and responsibility level. Just like with !inperson@slrpnk.net, I want to be able to create this community because I think it is important, not necessarily to “run it,” so to speak. My ideal long-run trajectory is that it becomes a tool people can use to automate moderation of their own communities, if it proves useful, instead of just being used by me to run my own little empire. I just happen to think that this type of bad-actor-resistant political community would be a great thing on its own, as well as a good test of this automated approach to moderating communities, political and otherwise.

  • Perfectly reasonable. It’s not feeding any users’ comments into any public LLM API like OpenAI’s that might use them for training the model in the future. As a matter of fact, it’s not communicating with any API or web service at all; it’s just self-contained on the machine that runs it.

    As far as transparency, I completely get it. I would hope that the offer to point to specific reasons for any user who wants to ask why they can’t post will help alleviate that, but it won’t make it go away completely. Especially because, as I said, I’m expecting it to get its decisions wrong some small percentage of the time. I just know there’s an arms race between moderation tooling and people trying to get around it, and I don’t want to give the bad actors a leg up in that competition, even though there are very valid reasons for openness in terms of giving people cause to trust that the system is honest.