Wikipedia:Village pump (all)
This is the Village pump (all) page which lists all topics for easy viewing. Go to the village pump to view a list of the Village Pump divisions, or click the edit link above the section you'd like to comment in. To view a list of all recent revisions to this page, click the history link above and follow the on-screen directions.
I want... | Then go to... |
---|---|
...help using or editing Wikipedia | Teahouse (for newer users) or Help desk (for experienced users) |
...to find my way around Wikipedia | Department directory |
...specific facts (e.g. Who was the first pope?) | Reference desk |
...constructive criticism from others for a specific article | Peer review |
...help resolving a specific article edit dispute | Requests for comment |
...to comment on a specific article | Article's talk page |
...to view and discuss other Wikimedia projects | Wikimedia Meta-Wiki |
...to learn about citing Wikipedia in a bibliography | Citing Wikipedia |
...to report sites that copy Wikipedia content | Mirrors and forks |
...to ask questions or make comments | Questions |
Discussions older than 7 days (dated from the last comment made) are moved to a subpage of each section (called (section name)/Archive).
Policy
We need to fix the admin recall process
Right now only "recall" votes count, and those opposing recall don't count for anything, nor do any points made in the discussion. So with 25 quick group-think / mob thumbs-down votes, even the best admin can get booted. And the best (= the most active) are the ones most likely to get booted. An admin that does near zero will get zero votes to recall. And with a single regular RFA currently the only way back in (which, as we've seen, very few want to go through), "booted" is "booted". The fix would be to have a discussion period prior to voting, with both "recall" and "don't recall" choices, and then say that the recall has occurred (thus requiring RFA) if over 50% or 60% of those voting said "recall".
Sincerely, North8000 (talk) 20:40, 19 November 2024 (UTC)
- @North8000 Please see Wikipedia:Administrator recall/Reworkshop, where editors are already discussing potential changes. Sam Walton (talk) 20:43, 19 November 2024 (UTC)
- Thanks. I looked for something like that but I guess I didn't look hard enough. I hope others look harder than me. :-) North8000 (talk) 21:58, 19 November 2024 (UTC)
- I don't think you understand how recall works. An admin is only desysopped after the RRFA, not after the 25 signatures, unless they choose to resign on their own. You're asking to hold a vote on whether or not a vote should be held. ~~ Jessintime (talk) 20:55, 19 November 2024 (UTC)
- Yes, I understood that and that is integrated into my comment above. Unless they go through and succeed at an RFA they are gone. North8000 (talk) 21:54, 19 November 2024 (UTC)
- I've never heard of a petition that lets people sign because they don't support it. And I'll add that between the two recall petitions that were enacted to this point, both were preceded by many, many attempts to get the admin to correct course over the years despite egregious misconduct. Thebiguglyalien (talk) 21:03, 19 November 2024 (UTC)
- I'm not talking about any particular cases. Sincerely, North8000 (talk) 21:56, 19 November 2024 (UTC)
- So, the premise of your argument is pure conjecture? Regards, Goldsztajn (talk) 22:05, 19 November 2024 (UTC)
- ???? It was from an analysis of its current structure. North8000 (talk) 14:10, 20 November 2024 (UTC)
- But you've just refused to engage in a discussion with how the structure has actually worked in practice; hence, conjecture. Regards, Goldsztajn (talk) 00:19, 21 November 2024 (UTC)
- The process at the moment does have a certain level of redundancy, with the recall and reconfirmation RFA being separate things. The reconfirmation RFA isn't even a standard RFA, as it has different criteria for success.
- I'm not sure if anything should be done yet, as it's still very early in its adoption. However, if the situation occurs that a petition is successful but the reconfirmation RFA SNOWs, it could indicate that adjustments need to be made so that community time isn't wasted. That's speculative at the moment though. -- LCU ActivelyDisinterested «@» °∆t° 23:53, 19 November 2024 (UTC)
- The recall petition threshold is not the recall discussion - it is just a check to prevent the most frivolous recall discussions from being held. — xaosflux Talk 00:56, 20 November 2024 (UTC)
- The optics of this look altogether terrible from my observation. I don't edit much, but I like reading a lot. Every criticism of the recall process I've seen so far just looks like old established admins thinking they might be next and having anxiety about that.
- The problem of something like this is that the optics are terrible. If anyone who doesn't know you reads that, the conclusion they will draw will likely not be "this recall process is terrible" and more likely go along the lines of "wow this is a lot of admins who don't have the community's trust anymore and want to dodge accountability".
- By being so vocally against any form of community-led accountability, you're strengthening the case for easy recalls and low thresholds, not weakening it.
- Specifically regarding Fastily, I'll make no comment on whether or not he deserves to still be an admin; I don't know him well enough for that and haven't reviewed enough of his contributions. But the arguments of "ANI agreed that no sanctions were appropriate" sound a lot like "our police department has investigated itself and found nothing was wrong". You have to see how this comes across; it's eroding trust in admins on the whole project right now. Magisch talk to me 09:24, 20 November 2024 (UTC)
- Specifically, if RFA is so toxic that nobody wants to do it, that needs to be reformed. But the recent amount of vitriol towards a process that only kickstarts having to prove that you retain community trust has me convinced that there should be automatic mandatory RRFAs for every admin every 2 years or so.
- If, as of today, you don't believe the community would entrust you with admin tools, why do you think you should still have them? The criteria for losing them should not be "has clearly abused them", it should be "wouldn't be trusted with them if asked today". Magisch talk to me 09:33, 20 November 2024 (UTC)
- As an admin actively working to improve the recall process, my goal is to make it as fair as possible to all parties. That means it should not be possible to subject an admin to the process frivolously while equally making it possible to recall administrators who have lost the trust of the community, and it needs to be as non-toxic as possible, because even administrators who are actively abusing their tools are people and nobody deserves 1-2 months of abuse. It's also incorrect to describe ANI as a police department investigating itself - everybody engaging in good faith is welcome to comment there, regardless of whether they are an admin or not. Thryduulf (talk) 11:15, 20 November 2024 (UTC)
- @Thryduulf It's the Administrator's Noticeboard, naturally the vast majority of participants will be either admins or people who are involved in the same work.
- I don't think asking an admin to confirm they still retain the trust of the community (the whole basis of giving out admin tools to begin with) is ever really frivolous. The current process allows that at most once a year. If an admin had to stand for RFA every year, that might be a bit too much long term, but really, if any admin thinks they would not pass RRFA today, why should they retain their tools?
- Also, the sheer optics of it being mostly (from what I've seen) established admins calling this process toxic are terrible. Anyone who doesn't know anything about this process will see this as some kind of thin blue line mentality in the admin corps - and might conclude that it is time to desysop the majority of old admins to dissolve the clique.
- I wouldn't be surprised if we see a bunch of recall petitions for the most vocal critics of this process. Magisch talk to me 11:27, 20 November 2024 (UTC)
- I have no horse in this race, except that I regret not seeing the RFA earlier so I could have voted Support, sorry about that.
- But if your argument is optics, then having a bunch of recall petitions for the people who most vocally expressed a valid opinion on an evolving policy is absolutely awful optics. At best. Gnomingstuff (talk) 01:33, 22 November 2024 (UTC)
- I took the stats from the first RRfA to test this theory:
| Support | Oppose | Total |
---|---|---|---|
Administrators | 48 | 29 | 77 |
Non-admins | 71 | 116 | 187 |
Total | 119 | 145 | 264 |
- Administrators made up 29% of the voters. If being an admin doesn't influence anyone's vote, then we can expect admins to make up roughly 29% of the supporters and 29% of the opposers. But this didn't happen. In the final results, administrators made up 40% of the supporters and 20% of the opposers. We can also look at the individual odds of supporting/opposing depending on user rights. It ended at 45% support, so you'd expect admins to have a 45% chance of supporting and a 55% chance of opposing. But this also didn't happen. If you choose any admin at random, they had a 62% chance of supporting and a 38% chance of opposing (ignoring neutrals). Non-admins were the opposite: they had a 38% chance of supporting and a 62% chance of opposing.
- So our next question should be why it was so much more likely for an admin to support the RRfA relative to a non-admin. The obvious answer is of course as you said: admins have a perverse incentive to support here, especially if they're not-so-great admins who know they probably don't have the trust of the community anymore. Also suggested during the RRfA is the camaraderie that comes from working alongside a fellow admin for so long. I'd be interested in seeing how account age affects likelihood of supporting, but that's not something that can be counted up in a few minutes like admin status. Thebiguglyalien (talk) 17:48, 20 November 2024 (UTC)
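The percentages in the analysis above can be reproduced from the vote table with a short calculation. This is a minimal sketch using only the counts given in the table (48/29 admin support/oppose, 71/116 non-admin, neutrals ignored):

```python
# Vote counts from the RRfA table above (neutrals ignored).
support = {"admin": 48, "non_admin": 71}
oppose = {"admin": 29, "non_admin": 116}

total_voters = sum(support.values()) + sum(oppose.values())           # 264
admin_share_of_voters = (support["admin"] + oppose["admin"]) / total_voters  # ~29%

# Share of each camp made up of admins.
admin_share_of_supporters = support["admin"] / sum(support.values())  # ~40%
admin_share_of_opposers = oppose["admin"] / sum(oppose.values())      # ~20%

# Chance of supporting, conditional on user rights.
p_support_admin = support["admin"] / (support["admin"] + oppose["admin"])                  # ~62%
p_support_non_admin = support["non_admin"] / (support["non_admin"] + oppose["non_admin"])  # ~38%

print(f"{admin_share_of_voters:.0%} {admin_share_of_supporters:.0%} "
      f"{admin_share_of_opposers:.0%} {p_support_admin:.0%} {p_support_non_admin:.0%}")
# → 29% 40% 20% 62% 38%
```

The figures match those quoted in the comment, so the arithmetic checks out against the table.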
- I believe it may be centered on the idea that we all make mistakes, and many of us like to think we'd be given a chance to grow and learn from said mistake, instead of being forced through the RfA process again. But I recognize I may be being overly optimistic on that, and that others may not have the same thoughts on the matter that I do. Many admins I've spoken to would simply choose to give up their tools as opposed to go through an RfA again, something I've also considered despite my relatively smooth RfA. I'm also not sure Graham is the best representation of that. I voted support, recognizing that Graham87 has made mistakes, but also recognizing the significant contributions they've made and their pledge to do better. Bluntly, I did so expecting the vote to fail, and wanting to show some moral support and appreciation for their work. There's certainly a psychological aspect involved in it, but I don't think that, generally speaking, those of us who voted support or have issues with the current process are doing so out of self preservation.
- There are a lot of numbers that could be analyzed, such as the history of those admins who vote at RfA (whether they often vote support or don't vote at all), but it's hard to draw meaningful conclusions from this small a dataset. Hey man im josh (talk) 19:14, 20 November 2024 (UTC)
- On paper, I get that. The thing is, I don't know whether you saw Levivich's comment or bradv's comment, but you'd be hard-pressed to find a less appropriate time to test the "chance to grow" theory than the absolutely deplorable behavior that we saw from Graham for many years with far too many chances to improve. If it were down to me, this should have been a block in 2023 rather than a desysop in 2024. Thebiguglyalien (talk) 19:32, 20 November 2024 (UTC)
- I'm late to the discussion, but I think it's also worth pointing that only 7 of the 25 users who signed Graham87's petition and 2 of the 25 on Fastily's were admins. ~~ Jessintime (talk) 13:16, 23 November 2024 (UTC)
- I would add that there is a potential wrinkle in this analysis. I'm an extended-confirmed user here (and thus would likely be counted as a non-admin), but I am a sysop on Commons so I would have my own perspective on the matter. Abzeronow (talk) 21:06, 22 November 2024 (UTC)
- Well, I'm not an admin and I started this thread. I'm all for having an admin recall process by the community in place. I'm also for a process for course correction by the community in areas where an admin has drifted off course but where the problem is fixable. Administrative Action Review has the potential to become this but that has been stymied by various things. Sincerely, North8000 (talk) 14:24, 20 November 2024 (UTC)
- I think, fundamentally, the problem is that admins have a direct and concrete conflict of interest in this discussion. Of course an admin would be naturally opposed to more mechanisms that might make them lose their permissions, especially since desysops are very rare at the moment.
- I also don't really agree that the current recall process is all that toxic. You could get rid of the discussion section, as the recall is only a petition, not a consensus discussion, but that's about it. Magisch talk to me 18:33, 20 November 2024 (UTC)
Of course an admin would be naturally opposed to more mechanisms that might make them lose their permissions
– I wholeheartedly disagree with this assertion. There are a number of us who fully support a recall process, including quite a few people who have historically been open to recalls. This is an oversimplification of the motives of a large group of experienced editors, many of whom have legitimate and reasonable concerns about the process in its current form. Hey man im josh (talk) 19:15, 20 November 2024 (UTC)
- Substantially all criticism I've seen so far of the process has boiled down to "RFA is abusive and it's unreasonable to make people go through that again". And yet, instead of attempting to change that, the only suggestions seem to be to support older admins' rights to have their permissions continue being grandfathered in. Magisch talk to me 19:27, 20 November 2024 (UTC)
- I'm sorry that that's all you've taken away from the vast amounts of criticism given by people. Perhaps consider focusing on whether the process, in its current state, makes sense instead of focusing on older admins. I'm a relatively new admin and I don't support the current iteration of the process. Hey man im josh (talk) 19:30, 20 November 2024 (UTC)
- I think it's eminently sensible to have adminship not be a lifetime appointment, both because norms change even when people don't, and because I see people in every RFA expressing reluctance over granting lifetime tools. I also think that, assuming RFA isn't a big deal, regular reconfirmations make sense. If RFA is a big deal, then the focus should be on fixing that.
- It seems to me that existing admins being immune to having to suffer RFA again has created a lack of pressure to actually make it into a functional, nontoxic process.
- Take my opinion for what it's worth though. I'm not an admin nor do I foresee myself ever having aspirations to become one. Magisch talk to me 19:43, 20 November 2024 (UTC)
- Attempting to improve RFA is a very hard problem that people have been working on since before you joined Wikipedia, and are still working on. I would also say that "it is unreasonable to make people go through that again" is a mischaracterisation of the views expressed, which are "it is unreasonable to make people go through that again unnecessarily", which is significantly different. Thryduulf (talk) 19:31, 20 November 2024 (UTC)
- I just found out about this discussion, and it looks to me like the same or similar things are being discussed in way too many different places. Anyway, I'm someone who has stated repeatedly and strongly in multiple places that I think the recall process is a disaster, and is beyond repair. And, contra some statements above, here are some other facts about me. I'm not an admin. I opposed Graham's re-RfA. And I played a central role in WP:CDARFC. --Tryptofish (talk) 20:12, 20 November 2024 (UTC)
- I would be against it for a different reason: if we allow both supports and opposes, then the recall petition becomes a mini-RfA with the same amount of pressure as the RRfA itself (especially since, given the identical threshold, the recall's result would be indicative of the RRfA's subsequent result). Since anyone can start the recall petition, it functionally means that anyone can force an admin to re-RfA, which is clearly worse.
On the other hand, having a set number of supports needed provides for a "thresholding" of who can open a RRfA, while not necessarily being as stressful. If anything, I would say the recall should become more petition-like (and thus less stressful for the recalled admin), rather than more RfA-like. Chaotic Enby (talk · contribs) 20:01, 20 November 2024 (UTC)
- The ones most likely to be booted are bad admins who are abusive toward the editor community and who negatively represent themselves as admins. Both of the recalls thus far were just exact examples of that and worked perfectly as designed and needed. The process worked exactly as desired and removed bad admins who deserved to be desysopped. Though I do think the discussion section of the petitions should be more regulated. Discussion should be about the admin's actions and conduct and nothing else. Any extraneous commentary should be removed. SilverserenC 00:23, 21 November 2024 (UTC)
- When I first started editing Wikipedia almost 20 years ago, I was struck by what, to me at least, appeared to be widespread incivility. Among a number of things which have changed for the better IMHO is an all-round expectation that everyone's standards of behaviour should rise (and they have). The admin role breeds a certain "culture" (for lack of a better term) akin to a conservationist's: the role is to "protect" Wikipedia from "harm", and I can certainly see why being an admin could be a deeply frustrating experience. However, what has happened, I think, in the attrition of the admin corps and the turnover in the non-admin corps, is that the generalised culture of "regular" non-admin editors has moved further towards less acceptance of a culture prevalent 10-15 years ago. I think also the rise in editors from non-English-speaking backgrounds and from the Global South has caused complexities for those with limited experience outside the anglosphere. The statistics above on the vote for G87's RRFA show an interesting split between admins and non-admins, and within admins. Non-admins were almost overwhelmingly (close to 2/3) of the view that G87 had been given an exceptionally long period to improve, had not, and no longer held their trust. 5/8 of admins appeared (and comments here also seem to confirm this) split between solidarity for one of their own and displeasure with the recall process; 3/8 of admins were in alignment with the majority of non-admins. FWIW, I'm not trying to point to some grand schism; a 38/62 admin split on these numbers is not that profound - if just 9 admins had changed their vote from support to oppose it would have been a 50/50 split. To reiterate, I'm not suggesting that there is a great gap between admins and non-admins, but there does appear to be some gap when it comes to generalised views around the expected behaviour of admins. Regards, Goldsztajn (talk) 01:01, 21 November 2024 (UTC)
- Maybe the divide is not between admins and non-admins but between newer and longer-serving editors (who are more likely to be admins)? Hawkeye7 (discuss) 01:20, 21 November 2024 (UTC)
- I don't disagree, and in effect I was sort of saying the same thing in terms of the attrition of the admin corps and turnover in the non-admin corps. FWIW, I do think there are some generalised feelings about admins among non-admins; for example, that admins are less likely to face sanction than non-admins. How true that actually is I'm not sure, and the counterpoint would be that a group of people already tested in community trust (i.e. RFA) are less likely to breach that trust. However, comments in the G87 RRFA and the strength of the vote suggest there are (wrongly or rightly) widely felt perceptions of disparity. Regards, Goldsztajn (talk) 01:53, 21 November 2024 (UTC)
- I'm currently compiling the data to get some statistics about voters in Graham's re-RFA. I'm a bit less than halfway through so it might be a couple of days before I can present any results. However among the first 113 support voters the maximum account age (on the day the re-RFA started) was 7919 days (21 years), the minimum was 212 days and the average was 4785 days (13 years). I have no data yet for neutral or oppose voters so cannot say how that compares. Thryduulf (talk) 02:03, 21 November 2024 (UTC)
- Do you have a handy list of all voters for RFA? It should be simple enough to use a WP:QUARRY to find out all details about the voters if someone finds an easy enough scrape of who each user is Soni (talk) 05:51, 21 November 2024 (UTC)
- @Soni: [1]. Levivich (talk) 07:09, 21 November 2024 (UTC)
- Here's the Quarry query editcount/registration date for Supports, Neutrals, Opposes.
- I think about 6 editors were missed by the tool you linked, but it should not change overall patterns much so we can just use this as is. Soni (talk) 07:24, 21 November 2024 (UTC)
- Prepare to not be surprised. Supporters/Opposers:
- Median registration date 2008/2014 <-- Behold, Wikipedia's generational shift
- Average registration date: 2011/2014
- Median edit count: 40,293/17,363
- Average edit count: 76,125/43,683
- Thanks for doing the quarry. Teamwork makes the dream work! Levivich (talk) 05:17, 22 November 2024 (UTC)
- At a quick glance, it seemed like editors with more edits were more likely to support while editors with fewer edits (with one exception) were more likely to oppose. - Enos733 (talk) 07:54, 21 November 2024 (UTC)
- Given a single admin action may involve multiple edits, it's not so surprising the supporters' list possibly reflects a group with higher edit counts. Personally, I'd be more inclined to draw conclusions from length of registration rather than edit count. Regards, Goldsztajn (talk) 09:11, 21 November 2024 (UTC)
- My very, very rapid count: supports 35/117 (30%) less than 10 years old, opposes 67/141 (48%) less than 10 years old. In absolute numbers, 10+ year accounts were 82 supports, 74 opposes - actually quite even. What was crucial was younger accounts. It does confirm my sense of gaps between "older" and "younger" generations in regard to perceptions of tolerable admin behaviour. Regards, Goldsztajn (talk) 09:50, 21 November 2024 (UTC)
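The quick tally above is internally consistent; a minimal sketch using only the counts given in that comment (35 of 117 supports and 67 of 141 opposes from accounts under 10 years old):

```python
# Account-age split from the rapid count above (10-year cutoff).
supports_under_10y, supports_total = 35, 117
opposes_under_10y, opposes_total = 67, 141

print(f"supports <10y: {supports_under_10y / supports_total:.0%}")  # → 30%
print(f"opposes  <10y: {opposes_under_10y / opposes_total:.0%}")    # → 48%

# 10+ year accounts in each camp: should give 82 supports vs 74 opposes.
print(supports_total - supports_under_10y, opposes_total - opposes_under_10y)  # → 82 74
```

Both the percentages and the "82 supports, 74 opposes" absolute figures match the comment.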
We have had two recalls as of now. The people signing the recall were by and large not trolls, vandals, people blocked by that admin, ... but regular editors in good standing and without a grudge. One of these recalls has been supported by the RRFA afterwards, and the other admin decided not to go for an RRFA. There is zero evidence that the process is flawed or leads to results not wanted by the community at large. While minor issues need working out (things like "should it be closed immediately the moment it reaches 25 votes or not"), the basic principles and method have so far not produced any reason to fundamentally "fix" the issue. That the process highlights a gap between parts of the community (see e.g. the Graham RRFA) doesn't mean that the process needs fixing. The process would only need fundamental fixing if we got successful recalls which were then overwhelmingly reversed at RRFA, showing that the recall was frivolous, malicious, way too easy... Not now though. Fram (talk) 09:24, 22 November 2024 (UTC)
- I agree with Fram. There is not any evidence that the recall process is reaching outcomes that are not supported by the Community (I voted Oppose on the Graham RRFA; I don't know how I would have voted on a Fastily RRFA). Small fixes to the process if supported would not be indicative of the process itself being fundamentally flawed. Abzeronow (talk) 21:15, 22 November 2024 (UTC)
- I agree that it just needs fixes.North8000 (talk) 15:24, 23 November 2024 (UTC)
I believe that desysoppings for cause should only happen when there is objective evidence of misconduct. My main concern about the recall process is that it may be wielded against administrators who are willing to take actions that are controversial, yet necessary. Examples of actions that have got administrators hounded include (1) closing contentious and politically charged AFD discussions; (2) blocking a "WP:UNBLOCKABLE" editor who is being disruptive or making personal attacks; (3) stepping up to protect a politically charged article to stop an edit war. None of these actions are administrator misconduct, but in a heated dispute the side that has an admin rule in their disfavor may quickly resort to punishing said administrator by starting a recall petition, and in a dispute involving many editors, getting to 25 may be easy. Even if that petition fails, it is so unpleasant that it may have a chilling effect on admin involvement even when needed. Sjakkalle (Check!) 21:14, 23 November 2024 (UTC)
- In which case, a RRFA might be overwhelmingly in favor of the administrator and thus vindicate them. I would definitely vote in support of an administrator if any of those three were the impetus behind a recall. I also trust our editors, and so far, the recall process has worked as intended. Abzeronow (talk) 21:50, 23 November 2024 (UTC)
- ArbCom have to face re-election. Does that have a chilling effect on the arbitrators? Hawkeye7 (discuss) 21:48, 23 November 2024 (UTC)
- That's a facile argument. Arbitrators are well aware that they are standing for a fixed term period. Black Kite (talk) 21:50, 23 November 2024 (UTC)
- It's driving me up the wall that people keep saying that the process has worked as intended. Come back and tell me that, after you can link to an RRfA for Fastily that resulted in whatever result you define as working as intended. --Tryptofish (talk) 22:01, 23 November 2024 (UTC)
- Choosing not to do an RRfA was their own choice, particularly if Fastily thought it wouldn't be successful. It was also their choice to make no attempt whatsoever to answer the reams of evidence of their negative actions toward the editing community presented in the recall petition. So, yes, Fastily as well was an example of the process working as intended. SilverserenC 22:08, 23 November 2024 (UTC)
- Or perhaps they just thought "well, I've put XX years into this and a load of random people with rationales ranging from reasonable to utterly non-existent have told me I'm not fit to do it, so f*** you". If that's the case, I don't blame them. Black Kite (talk) 22:13, 23 November 2024 (UTC)
- Maybe, maybe not. Probably not though right? Seems kind of silly. PackMecEng (talk) 22:17, 23 November 2024 (UTC)
- I suspect that might be my reaction, to be honest. Black Kite (talk) 22:24, 23 November 2024 (UTC)
- He was going to lose if he didn't apologize, and he didn't want to apologize. That simple. As others have said, that was his choice to make, and I respect it. Levivich (talk) 22:28, 23 November 2024 (UTC)
- Except that he did apologize, although there were differing views of whether that apology was enough. This oversimplification is what's wrong with the way discussions happen in this process. --Tryptofish (talk) 22:34, 23 November 2024 (UTC)
- He woulda had to apologize more, then, including for the stuff that came out during the petition, and any other stuff that may have come out during the RRfA. He woulda had to answer questions about it, make promises, etc., basically go through what Graham went through, and realize that even that (answering questions, making promises) might not be enough (as it wasn't for Graham). It's not at all irrational for someone to choose not go through that. Being an admin isn't worth all that to some (e.g., to me), especially if you might not get it despite your best efforts. Levivich (talk) 22:44, 23 November 2024 (UTC)
- "Someone decided that it just isn't worth it" does not equal "the process worked". --Tryptofish (talk) 22:47, 23 November 2024 (UTC)
- No, those two things are not the same. If you want to know why I think the process worked, it's because it stopped disruption, did it faster than Arbcom, and I think with less drama (though admittedly the third one is purely subjective and speculative). Levivich (talk) 22:56, 23 November 2024 (UTC)
- Um, thanks for sharing? --Tryptofish (talk) 23:06, 23 November 2024 (UTC)
- On the petition page, I conducted a careful analysis of the evidence. Nobody refuted what I said there. --Tryptofish (talk) 22:15, 23 November 2024 (UTC)
- Linking might help though. It doesn't seem to be on Wikipedia talk:Administrator recall/Graham87, Wikipedia talk:Administrator recall/Fastily, or on Wikipedia talk:Administrator recall, so it's a bit hard to know what "the petition page" is. Do you mean your 00:39, 13 November 2024 (UTC) reply to A smart kitten? The one that ended with "Does this rise to the level of requiring, for me, a desysop? I'm leaning towards no." And others leaned towards "yes", it's not as if people couldn't draw different conclusions from your post or could disagree with things you said without actually replying directly to you. You didn't contradict the evidence, you personally didn't find it severe or convincing enough, that's all. That doesn't show that the process needs fixing though, just because enough people disagreed with your opinion and the result wasn't put to the test. Fram (talk) 09:28, 25 November 2024 (UTC)
- Fram, the context of what I said was clearer before there were all those intervening edits, but yes, you correctly identified the post I meant as the one that ended with the words that you quoted. Here's the diff: [2]. From where I'm sitting, your analysis here of how people reacted to what I posted is, well, not convincing enough. There was a lot of discussion about the evidence that I analyzed, back and forth. When the editor (A smart kitten) who originally posted the evidence came back with the additional information that I requested, the discussion was still very active. I provided a very detailed examination, point-by-point, of each individual claim made in that evidence. Yes, it was based upon my opinions, but I drew specific conclusions, and justified those conclusions. And nobody came back and said that they thought anything in my analysis was incorrect, nor did anyone who signed on the basis of that evidence before my comment come back and reaffirm their signature, rejecting my analysis. If you think somebody actually did, you can provide a diff of it, but I can assure you that you won't find one. And that wasn't because the petition discussion had come to a close, because it continued for several more days after I posted that. After a whole lot of back-and-forth about that particular evidence, nobody said that they found errors in anything that I said. But a couple more editors did sign the petition after that, with brief comments saying, in some cases, that they decided to sign after reading that particular evidence.
- So the question, in the light of your comment to me, becomes whether those later signers did so because they carefully read all of the discussion, including my critique, and decided to sign, implicitly having decided that my critique was unconvincing – or whether they signed after only a superficial read and had never really engaged with my critique. I cannot prove that it was the latter, and you cannot prove that it was the former. But given that their signatures came only with brief comments, and nobody found reason to actually mention that they had rejected my critique, I'm pretty skeptical of the former. And that's a problem. The petition process does not, of course, require that anyone had to say explicitly that they disagreed with me, either, but that's a shortcoming of the discussion process. A desysop via ArbCom makes room for careful examination of the facts. The petition did not. This is a half-assed way of driving someone off Wikipedia. And I'm arguing for a more deliberative process. --Tryptofish (talk) 18:55, 25 November 2024 (UTC)
- I have to say I don’t get the recall process either. I support admin accountability but just having an arbitrary number of “support” votes, no “oppose” votes, and I guess a time limit instead of consensus forming seems… extremely weird and out of step with how virtually everything else is done on Enwiki. Dronebogus (talk) 10:56, 24 November 2024 (UTC)
- The intended point of the recall petition is not to find consensus or to determine whether the admin has lost the trust of the community, has abused the tools or anything like that. The intended point of the petition is only to prove that a re-RFA is not frivolous. The re-RFA is where consensus is formed from support and oppose, analysis of evidence, etc. Think of it in judicial terms: the petition is at the pre-trial stage and simply aims to answer the question "are there 25 people who think there is a case to answer?" If the answer is no, then it ends there. If the answer is yes, then you can plead innocent or guilty. If you plead guilty you take the sentence (desysopping) and move on. If you plead innocent there is a trial and the jury finds you either innocent or guilty by majority verdict. This is an imperfect analogy of course, but it hopefully helps explain the concept.
- It didn't work like that in either of the two that we've had, but that's a fault with the implementation not with the concept. Thryduulf (talk) 12:57, 24 November 2024 (UTC)
- The problem is, the concept itself makes no sense. Nearly everything on Wikipedia is decided one of three ways: consensus democracy that must be approved/vetoed by an admin (most non-trivial issues); WP:BOLD editing, informal discussion, or admin fiat (trivial issues); or arbitration (extreme fringe cases). This resembles none of those. It’s like arbitration, only everyone can be an arb, and instead of voting yay or nay to take the case you collect signatures to see if there’s general support for a case? Dronebogus (talk) 13:11, 24 November 2024 (UTC)
- The request stage of arbitration is the closest analogy, but it is indeed a process not used anywhere else on Wikipedia. That doesn't mean it doesn't make sense. Its sole purpose is intended to be a check against frivolous requests, so that an admin doesn't have to go through re-RFA just because they pissed off a single editor once by making an objectively correct decision. The actual decision is intended to be made by consensus democracy at the re-RFA. Thryduulf (talk) 13:33, 24 November 2024 (UTC)
- I think a limited vote based on a formula like “after 7 days a minimum of 2/3rds of people must support for re-RFA” would be less opaque than trying to start a Wiki-Minyan? Dronebogus (talk) 09:26, 25 November 2024 (UTC)
- That sounds like skipping the petition, and going right to the RRFA, or running two successive RRFA's. I have not been involved in any of this but it is not really hard to understand why there is the two-step process of: 1) calling the question, and 2) deciding the issue. Alanscottwalker (talk) 11:52, 25 November 2024 (UTC)
- Honestly I think it should just go straight to RRFA, and if there's enough opposition fast enough it can just be WP:SNOW closed. We don't, for example, ask for 25 signatures to start an AfD discussion in order to weed out frivolous nominations— it's patently obvious when a nomination is garbage in most cases. RRFA is clearly a last resort, and no established, good faith user is likely to abuse this kind of process so egregiously that we need a two-step failsafe. Dronebogus (talk) 12:03, 25 November 2024 (UTC)
- In other words any user should be able to start a binding RRFA on any admin at any time? No, no thank you... – Joe (talk) 12:16, 25 November 2024 (UTC)
- Not any time, there should be a policy that steps must already have been taken and failed, ideally multiple times, similar to ArbCom. And not any user, since the starter should probably be autoconfirmed at the absolute minimum, and probably be required to be in good standing, have X edits, have been on WP X years, and have been active during the last year. If it was unambiguously required that an RRFA follow these rules or be rejected (with filing an improper case being a sanctionable offense) I don't think anyone would realistically start a frivolous case. Dronebogus (talk) 12:33, 25 November 2024 (UTC)
- Well, we also don't require a !vote to create an article but we do for an admin. I also don't think it is likely that 'any experienced user' has experience in making an RRFA -- Alanscottwalker (talk) 12:34, 25 November 2024 (UTC)
- An admin is essentially just voted into office; they should be voted out of office in an identical way. There’s no need for some kind of novel additional process on top of that. That’s all I’m saying. Dronebogus (talk) 12:55, 25 November 2024 (UTC)
- I think the basic complaint here is that the 25-vote threshold is too easy to meet, and therefore it is unfair to require an affirmative consensus for the admin to retain the tools. I think the 25-vote threshold is fine for weeding out frivolous nominations, but correspondingly I think we should make it harder to remove adminship, i.e. make 50-60% the discretionary range for removing adminship. This would make it in line with most of our other processes, where a slight supermajority is required to make changes, and no consensus defaults to the status quo. Whereas under the current recall system, 25 votes with no opportunity to object are enough to make removal of adminship the status quo, which seems a bit harsh. -- King of ♥ ♦ ♣ ♠ 19:53, 25 November 2024 (UTC)
- I think the 25-vote threshold, because it’s so easy to meet, is essentially pointless because it will only weed out extreme outlier cases that I don’t believe will ever happen enough to be a serious concern. We should just have a supermajority vote requirement, and if we must have a petition it should be a lot higher than 25. Dronebogus (talk) 16:06, 27 November 2024 (UTC)
- We don't have evidence the 25-vote threshold is easy to meet. Of the two recalls, one only hit 25 due to a bad block during the petition period. CMD (talk) 16:14, 27 November 2024 (UTC)
- One more reason I don’t like this: it’s extremely important, but we’re using it to prototype this weird system not used anywhere else on Enwiki and possibly Wikimedia (if you have examples of off-wiki precedent please share them). Dronebogus (talk) 16:18, 27 November 2024 (UTC)
- Have to try new things at some point. But CMD is right, from all the evidence we do have, it looks about right. Whereas there is zero evidence that a higher number is required or helpful. PackMecEng (talk) 17:09, 27 November 2024 (UTC)
- It's usually called Approval voting when it's used, though that might not be precisely the right name. It's used all over the Wikimedia movement. At least until recently, both grant requests and the (technical) community wishlist used petition-like voting processes that encouraged support and disregarded opposition votes. That is, if there were 25 people supporting something and you showed up to say "* Oppose because WMF Legal will have a heart attack if you do this", then the request might be rejected because of the information you provided, and your comment might change the minds of potential/future supporters, but it would never be counted as a vote of 25 to 1. It's still counted as a list of 25 supporters. WhatamIdoing (talk) 18:53, 27 November 2024 (UTC)
- The original Phase I Proposal was directly written as adapting dewiki's recall policies into enwiki. I believe the Italian Wikipedia also has a threshold-to-RRFA style process. And I think Spanish too? I might be getting some projects confused. But it's directly used in recall in other projects - that's how it was recommended here (and then adapted after). Soni (talk) 18:58, 27 November 2024 (UTC)
- Arbitration election commissioners are chosen by collecting solely supporting statements. Once upon a time, the arbitration election RFCs also consisted of proposals that commenters approved, without any option to oppose. Requests for comments on user conduct also used a format where support for expressed viewpoints was collected, without opposing statements. edited 18:32, 4 December 2024 (UTC) to add another example isaacl (talk) 19:50, 27 November 2024 (UTC)
- @Dronebogus This system was modeled after Adminwiederwahl on the German Wikipedia, which has been in place since 2009 or so. --Ahecht (TALK PAGE) 07:34, 2 December 2024 (UTC)
- Interesting. Dronebogus (talk) 13:14, 2 December 2024 (UTC)
- That being said, different wikis have radically different governance structures. For example, Spanish Wikipedia is apparently much more democratic compared to Enwiki (in the literal sense, not just in the sense of “egalitarian” or “un-tyrannical”). Dronebogus (talk) 03:26, 4 December 2024 (UTC)
- It's worth noting dewiki primarily uses the process to desysop inactive admins and has a much longer petition period. Sincerely, Dilettante 18:12, 4 December 2024 (UTC)
Comparing with de.Wiki may be apples and oranges. Disclaimer: This is what I have come up with, but a regular de.Wiki user or admin may well be able to improve or correct my findings. First there is the huge difference in scale: de.Wiki currently runs with only 175 admins. There are nearly 400 former admins (that's quite a high turnover, but recall replaced the earlier term-limit system for admins which required automatic re-election). There is also the question of culture: en.Wiki is a lingua franca project contributed to by users from many different backgrounds and regions, while de.Wiki is largely contributed to from a specific language region that shares a common culture which defines their way of doing things, such as the way their RfCs (Meinungsbild) are structured, voted, and commented on. Since 2009, when the de.Wiki system was rolled out:
- There have been 247 recall cases
- There was a rush of 67 cases in the first year 2009
- Since 2018 there have been 30 cases, an average of 4.29 per year
Breakdown:
- 49 handed their tools in voluntarily after being RECALLED. (zurückgetreten)
- 59 were stripped of their tools following a RECALL case and failed on a rerun (Nicht wiedergewählt)
- 96 were stripped of their tools after the rerun time limit expired (Nach Fristablauf de-administriert/Did not run after being asked to run for re-election)
These figures do not add up because they leave 43 unaccounted for. I think this is because there are several different pages with breakdowns of admin activity. The 43 could be users that passed a recall RfA, or they may have handed their tools in voluntarily on recall, but I can't find a way to know for certain. Kudpung กุดผึ้ง (talk) 23:37, 4 December 2024 (UTC)
- Just in case anyone didn’t get the subtext of my first comment on this: I do think it’s apples and oranges, and that’s why we shouldn’t be using it. Different language editions have such vastly different systems and community cultures they might as well be on other planets half the time. You can’t import stuff between them just because it fills the same niche. Dronebogus (talk) 00:29, 5 December 2024 (UTC)
- I agree that the situations are somewhat different, but it at least means it's not unprecedented. Also, I know what you mean, but I'm still amused by the phrase "en.Wiki is a lingua franca project". --Ahecht (TALK PAGE) 20:19, 10 December 2024 (UTC)
I'm for there being an admin recall process. But we need to recognize that RFA, at its realistic best, is an inherently rough process that few want to go through, and if they don't do so they are gone. At its best it's like standing on a pedestal for a week in the middle of a crowd while people ask questions and make public assessments about you, including about anything that anyone feels they might have done wrong. I just think we need a more careful, thoughtful process before we subject them to "RFA or out". North8000 (talk) 19:36, 11 December 2024 (UTC)
Topics on Jehovah's Witnesses - article spamming issues
Polish Wikipedia is experiencing an uptick in Jehovah's Witnesses topic article spamming, surreptitious edits pushing JW terminology, etc. One of the current problems is the spamming of separate articles for every "convention", which is an annual (I think) event with a theme and about 100k visitors. We are discussing their notability right now, and I was wondering whether English Wikipedia has already discussed and cleaned this up, which would be helpful to know. If you remember any topic discussing notability or monitoring of Jehovah's Witnesses related topics, and possibly deleted articles, please share. (I'm not sure if there is any sensible search method for the deleted articles archive/log? Can I use any wildcards in Special:Log/delete? It doesn't seem to work.) Tupungato (talk) 12:04, 25 November 2024 (UTC)
- @Tupungato, we used to have a list of conventions, but it was deleted 16 years ago at Wikipedia:Articles for deletion/List of Jehovah's Witnesses conventions. I'm not sure we would make the same decision today. Information about some conventions is in History of Jehovah's Witnesses. WhatamIdoing (talk) 02:22, 27 November 2024 (UTC)
- @Tupungato: I'm probably one of the best people you could talk to about this. I've been trying to remove the emphasis on primary sources when JWs are talked about throughout enwiki. The Jehovah's Witnesses article used to cite the denomination's magazines 100+ times. I fixed that. Unfortunately I don't speak Polish but I have an extensive book collection on secondary sources about JWs if you ever wanted me to look something up for you. Clovermoss🍀 (talk) 14:09, 4 December 2024 (UTC)
- In regards to notability, we don't really have articles on individual conventions. I think a few are (or should be) mentioned at the History of Jehovah's Witnesses if secondary sources talked about them, but otherwise that sort of thing definitely wouldn't meet our notability guideline for standalone articles. I'm not sure what the standards at the Polish Wikipedia are because I know various projects have different standards. If you're looking for AfDs, the most recent one I can think of is Wikipedia:Articles for deletion/List of Watch Tower Society publications (2nd nomination). I've mostly been focusing on improving the content we have as there's only a handful of people editing the JW topic area and a lot of what was written a decade ago uses almost exclusively primary sources. Clovermoss🍀 (talk) 14:23, 4 December 2024 (UTC)
- Thank you for your reply. I was away for a week, but I'll have a look how the matters are progressing in Polish Wikipedia, and will remember about your offer to consult. Tupungato (talk) 09:06, 10 December 2024 (UTC)
Can we hide sensitive graphic photos?
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Can we hide sensitive graphic photos? I recently came across an article with a photo of a deceased man smiling right at the top—it was deeply disturbing, traumatizing, triggering, shocking, and sickening! This kind of content discourages many people who might otherwise want to read the article and could even provoke serious medical reactions, such as seizures. Imagine if that man's family came across the article and saw him like that, right in their face! Nobody seems to favor this policy, so why do we insist on keeping it? Arabic Wikipedia uses a collapsible template that lets readers choose whether to view such photos, without censoring informative media. Shouldn't we adopt a similar approach? ☆SuperNinja2☆ TALK! 21:41, 30 November 2024 (UTC)
- Not sure where you are getting that the image subject was dead at the time the image was taken. Just Step Sideways from this world ..... today 21:49, 30 November 2024 (UTC)
- I couldn't even think. I was totally shocked. Anyhow, my point still stands. ☆SuperNinja2☆ TALK! 21:51, 30 November 2024 (UTC)
- I don't see anything in the photo, Commons description, or CDC description that states the patient is deceased. Is there a chance this person is alive? –Novem Linguae (talk) 02:05, 5 December 2024 (UTC)
- See HELP:NOSEE Lee Vilenski (talk • contribs) 21:50, 30 November 2024 (UTC)
- The issue is that an image one editor might find “disturbing, traumatizing, triggering and shocking” is an image another editor will find informative and helpful. We have no way to know how others will react. It would indeed be censorship to hide such images. Blueboar (talk) 21:50, 30 November 2024 (UTC)
- shouldn't we choose the option that minimizes the harm to readers? That's what most companies/organizations (idk what is the right term, sorry) do. ☆SuperNinja2☆ TALK! 21:54, 30 November 2024 (UTC)
- We already have. The "harm" to a person seeing such useful images in an encyclopedia is insignificant. The true harm is hiding information from those looking for it.--User:Khajidha (talk) (contributions) 21:19, 1 December 2024 (UTC)
- That is debatable. Emir of Wikipedia (talk) 21:38, 1 December 2024 (UTC)
- "The true harm is hiding information from those looking for it" - this is exactly what shoving these gore images in people's faces does. ☆SuperNinja2☆ TALK! 03:46, 4 December 2024 (UTC)
- How does showing relevant information hide information?--User:Khajidha (talk) (contributions) 11:36, 4 December 2024 (UTC)
- the users will close the page once they see the images instead of reading the information they came for (this happened to me with the example above), and they will even avoid visiting Wikipedia after this bad experience. ☆SuperNinja2☆ TALK! 18:25, 11 December 2024 (UTC)
- We have no reason to try and coax sensitive users to our site by hiding things they don’t like. ꧁Zanahary꧂ 18:35, 11 December 2024 (UTC)
- @Super ninja2 then those are users that we gladly do not want here. ValarianB (talk) 18:49, 11 December 2024 (UTC)
- Image censoring is a perennial proposal and really won't go anywhere. And given the topic of that page, I see no real option, since any other image will also be as disturbing. We do ask editors to use the principle of least astonishment, so that same image as the lede on corpse for example would be inappropriate, but not much can be done on that page. Masem (t) 21:51, 30 November 2024 (UTC)
- we can use a collapsible template, then that won't be censoring. ☆SuperNinja2☆ TALK! 21:55, 30 November 2024 (UTC)
- That type of suggestion is part of the perennial proposal on how to deal with such images. There's nothing that can be done to properly hide it. Masem (t) 22:05, 30 November 2024 (UTC)
- We already use collapsible templates for "long" lists, such as for BRICS members. While long lists are far less harmful, the goal was to avoid annoying readers and make them comfortable, encouraging them to read. This is also why we have templates like Template:Split—to make articles easier to navigate. Similarly, graphic images make readers extremely uncomfortable, not only discouraging them from reading a single article but sometimes deterring them from using Wikipedia altogether, which goes against the ideals of an encyclopedia.
- The fact that image censoring is a perennial proposal suggests it’s a problematic topic that many, if not most, editors find uncomfortable. I suspect the primary reason it hasn’t been adopted is the lack of consensus, not because half the community opposes it outright. I propose a solution that could satisfy both groups: a collapsible template. This approach wouldn’t censor anything but would minimize harm.
- Let’s focus on images that could provoke serious medical conditions and ignore the sexual and religiously offensive media for the time. Some readers may have heart conditions, PTSD, or other vulnerabilities, and we must also consider the families of deceased individuals whose photos we use. Additionally, while Wikipedia isn’t intended for children, they do use it, and we can’t ignore that reality.
- In summary, the potential harm caused by showing these images overrides any benefit to the project. And this solution would fix this by making Wikipedia safer and more inclusive without censoring anything, which is the essential goal. ☆SuperNinja2☆ TALK! 22:28, 30 November 2024 (UTC)
- You've yet to show harm beyond you having a personal reaction to a picture that you didn't understand... an informative picture key to the article that I didn't even slightly flinch upon seeing. (If you have any records of Wikipedia images having provoked seizures, please put them forward.) Had you hidden it by collapsing, I might have assumed that there was something horrible that I wouldn't want to see and avoid getting that information. -- Nat Gertler (talk) 00:02, 1 December 2024 (UTC)
- I know Trypophobia has been the subject of discussion of a good lede that doesn't immediately elicit a problem for readers that have that fear. Masem (t) 00:22, 1 December 2024 (UTC)
- That article has had requests to remove or hide the image for about a decade now. WhatamIdoing (talk) 00:26, 1 December 2024 (UTC)
Had you hidden it by collapsing, I might have assumed that there was something horrible that I wouldn't want to see and avoid getting that information
- That would be your choice not to 'get that information.' However, forcing it on people who don't want to 'get it,' and risking a negative reaction as a result, is the real issue we should be concerned about.
You've yet to show harm beyond you having a personal reaction to a picture that you didn't understand... an informative picture key to the article that I didn't even slightly flinch upon seeing
- That is your personal experience, but we know that at least one person had an anxiety attack from that image. As a community, it is our duty to prioritize the safety of our readers and choose the least risky option. ☆SuperNinja2☆ TALK! 13:47, 1 December 2024 (UTC)
- And you had the choice not to "get that information" that was in the picture.... you chose to go to the Wikipedia page about a disease. You claim to have been set off because it was
a deceased man smiling
... only the man wasn't deceased, he is described in the image's description as a "patient" which is not generally a term for a corpse. So what set you off was a man smiling. If you want us to police pictures based on information that you invent about them, it's hard to see how we don't have to police everything on your behalf. When it comes to safety of our viewers and medical-related images, an image can help them recognize the disease and may serve them well. The "least risky" option is simply not having Wikipedia. I hope we don't choose that path. If you think that Wikipedia provides a special danger to you, you are free not to use it. -- Nat Gertler (talk) 17:53, 1 December 2024 (UTC)
- I don’t understand what you’re defending. You’re just complaining and criticizing my argument without demonstrating why leaving sensitive media as-is is a better option. Your argument essentially boils down to: “I don’t like your proposal,” which isn’t sufficient.
- Anyway, regardless of whether that man was dead or not, my point still stands.
The "least risky" option is simply not having Wikipedia.
- I don’t think that’s the goal of Wikipedia—to discourage its readers from using it. If the choice is “either read Wikipedia and risk having anxiety attacks or don’t read it at all,” then it’s clear the situation is bad and requires change. ☆SuperNinja2☆ TALK! 21:08, 1 December 2024 (UTC)
- So far, I know of one person claiming to have had a problem, and that's because he saw a picture of a man smiling. Hiding all pictures as not-obviously-problematic as that would basically mean hiding all pictures... and it's not just pictures that upset people, plenty of the text would have to be hidden under the same logic. (People might be freaked out by seeing that a ninja edits Wikipedia.) Folks have pointed you to the option that would let you turn off automatic image display for yourself, and if you wanted to make some argument that that should be a standard option, that may well be a supportable argument... but hiding everything that could possibly upset anyone would basically be hiding everything. -- Nat Gertler (talk) 21:30, 1 December 2024 (UTC)
Let’s focus on images that could provoke serious medical conditions and ignore the sexual and religiously offensive media for the time. ... And this solution would fix this by making Wikipedia a safer and more inclusive without censoring anything, which is the essential goal.
I think part of the reason why no consensus was ever reached on this issue is that the editors in favour of image filtering do not acknowledge that it inherently involves an infringement on intellectual freedom, and so don't put forward a framework for how to minimize the infringement. The approach can't be "Let's just create the functionality now and then worry later about what to do when a vocal minority of editors want to be able to hide all depictions of people with disabilities, or of LGBTQ+ people, because they find those images distressing." Those considerations need to be the starting point. I don't support image filtering, but when the discussion was held back in 2011 I did put forward a framework of seven principles for approaching it from this angle.--Trystan (talk) 17:05, 1 December 2024 (UTC)
infringement on intellectual freedom
- Why do you guys want to go so technical and get things so complicated when the situation isn't at all complicated? Ppl dislike seeing gore, let them choose not to? Just like that, easy peasy. ☆SuperNinja2☆ TALK! 21:15, 1 December 2024 (UTC)
- Who defines what is "gore"? There's probably only a few types of images that we universally can say are problematic to a near majority of the world population (eg when you start to get into child exploitation), but beyond that, there's no way to tell when such an image would be considered bad by a majority of the readership. Masem (t) 21:18, 1 December 2024 (UTC)
- So you're basically presuming that this discussion is destined for failure because ppl have different povs on the topic? That's not a good enough argument. When did the community ever have similar povs on anything for that matter? ☆SuperNinja2☆ TALK! 02:10, 5 December 2024 (UTC)
- Don't want to see gore? Don't go to pages about gory things. Easy peasy.--User:Khajidha (talk) (contributions) 15:25, 2 December 2024 (UTC)
- That most certainly is censorship.--User:Khajidha (talk) (contributions) 21:20, 1 December 2024 (UTC)
any other image will also be as disturbing
That is what I'm arguing about. Disturbing images should be collapsed at best. ☆SuperNinja2☆ TALK! 21:59, 30 November 2024 (UTC)
- @Super ninja2, quite a lot of people agree with you, but a long time ago, this was formally proposed, and The Community™ rejected it. I have a lot of unhappy memories from that discussion, so you should not necessarily consider me to be an unbiased source. (Redacted)
- The proposed approach was that a person should be able to say, in advance, that they personally don't want to see sexual images, disgusting medical images, violent images, or contested religious/cultural images, and have images tagged like that collapsed or screened somehow, with one click to reveal. The responses tended to cluster in two categories:
- Individuals should not have the freedom to control what they see, even if they are doing it for neutral reasons, like wanting to conserve bandwidth on a weak internet connection, or for safety reasons, like not wanting to risk an anxiety attack right now or not wanting to worry about the morality police looking over your shoulder at a public internet cafe. The Wikipedia editor has the right to put things on your computer screen, and your duty as a reader is to look at whatever disgusting, violent, or inappropriate image they want to shove in your face.
- It would be impossible to figure out which (few) images draw complaints. It might be impossible to do this with 100% accuracy, but we all know that the lead image at Smallpox draws complaints even though there's a FAQ at the top of the talk page to explain why it's there, every educated person knows that Depictions of Muhammad are both easily identifiable and considered inappropriate by some religious adherents, and most of us have encountered an animated gif that we'd like to cover up or turn off.
- I'm opposed to the first in principle and skeptical of the second. But that's the state of the discussion, and at this point, it will likely continue this way until multiple countries pass laws demanding that we change it. The Community™ has no empathy for people whose living situation is very different from their own. WhatamIdoing (talk) 00:10, 1 December 2024 (UTC)
- This context might help: Wikipedia was basically a spinoff from a now-defunct male-focused porn site. For years, every porn actress who was featured even once as a Playboy Playmate was automatically considered notable. If you infer from that fact something about the attitudes towards controversial content in the early days, I couldn't prove you wrong. WhatamIdoing (talk) 00:22, 1 December 2024 (UTC)
- Looking at the results on that page, it seems to say more people supported it than opposed it? Alpha3031 (t • c) 01:32, 1 December 2024 (UTC)
- There is one technically feasible solution I can come up with, although it may be complicated:
- Create a list of types of images that some will find offensive (anatomical parts typically not displayed in public, religiously offensive images, etc). Create a template to mark each type.
- Have the software mark these images, when used on other pages, in some way that scripts can use. Write scripts which individual users can self-apply to hide these images. Create a page with instructions for using these scripts, with a disclaimer that 100% results aren't guaranteed.
- These measures should be invisible to users not interested in them, except the tag on the image page. Animal lover |666| 10:59, 1 December 2024 (UTC)
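As an editorial aside, the three steps proposed above can be sketched in a few lines of JavaScript. This is a hypothetical illustration only: the marker class names (`cw-medical`, `cw-nudity`) and the preference set are invented for the sketch, not an existing Wikipedia feature or template scheme.

```javascript
// Hypothetical sketch of the opt-in filtering logic described above.
// Assumes step 1/2 resulted in each tagged image carrying a marker
// class such as "cw-medical" or "cw-nudity" (names invented here).

// Step 3: categories this particular user has chosen not to see.
const userHiddenCategories = new Set(["cw-medical"]);

// Pure decision function: should an image with these classes be collapsed?
function shouldCollapse(imageClasses, hiddenCategories) {
  return imageClasses.some((cls) => hiddenCategories.has(cls));
}

// In a real self-applied user script, this would run over the rendered
// page, blur the matching images, and reveal each one on a click:
// document.querySelectorAll("img").forEach((img) => {
//   if (shouldCollapse([...img.classList], userHiddenCategories)) {
//     img.style.filter = "blur(20px)";
//     img.addEventListener("click", () => { img.style.filter = ""; },
//                          { once: true });
//   }
// });

console.log(shouldCollapse(["thumbimage", "cw-medical"], userHiddenCategories)); // true
console.log(shouldCollapse(["thumbimage"], userHiddenCategories)); // false
```

Keeping the decision function separate from the tagging is what makes the scheme invisible to uninterested users: with an empty preference set, nothing on the page changes.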
- In some places a woman's hair is not typically displayed in public. Imagine if we had to hide every photo of a woman because her hair was visible, and we marked it with a template warning "Image of woman with visible hair". Valereee (talk) 18:59, 1 December 2024 (UTC)
not wanting to worry about the morality police looking over your shoulder at a public internet cafe.
- If you live in Saudi Arabia, Iran, or even less religious countries like Jordan, Morocco, or Egypt, and you were reading an article in a public place when a sexual photo deemed inappropriate popped up on your screen, you could literally be jailed! ☆SuperNinja2☆ TALK! 13:05, 1 December 2024 (UTC)
- And imagine if that photo was a depiction of Muhammad, then jail would be mercy. ☆SuperNinja2☆ TALK! 13:09, 1 December 2024 (UTC)
- Those might be valid points if these pictures were just inserted willy-nilly into any old page. But, for example, there is no reason NOT to expect an image of Muhammad on the Muhammad page (at least if you know that the site is not made entirely by Muslims). Articles about something having pictures of that something is not something you should be surprised by. Don't want people seeing what you are looking at? Don't do it in public. This is not hard.--User:Khajidha (talk) (contributions) 12:30, 2 December 2024 (UTC)
- Actually, these pictures (and pictures that haven't been tagged for censoring yet) can be inserted willy-nilly into any old page by vandals. We do try to catch and revert such edits, but there is no guarantee that articles will not contain completely inappropriate images (or text, or ASCII art). If something important like your freedom or livelihood depends on not looking at inappropriate content on Wikipedia in public, you should not look at any content on Wikipedia in public. —Kusma (talk) 20:18, 2 December 2024 (UTC)
- what a terribly sexist and racist comment, full of prejudiced assumptions about who might disagree with you. Fram (talk) 14:19, 1 December 2024 (UTC)
- Individuals already have control of what they see. They chose to come here. How can anyone seriously expect not to see images of such things in articles about these things? That's simply ridiculous.--User:Khajidha (talk) (contributions) 21:24, 1 December 2024 (UTC)
- See our Wikipedia:Content disclaimer. This isn't likely to be changed because you found an image that you objected to. There are ways for you to implement ways to not see images you don't want to; see WP:NOSEE, specifically the section about the userscript that blocks all images unless you click to see them. Lee Vilenski (talk • contribs) 13:25, 1 December 2024 (UTC)
- no need to change the Content disclaimer because we will still display the offensive images but this time, the reader will choose to view them. ☆SuperNinja2☆ TALK! 14:04, 1 December 2024 (UTC)
- No, I'm not suggesting we change it. I'm suggesting that you read it and realise we aren't going to hide suitable images. Lee Vilenski (talk • contribs) 15:49, 1 December 2024 (UTC)
- Let's not forget that WP:NOTCENSORED is a policy. - Ratnahastin (talk) 05:56, 2 December 2024 (UTC)
- The good of hiding disturbing or upsetting information, including images (which is real, and appropriate in many contexts) is completely incompatible with the good of presenting information in an educational and encyclopedic context, which is what we are doing on Wikipedia. Strongly oppose even a collapsible option or anything like it. ꧁Zanahary꧂ 19:32, 2 December 2024 (UTC)
- Blurring or collapsing that can be toggled off with a single click does not constitute censorship. Censorship would be only if images were removed or the users were somehow restricted from seeing them, e.g. by first forcing them to disclose their age or location. Giving everyone, including unregistered users, a reasonable default option to avoid inadvertently seeing explicit images is just a convenience feature in the user interface. This just follows from the principle of least astonishment, as most people expect to be warned before seeing sensitive content, and are used to that on other websites.
- Making Wikipedia more convenient for a large number of users is not equivalent to being forced to adhere to culturally contingent moral prohibitions. There is quite a distance between these two positions. NicolausPrime (talk) 02:38, 3 December 2024 (UTC)
- The reasonable default on an encyclopedia is that information is conveyed, not curtained. I’d counter your least astonishment argument with the fact that nobody is used to being warned about sensitive content in an encyclopedia. ꧁Zanahary꧂ 05:42, 3 December 2024 (UTC)
Very strong oppose on this one. Putting together a censor board to decide what is, could be, and/or is not offensive to whoever across the globe is a terrible idea, a waste of time, and does not help the site. WP:CENSOR is a crucial ingredient in Wikipedia's ability to cover everything under the sun. :bloodofox: (talk) 21:01, 1 December 2024 (UTC)
Oppose. Hurt feelings and thin skin are not a Wikipedia problem. Zaathras (talk) 04:27, 2 December 2024 (UTC)
- I recall encountering discussions about three photos on Wikipedia: profile photo of the pregnant Lina Medina, napalm girl, and Robert Peary's sunbathing Inuit girlfriend Aleqasina. I believe that the napalm girl is the only one currently visible on Wikipedia. So WP:NOTCENSORED may be the stated policy, but it doesn't sound like we're following it. Fabrickator (talk) 08:43, 2 December 2024 (UTC)
- There are other reasons a photo might be deleted. It could be under copyright, for instance. Valereee (talk) 13:33, 2 December 2024 (UTC)
- (replacing my erroneously entered response)
- The initial objection to the Aleqasina image was that it was "overtly exploitative pornography". This was objected to as a basis for removing the image. In response, someone removed the image on the basis that it was "a poor quality image compared to the other photos in the article." Fabrickator (talk) 16:40, 2 December 2024 (UTC)
- Is the photo at Commons, though? If not, it's possible the photo was removed from an article for that reason, but hasn't been put back under NOTCENSORED because it's not in the public domain. All of these photos could be less than 95 years old. Valereee (talk) 16:44, 2 December 2024 (UTC)
- FWIW, the photo in question is from 1896. Here is the applicable "fair use" notice:
This media file is in the public domain in the United States. This applies to U.S. works where the copyright has expired, often because its first publication occurred prior to January 1, 1929, and if not then due to lack of notice or renewal.
Photo is available at commons:File:Mother of the seals.jpg. Fabrickator (talk) 18:07, 2 December 2024 (UTC)
- It's used on ruwiki. The discussion started out as a complaint from inexperienced editors that the photo was offensive, but that doesn't really seem to be what editors there removed it for. They didn't remove it because she's naked. It definitely is a low quality photo, even for the period. It definitely is a fair point that it doesn't add to the reader's understanding of Peary. I'm not sure this is censorship. To me it looks like someone complained it was offensive, other editors said "Why is this image in this article?", and there was discussion of whether removal constituted censorship. I think it could probably be included in Photos by Robert Peary or something. Valereee (talk) 19:09, 2 December 2024 (UTC)
- If an image is not of real educational or encyclopedic value, then it being gratuitous pornography is a fine reason to exclude it. That is not censorship. ꧁Zanahary꧂ 19:35, 2 December 2024 (UTC)
- Nothing against pictures of gore. But could we avoid seeing any images of this guy, who many people find very offensive? Martinevans123 (talk) 15:34, 2 December 2024 (UTC)
- I certainly understand that the person's opinions and actions are offensive, but is a mere picture of him that bad? Animal lover |666| 16:24, 2 December 2024 (UTC)
- The words "deeply disturbing, traumatizing, triggering, shocking, and sickening" spring to mind. But never mind. Martinevans123 (talk) 16:26, 2 December 2024 (UTC)
- Is a mere picture of a woman's (you name the body part, someone somewhere finds it offensive) that bad? Valereee (talk) 16:46, 2 December 2024 (UTC)
- I would not be opposed to an opt-in only tool or preferences setting or whatever that allows users to avoid seeing certain types of imagery. Would have to be entirely voluntary. I would imagine something that works by looking at an image's categories could do it. Just Step Sideways from this world ..... today 20:44, 2 December 2024 (UTC)
- Is WP:NOSEE not enough? Valereee (talk) 20:50, 2 December 2024 (UTC)
- NOSEE, for all its value, requires the user (who may well be just a Wikipedia reader, not an editor) to install a script, a process that I suspect daunts some of those who are not tech-comfortable, if they even know that system exists. A "require-clicking-to-view-any-image" user option that can be turned on with just a switch would serve not just those who may be concerned about being offended or disturbed by an image, but also those for whom bandwidth may be limited or expensive, and it would be in the place where a user is likely to look for such a control.... but a "don't show offensive images" option would require a huge overhead of effort on the part of the editing base, to mark the existing images, to mark every new image, and to deal with the inevitable disagreements about which images should be marked. -- Nat Gertler (talk) 23:28, 2 December 2024 (UTC)
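A content-neutral "require-clicking-to-view-any-image" switch like the one described above could, in principle, work along these lines. This is a sketch under stated assumptions: the `{src, alt}` record shape is an illustrative stand-in for an `<img>` element, and the selector in the comment is invented, not an existing preference.

```javascript
// Sketch of a neutral "require a click to view any image" option.
// No image needs to be tagged or judged: every image is deferred until
// clicked, which also avoids downloading images on a costly connection.
// The record shape {src, alt} stands in for an <img> element here.

function deferImageRecord(rec) {
  return {
    src: null,                      // nothing is loaded up front
    realSrc: rec.src,               // remembered so a click can restore it
    alt: "[image hidden - click to view] " + (rec.alt || ""),
  };
}

// Applied as a user script it would look roughly like:
// document.querySelectorAll("#content img").forEach((img) => {
//   img.dataset.realSrc = img.src;
//   img.removeAttribute("src");
//   img.addEventListener("click", () => { img.src = img.dataset.realSrc; },
//                        { once: true });
// });

console.log(deferImageRecord({ src: "example.jpg", alt: "lesions" }).realSrc); // "example.jpg"
```

Because the rule is "all images", no editor effort goes into marking content and no one has to adjudicate what counts as offensive, which is what distinguishes this option from the tagging proposals above.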
- Our license allows anyone to reuse our content and to filter images in any way they like. I expect that if there truly is a need for a Wikipedia version with certain censorship applied, someone will write a (possibly AI-powered) tool to deliver it. But I don't see hiding relevant information as something that could ever be part of Wikipedia's (or even Wikimedia's) mission. —Kusma (talk) 21:34, 2 December 2024 (UTC)
- Something like 17 years ago there was a child-friendly clone of WP that I made available on the computers at the elementary school where I worked. I don't know if there is anything like that around now. Donald Albury 21:55, 2 December 2024 (UTC)
- @Donald Albury: That was Wikipedia for Schools. I've never fleshed out a real proposal but the idea has been in my head for years to revive that idea, not as CD-ROMs but as a static fork of WP. A curated collection of WP articles, nothing sexually explicit but also not hosting articles on every single episode of Family Guy, and also no editing. A list would be created and maintained, a bot or something would import the articles and update them if they get major revisions, but no open editing. Schools can block the main Wikipedia altogether. They get a nice, clean kid-friendly WP and we get way less vandalism. I just don't know how to actually do any of that. Just Step Sideways from this world ..... today 22:42, 10 December 2024 (UTC)
- Imagine the process involved in marking content as offensive or falling within certain categories. What is sacrilegious? What is pornographic? What is violent? What is disgusting? And why is it Wikipedia’s problem? ꧁Zanahary꧂ 23:00, 2 December 2024 (UTC)
- All of this was said more than a decade ago. I see nothing in this discussion that wasn't put forward by the opponents back then, from "NOTCENSORED gives me the right to force you to see things you'd like to opt out of" to "whatabout this" to "we should prevent people from volunteering to do the necessary work". Apparently we haven't changed a bit. I am not really surprised. WhatamIdoing (talk) 23:26, 2 December 2024 (UTC)
"NOTCENSORED gives me the right to force you to see things you'd like to opt out of"
- I'm sorry, I can't find that quote in this discussion. If someone is actually putting forward that we should force people to look at Wikipedia, that's an editor we should be concerned about. -- Nat Gertler (talk) 23:33, 2 December 2024 (UTC)
- So over a decade ago, this idea was rejected, and today people still reject it on the same basis. I’m not seeing the problem. ꧁Zanahary꧂ 01:08, 3 December 2024 (UTC)
- Nobody is forcing you to look at anything. You are the one who chose to visit this site. --User:Khajidha (talk) (contributions) 13:44, 3 December 2024 (UTC)
What is sacrilegious? What is pornographic? What is violent? What is disgusting?
Anything that would be considered WP:GRATUITOUS outside of encyclopedic use on Wikipedia. As evidenced by that content guideline, Wikipedia has already been using a notion of what content may be explicit for over a decade. Wikipedia also has been able to use its consensus processes to decide many other contentious and often outright controversial matters, such as WP:NPOV and WP:TITLE.
And why is it Wikipedia’s problem?
It is Wikipedia's problem because a considerable portion of its readers expects this, as evidenced by this matter being discussed perennially. NicolausPrime (talk) 06:52, 3 December 2024 (UTC)
- Unencyclopedic content shouldn’t be on Wikipedia to begin with. Offensive encyclopedic content should. Good luck with identifying the encyclopedic content that will and won’t offend anybody. ꧁Zanahary꧂ 08:48, 3 December 2024 (UTC)
It is Wikipedia's problem because a considerable portion of its readers expects this, as evidenced by this matter being discussed perennially.
Faced with the perennial problem of some users demanding warning labels on content they view as offensive, the collective response of the library profession over several decades has been to strongly oppose such systems due to the inherent infringement on intellectual freedom. From the American Library Association:
Labeling as an attempt to prejudice attitudes is a censor’s tool.
There is an inherent non-neutrality in identifying groups of images that users may want to avoid. The image that started this discussion is a good example of that. It was mistakenly thought to be a dead body, but is in fact a person suffering from a disease. Identifying the appropriate categories to be warned against, and which images merit those warnings, is an exercise incompatible with free and open access to information.--Trystan (talk) 15:28, 3 December 2024 (UTC)
- Sure. But contrast that with library selection policies (hmm, missing article – @The Interior, could I tempt you to write an article?) and collection development work. Libraries oppose putting labels like "this is an immoral book" on collection items. They've got no problem with putting an objective label like "pornography" on a collection item, nor any problem with deciding that they won't stock porn at all. WhatamIdoing (talk) 01:14, 4 December 2024 (UTC)
- With the vast arguments over whether, say, Gender Queer is pornography, it's hard to see it as objective. It's pretty much the Potter Stewart standard. -- Nat Gertler (talk) 01:58, 4 December 2024 (UTC)
- If a "pornography" label is a viewpoint-neutral directional aid intended to help interested users locate the resource, that would be valid. But not if it is intended to warn users away from the content:
7. Is it prejudicial to describe violent and sexual content? For example, would including "contains mild violence" on bibliographic record of a graphic novel violate the Library Bill of Rights? Yes, in any community, there will be a range of attitudes as to what is deemed offensive and contrary to moral values. Potential issues could be sexually explicit content, violence, and/or language. Including notes in the bibliographic record regarding what may be objectionable content assumes all members of the community hold the same values. No one person should take responsibility for judging what is offensive. Such voluntary labeling in bibliographic records and catalogs violates the Library Bill of Rights.
[3]--Trystan (talk) 02:04, 4 December 2024 (UTC)
What is sacrilegious? What is pornographic? What is violent? What is disgusting? And why is it Wikipedia’s problem?
- Consensus would answer these questions.
- This is the main purpose of this discussion.
- ☆SuperNinja2☆ TALK! 04:01, 4 December 2024 (UTC)
- Just for logistical considerations, how many images are we talking about, and therefore how many consensus discussions, and how often could someone reopen to see if consensus had changed? I feel like there are a huge number of images that might upset someone, but very few that could get consensus for being hidden. Risus sardonicus averages 250+ views a day. The chance that image could ever gain consensus to be hidden is...well, in my mind, unlikely. But if even 1 in 100,000 people are freaked out enough and knowledgeable enough to start a discussion, we could be confirming that once a year via discussion at the talk. Valereee (talk) 13:26, 4 December 2024 (UTC)
- All of this was said more than a decade ago. I see nothing in this discussion that wasn't put forward by the opponents back then, from "NOTCENSORED gives me the right to force you to see things you'd like to opt out of" to "whatabout this" to "we should prevent people from volunteering to do the necessary work". Apparently we haven't changed a bit. I am not really surprised. WhatamIdoing (talk) 23:26, 2 December 2024 (UTC)
I would imagine something that works by looking at an image's categories could do it.
Subject categories serve a different function than warning labels, and the two functions are not compatible. A subject category about nudity should tag those images where nudity is central to the subject of the image (where it is defining), while a warning label would tag every single image containing any nudity, however trivial. Implementing image filtering that uses subject categories would distort the former into the latter. It would need to be a separate system. I agree with NatGertler above; it would be fine to introduce user-friendly functionality that hides all photos and lets users click to view based on the alt text. But flagging all images that someone, somewhere would object to is not a viable project.--Trystan (talk) 00:33, 3 December 2024 (UTC)
- I'm reminded of the deleted Zionist symbol template on Commons, which was slapped all over images of Jewish stars in any context, including a chanukiah and some blue sugar cookies—which, no doubt, would be offensive images to some. ꧁Zanahary꧂ 00:57, 3 December 2024 (UTC)
- And the similar commons:Template:Chinese sensitive content. Simply: it becomes obvious that Wikipedia should not be working around people’s sensitivities as soon as you consider a common sensitivity that you consider silly or repressive. ꧁Zanahary꧂ 01:06, 3 December 2024 (UTC)
- This kind of "whataboutism" was addressed in the original report and recommendations. WhatamIdoing (talk) 01:53, 3 December 2024 (UTC)
- I recommend you try and imagine a position besides yours that isn’t fallacious or the result of an intellectual failure. Your approach is not a good one from the losing side of a debate. ꧁Zanahary꧂ 05:39, 3 December 2024 (UTC)
- I severely disagree with classifying this as whataboutism. It's real, it will happen, we see it happening. —TheDJ (talk • contribs) 13:21, 9 December 2024 (UTC)
- Yes it is. No one mentioned that we would take a similar approach to the Chinese and Zionist templates. That's because we aren't going to hide Zionist symbols or any other politically sensitive media.
- And if the problems encountered by these templates worry you, then please explain them so we can address and avoid them. ☆SuperNinja2☆ TALK! 06:20, 11 December 2024 (UTC)
- Raising an illustrative parallel is not "whataboutism"—it's not even on the same spectrum as whataboutism. ꧁Zanahary꧂ 06:31, 11 December 2024 (UTC)
- I'd be happy to have a default turn on/turn off all images mode in preferences. But anything that requires judgement or consensus for which images or category of images? I'd object. Valereee (talk) 15:36, 3 December 2024 (UTC)
- Is WP:NOSEE not enough? Valereee (talk) 20:50, 2 December 2024 (UTC)
- I would not be opposed to an opt-in only tool or preferences setting or whatever that allows users to avoid seeing certain types of imagery. It would have to be entirely voluntary. I would imagine something that works by looking at an image's categories could do it. Just Step Sideways from this world ..... today 20:44, 2 December 2024 (UTC)
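Such an opt-in, category-driven filter could be sketched roughly like this (a hypothetical Python illustration only; the category names, function, and data model are invented, not an existing MediaWiki API):

```python
# Hypothetical sketch of a purely opt-in image filter driven by categories.
# Nothing is hidden unless the reader has explicitly opted out of a category.

def should_hide(image_categories: set[str], user_optouts: set[str]) -> bool:
    """Hide an image only when it carries a category the user opted out of."""
    return bool(image_categories & user_optouts)

# A reader who opted out of the (invented) "Medical images" category:
prefs = {"Medical images"}
print(should_hide({"Medical images", "1952 photographs"}, prefs))  # True - collapsed, click to reveal
print(should_hide({"Landscapes"}, prefs))                          # False - shown normally
```

The property that matters for this thread is the default: with `user_optouts` empty, nothing is hidden, so readers who never touch the setting see exactly what they see today.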
- I certainly understand that the person's opinions and actions are offensive, but is a mere picture of him that bad? Animal lover |666| 16:24, 2 December 2024 (UTC)
- Agreed. Same with the Sanctioned Suicide online forum. They removed its URL. ☆SuperNinja2☆ TALK! 02:19, 5 December 2024 (UTC)
- There are other reasons a photo might be deleted. It could be under copyright, for instance. Valereee (talk) 13:33, 2 December 2024 (UTC)
- I recall encountering discussions about three photos on Wikipedia: the profile photo of the pregnant Lina Medina, napalm girl, and Robert Peary's sunbathing Inuit girlfriend Aleqasina. I believe that the napalm girl is the only one currently visible on Wikipedia. So WP:NOTCENSORED may be the stated policy, but it doesn't sound like we're following it. Fabrickator (talk) 08:43, 2 December 2024 (UTC)
- The simple answer: no. Long answer: The addition of a function to turn off images by default is a great idea that’s seemingly never been implemented despite its harmlessness and relative popularity, and is best taken up at some more technical-oriented forum. But we are never hiding/censoring graphic images if they serve a legitimate purpose. True, I don’t support graphic full color images of goatse on the Goatse.cx article per the Wikipedia:Principle of least astonishment and Wikipedia:GRATUITOUS, but the grey area here is very big and very grey. I’m not talking about the strawman arguments about “what if Dictator McTyrant in Dictatorstan bans pictures of goats” or something; here are some examples of things that could legitimately be considered objectionable to certain persons in a liberal Western society:
- Images or voices of deceased indigenous Australians
- Spiders
- Flashing/strobing lights
- Blackface imagery
But are we not allowed to illustrate Indigenous Australians, Spiders, Dennō Senshi Porygon, or Blackface then? Do we need warnings for these things? Do we need warnings for articles that simply discuss distressing content? These are plausible issues people have actually had to address on other, equally serious platforms. But it's literally impossible to address every conceivable issue, so Wikipedia's longstanding policy is to simply address none of them (besides the bare minimum examples provided above). Dronebogus (talk) 03:52, 4 December 2024 (UTC)
But are we not allowed to illustrate Indigenous Australians, Spiders, Dennō Senshi Porygon, or Blackface then?
- It’s up to the community to decide, and we’re all here to discuss this. What’s clear, however, is that we need to establish minimum criteria to guide us on what should be collapsed. We must draw a line to distinguish what can and cannot be collapsed.
- This isn’t a case where passing the proposal will lead to chaos and censorship, with everyone hiding images indiscriminately. We’ll be here to make the necessary adjustments and ensure it fits the community’s needs. That’s why we are here having this discussion, right? The proposal isn’t a rigid, unchangeable set of rules—it’s flexible and can adapt. Ultimately, consensus will determine what is acceptable enough to remain visible and what warrants collapsing. ☆SuperNinja2☆ TALK! 04:24, 4 December 2024 (UTC)
- You are completely missing my point. My line is not your line. Your line is not anybody else’s line. Your starting example doesn’t even come close to my, or really most people’s, lines. So you’re never going to establish a global minimum criterion here. And we shouldn’t allow people to establish local case-by-case criteria either— not only is that balkanization, it’s not going to get you what you want (medical editors have strong stomachs) Dronebogus (talk) 04:40, 4 December 2024 (UTC)
Your starting example doesn’t even come close to my, or really most people’s, lines.
What example? I never said that the "example" should be taken as a universal standard for deciding what should be collapsed. You don't have to agree with me—or anyone else—for the proposal to work. Even if the majority decided that the "example" should not be collapsed, the process would still function. That's why discussions exist: to bring people with differing opinions together, negotiate and compromise, and form a rough consensus by analyzing what most people from both sides agree upon.
- In any case, I mentioned that we would discuss what should be collapsed, and doctors and medical editors are welcome to share their perspectives like everyone else. I don't understand your objection. ☆SuperNinja2☆ TALK! 07:53, 4 December 2024 (UTC)
- All I see here is you getting disturbed by a very particular image, wanting it collapsed, and then slowly backtracking to “well I actually just want this generally”. Basically the answer is still no. Dronebogus (talk) 17:53, 4 December 2024 (UTC)
- What is "this"? Anyway, it seems you took it personally. And you just don't want to discuss the proposal; you're just complaining. ☆SuperNinja2☆ TALK! 02:27, 5 December 2024 (UTC)
My line is not your line. Your line is not anybody else’s line.
- I didn't even define the line. And I didn't say that the line has to agree with me. I only said "can we hide sensitive images?" We are supposed to draw that line together if the answer is yes. ☆SuperNinja2☆ TALK! 02:32, 5 December 2024 (UTC)
- Your line is, at least, defined at medical photos in which subjects appear to be deceased. ꧁Zanahary꧂ 04:08, 5 December 2024 (UTC)
True, I don’t support graphic full color images of goatse on the Goatse.cx article per the Wikipedia:Principle of least astonishment and Wikipedia:GRATUITOUS
- Goatse.cx is a good example of where Wikipedia's policies fall short on this matter. The Goatse shock image is encyclopedically relevant in that article, so WP:GRATUITOUS doesn't apply. WP:ASTONISH also doesn't seem convincing for preventing its inclusion, given that Wikipedia does include explicit content like defecation or feces in other appropriate articles, and a fair number of users may expect the shock image to be there anyway, so not including it at all may in fact run against WP:ASTONISH.
- If you look at the closing rationale for the ultimate deletion of this image, it states that the only accepted reason it was deleted was its unsuitable copyright status. [4] So if the Goatse shock image were licensed under a free license, there would be no basis in policy to keep it out of readers' sight in its article.
- NicolausPrime (talk) 04:39, 4 December 2024 (UTC)
- I don’t really get how a picture of a man stretching his anus is really necessary to understand the concept of a shock site depicting a man stretching his anus. I’d say it is gratuitous because it doesn’t improve the viewer’s understanding. A better example I guess would be something like Coprophilia which has no graphic full-color photographs (or even graphically explicit illustrations) of people… engaging in it because it would not improve understanding of the topic and would just disgust 99% of the population. Dronebogus (talk) 04:45, 4 December 2024 (UTC)
- Seeing what the famous shock image really looked like very much increases the person's understanding of the subject. Words can convey only small parts of audiovisual content. And generally, showing the image in an article about it is helpful for people who may recognize it but not remember its name. For example, in the Lenna article I wouldn't have realized that I know this image if it wasn't shown there. NicolausPrime (talk) 05:03, 4 December 2024 (UTC)
- I agree. It should be added! ꧁Zanahary꧂ 08:33, 4 December 2024 (UTC)
- I think this is getting off topic. If you really need to see Kirk Johnson’s butthole then you should take that up at the article. This is just starting to remind me of the “I’m a visual learner” meme. Dronebogus (talk) 17:58, 4 December 2024 (UTC)
- Another example: Nudity has relatively few explicit images despite the subject (most of them would be considered PG-13 by American standards) because it’s mostly discussing the societal context of nudity. There are more explicit anatomical photographs on anatomy pages because those discuss biological aspects of humans that cannot be illustrated without showing the entire unclothed body. Dronebogus (talk) 04:51, 4 December 2024 (UTC)
- I think this proposal is going nowhere extremely fast. It’s already been discussed. The answer is no. The reason is it fundamentally conflicts with WP:CENSOR and WP:NEUTRAL. On top of that the vast majority of people don’t support it and the few that do haven’t provided any kind of extraordinary argument necessary to overcome such a longstanding consensus built on a foundation of hard policy. Some uninvolved admin should shut it down. Dronebogus (talk) 21:45, 5 December 2024 (UTC)
- Why are you angry? If you're bothered by this discussion you can just opt out. You already gave your opinion anyway, so you can leave with a clear conscience if we bother you so much. But why do you want to shut us down? We didn't finish. ☆SuperNinja2☆ TALK! 02:45, 6 December 2024 (UTC)
- If you don’t want people to react strongly don’t make a controversial proposal, that’s been talked to death, that obviously runs counter to several core principles of Wikipedia. And there is no “us”; there’s you and WhatamIdoing (equally unconvincing and leaning on accusations of prejudice against women and nonwhite people or something like that) vs. everyone and years if not decades of policy and precedent, plus the de facto policy of WP:SNOW— proposals with no realistic chance of success do not have to be prolonged indefinitely. I’d like to add that none of this is personal— I am sorry if you encounter content that deeply upsets you, but I cannot support any kind of official mitigation policy for this issue on both a practical and philosophical basis. Dronebogus (talk) 08:20, 6 December 2024 (UTC)
- @Super ninja2, some editors will think it's a bit of a time-waster to bring up a perennial suggestion unless you either have a new solution or have some reason to believe consensus might have changed. You didn't suggest either of those in your original post. And the reason some editors may feel they have to go ahead and waste their time on it is that if enough people don't, the person making the perennial suggestion may assume lack of opposition is evidence consensus has changed. So, yeah, you may encounter some expressions of annoyance when people feel like they're obligated to waste their time addressing -- again -- this perennial suggestion. Valereee (talk) 13:26, 6 December 2024 (UTC)
- Strong support for asking the WMF to expand Help:NOSEE tools to make it easier for readers to hide content they don't want to see. Right now a reader can (if they create an account and read logged-in) take steps like installing a script, or modifying their CSS page, to hide all images (until clicked on), images on specific pages, or specific images on any page. This is nice, but it'd be relatively easy to make things much better. Hiding images could be a simple toggle switch like V22's light/dark modes. Wikipedia could do what the entire rest of the internet has done and have "SafeSearch"-type features where readers can choose from "unfiltered", "medium filter", or "full filter", like the parental controls or content-filtering features we're all familiar with thanks to their ubiquity in other software/websites. There are lots of reasons readers might want to hide certain types of content (violence, sexuality), e.g. child protection, religion, gov't, PTSD, or just not wanting to see that kind of stuff. The technology to accommodate such readers is readily at hand and widely used on the internet. Refusing to do so seems stubborn, like imposing editors' morality on readers. We should ask the WMF to implement "the usual" content filtering capabilities, a la Google's SafeSearch. Levivich (talk) 21:22, 6 December 2024 (UTC)
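The tiered "SafeSearch"-style setting described here might be modeled like so (a sketch only; the level names and category groups are invented for illustration, not anything the WMF has implemented):

```python
from enum import IntEnum

class FilterLevel(IntEnum):
    UNFILTERED = 0  # default: hide nothing
    MEDIUM = 1
    FULL = 2

# Invented mapping from a reader's chosen level to category groups to hide.
LEVEL_RULES: dict[FilterLevel, set[str]] = {
    FilterLevel.UNFILTERED: set(),
    FilterLevel.MEDIUM: {"graphic violence"},
    FilterLevel.FULL: {"graphic violence", "sexually explicit"},
}

def hidden_categories(level: FilterLevel) -> set[str]:
    """Return the category groups hidden at the reader's chosen filter level."""
    return LEVEL_RULES[level]

print("graphic violence" in hidden_categories(FilterLevel.MEDIUM))   # True
print("sexually explicit" in hidden_categories(FilterLevel.MEDIUM))  # False
```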
- No, your suggestion is “imposing morality” on readers. We cannot make arbitrary decisions about what constitutes “offense/triggering” content. I’m not going over examples ad nauseam. And this isn’t an RFC and never will be, so your “vote” is inapplicable. I actually support making it easy to hide all images by default, but that’s a purely technical matter as I already said. Dronebogus (talk) 12:59, 8 December 2024 (UTC)
- Wikipedia is not and should not be like Google Search. ꧁Zanahary꧂ 15:35, 8 December 2024 (UTC)
- No, of course not, but that's not really the point of what we're discussing here. What I mean is, we should consider the measures Google has implemented for their users aged 18 and above to make navigation easier, prioritize user safety, and comply with legal requirements. By using those examples as a comparison point—and narrowing them down further if needed—we can learn from their experience. We could see how that has worked for them.
- Now, why would that be an issue? Google's a big company with lots of experts and experience in keeping their huge user base safe and comfortable on their platform. There's no harm in seeing what they've achieved. We could gain useful insights, and it would help us with this discussion.
- In the same way, the wiki community should look at how to create a safer and more welcoming environment. This would help users feel comfortable engaging with the platform and encourage them to actually make use of the information they came for (like with Google). ☆SuperNinja2☆ TALK! 18:12, 10 December 2024 (UTC)
- If comfort is at odds with encyclopedically relevant information, we choose the latter, because we are an encyclopedia. ꧁Zanahary꧂ 19:46, 10 December 2024 (UTC)
- If people are not comfortable using the platform, don't want to use it, and can't access it, then what's the point of having the information in the first place? ☆SuperNinja2☆ TALK! 22:08, 10 December 2024 (UTC)
- To deliver information to people who aren't afraid of it. ꧁Zanahary꧂ 22:33, 10 December 2024 (UTC)
- That is not mentioned in any place in Wikipedia's policies ☆SuperNinja2☆ TALK! 06:23, 11 December 2024 (UTC)
- Because it is so basic it doesn't need to be spelled out.--User:Khajidha (talk) (contributions) 11:35, 11 December 2024 (UTC)
- lol, no. Levivich (talk) 15:05, 11 December 2024 (UTC)
- The "entire rest of the internet" is not an encyclopedia. --User:Khajidha (talk) (contributions) 12:56, 9 December 2024 (UTC)
- Britannica is ☆SuperNinja2☆ TALK! 18:13, 10 December 2024 (UTC)
- I've started a follow-up discussion of opt-in image hiding at Wikipedia:Village_pump_(idea_lab)#Opt-in_content_warnings_and_image_hiding. – Joe (talk) 07:34, 11 December 2024 (UTC)
- We are literally having established users dropping “lol nope” as a rebuttal. Could someone please just close this timesink already? Dronebogus (talk) 18:12, 11 December 2024 (UTC)
- User:Simonm223, but I was preparing a draft which could have helped a lot in reaching consensus, if you gave me some time. This draft is supposed to point out the points most users agree on and propose fixes for the points they don't agree on. It would organize the whole chaotic discussion into neat bullet points and get it back onto an understandable route rather than these chaotic fights.
Can you give me a chance to finish it? I know it looks chaotic, but I need a few days to make it work, no more. ☆SuperNinja2☆ TALK! 12:44, 13 December 2024 (UTC)
- Honestly, I doubt another post was going to change anyone's mind. The topic was going in circles and more than one person asked for a close. I'd very gently suggest you might be whipping an expired equine. Simonm223 (talk) 12:50, 13 December 2024 (UTC)
- No, this draft was going to break this circle, summarize the whole discussion into organized bullet points, sift users' opinions, and debate each argument independently. ☆SuperNinja2☆ TALK! 12:59, 13 December 2024 (UTC)
- Which would have taken this discussion into a different track. ☆SuperNinja2☆ TALK! 13:01, 13 December 2024 (UTC)
LLM/chatbot comments in discussions
Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. Just Step Sideways from this world ..... today 00:12, 2 December 2024 (UTC)
- Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. ~ ToBeFree (talk) 00:32, 2 December 2024 (UTC)
- Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect then it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. Thryduulf (talk) 00:55, 2 December 2024 (UTC)
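The point about error rates can be made concrete with Bayes' rule (a sketch; the 2% false positive, 5% false negative, and 5% prevalence figures below are invented for illustration, not measured rates for GPTZero or any other detector):

```python
def positive_predictive_value(fpr: float, fnr: float, base_rate: float) -> float:
    """Probability a flagged comment really is LLM-generated (Bayes' rule)."""
    tpr = 1 - fnr                          # sensitivity (true positive rate)
    flagged_llm = tpr * base_rate          # true positives
    flagged_human = fpr * (1 - base_rate)  # false positives
    return flagged_llm / (flagged_llm + flagged_human)

# Even a detector with a 2% false positive rate, applied where only 5% of
# comments are LLM-written, wrongly flags a human nearly 3 times in 10:
print(round(positive_predictive_value(fpr=0.02, fnr=0.05, base_rate=0.05), 2))  # 0.71
```

The lower the true prevalence of LLM comments, the worse this gets, which is the base-rate reason error rates would need to be very close to zero before a detector score alone could justify discounting a comment.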
- In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. Just Step Sideways from this world ..... today 04:08, 2 December 2024 (UTC)
- The more I read the comments from other editors on this, the more I'm a convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead very strongly oppose. Thryduulf (talk) 02:45, 3 December 2024 (UTC)
- I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. Remsense ‥ 论 00:59, 2 December 2024 (UTC)
- I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. CMD (talk) 01:06, 2 December 2024 (UTC)
- Support discounting or collapsing AI-generated comments, under slightly looser conditions than those for human comments. Not every apparently-AI-generated comment is useless hallucinated nonsense – beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount clearly insubstantial !votes, and collapse clearly unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. jlwoodwa (talk) 02:11, 2 December 2024 (UTC)
- Strongly Support - I think some level of human judgement on the merits of the argument is necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc., we should consider striking out such content.
- Sidenote: I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. Bluethricecreamman (talk) 02:20, 2 December 2024 (UTC)
- On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". isaacl (talk) 02:54, 2 December 2024 (UTC)
- "generated by a program" is too broad, as that would include things like speech-to-text. Thryduulf (talk) 03:08, 2 December 2024 (UTC)
- Besides what Thryduulf said, I think we should engage with editors who use translators. Aaron Liu (talk) 03:45, 2 December 2024 (UTC)
- A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. isaacl (talk) 03:57, 2 December 2024 (UTC)
- Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning (example) but they absolutely use LLMs nowadays, even for the basic free product. Gnomingstuff (talk) 08:39, 2 December 2024 (UTC)
- Support. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. JoelleJay (talk) 03:07, 2 December 2024 (UTC)
- Support - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. Clayoquot (talk | contribs) 04:37, 2 December 2024 (UTC)
- Strong Support - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? Lazman321 (talk) 05:26, 2 December 2024 (UTC)
- Comment - I'm skeptical that a rule like this will be enforceable for much longer. Sean.hoyland (talk) 05:39, 2 December 2024 (UTC)
- Why? Aaron Liu (talk) 12:22, 2 December 2024 (UTC)
- Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. Sean.hoyland (talk) 13:52, 2 December 2024 (UTC)
- ...and now that the systems are autonomously injecting their output back into the model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. Sean.hoyland (talk) 14:14, 2 December 2024 (UTC)
- A transformer (deep learning architecture) is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. Flounder fillet (talk) 17:20, 2 December 2024 (UTC)
- We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. Sean.hoyland (talk) 03:44, 3 December 2024 (UTC)
- You might be interested in Anthropic's 'Mapping the Mind of a Large Language Model' and Chris Olah's work in general. Sean.hoyland (talk) 04:02, 3 December 2024 (UTC)
- Support and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. Cullen328 (talk) 05:51, 2 December 2024 (UTC)
- ...assuming we can see the bridge and haven't already crossed it. Sean.hoyland (talk) 06:24, 2 December 2024 (UTC)
- Support - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - Ratnahastin (talk) 06:04, 2 December 2024 (UTC)
- Support, I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects my thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. Seraphimblade Talk to me 06:59, 2 December 2024 (UTC)
- Strong oppose. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center) The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce, bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in WP:BITING people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below). There are also technological issues at play. It'd be easy if there were a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now. And there are also, as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely not to have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content, until people pointed out that lots of Nigerian workers trained the LLM and "delve" is common in Nigerian formal English. I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I had, I could have slipped it in and nobody would have noticed until this sentence. Gnomingstuff (talk) 08:31, 2 December 2024 (UTC)
- Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? Seraphimblade Talk to me 20:11, 2 December 2024 (UTC)
- Yeah, Gnoming's writing style is far more direct and active than GPT's. Aaron Liu (talk) 23:02, 2 December 2024 (UTC)
- Multiple subheadings: LLMs tend to use more than one subheading to reiterate points, because they write like a middle schooler that just learned how to make an essay outline before writing. In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ScottishFinnishRadish (talk) 13:56, 3 December 2024 (UTC)
- Support - AI-generated comments are WP:DISRUPTIVE - an editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. Flounder fillet (talk) 13:14, 2 December 2024 (UTC)
- Yes but why do we need this common sense RFC/policy/whatever? Just ban these people. If they even exist. Headbomb {t · c · p · b} 07:14, 2 December 2024 (UTC)
- They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. Just Step Sideways from this world ..... today 20:04, 2 December 2024 (UTC)
- Yes, of course, and I know that's the right answer because ChatGPT agrees with me.
What ChatGPT thinks
- In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. EEng 07:47, 2 December 2024 (UTC)
- This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these types of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. CNC (talk) 20:23, 8 December 2024 (UTC)
- The proposal also includes collapsing. jlwoodwa (talk) 20:26, 8 December 2024 (UTC)
- Thanks, I completely missed that. Trying to speed read is not my forte. CNC (talk) 20:32, 8 December 2024 (UTC)
- The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. WhatamIdoing (talk) 23:36, 2 December 2024 (UTC)
- That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) EEng 03:07, 3 December 2024 (UTC)
- What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. WhatamIdoing (talk) 03:28, 4 December 2024 (UTC)
- 100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? EEng 22:31, 4 December 2024 (UTC)
- Strong Oppose. I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our own page on GPTZero has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. ThatIPEditor They / Them 08:46, 2 December 2024 (UTC)
- Strong Support. To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an AI-powered duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ThatIPEditor They / Them 08:54, 2 December 2024 (UTC)
- Support at least for WP:DUCK-level AI-generated comments. If someone uses a LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. Chaotic Enby (talk · contribs) 09:17, 2 December 2024 (UTC)
- I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that gptzero and other such tools are very good at detecting this is false. Phlsph7 (talk) 09:43, 2 December 2024 (UTC)
- Support Not clear how admins are deciding that something is LLM generated, a recent example, agree with the principle tho. Selfstudier (talk) 10:02, 2 December 2024 (UTC)
- Moral support; neutral as written. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. Any editor may hat LLM-generated comments in consensus discussions is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations: I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of "only if we're sure" caveats in every discussion about LLM use on the project. If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here. Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything. As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. Folly Mox (talk) 11:20, 2 December 2024 (UTC)
- Looks like a rewrite is indicated to distinguish between machine translation and LLM-generated comments, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut WP:HATGPT. Folly Mox (talk) 15:26, 8 December 2024 (UTC)
- Support per EEng charlotte 👸♥ 14:21, 2 December 2024 (UTC)
- I would be careful here, as there are tools that rely on LLM AI that help to improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) into something clear. I would agree content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted. — Masem (t) 14:19, 2 December 2024 (UTC)
- Support, cheating should have no place or take its place in writing coherent comments on Wikipedia. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. Randy Kryn (talk) 14:29, 2 December 2024 (UTC)
- Support, at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues. The AP (talk) 15:10, 2 December 2024 (UTC)
- Support for LLM-generated content (until AI is actually intelligent enough to create an account and contribute on a human level, which may eventually happen). However, beware of the fact that some LLM-assisted content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. Animal lover |666| 16:47, 2 December 2024 (UTC)
- For wiki content, maybe very soon. 'Contribute on a human level' has already been surpassed in a narrow domain. Sean.hoyland (talk) 17:08, 2 December 2024 (UTC)
- If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. Animal lover |666| 17:35, 2 December 2024 (UTC)
- I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. Dronebogus (talk) 21:38, 5 December 2024 (UTC)
- Strong support chatbots have no place in our encyclopedia project. Simonm223 (talk) 17:14, 2 December 2024 (UTC)
- Oppose - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of course that's not ok. But that is a case by case behavioral issue. I think the better move would be to clarify that "some forms of LLM use can be considered disruptive and may be met with restrictions or blocks" without making it a black-and-white issue. — Rhododendrites talk \\ 17:32, 2 December 2024 (UTC)
- I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. isaacl (talk) 18:07, 2 December 2024 (UTC)
- Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Wikipedia. Tell me what to say that will convince people not to delete it"; (b) "I know Wikipedia deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof [...]". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the arguments and ideas themselves are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — Rhododendrites talk \\ 18:47, 2 December 2024 (UTC)
- Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. Animal lover |666| 18:47, 2 December 2024 (UTC)
- Yes, as I posted earlier, the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. isaacl (talk) 19:08, 2 December 2024 (UTC)
- Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. Just Step Sideways from this world ..... today 20:10, 2 December 2024 (UTC)
- This doesn't address what I wrote (though maybe it's not meant to). "If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them" is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas that are my own and a tool that comes up with the ideas. That's the distinction we should be making. — Rhododendrites talk \\ 21:23, 2 December 2024 (UTC)
- I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. isaacl (talk) 22:58, 2 December 2024 (UTC)
- Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". Skullers (talk) 08:34, 4 December 2024 (UTC)
- Oppose Contributions to discussions are supposed to be evaluated on their merits per WP:NOTAVOTE. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. Andrew🐉(talk) 18:17, 2 December 2024 (UTC)
- But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. Lazman321 (talk) 18:29, 2 December 2024 (UTC)
- Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. Just Step Sideways from this world ..... today 20:12, 2 December 2024 (UTC)
- Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. Aaron Liu (talk) 23:04, 2 December 2024 (UTC)
- Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is ad hominem and that ranks low in the hierarchy – "attacks the characteristics or authority of the writer without addressing the substance of the argument". Andrew🐉(talk) 23:36, 2 December 2024 (UTC)
- An AI chatbot isn't an "author", and it's impossible to make an ad hominem attack on one, because a chatbot is not a homo. EEng 17:45, 6 December 2024 (UTC)
- Well, not all of them, anyway. "Queer spot for the straight bot", maybe? Martinevans123 (talk) 17:51, 6 December 2024 (UTC)
- On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale for stopping someone from participating. Aaron Liu (talk) 23:50, 2 December 2024 (UTC)
- The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example: "Customers say: Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit." (AI-generated from the text of customer reviews). Yes, AI assistants have good potential. My !vote stands. Andrew🐉(talk) 23:23, 2 December 2024 (UTC)
- Let's not get into tangents here. Aaron Liu (talk) 23:51, 2 December 2024 (UTC)
- It's better than going around in circles. EEng 03:07, 3 December 2024 (UTC)
- I asked Google's Gemini to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
AI summary of the RfC to date
- That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands. Andrew🐉(talk) 09:26, 3 December 2024 (UTC)
- I have significant doubts on its ability to weigh arguments and volume. Aaron Liu (talk) 12:30, 3 December 2024 (UTC)
- Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. Lazman321 (talk) 20:08, 4 December 2024 (UTC)
- The quality of the arguments and their relevance to policies and guidelines can indeed only be judged by a human, but the AI does a good job of summarising which arguments have been made and a broad-brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and if so what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLMs is counterproductive. Thryduulf (talk) 21:40, 4 December 2024 (UTC)
- Support Just the other day, I spent ~2 hours checking for the context of several quotes used in an RFC, only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. (My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth) - Butterscotch Beluga (talk) 19:39, 2 December 2024 (UTC)
- Oppose Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - Donald Albury 19:45, 2 December 2024 (UTC)
- Strong support If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- LCU ActivelyDisinterested «@» °∆t° 19:57, 2 December 2024 (UTC)
- Strong oppose. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. ꧁Zanahary꧂ 23:04, 2 December 2024 (UTC)
- Oppose - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions/ask the editor to expand in their own words to clarify if they actually have the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), how the AI was used and in what way (i.e. just for grammar correction, etc). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | me | talk to me! 23:19, 2 December 2024 (UTC)
- Support per what others are saying. And more WP:Ducks while at it… 2601AC47 (talk·contribs·my rights) Isn't a IP anon 00:36, 3 December 2024 (UTC)
- Comment: It would appear Jimbo responded indirectly in an interview:
as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases.
2601AC47 (talk·contribs·my rights) Isn't a IP anon 12:39, 4 December 2024 (UTC)
- Very strong support. Enough is enough. If Wikipedia is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason but you can take steps to keep it from happening. :bloodofox: (talk) 01:13, 3 December 2024 (UTC)
- Just for an example of the types of contributions I think would qualify here under DUCK, some of User:Shawn Teller/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were flagged as likely LLM-generated troll nonsense:
But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel.
This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable.
Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic.
Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. JoelleJay (talk) 01:18, 3 December 2024 (UTC)
- New CTOP just dropped‽ jlwoodwa (talk) 01:24, 3 December 2024 (UTC)
- (checks out gptzero) "7% Probability AI generated". Am I using it wrong? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:28, 3 December 2024 (UTC)
- In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of Talk:Eurovision Song Contest 1999/GA1 are only visible to admins now. jlwoodwa (talk) 01:31, 3 December 2024 (UTC)
- For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" is the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. JoelleJay (talk) 02:17, 3 December 2024 (UTC)
- If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. Thryduulf (talk) 02:39, 3 December 2024 (UTC)
- If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that je ne sais quois I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. JoelleJay (talk) 03:52, 3 December 2024 (UTC)
- I feel that I'm sufficiently unfamiliar with LLM output that I'm not confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the Wikipedia:Consensus policy, which says In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view. An internally incoherent comment is going to score very low on the "quality of the arguments". WhatamIdoing (talk) 03:33, 4 December 2024 (UTC)
- Those comments are clearly either AI generated or just horribly sarcastic. --Ahecht (TALK PAGE) 16:33, 3 December 2024 (UTC)
- Or maybe both? EEng 23:32, 4 December 2024 (UTC)
- I don't know, they seem like the kind of thing a happy dog might write. Sean.hoyland (talk) 05:49, 5 December 2024 (UTC)
- Very extra strong oppose - The tools to detect LLM use are at best not great and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using an LLM. LLMs are also valuable tools for people who are ESL or who just want to refine ideas. So without bulletproof detection this is DOA. PackMecEng (talk) 01:21, 3 December 2024 (UTC)
- We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. JoelleJay (talk) 04:06, 3 December 2024 (UTC)
- One is a legal issue, the other is not. Also close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. PackMecEng (talk) 13:38, 3 December 2024 (UTC)
- While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. Aaron Liu (talk) 13:48, 3 December 2024 (UTC)
- For sure, and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if an LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user, and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through an LLM, only if they are bad arguments. PackMecEng (talk) 14:57, 3 December 2024 (UTC)
- While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
- Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
- LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
- Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
- The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
- Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
- The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ScottishFinnishRadish (talk) 15:01, 3 December 2024 (UTC)
- See, I would assume this comment was written by an LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing; two, note how it's any worse than someone just washing their ideas through an LLM or making general bad arguments; and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with "the overarching concern is not just whether arguments are valid but also if their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way" and honestly that's not a thing that can be quantified or even just an LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. PackMecEng (talk) 16:17, 3 December 2024 (UTC)
- Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will.
- Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot.
- Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen.
- Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it.
- That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it.
- Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft.
- Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like.
- Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense.
- So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ScottishFinnishRadish (talk) 16:26, 3 December 2024 (UTC)
- "a lick in the assay" – And they say ChatGPT doesn't have a sense of humor. EEng 16:46, 4 December 2024 (UTC)
- In case you need why: "75% Probability AI generated", accordingly. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:50, 4 December 2024 (UTC)
- To make my actual point, you said, "I check and consider it as though it was completely written by a person". I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ScottishFinnishRadish (talk) 16:30, 3 December 2024 (UTC)
- ScottishFinnishRadish: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. Writ Keeper ⚇♔ 16:31, 3 December 2024 (UTC)
- And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. PackMecEng (talk) 16:43, 3 December 2024 (UTC)
- I disagree that just because I'm not compelled to read comments by others, that any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how being respectful of other editors includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. isaacl (talk) 00:14, 4 December 2024 (UTC)
- It's a pitfall with English Wikipedia's unmoderated discussion tradition: it always takes many times more effort to follow the rules than to not. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, since far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. isaacl (talk) 18:21, 3 December 2024 (UTC)
- I tend to agree with this.
- I think that what @ScottishFinnishRadish is pointing at is that it doesn't feel fair if one person puts a lot more effort in than the other. We don't want this:
- Editor: Spends half an hour writing a long explanation.
- Troll: Pushes button to auto-post an argument.
- Editor: Spends an hour finding sources to support the claim.
- Troll: Laughs while pushing a button to auto-post another argument.
- But lots of things are unfair, including this one:
- Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar.
- Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity.
- I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Wikipedia tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. WhatamIdoing (talk) 04:04, 4 December 2024 (UTC)
- Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Wikipedia tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. isaacl (talk) 06:01, 4 December 2024 (UTC)
- It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See this and this. Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ScottishFinnishRadish (talk) 11:54, 4 December 2024 (UTC)
- People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educatable and warn, then if needed block, those that can't or won't improve. Thryduulf (talk) 12:13, 4 December 2024 (UTC)
- Oppose. If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors[5], it would simply constitute a waste of Wikipedia manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?
- 1.^ https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study Skibidilicious (talk) 15:06, 11 December 2024 (UTC)
- Oppose. Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own merits without needing to specify that it is AI-generated. GeogSage (⚔Chat?⚔) 01:19, 16 December 2024 (UTC)
The following discussion has been closed. Please do not modify it.
- Nice try, wiseguy! ScottishFinnishRadish (talk) 16:40, 3 December 2024 (UTC)
- Oppose per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. Aaron Liu (talk) 03:02, 3 December 2024 (UTC)
- Oppose. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. Adumbrativus (talk) 05:49, 3 December 2024 (UTC)
- I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he wrote by himself the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Wikipedia to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. Phil Bridger (talk) 10:32, 3 December 2024 (UTC)
- In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. JoelleJay (talk) 00:51, 4 December 2024 (UTC)
- Oppose. No one should remove a comment just because it looks like it is LLM-generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI-generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI-generated or not. The English Wikipedia is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- Parnaval (talk) 11:13, 3 December 2024 (UTC)
- If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? and, to be honest, if their English skills are so poor as to need AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Wikipedia? Black Kite (talk) 11:37, 3 December 2024 (UTC)
- Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. Thryduulf (talk) 11:41, 3 December 2024 (UTC)
- That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Wikipedia without some help, whether human or artificial. But I also wouldn't want to, because I can't write the language well enough to be able to edit articles. I think the English Wikipedia has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. Phil Bridger (talk) 12:11, 3 December 2024 (UTC)
- (Personal attack removed) Zh Wiki Jack ★ Talk — Preceding undated comment added 15:07, 3 December 2024 (UTC)
- Why not write their own ideas in their native language, and then Google-translate it into English? Why bring one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this [7] AfD):
The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.
- Useless garbage untethered to facts or policy. EEng 06:37, 6 December 2024 (UTC)
- Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on google it does seem factual. Thryduulf (talk) 10:37, 6 December 2024 (UTC)
- The proposal's wording can be adjusted. There are some factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. EEng 17:06, 6 December 2024 (UTC)
The proposal's wording can be adjusted
Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. Sincerely, Dilettante 17:16, 6 December 2024 (UTC)
- Speak for yourself: my support !vote already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC – we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". jlwoodwa (talk) 19:05, 6 December 2024 (UTC)
- I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). Sincerely, Dilettante 19:47, 6 December 2024 (UTC)
- If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually is AI. Thryduulf (talk) 01:02, 7 December 2024 (UTC)
- This proposal explicitly covers comments found to have been generated by AI/LLM/Chatbots. "AI that helped me translate something I wrote in my native language" is not the same as AI that generated a comment de novo, as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. JoelleJay (talk) 01:44, 7 December 2024 (UTC)
- Support, more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The example Selfstudier links to above is a great example. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as in ARBPIA editors are now word limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for the detection, for now it's fairly obvious to anyone who is fairly familiar with using an LLM when something is LLM generated. The detection tools available online are basically hot garbage. ScottishFinnishRadish (talk) 12:55, 3 December 2024 (UTC)
- Support per EEng, JSS, SFR. SerialNumber54129 13:49, 3 December 2024 (UTC)
- Soft support - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) Sohom (talk) 14:03, 3 December 2024 (UTC)
- COMMENT : Is there any perfect LLM detector ? I am a LLM ! Are you human ? Hello Mr. Turing, testing 1,2,3,4 ...oo Zh Wiki Jack ★ Talk — Preceding undated comment added 14:57, 3 December 2024 (UTC)
- With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—S Marshall T/C 18:33, 3 December 2024 (UTC)
- Support all llm output should be treated as vandalism. 92.40.198.139 (talk) 20:59, 3 December 2024 (UTC)
- Oppose as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. (Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.) Thebiguglyalien (talk) 02:04, 4 December 2024 (UTC)
- Oppose as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --Goldsztajn (talk) 06:12, 4 December 2024 (UTC)
- Strong support LLMs are a sufficiently advanced form of the Automatic Complaint-Letter Generator (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (LLM-talk-user or LLM-user) to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, caution and not to take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: WikiProject AI Cleanup, are they keeping up with how fast people type a prompt and click a button? Skullers (talk) 07:45, 4 December 2024 (UTC)
- I support the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. Template:Uw-ai1), which are now included in Twinkle. I oppose the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. Toadspike [Talk] 10:29, 4 December 2024 (UTC)
- I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. Toadspike [Talk] 10:41, 4 December 2024 (UTC)
- In other words LLMs are irrelevant - you just want current policy to be better enforced. Thryduulf (talk) 15:24, 5 December 2024 (UTC)
- Oppose Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. Only in death does duty end (talk) 10:50, 4 December 2024 (UTC)
- What if LLM-users just had to note that a given comment was LLM-generated? JoelleJay (talk) 19:01, 4 December 2024 (UTC)
- What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. Thryduulf (talk) 20:04, 4 December 2024 (UTC)
- Well, for one, if they're making an argument like the one referenced by @Selfstudier and @ScottishFinnishRadish above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. JoelleJay (talk) 22:46, 4 December 2024 (UTC)
- And see what ChatGPT itself had to say about that idea, at #ChaptGPT_agrees above. EEng 22:25, 4 December 2024 (UTC)
- Oppose per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?
- DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. Sincerely, Dilettante 21:14, 4 December 2024 (UTC)
[W]e evaluate contributions based on their substance, not by the identity and social capital of the author
- true in theory; not reflected in practice.
are ChatGPT's arguments ipso facto less than a human's?
- Yes. Chatbots are very advanced predictive text engines. They do not have an argument: they iteratively select text chunks based on probabilistic models. As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. Folly Mox (talk) 14:00, 5 December 2024 (UTC)
- "...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model, which performed at about GPT-3 level. There are far better performing models out there now, like GPT-4o and Claude 3.5 Sonnet. Sean.hoyland (talk) 15:24, 5 December 2024 (UTC)
As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models.
- Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Wikipedia policy Disruptive editing" with "DE". I went to ChatGPT, provided it a handful of my comments on Wikipedia and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute.
- In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only to prevent those familiar with my writing style from getting suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
They do not have an argument: they iteratively select text chunks based on probabilistic models.
- I'm aware. If a monkey types up Othello, is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe a text is self-contained and should be considered in its own right, but that's not something I'll sway anyone on or vice versa.
true in theory; not reflected in practice
- So we should exacerbate the issue by formalizing this discrimination on the basis of authorship?
- To be clear, this is my only usage of an LLM anywhere on Wikipedia. Sincerely, Dilettante 01:22, 6 December 2024 (UTC)
In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot.
- So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are not allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book. People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending any language should not be editing Wikipedia... JoelleJay (talk) 03:17, 6 December 2024 (UTC)
- Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.) While I agree that fact-checking is a problem, I weight it much lower than you in relation to the other effects a ban would cause. Aaron Liu (talk) 15:16, 6 December 2024 (UTC)
So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted?
- I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus.
What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported!
- It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need only rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes, or the very real and common scenario that a non-English speaker needs to edit enwiki?
Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop.
- This proposal wouldn't change that for most people because it's about closers.
With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.
- No-one's saying you should take an LLM's word for quotes from a book.
People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?
- It's a pity you're lobbying to ban most machine translators. Sincerely, Dilettante 17:08, 6 December 2024 (UTC)
It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need only rebut one LLM. If anything, that's less time wasted.
- ...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, non-English speakers can use machine translation, like they've done for the last two decades.
This proposal wouldn't change that for most people because it's about closers.
- Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
No-one's saying you should take an LLM's word for quotes from a book.
- Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".
It's a pity you're lobbying to ban most machine translators.
- The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. JoelleJay (talk) 18:42, 6 December 2024 (UTC)
- I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "eternal September" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. isaacl (talk) 19:41, 6 December 2024 (UTC)
Since closers are supposed to consider each contribution individually and without bias to "authorship"
- I'm the one arguing for this to be practice, yes.
then even a shitty but shallowly policy-based position would get consensus based on numbers alone
- That is why I state "per above" and "per User" !votes hold equal potential for misuse.
Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
- We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run afowl of BLUDGEON, DE, SEALION, etc.
If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too.
- Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy: Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful. A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs that the poster doesn't realize the issue and places the burden of proof squarely on them. generate, verb: to bring into existence. If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. Sincerely, Dilettante 21:22, 6 December 2024 (UTC)
- "Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. The whole point is to disregard comments that have been found to be AI-generated; it is not exclusively up to the closer to identify those comments in the first place. Yes, we will be expecting other editors to point out less obvious examples and to ask if AI was used; what is the problem with that? No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc., so that wouldn't solve the problem at all. Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, their !vote still can't be disregarded for that reason alone, which means the burden is still on other editors to prove that the content is false. We are not talking about the contextless meaning of the word "generate"; we are talking about the very specific process of text generation in the context of generative AI, as the proposal lays out very explicitly. JoelleJay (talk) 02:13, 7 December 2024 (UTC)
- I’m not going to waste time debating someone who resorts to claiming people on the other side are either ignorant of technology or are crude strawmans. If anyone else is interested in actually hearing my responses, feel free to ask. Sincerely, Dilettante 16:13, 7 December 2024 (UTC)
- Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... JoelleJay (talk) 22:07, 7 December 2024 (UTC)
- For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. Sincerely, Dilettante 15:42, 8 December 2024 (UTC)
- Support a broad bar against undisclosed LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is often going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd oppose the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on WP:DUCK issues and behavioral patterns that make it clear that someone is abusing LLMs. --Aquillion (talk) 22:08, 4 December 2024 (UTC)
- Support per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. Passengerpigeon (talk) 23:23, 4 December 2024 (UTC)
- Weak oppose in the sense that I don't think all LLM discussion text should be deleted. There are at least a few ESL users who use LLMs for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal WP:PAG) is being much more liberal in hatting LLM comments when the prompter has failed to prevent WP:WALLOFTEXT/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. Mach61 03:05, 5 December 2024 (UTC)
- Oppose Any comments made by any editors are their own responsibility, representing the opinions they chose to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. SilverserenC 05:29, 5 December 2024 (UTC)
- Or, as ChatGPT puts it,
Why banning LLM usage in comments would be detrimental, a ChatGPT treatise
- I'm honestly a bit impressed with the little guy. SilverserenC 05:39, 5 December 2024 (UTC)
- It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people, in my opinion. Simonm223 (talk) 19:44, 6 December 2024 (UTC)
- Support those who are opposing have clearly never had to deal with trolls who endlessly WP:SEALION. If I wanted to have a discussion with a chatbot, I'd go and find one. ~~ AirshipJungleman29 (talk) 13:14, 5 December 2024 (UTC)
- What's wrong with just banning and hatting the troll? Aaron Liu (talk) 13:49, 5 December 2024 (UTC)
- Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. Thryduulf (talk) 15:22, 5 December 2024 (UTC)
- Oppose per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. Frostly (talk) 17:30, 5 December 2024 (UTC)
- Support: the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High GPTZero scores are not enough. The principle should go into policy but under a stonking great caveat - WP:AGF takes precedence and a dim view will be taken of generative-AI inquisitors. arcticocean ■ 17:37, 5 December 2024 (UTC)
- Support If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that LLMs are inherently unethical technology and, consequently, people who rely on them should be made to feel bad. ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should stop doing that because it actually gets in the way of clear communication. I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need WP:BLP, for example, because adhering to WP:V, WP:NPOV, and WP:NOR ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. XOR'easter (talk) 18:27, 5 December 2024 (UTC)
- Strong support - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the thinking. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption. Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason they all sound alike is merely that they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt Wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop. However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop BugGhost 🦗👻 19:15, 5 December 2024 (UTC)
- I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should really know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. Dronebogus (talk) 21:54, 5 December 2024 (UTC)
- I also concur. "I used the machine for generating endless quantities of misleading text to generate more text" is not a good joke. XOR'easter (talk) 22:46, 5 December 2024 (UTC)
- Strong support if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing English Wikipedia?” That’s like a person who doesn’t understand basic physics trying to explain the second law of thermodynamics using a chatbot. Dronebogus (talk) 21:32, 5 December 2024 (UTC)
- I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action.
- So using an AI tool in a WP discussion is not cheating (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. ypn^2 22:36, 5 December 2024 (UTC)
- Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people trying to gain a material edge in an argument. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. Dronebogus (talk) 08:08, 6 December 2024 (UTC)
- Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. Thryduulf (talk) 10:39, 6 December 2024 (UTC)
- Support; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). -sche (talk) 22:10, 5 December 2024 (UTC)
- You don't need a new policy that does nothing but duplicate a subset of existing policy. At most what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. Thryduulf (talk) 23:37, 5 December 2024 (UTC)
- Support: hallucinations are real. We should be taking a harder line against LLM-generated participation. I don't think everyone who is doing it knows that they need to stop. Andre🚐 23:47, 5 December 2024 (UTC)
- Comment - Here is something that I imagine we will see more often. I wonder where it fits into this discussion. A user employs perplexity's RAG based system, search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). Sean.hoyland (talk) 03:13, 6 December 2024 (UTC)
- Support per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. The Kip (contribs) 03:53, 6 December 2024 (UTC)
- I also find it pretty funny that ChatGPT itself said it shouldn't be used, as per the premise posted above by EEng. The Kip (contribs) 03:58, 6 December 2024 (UTC)
- "sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - Towards Understanding Sycophancy in Language Models. They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. Sean.hoyland (talk) 04:26, 6 December 2024 (UTC)
- (Is this where I respond? If not, please move.) LLM-generated prose should be discounted. Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning out their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what WP:CIR means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to explain to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions.
If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Wikipedia discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway Murphy's Law covers this. Yngvadottir (talk)
- I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – Joe (talk) 08:15, 6 December 2024 (UTC)
- Comment I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. A♭m (Ring!) (Notes) 11:58, 6 December 2024 (UTC)
- How do you propose that such text be identified by an edit filter? LLM detection tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)
- It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)
- It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)
- Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)
- I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)
- They will have to work at some point (e.g. [8][9]). Sean.hoyland (talk) 06:27, 7 December 2024 (UTC)
- Good news! Queen of Hearts is already working on that in 1325. jlwoodwa (talk) 16:12, 6 December 2024 (UTC)
- Comment As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning WP:LLM, which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. Preimage (talk) 15:53, 6 December 2024 (UTC)
- Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. Aaron Liu (talk) 17:34, 6 December 2024 (UTC)
- Those pesky sealion Chatbots are just the worst! Martinevans123 (talk) 18:41, 6 December 2024 (UTC)
- Some translation tools have LLM assistance, but the whole point of generative models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. JoelleJay (talk) 19:01, 6 December 2024 (UTC)
- That might be what the proposal intends to cover, but it is not what the proposal actually covers. The proposal covers all comments that have been generated by LLMs and/or AI, without qualification. Thryduulf (talk) 01:05, 7 December 2024 (UTC)
- 70+% here understand the intention matches the language: generated by LLMs etc means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually create meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is not true for text generation. JoelleJay (talk) 04:01, 7 December 2024 (UTC)
- Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of people misreading a policy indicates the policy is badly worded. Thryduulf (talk) 08:34, 7 December 2024 (UTC)
- I think @Bugghost has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. JoelleJay (talk) 22:54, 7 December 2024 (UTC)
- There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person nor the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases; it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. Sean.hoyland (talk) 10:46, 7 December 2024 (UTC)
- Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, have to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to not formally proscribe wholesale comment generation. JoelleJay (talk) 22:27, 7 December 2024 (UTC)
- That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. Sean.hoyland (talk) 04:14, 8 December 2024 (UTC)
- Support "I Am Not A ChatBot; I Am A Free Wikipedia Editor!" Martinevans123 (talk) 18:30, 6 December 2024 (UTC)
- Comment: The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of what? — ypn^2 19:32, 6 December 2024 (UTC)
- That means that supporters support the proposal that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. BugGhost 🦗👻 19:40, 6 December 2024 (UTC)
- Doing something? At least the stage could be set for a follow-on discussion. Selfstudier (talk) 19:40, 6 December 2024 (UTC)
- More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. Thryduulf (talk) 00:58, 7 December 2024 (UTC)
- Arguing one point doesn't necessarily require refuting every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer Wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning them through an LLM first. BugGhost 🦗👻 09:00, 7 December 2024 (UTC)
- If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. Thryduulf (talk) 09:05, 7 December 2024 (UTC)
- Ok, here's a list of the main opposition reasonings, with individual responses.
- What about translations? - Translations are not up for debate here; the topic is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposition should discount translated text, just oppose voters who are trying to muddy the waters.
- What about accessibility? - This could be a legitimate argument, but I haven't seen it substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
- Detection tools are inaccurate - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that.
- False positives - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We've just got to as a community become skilled at identifying genuine cases, just like we do for every other rule.
- LLM content should be taken at face value to see if it violates some other policy - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/Twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing; it is mainly used to spam and disrupt discussions with fake opinions that astroturf political positions. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full-throated wave of AI bots on Wikipedia discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
- I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. BugGhost 🦗👻 10:08, 7 December 2024 (UTC)
- There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. Sean.hoyland (talk) 11:00, 7 December 2024 (UTC)
- The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. Gnomingstuff (talk) 02:22, 10 December 2024 (UTC)
it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions
- By disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Wikipedia (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on Twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? BugGhost 🦗👻 09:16, 10 December 2024 (UTC)
- You can't. That's the point. This is kind of the whole idea of WP:AGF. Gnomingstuff (talk) 20:22, 13 December 2024 (UTC)
Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first.
- Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself and my credibility vanishes, while in the worst case nobody understands what I mean. As low confidence is often built up through negative feedback, it's usually meritful, or was meritful at some point, for someone to seek outside help. Aaron Liu (talk) 23:46, 8 December 2024 (UTC)
- While I sympathise with that hypothetical, Wikipedia isn't therapy and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. BugGhost 🦗👻 18:45, 10 December 2024 (UTC)
- We also shouldn't do long-term harm to the project just because a few users are wedded to the idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM comments, which at present are all either useful, harmless, or collapsible/removable/ignorable. Thryduulf (talk) 19:06, 10 December 2024 (UTC)
The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project
- The same could be said for WP:DUCK. The reason why it's not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. BugGhost 🦗👻 09:07, 11 December 2024 (UTC)
- I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. For those with anxiety, the imagined potential for embarrassment always outweighs rational evaluation of the actual outcomes, and anxiety is not a behaviorally disruptive condition. Aaron Liu (talk) 02:47, 11 December 2024 (UTC)
- I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. BugGhost 🦗👻 08:52, 11 December 2024 (UTC)
- Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. Thryduulf (talk) 09:05, 7 December 2024 (UTC)
- Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. BugGhost 🦗👻 09:00, 7 December 2024 (UTC)
- Support strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to WP:CIR and similar (in terms of ESL communication) without risking hallucinations by AI. Soni (talk) 02:52, 7 December 2024 (UTC)
- Oppose the use of AI detection tools. False positive rates for AI-detection are dramatically higher for non-native English speakers. AI detection tools had a 5.1% false positive rate for human-written text from native English speakers, but human-written text from non-native English speakers had a 61.3% false positive rate. ~ F4U (talk • they/it) 17:53, 8 December 2024 (UTC)
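The false positive rates cited above can be put in perspective with a quick Bayes calculation. This is a rough sketch: the 61.3% false positive rate comes from the figures above, but the 5% base rate of AI-generated comments and the 90% detection rate are purely illustrative assumptions.

```python
# Back-of-envelope Bayes check: how informative is a detector flag on
# text written by a non-native English speaker?
# fpr=0.613 is the cited false positive rate; base_rate and tpr are
# illustrative assumptions, not measured values.
def posterior_ai_given_flag(base_rate, tpr, fpr):
    """P(comment is AI-generated | detector flags it), by Bayes' theorem."""
    p_flagged = base_rate * tpr + (1 - base_rate) * fpr
    return base_rate * tpr / p_flagged

p = posterior_ai_given_flag(base_rate=0.05, tpr=0.90, fpr=0.613)
print(f"{p:.1%}")  # about 7%: under these assumptions most flags are false alarms
```

Under these (assumed) numbers, a detector flag on an ESL editor's comment would mean only about a 7% chance the text was actually AI-generated.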
- Oppose - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. Alenoach (talk) 01:12, 9 December 2024 (UTC)
- Oppose – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). Dhtwiki (talk) 10:27, 9 December 2024 (UTC)
- Comment / Oppose as written. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one [real book] excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Wikipedia talk page posts, but not fine with any recommendations of LLM-detectors. SnowFire (talk) 20:00, 9 December 2024 (UTC)
- @SnowFire, I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). JoelleJay (talk) 01:47, 11 December 2024 (UTC)
- I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was that minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and forbid using alleged "AI detectors" instead. SnowFire (talk) 03:49, 11 December 2024 (UTC)
- Support collapsing AI (specifically, Large language model) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a Large language model" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Wikipedia editors; we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Wikipedia, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into beyond opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. MolecularPilot 🧪️✈️ 05:10, 10 December 2024 (UTC)
- Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). MolecularPilot 🧪️✈️ 05:17, 10 December 2024 (UTC)
- Also, with regards to oppose comments that discussions are not a vote so closers will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing; they're borderline WP:BLUDGEONy. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. MolecularPilot 🧪️✈️ 06:38, 10 December 2024 (UTC)
- Support (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the context, not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. — HELLKNOWZ ∣ TALK 15:41, 10 December 2024 (UTC)
- Strong Support. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. Useight (talk) 18:06, 10 December 2024 (UTC)
- Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positives or false negatives inherent in this proposal. Thryduulf (talk) 18:25, 10 December 2024 (UTC)
- Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have WP:NOTHERE for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. Useight (talk) 21:54, 11 December 2024 (UTC)
- There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. Thryduulf (talk) 22:03, 11 December 2024 (UTC)
- To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of WP:NOTHERE, and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. Useight (talk) 23:26, 11 December 2024 (UTC)
- You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. Thryduulf (talk) 00:42, 12 December 2024 (UTC)
- I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. PackMecEng (talk) 01:43, 12 December 2024 (UTC)
- No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. Useight (talk) 15:59, 12 December 2024 (UTC)
- It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like this demonstrates that there will likely be cases where machines outperform humans in specific Wikipedia domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. Sean.hoyland (talk) 16:25, 12 December 2024 (UTC)
- But your vote and the proposal above make no distinction there. Which is the main issue. Also, not to be pedantic, but every prompt to an LLM is written by a human looking to convey a message. Every time someone hits publish on something here, that person is confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. PackMecEng (talk) 18:53, 12 December 2024 (UTC)
- The proposal specifies content generated by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. JoelleJay (talk) 01:52, 13 December 2024 (UTC)
- Yes in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that needs to be used appropriately on Wikipedia. Awesome Aasim 19:55, 10 December 2024 (UTC)
- Support. Using an LLM to reply to editors is lazy and disrespectful of fellow editors' time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines. By the way:
gptzero and other such tools are very good at detecting this
I don't think this is correct at all. I believe the false positive rate for AI detectors is quite high. High enough that I would recommend not using AI detectors. –Novem Linguae (talk) 03:23, 11 December 2024 (UTC)
- Question @Just Step Sideways: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. User1042💬✒️ 14:16, 11 December 2024 (UTC)
- This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. Thryduulf (talk) 14:26, 11 December 2024 (UTC)
- Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. BugGhost 🦗👻 19:20, 11 December 2024 (UTC)
- Oppose @Just Step Sideways: The nomination's 2nd paragraph, run through https://www.zerogpt.com/ , gives "11.39% AI GPT*":
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner
The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ , wherewith how will we deal? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? 172.97.141.219 (talk) 17:41, 11 December 2024 (UTC)
- Support with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). Horse Eye's Back (talk) 19:24, 11 December 2024 (UTC)
- Strong support, limited implementation.
Wikipedia is written by volunteer editors
, says our front page. This is who we are, and our writing is what Wikipedia is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. Retswerb (talk) 04:11, 12 December 2024 (UTC)
- Support in concept, questions over implementation — I concur with Dronebogus that users who rely on LLMs should not edit English Wikipedia. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. elijahpepe@wikipedia (he/him) 22:39, 12 December 2024 (UTC)
- Many, possibly most, online translators use artificial intelligence based on LLMs these days. Thryduulf (talk) 22:46, 12 December 2024 (UTC)
- There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. elijahpepe@wikipedia (he/him) 22:59, 12 December 2024 (UTC)
- Neither your comment nor the original proposal make any such distinction. Thryduulf (talk) 23:34, 12 December 2024 (UTC)
- Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I encourage using it (once again, if you aren’t competent in English please don’t edit here) Dronebogus (talk) 07:34, 13 December 2024 (UTC)
- LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only the ESL but also the avid but shy. It has far more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. Aaron Liu (talk) 00:05, 13 December 2024 (UTC)
- Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "adaptive translation" API, but it does not incorporate LLMs into translating your everyday text input, which still relies on NMTs. And even for the API features, the core constraint of matching input rather than generating content is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the generative part that is key to this proposal is not present. JoelleJay (talk) 01:20, 13 December 2024 (UTC)
PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI.[1]
If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching Ship of Theseus levels of recombination, to which my answer is it is the same ship.
obviously it would be very bad for a translation tool to insert material not found in the original text!
- That happens! Aaron Liu (talk) 01:29, 13 December 2024 (UTC)
- PaLM2 is not used in the consumer app (Google Translate), it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. JoelleJay (talk) 01:34, 13 December 2024 (UTC)
- Well, is the NMT really different enough to not be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. Aaron Liu (talk) 02:01, 13 December 2024 (UTC)
- Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it an LLM, because the param count is 380M not 1.8T.
the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be
No, that definition would fit ancient RNN tech too. 172.97.141.219 (talk) 17:50, 13 December 2024 (UTC)
- Even if you don't consider it L, I do, and so do many sources cited by the article. Since we'll have such contesting during enforcement, it's better to find a way that precludes such controversy. Aaron Liu (talk) 20:44, 13 December 2024 (UTC)
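As an aside, the "predicts the next token one at a time" definition debated above describes the decoding loop rather than the model inside it, which is why it fits old RNNs and NMT decoders as well as LLMs. A minimal illustrative sketch (the toy bigram table is a stand-in for any real model, not an actual architecture):

```python
# Toy autoregressive generation loop. Any model plugged into this loop -
# RNN, NMT decoder, or transformer LLM - "predicts the next token one at
# a time", so that property alone cannot distinguish between them.
# The bigram table is a purely illustrative stand-in for a trained model.
bigram = {"the": "cat", "cat": "sat", "sat": "down"}

def generate(seed, steps):
    tokens = [seed]
    for _ in range(steps):
        nxt = bigram.get(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)  # autoregressive: each output feeds back in as input
    return tokens

print(generate("the", 3))  # ['the', 'cat', 'sat', 'down']
```

What distinguishes model families is the architecture and training objective inside the loop, not the loop itself.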
- NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, not to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools what they'll be used for is still not generative when it comes to translation output. JoelleJay (talk) 22:29, 13 December 2024 (UTC)
- How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? Aaron Liu (talk) 23:30, 13 December 2024 (UTC)
- The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. JoelleJay (talk) 23:56, 13 December 2024 (UTC)
- Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. Aaron Liu (talk) 00:33, 14 December 2024 (UTC)
- I think there is consensus that GPTZero is not usable, even for obvious cases. JoelleJay (talk) 00:55, 14 December 2024 (UTC)
- Yes, but being as far as 100% means people will also probably think the rewrite ChatGPT-generated. Aaron Liu (talk) 01:18, 14 December 2024 (UTC)
- Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. jlwoodwa (talk) 05:26, 14 December 2024 (UTC)
- My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. Aaron Liu (talk) 14:18, 14 December 2024 (UTC)
- Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. jlwoodwa (talk) 05:26, 14 December 2024 (UTC)
- Yes, but being as far as 100% means people will also probably think the rewrite ChatGPT-generated. Aaron Liu (talk) 01:18, 14 December 2024 (UTC)
- I think there is consensus that GPTZero is not usable, even for obvious cases. JoelleJay (talk) 00:55, 14 December 2024 (UTC)
- On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. WindTempos they (talk • contribs) 23:21, 12 December 2024 (UTC)
- Support in principle, but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides WP:ILIKEIT and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. The WordsmithTalk to me 05:17, 13 December 2024 (UTC)
- Oppose. For creating unnecessary drama. First of, the "detector" of the AI bot is not reliable, or at least the reliability of the tool itself is still questionable. If the tool to detect LLM itself is unreliable, how can one reliably point out which one is LLM and which one is not? We got multiple tools that claimed to be able to detect LLM as well. Which one should we trust? Should we be elevating one tool over the others? Have there been any research that showed that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLM as a virus that will somehow take over the Internet or something. Some editors use LLM to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per WP:DUCK are good, but totally banning them is plain wrong.
- ✠ SunDawn ✠ (contact) 22:56, 15 December 2024 (UTC)
Alternate proposal
Whereas many editors, including me, have cited problems with accuracy in regards to existing tools such as ZeroGPT, I propose that remarks that are blatently generated by a LLM or similar automated system should be discounted/removed/collapsed/hidden. ThatIPEditor They / Them 10:00, 10 December 2024 (UTC)
- Oppose as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that this is already policy so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which will happen, regardless of whether tools are used). Thryduulf (talk) 11:19, 10 December 2024 (UTC)
- I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. BugGhost 🦗👻 18:32, 10 December 2024 (UTC)
- What does blatantly generated mean? Does you mean only where the remark is signed with "I, Chatbot", or anything that appears to be LLM-style? I don't think there's much in between. ypn^2 19:21, 10 December 2024 (UTC)
- Procedural close per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. Aaron Liu (talk) 03:00, 11 December 2024 (UTC)
I wonder, if there any wiki-volunteers, who have appeals experience, and who would be willing to stand up for the Neutral Point of View Pillar of Wikipedia.
I was banned from editing a specific topic after I stood up for WP:NPV . I do not really care much about the topic, but I care about Wiki-Policies, and I feel compelled to defend WP:NPV , when it is violated by wiki-administrators. Usually, when you go to a court/appeal court in the USA, you can get a free counselor, who helps you with the process. I wonder, if there any wiki-volunteers, who have appeals experience, and who would be willing to stand up for the 2nd of the Five Pillars - Neutral Point of View.Walter Tau (talk) 23:16, 4 December 2024 (UTC)
A short description of the case can be found here: https://en.wikipedia.org/w/index.php?title=Talk:Russian_invasion_of_Ukraine&action=edit§ion=6 — Preceding unsigned comment added by Walter Tau (talk • contribs) Analysis of the causes and results of the Russo-Ukrainian War by [[Political science| political scientists I claim, that the article as written violates Wikipedia:Neutral point of view policy= means representing fairly, proportionately, and, as far as possible, without editorial bias, ALL the significant views that have been published by reliable sources on a topic. Please note, that I do not insist on adding anything about Douglas Macgregor's and Scott Ritter's views (although I support others, if they want to write about them), but I cannot disregard John Mearsheimer, Stephen Walt and several other political scientists. I shall start with addressing the statement by Manyareasexpert on 2024-11-26T10:35:23 : “undo back to consensus version - objections raised in talk, edit war”. Let’s talk about the consensus first. Here is a citation from the Talk Page for Russian invasion of Ukraine on ca. 31 October 2024 (UTC): — Preceding unsigned comment added by Walter Tau (talk • contribs) 19:39, December 4, 2024 (UTC)
|
Guidance on illustrative use of AV, especially readings and subtitling
Hi there,
[EDIT: this is specifically requested regarding use of AV for illustrative, rather than sourcing, purposes. Compare MOS:IMAGES; there is no similar guidance for illustrative audio-video content.]
From a couple of recent conversations I think that MOS could do with a bit more guidance on the use of audio and video content. I know policy development can be difficult and tedious, so I don't say this lightly, but I have encountered some situations where guidance would be beneficial.
A option would be to amend MOS:IMAGES so explain that most of the guidance also applies to illustrative uses of Audio visual content.
Specifically:
- Where a media file is used, as a recording of an original source, what are the verification requirements? For example, if someone recorded a song, does it need comparison to the original score? How far should it deviate?
- What are the "aesthetic" considerations?
- If AV needs subtitling or translation, which is preferred? Translations once recorded, for example, are very hard to edit or correct compared to subtitles.
- How do we cater for users' needs and preferences? Subtitling seems a good way to go.
- Are there benefits to hearing the original for the user, even where they do not speak the language? Where might these occur more (eg, in literary or poetical works, hearing the original is especially useful)
- Are there preferences on audio-video length? eg, are shorter clips generally preferable, and links preferred to long form content?
Although the answers seem fairly obvious to me, I've found there is not always understanding or consensus on these points. I think some of this may be cultural - in particular many EN speakers are resistant to foreign language content, and thus to original language content where that is not English. Other elements are UX matters which is again not always obvious at a glance. Discussion and guidance might help find the right criteria and balance for assessment. Jim Killock (talk) 12:27, 6 December 2024 (UTC)
- It's longstanding policy that sources don't have to be in English but where possible English translations should be provided. Therefore subtitles seem like the policy-compliant option. Where you link to long-format media, provide a timestamp in your link which points to the part that directly supports the claim you're making.—S Marshall T/C 13:09, 6 December 2024 (UTC)
- Thank you; and apologies for not being more precise, I've edited the comment and title above to be clearer about what kind of guidance I think is missing, which is regarding illustrative usages rather than citation. Jim Killock (talk) 13:39, 6 December 2024 (UTC)
- I added a comment at MOS:IMAGES talk page. Jim Killock (talk) 17:57, 6 December 2024 (UTC)
Citations in anthroponymy lists
- User:Alalch E. has changed this section's title from the already descriptive "Removing sources", because this user disagrees how it describes the user's edits. – Editør (talk) 11:50, 8 December 2024 (UTC)
A user removed source references from a list in the good article Femke, which seems like vandalism to me. Can someone perhaps weigh in on the talk page discussion before this turns into an edit war? – Editør (talk) 02:52, 8 December 2024 (UTC)
- VPP is a good place to discuss the following portion of the widely-followed Wikipedia:WikiProject Anthroponymy/Standards, evidenced in the fact that anthroponymy lists (a type of WP:SIA, but functionally and style-wise often very similar to a dab), do not have a citation next to each entry, and that your idea to add these citations, however justified and obvious of an improvement it may seem to you, is a new idea that may not seem equally justified to everyone else ... said portion is:
Entries have certain limitations to promote consistency and usability. Not unlike Disambiguation pages, Names articles may contain lists of persons and each entry should follow a particular format.
Entries should not include External links. References are not required since the article that the entry is linked to should include citations.
- Instead of weighing in on whether to call me a vandal and forecasts of edit warring, let us conduct a review of WikiProject Anthroponymy's WP:ADVICEPAGE. —Alalch E. 10:39, 8 December 2024 (UTC)
- It's definitely not vandalism. But, Alalch E, the fact that references "aren't required" doesn't mean they're banned. I think you should let people add sources if they want.—S Marshall T/C 11:13, 8 December 2024 (UTC)
- I agree that it is not vandalism according to Wikipedia:Vandalism, but I believe @Alalch E. shows intential disruptive behaviour, including changing the heading of this post, which I have now changed back so I will against receive notification of new comments. – Editør (talk) 11:21, 8 December 2024 (UTC)
- You don't own section headings. I have changed the heading back to a descriptive heading. Stop that please. See WP:SECTIONHEADINGOWN. —Alalch E. 11:24, 8 December 2024 (UTC)
- Please stop your intentionally disruptive editing! – Editør (talk) 11:27, 8 December 2024 (UTC)
- Please take a short break from this topic of something like an hour to get some perspective. You have started from an assumption of bad faith and are seeming less and less reasonable by the minute. Kindly let a few more editors weigh in. Nothing here is urgent. —Alalch E. 11:28, 8 December 2024 (UTC)
- In addition to your "Removing sources" from the article Femke, you have reverted my edits to that article, made changes to my post here, and made changes to my comments on two talk pages. This is disruptive behaviour, even if it is not intentional. Please stop this immediately. – Editør (talk) 11:36, 8 December 2024 (UTC)
- Have you read the portions of the guidelines that I have linked in response to your attempts to enforce talk page headings and to determine the level of sections on my talk page? From the beginning of this dispute last night, you seem unusually distrustful, and more and more bent on enforcing your view of how things should be, even details that you have no control of, such as my talk page. Please step back to get a little perspective and let a few more editors weigh in. —Alalch E. 11:40, 8 December 2024 (UTC)
- With your changes to this section's heading you are effectively trying to change how I am describing your disruptive behaviour here and what I am asking help for. – Editør (talk) 11:46, 8 December 2024 (UTC)
- See the header of this page:
The policy section of the village pump is used to discuss already-proposed policies and guidelines and to discuss changes to existing policies and guidelines. Change discussions often start on other pages and then move or get mentioned here for more visibility and broader participation
(emphasis mine). If you want to discuss my purportedly disruptive behavior, you should perhaps start a section at WP:ANI. But since you have started a section here already, perhaps do not start too many discussions in quick sequence. —Alalch E. 11:50, 8 December 2024 (UTC)- Please stop trying to control my comments. – Editør (talk) 11:52, 8 December 2024 (UTC)
- That's not a reasonable remark. What do you think about my already made observation that you are seeming less and less reasonable by the minute? —Alalch E. 11:55, 8 December 2024 (UTC)
- Please stop trying to control my comments. – Editør (talk) 11:52, 8 December 2024 (UTC)
- See the header of this page:
- With your changes to this section's heading you are effectively trying to change how I am describing your disruptive behaviour here and what I am asking help for. – Editør (talk) 11:46, 8 December 2024 (UTC)
- Have you read the portions of the guidelines that I have linked in response to your attempts to enforce talk page headings and to determine the level of sections on my talk page? From the beginning of this dispute last night, you seem unusually distrustful, and more and more bent on enforcing your view of how things should be, even details that you have no control of, such as my talk page. Please step back to get a little perspective and let a few more editors weigh in. —Alalch E. 11:40, 8 December 2024 (UTC)
- In addition to your "Removing sources" from the article Femke, you have reverted my edits to that article, made changes to my post here, and made changes to my comments on two talk pages. This is disruptive behaviour, even if it is not intentional. Please stop this immediately. – Editør (talk) 11:36, 8 December 2024 (UTC)
- Please take a short break from this topic of something like an hour to get some perspective. You have started from an assumption of bad faith and are seeming less and less reasonable by the minute. Kindly let a few more editors weigh in. Nothing here is urgent. —Alalch E. 11:28, 8 December 2024 (UTC)
- Please stop your intentionally disruptive editing! – Editør (talk) 11:27, 8 December 2024 (UTC)
- You don't own section headings. I have changed the heading back to a descriptive heading. Stop that please. See WP:SECTIONHEADINGOWN. —Alalch E. 11:24, 8 December 2024 (UTC)
- @S Marshall: Even though WP:SETNOTDAB applies, anthro lists are probably the most dab-like of all lists, and their entries are intentionally styled the same as dab page entries because these lists and disambiguation pages are closely interlinked, and for a reader who wants a seamless experience of browsing for a person and/or exploring names, the appearance should be consistent. Take a look at List of people named James for example. —Alalch E. 11:23, 8 December 2024 (UTC)
- Alalch, I think that this dispute puts the disputed content over the (rather low) threshold for "challenged or likely to be challenged" within the meaning of WP:V. I think core content policy trumps "seamless" or "consistent appearance". I hope that you will consider allowing Editør to add his citations, and I also hope you will reflect on whether you ought to be editing someone else's words to retitle this VPP thread.—S Marshall T/C 13:14, 8 December 2024 (UTC)
- The original title was "Removing citations": a discussion of one editor's actions which should be at ANI if anywhere. The current title "Citations in Anthroponymy lists" reflects the fact that the discussion is about policy: whether references should be included for blue-linked name-holder-list entries in Anthroponymy articles. On the one hand we have an article failed for GA because of an uncited list; on the other hand we have the standards of the Anthroponymy project which do not include such references. PamD 13:23, 8 December 2024 (UTC)
- Alalch, I think that this dispute puts the disputed content over the (rather low) threshold for "challenged or likely to be challenged" within the meaning of WP:V. I think core content policy trumps "seamless" or "consistent appearance". I hope that you will consider allowing Editør to add his citations, and I also hope you will reflect on whether you ought to be editing someone else's words to retitle this VPP thread.—S Marshall T/C 13:14, 8 December 2024 (UTC)
- I agree that it is not vandalism according to Wikipedia:Vandalism, but I believe @Alalch E. shows intential disruptive behaviour, including changing the heading of this post, which I have now changed back so I will against receive notification of new comments. – Editør (talk) 11:21, 8 December 2024 (UTC)
- This discussion follows a discussion at Talk:Tamara (given name)#List of names removal, where an editor was keen to remove the uncited list of name-holders (without creating a free-standing list, just removing them from the encyclopedia) so that the article might reach Good Article status. The article had been quick-failed for Good Article by @Voorts: on grounds including
The notable people and fictional character sections require citations for each of the entries.
I pointed out there that there are no single-name Anthroponymy Featured Articles to use as models, but that the three Good Articles included one with an uncited list of given-name holders (Femke), one with a link to a free-standing uncited list of name-holders, and one with a fully cited list of name-holders, all of whom were red links. That may have drawn attention to Femke and inspired an editor to add sources to all its name-holders. - I do not think that references are needed in lists of name-holders in anthroponymy articles, where the information about the person is limited to name, dates and description based on the lead sentence of their article. Such unnecessary references clutter the article and should be avoided. If there needs to be an amendment to the standards followed for GA review, then this should be done, to avoid further disagreements. PamD 13:08, 8 December 2024 (UTC)
- I do not see how references at the end of lines clutter an article. GA reviews don't have specific rules for certain types of articles, but in general an entirely unsourced section is a likely cause for pause for a reviewer. CMD (talk) 13:17, 8 December 2024 (UTC)
- Like a lot of other places where we do say "references are not required" (for example, in the case of plot summaries), removing references that actually do work to support the content should not be removed. "not required" is not the same as "not allowed". Whether references should be kept or use is a talk page issue to debate but an editor should not go around removing references without consensus just because they are "not required". --Masem (t) 13:27, 8 December 2024 (UTC)
- (after edit conflict) I don't see any need to require citations for such lists. I also don't see any point in removing them if someone has gone to the trouble of providing them, but it is not vandalism. Surely we can cope with some minor inconsistencies between articles? Phil Bridger (talk) 13:30, 8 December 2024 (UTC)
- I argue that despite anthro lists specifically not being dab pages, they are functionally the closest thing to a dab page and are intentionally styled to look like one (MOS:DABPEOPLE:
... only enough descriptive information that the reader can distinguish between different people with the same name
) because of very close interrelatedness to dab pages (the difference is highly technical and imperceptible to a reader, who will seamlessly go from a people dab to an anthro list and back not knowing that they have visited different types of Wikipedia article space pages), and the age-old practice has been that MOS:DABNOLINK applies to such dab-equivalent entries (References should not appear on disambiguation pages. Dab pages are not articles; instead, incorporate the references into the target articles.
). Not spelled out anywhere and recorded as "not required" in WP:APO/S, but in evident practice, the references are not just not required, they are unwanted. The article is better without them as the experience for the reader is better without them. —Alalch E. 14:13, 8 December 2024 (UTC)- I agree. I'm actually not convinced that lists of given-name holders are particularly worthwhile, but lists of surname holders are vital. As well as possibly helping those interested in the surname in itself, they help the much more common reader who finds a reference to "Bloggs' earlier work on the topic" or "X was influenced by Davies" and needs to scan a list of surname-holders to find the person, forename and initials unknown, who is being referred to. Dates and field of occupation are important - an 18th-century botanist may be the answer where a 20th-century tennis player is not. These lists need to be as complete as possible, to help the reader.
- If we go down the path where some editors add references to these lists, then we might move seamlessly along a path of references being "expected", not least for consistency in those articles, and newly-added unsourced entries being criticised, tagged, and perhaps routinely deleted as "unsourced BLP" by enthusiastic editors. Inevitably names will be added without references, but other editors will just stop bothering to add a name to a surname list because finding a single ref, or a small number, which elegantly sources all of their dates, nationality and occupation (or occupations) may be non-trivial. The reader would lose out.
- So I argue that adding references to name-holder lists is positively unhelpful, and removing such refs is useful.
- The time spent in adding such references could so much better be spent in improving genuinely unsourced or under-referenced articles: it's alarming to think that this might become a "favourite editing job", or even a "nice simple job recommended for novice editors". PamD 16:11, 8 December 2024 (UTC)
- I want to note that I'm fine removing references, despite my QF of the Tamara GA. I was applying the GA criteria and guidance at SIA, which says that citations are required if information beyond a wikilink is provided. I also wasn't aware of that part of the WikiProject Anthroponymy standards at the time. If there's consensus that these kinds of lists don't need citations, that's fine with me. Adopting this rule might affect whether these articles are eligible for FLC (see below) or GA/FA. voorts (talk/contributions) 19:00, 8 December 2024 (UTC)
- I argue that despite anthro lists specifically not being dab pages, they are functionally the closest thing to a dab page and are intentionally styled to look like one (MOS:DABPEOPLE:
- (ec) I can see an argument for not citing bluelinked namehavers in anthroponymy lists. What guides the choice of source? In the removal diff linked in the OP, I'm seeing a lot of citations to sources that establish the existence of various Femkes. Especially for the athletes, there's no indication from these sources why the Femke attested is notable.In the diff, Femke Verstichelen is cited to https://www.uci.org/rider-details/94895, which provides her nationality, birthdate, sanctions (none), and two entries for Team History. This is a database entry that does nothing to establish notability, and accordingly isn't used as a reference in her article (it's an external link).Again in the diff, Femke Van den Driessche is supported by the source https://olympics.com/en/athletes/femke-van-den-driessche, the content of which reads in full "Cycling
<br />
Year of birth 1996". This source – another database record – doesn't even establish nationality, and isn't linked from the subject's article at all.I haven't clicked through to many of these, but the impression I'm getting is that the sources aren't really valuable here. I'm not trying to argue that bluelinks in anthroponymy lists have to establish notability in the list rather than just in the target article, but if we're going to bother adding citations for these people, why not make them informative and relevant? It's just a wasted clickthrough if a reader navigates to these database records instead of visiting the target article.In general I do feel like lists of this type are disimproved by citations. If citations must be added by choice to anthroponymy lists like this, I feel the least worst solution would be to bundle them all into a single pair of<ref>...</ref>
tags following the introductory sentence, which would make the section much easier to edit and significantly reduce bloat to the==References==
section. Folly Mox (talk) 16:13, 8 December 2024 (UTC)
I have added sources for the list of name bearers in the article Femke, because birth years and professions are sometimes listed wrongly and can be challenged. Therefore the sources are required by the Wikipedia:Good article criteria, specifically criterion #2b that states "reliable sources are cited inline. All content that could reasonably be challenged, except for plot summaries and that which summarizes cited content elsewhere in the article, must be cited no later than the end of the paragraph (or line if the content is not in prose)". So good articles should never rely on sources not cited inside the article itself. And removing sources because it is an editor's opinion they don't look nice goes against the good article criteria and against Wikipedia's core principle of verifiability. Sourcing lists of people isn't unusual as it is also common practice for articles like Births in 2000. However, as far as I'm concerned, sourcing lists doesn't need to be demanded for all lists of name bearers in articles about given names, but it should at the very least be accepted. – Editør (talk) 16:48, 8 December 2024 (UTC)
- @Hey man im josh: I believe you pointed out to me that given name articles probably shouldn't go through GA to begin with since SIAs are lists. Is that still your view? voorts (talk/contributions) 18:36, 8 December 2024 (UTC)
- I have mixed feelings on it, but I have generally felt that the name articles are often more akin to lists, depending on how many entries and the depth of the information on the name itself is. Hey man im josh (talk) 18:52, 8 December 2024 (UTC)
- Given name articles are sometimes just one sentence or paragraph with a list of names that looks like a disambiguation page. I tried to develop one given name article further and show that it can even be a good article where the list of names is just one section. I hoped that it could be an example to inspire others to improve given name articles as well. So some are set index articles, but others just have set index sections ({{given name}} using the
section=y
parameter). And in some cases the list is split off, such as the long List of people named David for David (name). There are simply different solutions possible that suit different names. – Editør (talk) 20:27, 8 December 2024 (UTC)
- Given name articles are sometimes just one sentence or paragraph with a list of names that looks like a disambiguation page. I tried to develop one given name article further and show that it can even be a good article where the list of names is just one section. I hoped that it could be an example to inspire others to improve given name articles as well. So some are set index articles, but others just have set index sections ({{given name}} using the
- I have mixed feelings on it, but I have generally felt that the name articles are often more akin to lists, depending on how many entries and the depth of the information on the name itself is. Hey man im josh (talk) 18:52, 8 December 2024 (UTC)
Should first language be included in the infobox for historical figures?
Is there a guideline concerning this? "Infobox royalty" apparently has this parameter, but I haven't found a single article that actually uses it. Many articles don't mention the subject's spoken languages at all. In my view, somebody's first language (L1) is just a very basic and useful piece of information, especially for historical figures. This would be helpful in cases where the ruling elites spoke a completely different language from the rest of the country (e.g., High Medieval England or early Qing dynasty China). These things are not always obvious to readers who are unfamiliar with the topic. Including it would be a nice and easy way to demonstrate historical language shifts that otherwise might be overlooked. Perhaps it could also bring visibility to historical linguistic diversity and language groups that have since disappeared. Where there are multiple first languages, they could all be listed. And in cases where a person's first language remains unclear, it could simply be left out. Kalapulla123 (talk) 11:53, 8 December 2024 (UTC)
- I don't think I agree this is a good use of infobox space:
- incongruences between elite spoken languages and popular spoken languages can't be shown with a single parameter (the language spoken by the oppressed would have to be included as well)
- for many people this would be unverifiable (already mentioned in OP) and / or contentious (people living during a language transition)
- sometimes L2 skills will be more than adequate to communicate with subject population when called for
- in cases where the subject's L1 matches their polity's (i.e. most cases), the parameter would feel like unnecessary clutter
- prose description seems adequate
However, this is just my opinion, and the venue of discussion should probably be Wikipedia talk:WikiProject Royalty and Nobility or similar, rather than VPP. Folly Mox (talk) 12:02, 9 December 2024 (UTC)
- I think this might be sufficiently important pretty much exclusively for writers where the language they wrote in is not the "obvious" one for their nationality. Johnbod (talk) 12:43, 9 December 2024 (UTC)
- It might also be important for politicians (and similar figures?) in countries where language is a politically-important subject, e.g. Belgium. Thryduulf (talk) 16:29, 9 December 2024 (UTC)
- This seems like a bad idea. Let's take a case where language spoken by a royal was very relevant: Charles V, Holy Roman Emperor. When he became King of Castile as a teenager, he only really spoke Flemish and didn't speak Castilian Spanish, and needless to say trusted the advisors he could actually talk with (i.e. Flemish / Dutch ones he brought with him). He also then immediately skipped out of Castile to go to proto-Germany to be elected Holy Roman Emperor. This ended up causing a rebellion (Revolt of the Comuneros) which was at least partially justified by Castilian nationalism, and partially by annoyed Castilian elites who wanted cushy government jobs. So language-of-royal was relevant. But... the Infobox is for the person as a whole. Charles came back to Castile and spent a stretch of 10 years there and eventually learned rather good Castilian and largely assuaged the elite, at least. He was king of Spain for forty years. So it would seem rather petty to harp on the fact his first language wasn't Castilian in the Infobox, when he certainly did speak it later and through most of his reign, even if not his first few years when he was still basically a kid. SnowFire (talk) 19:47, 9 December 2024 (UTC)
- See below on this. Johnbod (talk) 14:26, 11 December 2024 (UTC)
- SnowFire's fascinating anecdote shows that this information is not appropriate for infoboxes but rather should be described in prose in the body of the article where the subtleties can be explained to the readers. Cullen328 (talk) 19:56, 9 December 2024 (UTC)
- No, it shows that it's not appropriate for that infobox, and therefore that it is not suitable for all infoboxes where it is plausibly relevant. It shows nothing about whether it is or is not appropriate for other infoboxes: the plural of anecdote is not data. Thryduulf (talk) 21:08, 9 December 2024 (UTC)
- But it kind of is here? I picked this example as maybe one of the most obviously relevant cases. Most royals failing to speak the right language don't have this trait linked with a literal war in reliable sources! But if inclusion of this piece of information in an Infobox is still problematic in this case, how could it possibly be relevant in the 99.9% cases of lesser importance? The Infobox isn't for every single true fact. SnowFire (talk) 21:53, 9 December 2024 (UTC)
- It isn't suitable for this infobox not because of a lack of importance, but because stating a single first language would be misleading. There exists the very real possibility of cases where it is both important and simple. Thryduulf (talk) 00:02, 10 December 2024 (UTC)
- Could you (or anyone else in favor of the proposal) identify 5 biographies where this information is both useful to readers and clearly backed by reliable sources? signed, Rosguill talk 15:06, 11 December 2024 (UTC)
- Charles V claimed to have spoken Italian to women, French to men, Spanish to God, and German to his horse. Hawkeye7 (discuss) 21:35, 9 December 2024 (UTC)
- Sorry, this is just nonsense! Charles V was raised speaking French, which was the language of his aunt's court, although in the Dutch-speaking Mechelen. All his personal letters use French. He only began to be taught Dutch when he was 14, & may never have been much good at it (or Spanish or German). Contrary to the famous anecdote, which is rather late and dubious ("Spanish to God....German to my horse") he seems to have been a rather poor linguist, which was indeed awkward at times. Johnbod (talk) 00:39, 10 December 2024 (UTC)
- (This is a bit off-topic, but "nonsense" is too harsh. I'm familiar that he spoke "French" too, yes, although my understanding was that he did speak "Flemish", i.e. the local Dutch-inflected speech, too? And neither 1500-era French nor Dutch were exactly standardized, so I left it as "Flemish" above for simplicity. If his Dutch was worse than I thought, sure, doesn't really affect the point made, though, which was that his Castilian was non-existent at first. As far as his later understanding of Spanish, his capacity was clearly enough - at the very least I've seen sources say he made it work and it was enough to stave off further discontent from the nobility. Take it up with the authors of the sources, not me.). SnowFire (talk) 16:23, 10 December 2024 (UTC)
- There's a difference between "simplicity" and just being wrong! You should try reading the sources, with which I have no issue. And his ministers were also either native Francophones, like Cardinal Granvelle and his father Nicolas Perrenot de Granvelle (both from Besançon, now in eastern France), or could speak it well; the Burgundian elite had been Francophone for a long time. The backwash from all this remains a somewhat sensitive issue in Belgium, even now. And Charles V was not "King of Spain" (a title he avoided using) for 40 years at all; only after his mother died in 1555 (a year before him) did he become unarguably King of Castile. Johnbod (talk) 14:26, 11 December 2024 (UTC)
- It may not be appropriate for many articles, but it surely is for some. For example, when I told her that England had had kings whose first language was German, someone asked me the other day how many. It would be good to have a quick way of looking up the 18th century Georges to find out. Phil Bridger (talk) 21:20, 9 December 2024 (UTC)
- I think the problem is that people might make assumptions. I would check before saying that George I and George II spoke German as their first language and not French. Languages spoken is probably more useful than birth language, but the list might be incomplete. There is also competing information about George I, and he is an English King, so he has been better researched and documented compared to other historical figures.
- I agree that this is important when language is the basis of community identity, such as in Belgium. Tinynanorobots (talk) 10:38, 10 December 2024 (UTC)
- Ummmm… no. People I disagree with™️ use “infobox bloat” as a boogeyman in arguments about infoboxes. But this is infobox bloat. Even those celebrity/anime character things that tell you shoe size, pinky length and blood type wouldn’t include this. Dronebogus (talk) 18:16, 11 December 2024 (UTC)
- I don't think there needs to be any central policy on this. It could be relevant to include this information for someone, perhaps... maybe... However, infoboxes work best when they contain uncontroversial at-a-glance facts that don't need a bunch of nuance and context to understand. For the example of Charles V, maybe his first language is significant, but putting it in the infobox (where the accompanying story cannot fit) would be a confusing unexplained factoid. Like, maybe once upon a time there was a notable person whose life turned on the fact that they were left-handed. That could be a great bit of content for the main article, but putting handedness in the infobox would be odd. Barnards.tar.gz (talk) 14:33, 12 December 2024 (UTC)
- {{Infobox baseball biography}} includes handedness, and nobody finds that odd content for an infobox.
- {{infobox royalty}} includes the option for up to five native languages, though the OP says it seems to be unused in practice. {{Infobox writer}} has a |language= parameter, and it would be surprising if this were unused. WhatamIdoing (talk) 19:36, 12 December 2024 (UTC)
- Baseball seems to be a good example of where handedness is routinely covered, and easily consumable at a glance without needing further explanation. The scenario where I don't think handedness (or first language) makes sense is when it is a uniquely interesting aspect of that individual's life, because almost by definition there's a story there which the infobox can't tell. Barnards.tar.gz (talk) 10:23, 13 December 2024 (UTC)
- I don't think L1 can be determined for most historical figures without a hefty dose of OR. If you look at my Babel boxes, you'll see that I, as a living human being with all the information about my own life, could not tell you what my own "L1" is. The historical figures for whom this would be relevant mostly spoke many more languages than I do, and without a time machine it would be nigh impossible to say which language they learned first. This isn't even clear for the Qing emperors – I am fairly certain that they all spoke (Mandarin) Chinese very well, and our article never says what language they spoke. Puyi even states that he never spoke Manchu. Adding this parameter would also inflame existing debates across the encyclopedia about ethnonationalism (e.g. Nicola Tesla) and infobox bloat. Toadspike [Talk] 21:21, 12 December 2024 (UTC)
- As with every bit of information in every infobox, if it cannot be reliably sourced it does not go in, regardless of how important it is or isn't. There are plenty of examples of people whose first language is reported in reliable sources. I just did an internal search for "first language was" and on the first page of results found sourced mentions of first language at Danny Driver, Cleopatra, Ruthanne Lum McCunn, Nina Fedoroff, Jason Derulo, Henry Taube and Tom Segev, and an unsourced but plausible mention at Dean Martin. The article strongly suggests that her first language is an important part of Cleopatra's biography such that putting it in the infobox would be justifiable. I am not familiar enough with any of the others to have an opinion on whether it merits an infobox mention there; I'm simply reporting that there are many articles where first language is reliably sourced and a mention is deemed DUE. Thryduulf (talk) 22:08, 12 December 2024 (UTC)
- I have been wondering since this conversation opened how far back the concept of an L1 language, or perhaps the most colloquial first language, can be pushed. Our article doesn't have anything on the history of the concept. CMD (talk) 11:31, 13 December 2024 (UTC)
- I suspect the concept is pretty ancient, I certainly wouldn't be surprised to learn it arose around the same time as diplomacy between groups of people with different first languages. The note about it at Cleopatra certainly suggests it was already a well-established concept in her era (1st century BCE). Thryduulf (talk) 13:23, 13 December 2024 (UTC)
- The concept of different social strata speaking different languages is old, but I'm not sure whether they viewed learning languages the same way we do. It's certainly possible, and perhaps it happened in some areas at some times, but I hesitate to assume it's the case for every historical person with an infobox. CMD (talk) 16:05, 13 December 2024 (UTC)
- It's certainly not going to be appropriate for the infobox of every historical person, as is true for (nearly?) every parameter. The questions here are whether it is appropriate in any cases, and if so in enough cases to justify having it as a parameter (how many is enough? I'd say a few dozen at minimum, ideally more). I think the answer to the first question is "yes". The second question hasn't been answered yet, and I don't think we have enough information here yet to answer it. Thryduulf (talk) 21:54, 13 December 2024 (UTC)
- The question is not whether it is appropriate in any cases; the question is whether it is worth the trouble. I guarantee that this would lead to many vicious debates, despite being in most cases an irrelevant and unverifiable factoid based on inappropriate ABOUTSELF. This is the same reason we have MOS:ETHNICITY/NATIONALITY. Toadspike [Talk] 07:29, 16 December 2024 (UTC)
Restrict new users from crosswiki uploading files to Commons
I created this Phabricator ticket (phab:T370598) in July of this year, figuring that consensus to restrict non-confirmed users from crosswiki uploading files to Commons is implied. Well, consensus already agreed at Commons in response to the WMF study on crosswiki uploading. I created an attempted Wish at Meta-wiki, which was then rejected, i.e. "archived", as policy-related and "requir[ing] alignment across various wikis to implement such a policy". Now I'm starting this thread, thinking that the consensus here would already or implicitly support such restriction, but I can stand corrected about the outcome here. George Ho (talk) 06:34, 9 December 2024 (UTC); corrected, 08:10, 9 December 2024 (UTC)
- Support. I am not sure why this relies on alignment across wikis, those on Commons are best placed to know what is making it to Commons. The change would have little to no impact on en.wiki. If there is an impact, it would presumably be less cleaning up of presumably fair use files migrated to Commons that need to be fixed here. That said, if there needs to be consensus, then obviously support. We shouldn't need months of bureaucracy for this. CMD (talk) 06:41, 9 December 2024 (UTC)
- Support, I don't know that my input really counts as new consensus because I said this at the time, but the problem is much worse than what the study suggests as we are still finding spam, copyvios, unusable selfies and other speedy-deletable uploads from the timespan audited. Gnomingstuff (talk) 02:14, 10 December 2024 (UTC)
- Support As this applies to images being posted to Commons, but by a method that side steps their wishes, I don't see why another wiki should stand in the way. -- LCU ActivelyDisinterested «@» °∆t° 16:54, 10 December 2024 (UTC)
- Support. I do think that disabling the ability for new editors on the English Wikipedia to engage in crosswiki uploads to Commons would be a net positive; the Commons community has come to this conclusion several times, and the research confirms that cross-wiki uploads by new users cause more trouble than the good uploads are worth. — Red-tailed hawk (nest) 00:36, 11 December 2024 (UTC)
- Support Way too low signal-to-noise ratio; most of these images are copyvios or otherwise useless. -- King of ♥ ♦ ♣ ♠ 01:12, 11 December 2024 (UTC)
- Support like the above editors. Much spam, many copyvios, few good images.—Alalch E. 15:47, 11 December 2024 (UTC)
- I don't think this should be any sort of enwiki policy. If commonswiki wants to restrict something that should be up to them. I can't possibly see how it would need to be specific to the English Wikipedia (i.e. but not about new users on dewiki, eswikt, etc). — xaosflux Talk 16:19, 11 December 2024 (UTC)
- As noted by George Ho above, Commons has already done this for all wikis. The question is whether or not we want the English Wikipedia to assist in implementing this (perhaps by changing a local setting or software configuration to require that their uploads be local), rather than merely relying upon a Commons edit filter (which can be a bit unfriendly to new users). — Red-tailed hawk (nest) 19:50, 11 December 2024 (UTC)
- This comment interests me: "Interestingly, we found that most uploaders were either marketers (editing/uploading on behalf of another entity such as their employer), or they were self-promoters (creating pages about themselves, unaware of the "notability" requirement)."
- So I wonder whether, instead of stopping this, we want a bot to look at newbies who create articles/drafts, check whether they uploaded something, and then tag both the image(s) and the pages here with a note that says something like "There is a 90% chance that this has been posted by a marketer or self-promoter", with suitable links to pages such as Wikipedia:Paid-contribution disclosure. Or maybe even a WP:STICKYPROD process.
- On the question of what to do, it should be possible to hide the cross-wiki upload button. The real question is, do we replace it with a link to c:Special:UploadWizard? The Commons POV has been that it's bad for people to upload images within the visual editor, but okay for the same person to upload the same image with the UploadWizard. I'm not sure the net result is actually any different, especially for these marketers/self-promoters (in terms of net quality/acceptability; from Commons' POV, it's better because (a lot? a little?) fewer of them will click through to upload anything at Commons). WhatamIdoing (talk) 19:49, 12 December 2024 (UTC)
- Support Nearly every single thing I've ever put up for deletion at Commons has been stuff uploaded to spam en.wp. It never stops. Just Step Sideways from this world ..... today 19:55, 11 December 2024 (UTC)
- Is this still happening? According to @Red-tailed hawk this is already blocked. — xaosflux Talk 20:52, 11 December 2024 (UTC)
- Yes, it's still happening. Such uploads include these images from EnWiki; the edit filter, as currently implemented, only filters out images with certain characteristics. — Red-tailed hawk (nest) 21:05, 11 December 2024 (UTC)
- It is for sure still happening, I've nominated a few in just the past week. Just Step Sideways from this world ..... today 22:26, 11 December 2024 (UTC)
- It's still happening. A lot of them go to the uncategorized backlog which has well over 100,000 things in it so they get overlooked. Gnomingstuff (talk) 19:18, 12 December 2024 (UTC)
- If anyone wants to help with that, then click on c:Special:RandomInCategory/Category:All media needing categories as of 2018. Figure out what the image is (Google Lens or TinEye searches can help; go to c:Special:Preferences#mw-prefsection-gadgets and ⌘F for TinEye to find the right item). If you can identify it, then add a relevant cat. I believe that Wikipedia:HotCat is enabled by default for all logged-in editors, so searching for cats is usually pretty easy. If you can't find something obviously relevant, then skip it and try another. WhatamIdoing (talk) 20:02, 12 December 2024 (UTC)
- I got another one just now [10]. This really can't happen fast enough. Just Step Sideways from this world ..... today 23:51, 12 December 2024 (UTC)
- Support It's honestly kinda dumb that we have to have this whole other consensus process after the prior one just because people at Meta-wiki don't want to implement it. SilverserenC 20:35, 13 December 2024 (UTC)
The Notability of Indian Universities
There is a need to better understand how the notability criteria work for Indian universities. Right now, we are looking at things like a university's rankings, research work, and its role in improving education. For academicians and vice chancellors, we consider things like research publications, fellowships, and leadership experience. However, in India, there is a big concern about the rise of educational institutions that claim to be non-profit but are run as businesses, with leadership often influenced by political connections or family ties. Also, most of these private universities, including their vice chancellors' pages, are just promotional, based on paid reporting in Indian news organizations, listing courses or publications, which breaks Wikipedia's WP:NOTDIRECTORY rule. They also rely heavily on rankings from multiple portals to boost their article's text. At the assessment level, there are two main opinions: one says a university is notable, i.e., passes WP:GNG, if it is approved by the University Grants Commission or set up by a state act or statute, while the other says universities must meet strict WP:NORG guidelines to have a Wikipedia article. Our goal is not to judge or oppose any institution. But it is time to use different criteria to evaluate these organizations from India.
For greater clarity, please take a look at the following ongoing AfDs: Wikipedia:Articles_for_deletion/Adani_University and Wikipedia:Articles_for_deletion/Neotia_University
I am also inviting the following editors, who recently took part in the AfDs mentioned above, to join a helpful discussion: Pharaoh of the Wizards, Ratnahastin, GrabUp, Necrothesp, Sirfurboy, and CptViraj. -- Charlie (talk) 04:12, 10 December 2024 (UTC)
- WP:NSCHOOL is very clear on this :-
All universities, colleges and schools, including high schools, middle schools, primary (elementary) schools, and schools that only provide a support to mainstream education must satisfy either the notability guidelines for organizations (i.e., this page) or the general notability guideline.
(emphasis mine) - All universities, whether they are Indian or not, and whether or not they have been established by a statute, need to satisfy either WP:NORG or WP:GNG in order to be considered notable. The rankings are merely routine coverage as they are released periodically. Also we cannot use WP:OUTCOMESBASED arguments to keep an article, as it is simply circular reasoning (i.e. keep an article because we usually keep them at AfDs). I am not sure if we need a separate guideline or clause for Indian universities in light of the fact that most Indian media coverage about any organisation is often sponsored without any disclosure per WP:NEWSORGINDIA & User:Ms Sarah Welch/sandbox/Paid news and private treaties. - Ratnahastin (talk) 04:26, 10 December 2024 (UTC)
- There is a line in the WP:SCHOOLOUTCOME:
Most independently accredited degree-awarding institutions have enough coverage to be notable, although that coverage may not be readily available online.
Should we really accept this as an argument that "Maybe there are offline sources, so Keep"—without citing any offline sources? GrabUp - Talk 04:35, 10 December 2024 (UTC)
- We don't accept it. Per WP:SCHOOLOUTCOME is an argument to be avoided at AfD. That is just describing the situation generally, and does not create a presumption of notability. Sirfurboy🏄 (talk) 07:46, 10 December 2024 (UTC)
- Agree that we should never use outcome based arguments. What matters is the sourcing because how else can the page be written? In the main, I think the P&G is fine. These must meet NORG or GNG. But there is a difference. We allow public schools and non profits to meet GNG but private for-profit schools must meet NORG. As long as we do that, Charlie has raised a significant concern that Indian universities should probably be required to meet NORG when, on the face of it, they are non profits that only need to meet GNG. We have WP:NEWSORGINDIA. Do we need a touch of guidance about these institutions? Also
in India, there is a big concern about the rise of educational institutions that claim to be non-profit but are run as businesses
- could we have some reference to these concerns, which we would need to justify such an additional guideline. Thanks. Sirfurboy🏄 (talk) 07:55, 10 December 2024 (UTC)- @Sirfurboy
- Here are a few articles:
- 1. 2011 article: Large number of colleges are run by politicians, builders: V. Raghunathan
- 2. 2016 article: Private higher education is burgeoning in India – but millions can't afford it. There is a sentence in this article, "Private institutions keep the cost of education high, despite restrictions on generating profit."
- 3. 2018 article: Educational Institutions must earmark certain percentage of seats for poorer sections and subsidize their education: Vice President. There is a sentence in this article, "Calling for a complete overhaul of our education system, the Vice President said that majority of our colleges have become mere breeding centres for producing students with degree certificates rather than individuals with critical analytical skills."
- 4. 2021 article: 90% of India's students go to colleges where there is little research done: PSA VijayRagahvan
- CITEHIGHLIGHTER shows that some reliable sources include paid or sponsored news, sometimes disguised as ads:
- 1. Business Standard: Bharath Institute of Higher Education and Research tops the list of Private Universities in India - Sponsored post
- 2. The Indian Express: Manipal University, Jaipur Admissions 2025: UG and PG Admissions, Eligibility and Selection process - Direct price list promotion.
- 3. ThePrint: Enhance Your Career with Manipal University’s Accredited Online Degree Programs
- 4. Business Standard: Ahmedabad University Inaugurates India's First MTech in Composites, Creating Pathways for Next Generation of Material Scientists. - Sponsored post.
- 5. The Hindu: Manav Rachna defines New Milestones | Becomes First Indian University to offer IB Educator Certificate in PYP, MYP and DP. - Sponsored post.
- 6. Business Standard: Shoolini Ranks No.1 Private University in India, Again. - Sponsored post.
- Also, it has been found that some universities in India are gaming research publications:
- 1. Chemistry World: Are Indian higher education institutes gaming the ranking system?
- 2. ThePrint: India’s research crime is getting worse. Scientists are gaming peer review system
- 3. ThePrint: This Indian watchdog is cleaning up ‘mess’ in academia—falsification, fabrication & fraud
- Wikipedia is the only place on the internet where such entities try to gain legitimacy through the pseudo-promotion of their institutions. If we maintain basic vigilance, we can save many gullible parents and their children in India from being cheated. Charlie (talk) 12:58, 10 December 2024 (UTC)
- Paid news is ubiquitous in India, those that do not pay up are denied coverage. [11] - Ratnahastin (talk) 13:54, 10 December 2024 (UTC)
- @CharlieMehta, some of the complaints above have nothing to do with notability. Politicians have complained about the quality and price of education in every country. That has nothing to do with the guideline.
- Something that surprises some people is that 'non-profit' doesn't mean 'low cost' or 'poor' or even 'charitable'. Non-profit means that if expenses are lower than revenue, then nobody gets to pocket the profits as their own personal money. You can have a non-profit cigarette maker, or a non-profit gasoline producer. The difference is:
- For-profit: Spend $90 to make something (including your salary), sell it for $100, allowed (but not required) to take the $10 difference home for yourself.
- Non-profit: Spend $90 to make something (including your salary), sell it for $100, not allowed to take the $10 difference home for yourself.
- That's the only difference. These other things – the 'wrong' people are running them, the price is too high, the quality is too low – are completely irrelevant. WhatamIdoing (talk) 20:39, 12 December 2024 (UTC)
- @WhatamIdoing I intended to offer some perspective to the discussion in response to the question raised by Sirfurboy. At the same time, the points and clarifications you have provided are very helpful in steering the conversation back to the actual guidelines and criteria rather than focusing on subjective or extraneous factors. Charlie (talk) 08:47, 13 December 2024 (UTC)
- Note WP:CONSENSUS. There is very definitely a consensus at AfD that fully accredited universities established by statute should be considered to be notable. I can't recall one being deleted. -- Necrothesp (talk) 08:36, 10 December 2024 (UTC)
- Where is the RFC that establishes this consensus? Is it in any policy or subject notability guidelines? What we recall is not always a reliable indication even of the consensus at our self-selected engagement. For instance, you made the argument here [12] and the page was not kept. Sirfurboy🏄 (talk) 08:58, 10 December 2024 (UTC)
- There are examples where fully accredited universities were deleted via AfD or WP:CONSENSUS, such as Wikipedia:Articles for deletion/Sant Baba Bhag Singh University, which I recall as I participated in it. GrabUp - Talk 11:51, 10 December 2024 (UTC)
- @Ratnahastin, I don't think that "released periodically" is the definition of "routine coverage". WP:CORPDEPTH says "brief mentions and routine announcements". A report is not a "routine announcement", even if it happens periodically.
- Perhaps we should clarify the meaning of "routine" in the guideline. WhatamIdoing (talk) 20:27, 12 December 2024 (UTC)
There is a line in WP:SCHOOLOUTCOME:
- The only thing that should matter is whether there are multiple reliable independent secondary sources that provide significant coverage. That's what's necessary to write an article, and any attempts to get around this or ignore it should be discarded. Promotional and paid content do not meet the requirement of independence. Thebiguglyalien (talk) 02:40, 11 December 2024 (UTC)
- If I'm understanding CharlieMehta's post, I think the concerns are that we can't reliably identify paid news when it's coming out of India, even when it's not clearly marked as sponsored, so guidance clarifying/reminding editors of NEWSORGINDIA in the context of Indian schools might be warranted; that allegedly non-profit universities might actually be operating for profit, in which case the stronger source scrutiny required by NORG might be needed even for "public" universities; and that the often deplorable degree of research fraud, corruption, fake stats, and nepotism in regards to academic career advancement may mean NPROF's C6 guideline (VCs of major academic institutions are notable) is faulty when it comes to VCs of Indian universities. JoelleJay (talk) 03:19, 11 December 2024 (UTC)
While this doesn't fit into the tidy binary flow charts that we imagine, if it's a significant separate university facility it tends to get a few brownie points in the evaluation for being a geographic entity. I think that a practical standard is that if it isn't a significant separate university facility, it should meet a strict interpretation of the NCORP GNG. And, given the "pay to get coverage" situation in India, what's in the source can help judge in the discussion whether it meets that standard. North8000 (talk) 20:56, 12 December 2024 (UTC)
Use of the status parameter in Infobox officeholder
For several weeks, editors involved in updating the infoboxes (Template:Infobox officeholder) on Trump's nominees have either supplied status information about a candidate's position within the title itself, e.g. Special:Permalink/1262197122, or through the status parameter, e.g. Special:Permalink/1262208196. This should be standardized. elijahpepe@wikipedia (he/him) 05:02, 10 December 2024 (UTC)
- It's an infobox for office holders. These people do not actually hold an office at this time. Therefore, the infobox shouldn't be in their articles. --User:Khajidha (talk) (contributions) 11:41, 11 December 2024 (UTC)
- Also… as an aside… technically Trump is not yet the “President Elect” … he is “President presumptive” until the electoral college reports to the Senate. Blueboar (talk) 12:55, 11 December 2024 (UTC)
- That may be factually correct, but sources are calling him "President Elect" and have been for some time. Just Step Sideways from this world ..... today 19:58, 11 December 2024 (UTC)
Two Questions from a Deletion Review
Here are two mostly unrelated questions that came up in the course of a Deletion Review. The DRV is ready for closure, because the appellant has been blocked for advertising, but I think that the questions should be asked, and maybe answered. Robert McClenon (talk) 20:44, 11 December 2024 (UTC)
Requests for Copies of G11 Material
At DRV, there are sometimes requests to restore to draft space or user space material from pages that were deleted as G11, purely promotional material. DRV is sometimes the second or third stop for the originator, with the earlier stops being the deleting administrator and Requests for Undeletion.
Requests for Undeletion has a list of speedy deletion codes for which deleted material is not restored, including G11. (They say that they do not restore attack pages or copyright violation. They also do not restore vandalism and spam.) Sometimes the originator says that they are trying to rewrite the article to be neutral. My question is whether DRV should consider such requests on a case-by-case basis, as is requested by the originators, or whether DRV should deny the requests categorically, just as they are denied at Requests for Undeletion. I personally have no sympathy for an editor who lost all of their work on a page because it was deleted and they didn't back it up. My own opinion is that they should have kept a copy on their hard drive (or solid-state device), but that is my opinion.
We know that the decision that a page should be speedily deleted as G11 may properly be appealed to Deletion Review. My question is about requests to restore a draft that was properly deleted as G11 so that the originator can work to make it neutral.
I am also not asking about requests for assistance in telling an author what parts of a deleted page were problematic. In those cases, the author is asking the Wikipedia community to write their promotional article for them, and we should not do that. But should we consider a request to restore the deleted material so that the originator can make it neutral? Robert McClenon (talk) 20:44, 11 December 2024 (UTC)
- When we delete an article we should always answer reasonable questions asked in good faith about why we deleted it, and that includes explaining why we regard an article as promotional. We want neutral encyclopaedic content on every notable subject, if someone wants to write about that subject we should encourage and teach them to write neutral encyclopaedic prose about the subject rather than telling them to go away and stop trying because they didn't get it right first time. This will encourage them to become productive Wikipedians, which benefits the project far more than the alternatives, which include them trying to sneak promotional content onto Wikipedia and/or paying someone else to do that. So, to answer your question, DRV absolutely should restore (to draft or userspace) articles about notable subjects speedily deleted per G11. Thryduulf (talk) 21:09, 11 December 2024 (UTC)
- If the material is truly unambiguous advertising, then there's no point in restoring it. Unambiguous advertising could look like this:
- "Blue-green widgets are the most amazing widgets in the history of the universe, and they're on sale during the holiday season for the amazingly low, low prices of just $5.99 each. Buy some from the internet's premier distributor of widgets today!"
- If it's really unambiguous advertising to this level, then you don't need a REFUND. (You might ask an admin to see if there were any independent sources they could share with you, though.)
- When it's not quite so blatant, then a REFUND might be useful. Wikipedia:Identifying blatant advertising gives some not-so-blatant, not-so-unambiguous examples of suspicious wording, such as:
- It refers to the company or organization in the first-person ("We are a company based out of Chicago", "Our products are electronics and medical supplies").
- This kind of thing makes me suspect WP:PAID editing, but it's not irredeemable, especially if it's occasional, or that's the worst of it. But in that case, it shouldn't have been deleted as G11. WhatamIdoing (talk) 21:37, 12 December 2024 (UTC)
- Blanket permission to restore every G11 to userspace or draftspace might make sense if you're, say, an admin who's mentioned G11 only once in his delete logs over the past ten years. Admins who actually deal with this stuff are going to have a better feel for how many are deleted from userspace or draftspace to begin with (just short of 92% in 2024) and how likely a new user who writes a page espousing how "This technical expertise allows him to focus on the intricate details of design and construction, ensuring the highest standards of quality in every watch he creates" is to ever become a productive Wikipedian (never that I've seen). If it wasn't entirely unsalvageable, it wasn't a good G11 to begin with. —Cryptic 14:05, 13 December 2024 (UTC)
A Question About Administrator Accountability
Some administrators have semi-protected their talk pages due to abuse by unregistered editors. An appellant at DRV complained that they were unable to ask the deleting administrator about a G11 because the talk page was semi-protected, and because WP:AN was semi-protected. An editor said that this raised Administrator Accountability issues. My question is whether they were correct about administrator accountability issues. My own thought is that administrator accountability is satisfied if the administrator has incoming email enabled, but the question was raised by an experienced editor, and I thought it should be asked. Robert McClenon (talk) 20:44, 11 December 2024 (UTC)
- Administrators need to be reasonably contactable. Administrators explicitly are not required to have email enabled, and we do not require other editors to have email enabled either (a prerequisite to sending an email through Wikipedia), and several processes require leaving talk page messages for administrators (e.g. ANI). Additionally, sending an email via the Special:EmailUser system will disclose your email address, so we cannot compel any editor to use email. Putting this all together, it seems clear to me that accepting email does not automatically satisfy administrator accountability. Protecting talk pages to deal with abuse should only be done where absolutely necessary (in the case of a single editor doing the harassing, that editor should be (partially) blocked instead, for example) and for the shortest amount of time necessary, and should explicitly give other on-wiki options for those who cannot edit the page but need to leave the editor a message. Those alternatives could be to leave a message on a different page, to use pings, or some other method. Where no such alternatives are given I would argue that the editor should use {{help me}} on their own talk page, asking someone else to copy a message to the admin's talk page. Thryduulf (talk) 21:22, 11 December 2024 (UTC)
- I think this is usually done in response to persistent LTA targeting the admin. I agree it should be kept short. We've also seen PC being used to discourage LTA recently, perhaps that could be an option in these cases. Just Step Sideways from this world ..... today 21:29, 11 December 2024 (UTC)
- You can't use PC on talk pages. See Wikipedia:Pending changes#Frequently asked questions, item 3. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 00:30, 12 December 2024 (UTC)
- Very few admins protect their talk pages, and for the ones I'm aware of, it's for very good reasons. Admins do not need to risk long-term harassment just because someone else might want to talk to them.
- It would make sense for us to suggest an alternative route. That could be to post on your own talk page and ping them, or it could be to post at (e.g.,) WP:AN for any admin. The latter has the advantage of working even when the admin is inactive/no longer an admin. WhatamIdoing (talk) 21:42, 12 December 2024 (UTC)
- It's covered at Wikipedia:Protection policy#User talk pages. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 22:54, 12 December 2024 (UTC)
- That says "Users whose talk pages are protected may wish to have an unprotected user talk subpage linked conspicuously from their main talk page to allow good-faith comments from users that the protection restricts editing from."
- And if they "don't wish", because those pages turn into harassment pages, then what? WhatamIdoing (talk) 19:33, 13 December 2024 (UTC)
- Then it can be dealt with. But an admin shouldn't be uncommunicative. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 19:52, 13 December 2024 (UTC)
- Would there be value in changing that to requiring users whose talk page is protected to conspicuously state an alternative on-wiki method of contacting them, giving an unprotected talk subpage as one example method? Thryduulf (talk) 21:40, 13 December 2024 (UTC)
- For admins yes. But for regular editors it could depend on the problem. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 23:01, 13 December 2024 (UTC)
- In general user talk pages shouldn't be protected, but there may be instances when that is needed. However, ADMINACCT only requires that admins respond to community concerns; it doesn't require that an admin's talk page is always available. There are other methods of communicating, as others have mentioned. There's nothing in ADMINACCT that says a protected user talk page is an accountability issue. -- LCU ActivelyDisinterested «@» °∆t° 22:33, 12 December 2024 (UTC)
Question(s) Stemming from Undiscussed Move
"AIM-174 air-to-air missile" was moved without discussion to "AIM-174B." Consensus was reached re: the removal of "air-to-air missile," but no consensus was reached regarding the addition or removal of the "B." After a no-consensus RM close (which, in my opinion, should have brought us back to the original title, sans the agreed-upon unneeded additional disambiguator), I requested the discussion be re-opened, per pre-MRV policy. (TO BE CLEAR: I should have, at this time, requested immediate reversion. However, I did not want to be impolite or pushy.) The original closer, Asukite (who found "no consensus"), was concerned they had become "too involved" in the process and requested another closer. Said closer immediately found consensus for "AIM-174B." I pressed on to a MRV, where an additional "no consensus" (to overturn) finding was issued. As Bobby Cohn pointed out during the move review, "I take issue with the participating mover's interpretation of policy 'Unfortunately for you, a no consensus decision will result in this article staying here' in the RM, and would instead endorse your idea that aligns with policy, that a no consensus would take us back to the original title, sans extra disambiguator."
The issues, as I see them, are as follows:
WP:RMUM: The move from “AIM-174 air-to-air missile” to “AIM-174B” was conducted without discussion, and I maintain all post-move discussions have achieved "no consensus."
Burden of Proof: The onus should be on the mover of the undiscussed title to justify their change, not on others to defend the original title. I refrained from reverting prior to initiating the RM process out of politeness, which should not shift the burden of proof onto me.
Precedent: I am concerned with the precedent. Undiscussed moves may be brute-forced into acceptance even if "no consensus" or a very slim consensus (WP:NOTAVOTE) is found?
Argument in favor of "AIM-174": See the aforementioned RM for arguments in favor and against. However, I would like to make it clear that I was the only person making policy-based arguments. Those in favor of "174B" were seemingly disagreeing with my WP arguments, but not offering their own in support of the inclusion of "B." That said, my primary policy-based argument is likely WP:CONSISTENT; ALL U.S. air-to-air missiles use the base model as their article title. See: AIM-4 Falcon, AIM-26 Falcon, AIM-47 Falcon, AIM-9 Sidewinder, AIM-7 Sparrow, AIM-54 Phoenix, AIM-68 Big Q, AIM-82, AIM-95 Agile, AIM-97 Seekbat, AIM-120 AMRAAM, AIM-132, AIM-152 AAAM, AIM-260. "174B" is unnecessary and violates consistency.
Do my policy contentions hold any weight? Or am I mad? Do I have any path forward, here?
TO BE CLEAR, I am not alleging bad faith on behalf of anyone, and I am extremely grateful to all those who have been involved, particularly the RM closer that I mentioned, as well as the MRV closer, ModernDayTrilobite. I would like to make it clear that this isn't simply a case of a MRV 'not going my way.' Again, I am concerned with the precedent and with the onus having been shifted to me for months. I also apologize for the delay in getting this here; I originally stopped over at the DRN, but Robert McClenon kindly suggested I instead post here. MWFwiki (talk) 00:08, 12 December 2024 (UTC)
- Are you familiar with Wikipedia:Article titles#Considering changes? Do you think you understand why that rule exists? WhatamIdoing (talk) 23:31, 12 December 2024 (UTC)
- I am quite familiar with it. It seemingly supports my argument(s), so...? Is there a particular reason you're speaking in quasi-riddles? MWFwiki (talk) 01:11, 13 December 2024 (UTC)
- If yours is the title favored by the policy, then none of this explanation makes any difference. You just demand that it be put back to the title favored by the policy, and editors will usually go along with it. (It sometimes requires spelling out the policy in detail, but ultimately, most people want to comply with the policy.)
- If yours is not the title favored by the policy, then the people on the other 'side' are going to stand on policy when you ask to move it, so you'd probably have to get the policy changed to 'win'. If you want to pursue that, you will need to understand why the rule is set this way, so that you have a chance of making a convincing argument. WhatamIdoing (talk) 05:24, 13 December 2024 (UTC)
- I think several individuals involved in this process have agreed that the default title is the favored title, at least as far as WP:TITLECHANGES, as you say.
(The only reason I listed any further ‘litigation’ here is to show what was being discussed in-general for convenience’s sake, not necessarily to re-litigate)
However, at least two individuals involved have expressed to me that they felt their hands were tied by the RM/MRV process. Otherwise, as I mentioned (well, as Bobby_Cohn mentioned), the train of thought seemed to be "well, I don't want the title to be changed," and this was seemingly enough to override policy. Or, at best, it was seemingly a "well, it would be easier to just leave it as-is" sort of decision. - And again, I, 100%, should have been more forceful; the title should have been reverted per the initial "no consensus" RM closure, and I will certainly bear your advice in mind in the future. That said, I suppose what I am asking is: would it be inappropriate to ask the original RM closer to revert the article at this point, given how much time has passed?
MWFwiki (talk) 06:29, 13 December 2024 (UTC)
- Given what was written in Talk:AIM-174B#Requested move 20 September 2024 six weeks ago, I think that none of this is relevant. "Consensus to keep current name" does not mean that you get to invoke rules about what happens when there is no consensus. I suggest that you give up for now, wait a long time (a year? There is no set time, but it needs to be a l-o-n-g time), and maybe start a new Wikipedia:Requested moves (e.g., in 2026). WhatamIdoing (talk) 19:41, 13 December 2024 (UTC)
- Thanks! MWFwiki (talk) 05:09, 14 December 2024 (UTC)
- Everything ModernDayTrilobite advised you of is correct. Vpab15 closed the RM and determined that consensus was reached. Nothing since then has overturned or otherwise superseded Vpab15's closure. Therefore that closure remains in force. You already challenged the validity of Vpab15's closure at move review, and you have no avenue for challenging it again. Your best bet is to wait a tactful amount of time (several months) before starting another RM. And in that RM, none of this procedural stuff will matter, and you will be free to focus just on making the clearest, simplest case for why AIM-174 is the best title. Adumbrativus (talk) 06:10, 13 December 2024 (UTC)
- I suppose my issue is better summed up by my above discussion with WhatamIdoing; the MRV shouldn't have been required. That burden should never have been on me. The title should have been reverted at the initial "no consensus" per WP:TITLECHANGES. Otherwise, undiscussed moves, when challenged, may now be upheld by either consensus or no consensus? This is not what WP:TITLECHANGES says, obviously. That said, I take full responsibility for not being clearer with this argument, and instead focusing on arguing for a 'different' title, when I should have been arguing for the default title per TITLECHANGES. MWFwiki (talk) 06:33, 13 December 2024 (UTC)
- You've repeatedly pointed to the initial self-reverted closure as if it's somehow significant. It isn't. Asukite voluntarily decided to close the discussion, and voluntarily self-reverted their decision to close. It doesn't matter whether you asked for it or someone else asked or no one asked. They had the right to self-revert then, for any reason or no reason. The net result is the same as if Asukite had never closed it at all. Only Vpab15's closure, which was 100% on Vpab15's own authority and 0% on the supposed authority of the annulled earlier closure, is binding. Adumbrativus (talk) 09:22, 13 December 2024 (UTC)
- I don't disagree with your latter statement, but why would an initial finding of no-consensus not matter? It should have brought us back to the default title, not simply been reverted. Because that policy wasn't followed, I'm here now, is my point. Regardless, I understand; Thank you for your advice! Well, I appreciate your time and consideration! :-) MWFwiki (talk) 05:08, 14 December 2024 (UTC)
CSD A12. Substantially written using a large language model, with hallucinated information or fictitious references
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
When fixing up new articles, I have encountered articles that appear to have been substantially generated by AI, containing hallucinated information. While these articles may not meet other criteria for speedy deletion, as the subjects themselves are sometimes real and notable, waiting for seven days to PROD the articles is inefficient. I recommend designating WP:A12 for the speedy deletion of these articles. I have created a template (User:Svampesky/Template:Db-a12) for use if this proposal is successful. A recent example is the article on the Boston University Investment Office, where the author explicitly disclosed that it was created using a large language model and contains references to sources that don't exist. I initially G11'd it, as it seemed the most appropriate, but that was declined, and the article was subsequently PRODed. Svampesky (talk) 21:13, 12 December 2024 (UTC)
- CSD are generally limited to things that are unambiguously obvious. I imagine the number of cases in which it's unambiguously obvious that the entire page was generated by an LLM (as opposed to the editor just using the LLM to generate references, for example) is small enough that it doesn't warrant a speedy deletion criterion. --Ahecht (TALK PAGE) 21:29, 12 December 2024 (UTC)
- I like this idea but agree that it's better not as a CSD but perhaps its own policy page. Andre🚐 21:33, 12 December 2024 (UTC)
- I don't think it even merits a policy page. The number of cases where the LLM use is objectively unambiguous, and the article content sufficiently problematic that deletion is the only appropriate course of action and it cannot be (speedily) deleted under existing policy is going to be vanishingly small. Even the OP's examples were handled by existing processes (PROD) sufficiently. Thryduulf (talk) 22:11, 12 December 2024 (UTC)
- @Svampesky, when you say that Wikipedia:Proposed deletion is "inefficient", do you mean that you don't want to wait a week before the article gets deleted? WhatamIdoing (talk) 23:32, 12 December 2024 (UTC)
- My view is that Wikipedia:Proposed deletion is inefficient for articles that clearly contain hallucinated LLM-generated content and fictitious references (which will almost certainly be deleted), as it leaves them in the mainspace for longer than necessary. Svampesky (talk) 00:03, 13 December 2024 (UTC)
- Efficiency usually compares the amount of effort something takes, not the length of time it takes. "Paint it and leave it alone for 10 minutes to dry" is the same amount of hands-on work as "Paint it and leave it alone for 10 days to dry", so they're equally efficient processes. It sounds like you want a process that isn't less hands-on work/more efficient, but instead a process that is faster.
- Also, if the subject qualifies for an article, then deletion isn't necessarily the right solution. Blanking bad content and bad sources is officially preferred (though more work) so that there is only verifiable content with one or more real sources left on the page – even if that content is only a single sentence.
- Efficiency and speed is something that many editors like. However, there has to be a balance. We're WP:HERE to build an encyclopedia, which sometimes means that rapidly removing imperfect content is only the second or third most important thing we do. WhatamIdoing (talk) 00:43, 13 December 2024 (UTC)
- This part, "as the subjects themselves are sometimes real and notable", is literally an inherent argument against using CSD (or PROD for that matter). WP:TNT the article to a sentence if necessary, but admitting that you're trying to delete an article you know is notable just means you're admitting to vandalism. SilverserenC 00:07, 13 December 2024 (UTC)
- The categorization of my proposal as "admitting to vandalism" is incorrect. WP:G11, the speedy deletion criterion I initially used for the article, specifies deleting articles that "would need to be fundamentally rewritten to serve as encyclopedia articles". Articles that have been generated using large language models, with hallucinated information or fictitious references, would need to be fundamentally rewritten to serve as encyclopedia articles. Svampesky (talk) 00:42, 13 December 2024 (UTC)
- Yes, but G11 is looking for blatant advertising ("Buy widgets now at www.widgets.com! Blue-green widgets in stock today!"). It's not looking for anything and everything that needs to be fundamentally re-written. WhatamIdoing (talk) 00:45, 13 December 2024 (UTC)
- (Edit Conflict) How does G11 even apply here? Being written via LLM does not make an article "promotional". Furthermore, even that CSD criterion states "If a subject is notable and the content could plausibly be replaced with text written from a neutral point of view, this is preferable to deletion." I.e. TNT it to a single sentence and problem solved. SilverserenC 00:46, 13 December 2024 (UTC)
- The venue for proposing new criteria is at Wikipedia talk:Criteria for speedy deletion. So please make sure that you don't just edit in a new criterion without an RFC approving it, else it will be quickly reverted. Graeme Bartlett (talk) 00:20, 13 December 2024 (UTC)
- Since we are talking about BLPs… the harm of hallucinated information does need to be taken very seriously. I would say the first step is to stubbify.
- However, Deletion can be held off as a potential second step, pending a proper BEFORE check. Blueboar (talk) 01:06, 13 December 2024 (UTC)
- If the hallucination is sufficiently dramatic ("Joe Film is a superhero action figure", when it ought to say that he's an actor who once had a part in a superhero movie), then you might be able to make a good case for {{db-hoax}}. WhatamIdoing (talk) 05:26, 13 December 2024 (UTC)
- I have deleted an AI generated article with fake content and references as a hoax. So that may well be possible. Graeme Bartlett (talk) 12:23, 13 December 2024 (UTC)
- Isn't this covered by WP:DRAFTREASON? Gnomingstuff (talk) 20:34, 13 December 2024 (UTC)
AFD clarification
The Articles for deletion article states that:
If a redirection is controversial, however, AfD may be an appropriate venue for discussing the change in addition to the article's talk page.
Does this mean that an AFD can be started by someone with the intent of redirecting instead of deleting? Plasticwonder (talk) 04:06, 13 December 2024 (UTC)
- Yes. If there is a contested redirect, the article is restored and it is brought to AfD. voorts (talk/contributions) 04:34, 13 December 2024 (UTC)
- I think the ideal process is:
- Have an ordinary discussion on the talk page about redirecting the page.
- If (and only if) that discussion fails to reach consensus, try again at AFD.
- I dislike starting with AFD. It isn't usually necessary, and it sometimes has a feel of the nom trying to get rid of it through any means possible ("I'll suggest a WP:BLAR, but maybe I'll be lucky and they'll delete it completely"). WhatamIdoing (talk) 05:31, 13 December 2024 (UTC)
- Would need some stats on the it isn't usually necessary claim, my intuition based on experience is that if a BLAR is contested it's either dropped or ends up at AfD. CMD (talk) 05:48, 13 December 2024 (UTC)
- I agree with that. From what I have seen at least, if redirecting is contested, it then is usually discussed at AFD, but that's just me. Plasticwonder (talk) 08:42, 13 December 2024 (UTC)
- It depends how active the respective talk pages are (redirected article and target), but certainly for ones that are quiet AfD is going to be the most common. Thryduulf (talk) 09:33, 13 December 2024 (UTC)
- It will also depend on whether you advertise the discussion, e.g., at an active WikiProject. WhatamIdoing (talk) 19:44, 13 December 2024 (UTC)
- I usually just go straight to AfD. I've found that editors contesting redirects usually !vote keep and discussing on talk just prolongs the inevitable AfD. voorts (talk/contributions) 14:58, 13 December 2024 (UTC)
- Gotcha. Plasticwonder (talk) 15:29, 13 December 2024 (UTC)
- Looking at the above comments: What is it about the Wikipedia:Proposed article mergers process that isn't working for you all? If you redirect an article and it gets reverted, why aren't you starting a PM? WhatamIdoing (talk) 21:37, 16 December 2024 (UTC)
Dispute and conflict in the Autism article; not allowed to add "unbalanced" tag
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- This is now on multiple different pages.[13][14][15][16] Please discuss at Talk:Autism. WhatamIdoing (talk) 20:09, 13 December 2024 (UTC)
Hello all, I can only occasionally attend Wikipedia to edit or respond. I recently went through the current version of the Wikipedia article on Autism, and I found that this article is NOT representing the reality or encyclopedic wholeness. The huge, verbose, highly technical article is biased towards the medical model of disability and medical genetics, with nearly zero information regarding the anthropology, evolution, neurodiversity, accommodation, accessibility, augmentative and alternative communication, and all that actually helps the wellbeing of Autistic people. The page boldly focuses on controversial methods such as ABA, including EIBI (early intensive behavioral intervention), DTT (discrete trial training), etc., without any mention of the concerns or criticisms against them. I entered the talk page, but it has been turned literally into a warzone, where any dissenting viewpoint is being silenced in the name of a "global and unanimous scientific consensus", which is simply wrong. It is mostly a view held by the biomedical and pharmaceutical majority. But outside of that, opposing viewpoints do exist in actual Autistic populations (who have the lived experience), anthropology, sociology, psychology, etc. I added an "unbalanced" tag for reader information (I did not speak for complete erasure of controversial viewpoints, just needed the reader to know that there are other views); however, the "unbalanced" tag was soon reverted.
It is not possible for me to attend daily and post arguments and counter-arguments. I have to acknowledge that, if this kind of silencing continues, then Wikipedia has literally failed as an encyclopedia, as well as failing from a public health and education welfare perspective.
I feel like this needs editors' attention. Autism is NOT a well-understood condition by the majority; lived experience plays the ultimate role in how a person feels about their life situation, and "Nothing about us without us" is an important ethical rule in disability cultures.
It is worth mentioning that each disability is unique, and their lived experiences are different. There are generally two paradigms:
- (1) The first paradigm assumes there is a fixed, "normal", gold-standard "healthy person"; any deviation from that is a pathology; society is flawless and 'just'; and any outliers must be assimilated into the mainstream, conformed, or eradicated. It externally defines what a good life is.
- (2) The second paradigm says that disability (better said, disablement, or "dis-abled" as a verb) arises because human bodies and minds are inherently diverse, varying, and evolving, with no single fixed "one size fits all" baseline. Also, the same person can vary in multiple dimensions (as seen in Twice exceptional), and the value of a person shouldn't depend on productivity; coincidence of wants is a fallacy; society is NOT just, and it needs to accommodate.
It seems most disabilities fall on a spectrum between a medical impairment and a social incompatibility, rather than purely at one end. However, Autism, being mostly a social and communication difference, falls mostly in the second type, and seems to be addressed better with the second (inside-out) approach.
If we keep arguing from a narrow perspective of medical biology, we would never know the entire scenario. RIT RAJARSHI (talk) 06:26, 13 December 2024 (UTC)
- Without commenting on the actual topic, I would say this sounds like just a content dispute localised on one article, and should be undertaken at the talk page rather than here. If there are reliable relevant sources that are in scope then this topic could be added to the article, but it is your responsibility to find those sources and to defend them if questioned. BugGhost 🦗👻 11:35, 13 December 2024 (UTC)
- Thank you, but the dispute is too intense. Also, some principles like "nothing about us without us" should be in Wikipedia policy, especially regarding when a majority voice can suppress a marginalized voice: information that affects minority groups, or voices that are not well represented and therefore need amplification. RIT RAJARSHI (talk) 12:39, 13 December 2024 (UTC)
- I've just had a look at the talk page, and I don't think it is by any means too intense. You said your view, with minimal sources, and @Димитрий Улянов Иванов replied cordially to you addressing your concerns. Your reply was to say "stop name calling" (I couldn't see any evidence of name calling) and not much else. Again: I'm not commenting on the actual substance of your point of view - just that, as it stands, the talk page is the right place for this discussion, and you should engage with it in earnest with sources to back your view up. (I support any editor who wants to hat this section.) BugGhost 🦗👻 14:52, 13 December 2024 (UTC)
RfC: Voluntary RfA after resignation
Should Wikipedia:Administrators#Restoration of admin tools be amended to:
- Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
- Option 2 – Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
- Option 3 – Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.
Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA. voorts (talk/contributions) 21:14, 15 December 2024 (UTC)
Note: There is an ongoing related discussion at Wikipedia:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC).
Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)
- 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 👸🎄 21:22, 15 December 2024 (UTC)
- Also fine with 3 charlotte 👸♥📱 22:23, 15 December 2024 (UTC)
- There is ongoing discussion about this at Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)
- 2, after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedy passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to the RFA candidate's name. Maybe "Hog Farm (RRfA)". CMD (talk) 14:49, 16 December 2024 (UTC)
- 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)
- 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino if you're not into the whole brevity thing 21:27, 15 December 2024 (UTC)
- 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)
- Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)
- 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't [request the tools back at RFA]". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)
- When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)
- I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)
- Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)
- Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)
- I'm weakly opposed to option 3; editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)
- Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)
- 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)
- I would also support option 3 if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — Jkudlick ⚓ (talk) 02:05, 16 December 2024 (UTC)
- Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)
- 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)
- Option 2 (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that WP:RECALL is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – bradv 22:22, 15 December 2024 (UTC)
- (edit conflict) I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a get out of recall free card for a year. —Cryptic 22:25, 15 December 2024 (UTC)
- @Cryptic hostile to what? Thryduulf (talk) 22:26, 15 December 2024 (UTC)
- 2, distant second preference 3. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. Special:Random and Wikipedia:Backlog are always there. -- Tamzin[cetacean needed] (they|xe|🤷) 23:31, 15 December 2024 (UTC)
- 1 would be my preference, but I don't think we need a specific rule for this. -- Ajraddatz (talk) 23:36, 15 December 2024 (UTC)
- Option 1. No second preference between 2 or 3. As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. Daniel Quinlan (talk) 01:19, 16 December 2024 (UTC)
- After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. Daniel Quinlan (talk) 07:36, 16 December 2024 (UTC)
- Option 2, i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. 28bytes (talk) 01:24, 16 December 2024 (UTC)
- Option 2 nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. LEPRICAVARK (talk) 01:49, 16 December 2024 (UTC)
- 1 or 3 per Novem Linguae. C F A 02:35, 16 December 2024 (UTC)
- Option 3: Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in WP:SNOW. Gnomingstuff (talk) 03:42, 16 December 2024 (UTC)
- Option 3 per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. Conyo14 (talk) 04:25, 16 December 2024 (UTC)
- As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. HouseBlaster (talk • he/they) 04:31, 16 December 2024 (UTC)
- I've seen this assertion made multiple times now that [RFA] requires a lot of time from the community, yet nowhere has anybody articulated why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? Thryduulf (talk) 04:58, 16 December 2024 (UTC)
- I think there are a number of factors and people are summing it up as "time-wasting" or similar:
- BN Is designed for this exact scenario. It's also clearly a less contentious process.
- Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Wikipedia is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
- If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
- The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
- Daniel Quinlan (talk) 07:31, 16 December 2024 (UTC)
- Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even realise that the RFA is not worth their time, they have to:
- Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
- Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
- Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
- None have
- Realise your input is not necessary and this could have been done at BN
- This process will be repeated by hundreds of editors over the course of a week. BugGhost 🦗👻 08:07, 16 December 2024 (UTC)
- That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. Aaron Liu (talk) 13:15, 16 December 2024 (UTC)
- @Thryduulf let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time, but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Wikipedia activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time would otherwise have been spent on other wiki activities. Best, Barkeep49 (talk) 22:49, 16 December 2024 (UTC)
- Option 2 I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. EggRoll97 (talk) 06:23, 16 December 2024 (UTC)
- Option 2. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. Toadspike [Talk] 07:24, 16 December 2024 (UTC)
- Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. Graham87 (talk) 08:46, 16 December 2024 (UTC)
- Option 1 as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability are very good things, and we need more of them for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the idea lab. BugGhost 🦗👻 08:59, 16 December 2024 (UTC)
- Option 2, though I'd be for option 3 too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. ♠JCW555 (talk)♠ 09:13, 16 December 2024 (UTC)
- Option 2 because WP:ADMINISTRATORS is well-established policy. Read WP:ADMINISTRATORS#Restoration of admin tools, which says quite clearly,
Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.
I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is specifically permitted by policy. It is bewildering to me. Cullen328 (talk) 09:56, 16 December 2024 (UTC)
- Option 2 & 3 I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay as it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. JuxtaposedJacob (talk) | :) | he/him | 10:45, 16 December 2024 (UTC)
- Option 2, for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Option 1 I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – DreamRimmer (talk) 12:11, 16 December 2024 (UTC)
- Option 2 I don't think prohibition makes sense. It also has weird side effects, e.g.: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing an RFA. (Maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ProcrastinatingReader (talk) 12:33, 16 December 2024 (UTC)
- Option 2 It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on WTT's re-RFA regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —k6ka 🍁 (Talk · Contributions) 15:24, 16 December 2024 (UTC)
- Option 1 - Option 2 is a waste of community time. - Ratnahastin (talk) 15:30, 16 December 2024 (UTC)
- 2 is fine. Strong oppose to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —Kusma (talk) 16:59, 16 December 2024 (UTC)
- 2, with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. Natg 19 (talk) 17:56, 16 December 2024 (UTC)
- 2 Sure, and clarify that should such an RFA be unsuccessful they may only regain through a future rfa. — xaosflux Talk 18:03, 16 December 2024 (UTC)
- Option 2 If contributing to such an RFA is a waste of your time, just don't participate. TheWikiToby (talk) 18:43, 16 December 2024 (UTC)
- No individual is wasting their time participating. Instead the person asking for a re-rfa is using tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, Barkeep49 (talk) 22:55, 16 December 2024 (UTC)
- 2. Maintain the status quo. And stop worrying about a trivial non-problem. --Tryptofish (talk) 22:57, 16 December 2024 (UTC)
Discussion
- @Voorts: If option 2 gets consensus, how would this RfC change the wording
Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.
Or is this an attempt to see if that option no longer has consensus? If so, why wasn't alternative wording proposed? As I noted above, this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)
- I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)
- The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)
- @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)
- Question: May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. Aaron Liu (talk) 22:35, 15 December 2024 (UTC)
- The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week-long, publicly advertised back-patting to be a productive use of volunteer time. -- Ajraddatz (talk) 23:33, 15 December 2024 (UTC)
- But no volunteer is obligated to pat such candidates on the back. Aaron Liu (talk) 00:33, 16 December 2024 (UTC)
- Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- Ajraddatz (talk) 01:52, 16 December 2024 (UTC)
- Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Aaron Liu (talk) 02:31, 16 December 2024 (UTC)
- Except someone who has no need for advanced tools and is not going to use them in any useful fashion would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. Only in death does duty end (talk) 09:05, 16 December 2024 (UTC)
- I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. Aaron Liu (talk) 13:02, 16 December 2024 (UTC)
- I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as discussed on Barkeep49's talk page, we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. isaacl (talk) 00:32, 16 December 2024 (UTC)
- In re the idea that RfAs use up a lot of community time: I first started editing Wikipedia in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. Caeciliusinhorto-public (talk) 10:26, 16 December 2024 (UTC)
- What about an option 4 encouraging eligible candidates to go through BN? At the end of the Procedure section, add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be encouraging former administrators to demonstrate that they still have the approval of the community. Thryduulf (talk) 12:16, 16 December 2024 (UTC)
- I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. Gnomingstuff (talk) 15:04, 16 December 2024 (UTC)
- If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. Thryduulf (talk) 15:30, 16 December 2024 (UTC)
- Also a solid option, the question is whether people will actually do it. Gnomingstuff (talk) 22:55, 16 December 2024 (UTC)
- This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal solves any problem we actually have. UninvitedCompany 21:56, 16 December 2024 (UTC)
Audio-Video guidance
Hi there,
Per the post I made a few weeks ago regarding use of video for illustrative purposes, I think that MOS:Images might be expanded to make mention of audio-video content, as most of the same principles apply (eg aesthetics, quality, relevance, placement). There are some additional concerns: for example, if audio or video renders a primary source (eg is a recording of PD music such as Bach or similar, or is a reading of a PD text), then there might be some source validation requirements (ie, the music or text should match the original within sensible boundaries; eg Mozart or Bach pieces may not be easily replicated with original instrumentation, or at least this should not be a requirement).
So one option would be for a simple statement at MOS:Images that these guidelines normally apply to AV, or separate guidance for AV that explains that MOS:Images contains guidance that generally applies to AV.
Is the correct process to raise an RFC? And is that done at MOS:Images, or WP:MOS, or here, or where? Jim Killock (talk) 19:38, 16 December 2024 (UTC)
- I've posted a longer request for help explaining the gap at MOS talk. It seems an RFC may not be needed but any advice would very much be appreciated. Jim Killock (talk) 20:28, 16 December 2024 (UTC)
Technical
Template-generated redlinked categories, again
Once again, Special:WantedCategories has thrown up a handful of redlinked categories that are being smuggled in via templates that have farmed their category generation out to modules that I can't edit, and thus I can't fix the redlinks.
- Category:FM-Class articles — This got renamed to Category:FM-Class pages a few days ago via a CFR discussion, but the {{Category class}} template is still module-farming the old category rather than the new one. Some, but not all, of the pages also have the new category directly declared on them alongside the redlink being carried in by the template, but the redlink is still present on over 500 talk pages.
- Category:Wikipedia dual licensed files with invalid licenses — This is being piggybacked by the licensing template on an image, but the template itself doesn't directly contain any text enabling that category. Obviously if this is actually wanted, then it should be created by somebody who knows how to create project categories like that (i.e. not me), but if it's unwanted then it needs to go away.
- Category:Wikt-lang template errors — Autogenerated on test page Template:Wikt-lang/testcases. Again, should be created if it's actually wanted, but needs to be kiboshed if it's not. If it's actually unwanted, then just fixing the errors on that page won't be enough, and it will need to be made impossible so that it doesn't come back in the future. And, of course, since I don't work with wikt-lang template gnomery, I'm not in a position to determine whether it's wanted or not.
So could somebody with module-editing privileges fix these, and/or create the latter two categories if they're actually wanted? Thanks. Bearcat (talk) 15:59, 4 December 2024 (UTC)
- I'll take care of the first item — Martin (MSGJ · talk) 16:08, 4 December 2024 (UTC)
- Can someone take a look at the FM-Class articles categories in Category:Wikipedia non-empty soft redirected categories and see if they can be moved to pages without disrupting the wider category structure for each project? Timrollpickering (talk) 17:19, 5 December 2024 (UTC)
- The FM-Class one is a textbook example of why people absolutely must consider the broadest implications when there is a proposal to rename categories that are (i) part of a system and (ii) generated by code in templates and modules. That is to say: don't action the cat rename until every template, module and associated page is ready to be suitably amended. --Redrose64 🌹 (talk) 20:38, 5 December 2024 (UTC)
- Yes, absolutely. This one took me by surprise. But I will try and get the module reworked later today. — Martin (MSGJ · talk) 08:56, 6 December 2024 (UTC)
- Special:WantedCategories is now filling up with this mess. Can someone please either apply the module changes ASAP or else reverse the category name changes? Timrollpickering (talk) 12:59, 7 December 2024 (UTC)
- Module was updated 08:28 today, so hopefully you are seeing some improvements now — Martin (MSGJ · talk) 18:48, 7 December 2024 (UTC)
- I see you updated Template:Category class, but overlooked Template:Category class/column and Template:Category class/second row column. I've now updated those and Template:Articles by Quality/up and Template:Articles by Quality/down, but the first of these is still linking to the old category via {{class}} which invokes Module:Class. (E.g. FM links at Category:20th Century Studios articles by quality and Category:FM-Class 20th Century Studios pages.) Perhaps we should instead write a custom line for FM, like you did here[17] for Unassessed. – Fayenatic London 19:59, 7 December 2024 (UTC)
- Most articles have now moved but there are a handful where the templates are stubbornly generating the old categories - see Category:Wikipedia non-empty soft redirected categories for the remaining ones. Timrollpickering (talk) 12:59, 9 December 2024 (UTC)
And another mess. Category:Low-impact WikiProject Wikipedia essays pages articles is a redirect being populated somehow, but I'm not sure by what, and I can't find the relevant text in the templates. Special:WantedCategories shows similar cases, as well as numerous redlinked FM pages categories. We need to stop this mess where categories are populated by code in templates that is near impossible to amend, while the category names can be easily changed. Timrollpickering (talk) 21:41, 10 December 2024 (UTC)
- @MSGJ: is this a result of your 7 December edit to Module:WikiProject banner? – Fayenatic London 22:52, 11 December 2024 (UTC)
- There are some comments at Module talk:WikiProject banner#Changes for FM-class — Martin (MSGJ · talk) 08:37, 12 December 2024 (UTC)
Weird problem with STN Template
Sunrise Izumo makes frequent use of the STN template, which is supposed to simplify the creation of links to train station articles. The template does what it's supposed to, but it also inserts a link to a discussion about merging the template! Not sure how I should deal with this. Isaac Rabinovitch (talk) 17:13, 6 December 2024 (UTC)
- I think this update by @Primefac: put a comment in the source code that should be in the talk page? -- Verbarson talkedits 20:20, 6 December 2024 (UTC)
- He says he put it in the source deliberately. See his talk page.
- --Isaac Rabinovitch (talk) 20:25, 6 December 2024 (UTC)
- It's normal to display a notice in articles using a template which is nominated for discussion. See Template:Tfm#Display on articles. {{STN}} is used in 19,600 articles and often many times in the same article, e.g. 54 in Sunrise Izumo and Karasuma Line. That causes excessive notices. I don't think it's possible for a template to detect it has already been called on the same page so we cannot say "Only display the notice at the first call". Maybe |type=disabled should be used in {{STN}} to never display a notice on articles. PrimeHunter (talk) 22:53, 6 December 2024 (UTC)
- Now it makes sense. How will we know if we are not told? And the disruption is pretty minimal. -- Verbarson talkedits 23:20, 6 December 2024 (UTC)
- "The service operates in conjunction with the Sunrise Seto service to ‹See TfM›Takamatsu between Tokyo and ‹See TfM›Okayama. The combined 14-car train departs from Tokyo, and stops at ‹See TfM›Yokohama, ‹See TfM›Atami, ‹See TfM›Numazu, ‹See TfM›Fuji, ‹See TfM›Shizuoka, ‹See TfM›Hamamatsu (final evening stop), ‹See TfM›Himeji (first morning stop), and arrives at ‹See TfM›Okayama, where the train splits."
- How is that "minimal"? Isaac Rabinovitch (talk) 03:12, 7 December 2024 (UTC)
- I've disabled the TfM link. Nardog (talk) 05:38, 7 December 2024 (UTC)
- If we want to only display something at the first occurrence of it on a page then what are the options? Would we have to add site-wide JavaScript which hides the other occurrences after loading the page? PrimeHunter (talk) 11:14, 7 December 2024 (UTC)
- Or just don't put a notice of a technical discussion in a place where it's mostly going to be seen by ordinary Wikipedia users. I don't see how this is "normal." I've been reading and editing Wikipedia for almost 20 years, and this is the first time I've encountered such a thing. I guarantee you that 99% of Wikipedia users will find such a notice annoying and distracting. Isaac Rabinovitch (talk) 17:31, 7 December 2024 (UTC)
- Notices concerning discussions about articles can be published at the head of the article; they are visible to all readers, but can be ignored by those not interested in WP processes. They don't disturb the flow of the article. It is hard to see how notices of discussion about templates can be published without inserting something into the flow of the article. Should there be an 'I want to see the nuts and bolts' flag that is normally off, but can be set on manually (or configured permanently as a account preference) to enable/disable such notices? -- Verbarson talkedits 18:15, 7 December 2024 (UTC)
- This in your CSS will hide tfd notices in mainspace, assuming they all use tfd: .ns-0 .tfd {display:none;}
- We could hide it for IP's and show by default for registered users. PrimeHunter (talk) 20:56, 7 December 2024 (UTC)
- @PrimeHunter You could use WP:TemplateStyles and the :nth-child(1n+2 of .tfd){display:none} to make it only show the first tag in a given paragraph. --Ahecht (TALK PAGE) 17:29, 10 December 2024 (UTC)
- Looks like MediaWiki can't parse the period before .tfd for some reason, but .tfd ~ .tfd {display: none;} does the same thing (hides all sibling .tfds that come after another .tfd). --Ahecht (TALK PAGE) 19:32, 10 December 2024 (UTC)
- The :nth-child(1n+2 of .tfd) form is not in Selectors Level 3 (a W3C Recommendation) but it is in Selectors Level 4, which is a W3C Working Draft. --Redrose64 🌹 (talk) 21:43, 10 December 2024 (UTC)
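For readers unfamiliar with the selectors being compared above, the effect of the .tfd ~ .tfd rule can be simulated outside CSS. This is a minimal, illustrative Python sketch of the general sibling combinator's matching logic (the helper name visible_tfds is hypothetical, not MediaWiki or TemplateStyles code): every element with class tfd that follows an earlier tfd sibling is hidden, so only the first notice in a sibling group stays visible.

```python
# Illustrative simulation of the CSS rule ".tfd ~ .tfd {display: none}":
# among sibling elements, every element with class "tfd" that is preceded
# by another "tfd" sibling matches the selector and is hidden.

def visible_tfds(sibling_classes):
    """Given the class of each sibling element in document order,
    return a visibility flag for each one (True = shown)."""
    seen_tfd = False
    visibility = []
    for cls in sibling_classes:
        if cls == "tfd":
            visibility.append(not seen_tfd)  # hidden if an earlier .tfd exists
            seen_tfd = True
        else:
            visibility.append(True)          # non-tfd siblings are unaffected
    return visibility
```

Under this sketch, a paragraph containing several station links would show only the first TfM notice; non-tfd siblings between notices do not reset the rule, which matches how the sibling combinator behaves.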
It could also be off for IP's but on by default for registered users?
- Primefac's edit was in accordance with WP:TFDHOW step 1, sixth bullet, except that they appear to have specified |type=tiny instead of |type=inline. --Redrose64 🌹 (talk) 12:35, 7 December 2024 (UTC)
- |type=tiny is an acceptable alternative to |type=inline per Template:Template for discussion#Display on articles. But, as Template:Template for discussion#Which type should be used? goes on to say, completely disabling, as Nardog has done, is ok if "the insertion of any template is deemed too detrimental to a large number of articles, or if it breaks markup". Nthep (talk) 13:41, 7 December 2024 (UTC)
- Re "off for IP's": @PrimeHunter: Have any IPs requested hiding this?
  Re "99% of Wikipedia users will find such a notice annoying and distracting": The same could be said for compulsory voting in Australia. Similar to what Primefac said, I think people complaining about not being notified pose a greater threat because they could riot and demand the results be overturned.
- I requested an edit to make the notification less intrusive. It will be easy to skip over like the other inline cleanup tags and references: "The service operates in conjunction with the Sunrise Seto service to [TfM]Takamatsu between Tokyo and [TfM]Okayama. The combined 14-car train departs from Tokyo, and stops at [TfM]Yokohama, [TfM]Atami, [TfM]Numazu, [TfM]Fuji, [TfM]Shizuoka, [TfM]Hamamatsu (final evening stop), [TfM]Himeji (first morning stop), and arrives at [TfM]Okayama, where the train splits."
  172.97.141.219 (talk) 14:45, 11 December 2024 (UTC)
- If we're going to use {{fix}}, could we put it at the end of the template so as to match other inline cleanup templates' usage? E.g.: "The service operates in conjunction with the Sunrise Seto service to Takamatsu[TfM] between Tokyo and Okayama[TfM]. The combined 14-car train departs from Tokyo, and stops at Yokohama[TfM], Atami[TfM], Numazu[TfM], Fuji[TfM], Shizuoka[TfM], Hamamatsu[TfM] (final evening stop), Himeji[TfM] (first morning stop), and arrives at Okayama[TfM], where the train splits." — Daℤyzzos (✉️ • 📤) Please do ping on reply. 22:23, 11 December 2024 (UTC)
- @DaZyzzogetonsGotDaLastWord: {{subst:Tfd}}/{{Tfd/dated}} is transcluded first in the template to discuss, and I wouldn't know how to delay output. Ahecht replaced {{fix}} with templatestyles, saying TfD is not cleanup, but kept [square brackets] while <angle brackets> confuse non-template-editors. I also proposed {{topicon}}. 172.97.141.219 (talk) 12:27, 12 December 2024 (UTC)
Add new category: articles in mainspace that contain template "Draft article"
{{AfC submission}} uses Module:AfC submission catcheck so it can list AfC submissions with categories automatically in Category:AfC submissions with categories.
It looks like {{Draft article}} also uses Module:AfC submission catcheck but it does not appear to be listing articles in mainspace that contain {{Draft article}} in a category. Can we do that? I have asked @Tol: to add removing {{Draft article}} from articles in mainspace to TolBot's list of tasks. It would be nice if the bot could work from a category, just like the existing task to remove {{Draft categories}} from mainspace articles.
Note that there are currently no articles in mainspace that contain {{Draft article}} but that is because I used AWB to remove it. Thank you, Polygnotus (talk) 06:33, 7 December 2024 (UTC)
- @Polygnotus You're basically asking for https://en.wikipedia.org/wiki/Special:WhatLinksHere?target=Template%3ADraft+article&namespace=0&hidelinks=1&hideredirs=1&limit=50. It's been a while since I've used the desktop AWB, but in WP:JWB it's pretty easy to generate a list of mainspace pages that transclude a template. You can import the JSON file below to do it for {{Draft article}}:
  { "Draft article template in mainspace": {"string":{"namespacelist":["0"],"linksto-title":"Template:Draft article"},"bool":{"linksto":true,"backlinks":false,"embeddedin":true,"imageusage":false},"replaces":[]} }
  TolBot should be able to do something similar. --Ahecht (TALK PAGE) 17:21, 10 December 2024 (UTC)
- That would also be a way to achieve the same goal, but that would be inconsistent, less elegant, and a waste of dev time. AWB and JWB are intended for tasks that require human supervision, which this does not. Polygnotus (talk) 15:56, 13 December 2024 (UTC)
- @Polygnotus PyWikiBot, or whatever TolBot is using on the backend, should be able to perform a similar search. --Ahecht (TALK PAGE) 19:28, 16 December 2024 (UTC)
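As a sketch of the approach discussed above, a bot could fetch the same transclusion list through the MediaWiki Action API's list=embeddedin query, which is the data behind Special:WhatLinksHere's transclusion view. This Python helper only builds the request URL rather than performing it; the function name and default values are illustrative, not part of any bot's actual code.

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def embeddedin_query_url(template_title, namespace=0, limit=500):
    """Build an API query URL listing pages that transclude the given
    template -- the same set as Special:WhatLinksHere with "hide links"
    and "hide redirects" checked."""
    params = {
        "action": "query",
        "list": "embeddedin",           # pages embedding (transcluding) the title
        "eititle": template_title,
        "einamespace": str(namespace),  # 0 = article (main) namespace
        "eilimit": str(limit),
        "format": "json",
    }
    return API + "?" + urlencode(params)
```

A bot would issue this request, then follow the eicontinue value in each response to page through larger result sets; Pywikibot exposes equivalent functionality through its page generators.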
Typing "Template:gl" (lower-case G, lower-case L) in the search box takes me to an unexpected page
When I type "Template:gl" (lower-case G, lower-case L) in the search box at the top of my page (in Vector 2022), and then click Search, I am automatically taken to Template:GL (upper-case G, upper-case L). There is not a redirect at Template:gl, so I do not understand why this happens. I believe that I should end up at this search result page, telling me that "The page "Template:Gl" does not exist", etc.
This also happens if I type "Template:gin", so it is not limited to two-letter names.
I thought that after the first character, case was significant in page names. What is happening here? – Jonesey95 (talk) 19:20, 10 December 2024 (UTC)
- The search box allows very near matches. This query matches "Now try all upper case" or another type of near match. 172.97.141.219 (talk) 19:51, 10 December 2024 (UTC)
- Thanks for that. I suppose this (to me) inconsistent behavior is helpful for nearly everyone, but not for template editors and gnomes trying to investigate and fix specific problems. I find it a bit frustrating that the Search box at the top of the page behaves differently from the Search page. I guess that's why one has a white-background button that is the same height as the text box, and the other has a blue-background button that is taller than the text box. Maybe that will help me remember. – Jonesey95 (talk) 19:56, 10 December 2024 (UTC)
- The big search box at Special:Search always makes a search and never goes directly to a matching page name. The normal search box on every page always goes directly to a page which only differs by capitalization, unless you select "Search for pages containing" in the dropdown. PrimeHunter (talk) 21:15, 10 December 2024 (UTC)
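To illustrate the "near match" behaviour described above: the go-box tries a handful of case variants of the typed title before falling back to a results page. The exact heuristics live inside MediaWiki's search code, so this Python sketch (with a hypothetical function name) only shows the kind of variants involved, not the real implementation.

```python
def near_match_candidates(title):
    """Case variants a 'go'-style title lookup might try, in order.
    Illustrative only; the real logic is in MediaWiki's search code."""
    candidates = [
        title,                          # exact title as typed
        title[:1].upper() + title[1:],  # first letter capitalized
        title.upper(),                  # all upper case ("gl" -> "GL")
    ]
    seen, ordered = set(), []
    for c in candidates:                # de-duplicate, keeping order
        if c not in seen:
            seen.add(c)
            ordered.append(c)
    return ordered
```

Under this sketch, typing "gl" produces the variant "GL", which is why a lookup can land on the existing Template:GL before any "page does not exist" search results page is shown.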
- Jonesey95, if you append a tilde to any search in the top-right box, it will force a search result page, regardless of whether a page exists matching your search string or not. This is actually documented somewhere, and not some kind of kludgey thing that might go away next version. Try Template:Ambox~ or similar. Mathglot (talk) 09:02, 13 December 2024 (UTC)
- Interesting. Strangely, it doesn't tell me that "The page Template:gin~ does not exist", as I might expect, but I'll file that tip away for future use. – Jonesey95 (talk) 15:03, 13 December 2024 (UTC)
Transliteration error at Ninurta
Hi. I just chanced upon a Good Article, Ninurta, which now has a red-linked Error on the first line. It seems to be some kind of problem with transliteration, because it's using a non-Latin alphabet or characters. It is using the "transl" template and I don't know how to correct it. Would somebody please fix this? ProfGray (talk) 14:02, 11 December 2024 (UTC)
- I fixed it by replacing the non-Latin "𒅁" with "Ib (cuneiform)" in the wikilink. Please improve the help text if it was not clear to you. – Jonesey95 (talk) 14:50, 11 December 2024 (UTC)
- @Jonesey95 thank you! ProfGray (talk) 00:20, 12 December 2024 (UTC)
[wikibase-conflict-patched] Your edit was patched into the latest version
I encountered this warning message while running wbeditentity. I wonder if anyone can tell me how to avoid it, or who I should ask to get a solution to the problem? Kanashimi (talk) 23:22, 11 December 2024 (UTC)
Fun problem with Improved Syntax Highlighting (beta feature)
Was editing a page (2014 Gaza War) with the Improved Syntax Highlighting beta feature when I noticed that the text I was editing was all purple. Scrolled up to find where the problem started, and it was first completely unhighlighted, then all purple except for [[where it should be different]], then it was just completely off kilter. E.g. As part of its crackdown and concurrent to rocket fire from Gaza, Israel conducted air strikes against Hamas facilities in the Gaza Strip.
I guess that's beta features for you. – Daℤyzzos (✉️ • 📤) Please do not ping on reply. 23:32, 11 December 2024 (UTC)
- The talk page for that beta feature is mw:Help talk:Extension:CodeMirror if you want to report a problem there. It helps to describe exactly what you clicked on and what you saw. For example, were you using the Visual Editor, and were you editing a section or the whole article? – Jonesey95 (talk) 00:56, 12 December 2024 (UTC)
- Thank you... not sure how I would get syntax highlighting in Visual Editor though... :-) – Daℤyzzos (✉️ • 📤) Please do ping on reply. 01:46, 12 December 2024 (UTC)
- Phab:T366035 析石父 (talk) 14:26, 12 December 2024 (UTC)
- Thank you. – Daℤyzzos (✉️ • 📤) Please do ping on reply. 21:28, 12 December 2024 (UTC)
Does the Japanese Wikipedia allow English edit summaries?
An IP tried to ask the Japanese Wikipedia whether English edit summaries are allowed but ended up receiving no consensus. So, I'm gonna mirror his discussion here on the English Wikipedia's village pump. 67.209.130.128 (talk) 03:14, 12 December 2024 (UTC)
- The English Wikipedia has no authority over the Japanese Wikipedia. We would probably not want Japanese edit summaries here, but we don't have a "help for non-English speakers" page either so make of that what you will. * Pppery * it has begun... 04:32, 12 December 2024 (UTC)
- It appears to have no filter to stop non-Japanese edit summaries. I suggest that you supply an edit summary in English that is helpful when editing. Without knowing the language, perhaps you can usefully edit images, or numbers on a page. Graeme Bartlett (talk) 06:37, 12 December 2024 (UTC)
- I really wouldn't recommend editing a Wikipedia in a language you don't speak for anything beyond the most perfunctory of edits, e.g. maybe replacing images with technically superior versions. For that, machine translation (perhaps with a courtesy note explaining you don't speak the language) should suffice. Remsense ‥ 论 06:39, 12 December 2024 (UTC)
- Unironically unironically the highest quality tip. Thank you. 67.209.130.66 (talk) 08:49, 12 December 2024 (UTC)
- If you log in, Japanese Wikipedia might send you a welcome message - they sent me one some years ago, see ja:利用者‐会話:Redrose64, which includes one line of English:
- Hello, Redrose64! Welcome to Japanese Wikipedia. If you are not a Japanese speaker, you can ask a question in Help. Enjoy!
- which may help here. I see that an IP has posted a similar question at 04:10, 3 December 2024 (UTC). --Redrose64 🌹 (talk) 18:50, 12 December 2024 (UTC)
- Or SWMT. JJPMaster (she/they) 20:50, 12 December 2024 (UTC)
- I sometimes perform file moves on Commons, which generates a copy of my edit summary (in English) copied to all languages where the file is renamed pursuant to the file move. I have never had a problem result from this in any language Wiki, including Japanese, where I have some 250 of these. BD2412 T 20:54, 12 December 2024 (UTC)
Redirects to anchors
Redirects to anchors don't seem to work.
If I go to Special pages it redirects to MediaWiki at the top of the page. But if I click the link in "Redirected from Special pages" it shows a link to MediaWiki#Installation and configuration. And if I click that link, I get the anchor jump.
Is the failure to do the jump on redirect peculiar to Firefox or do I need to file a bug report with Wikimedia? Or is this a known issue they won't be able to fix?
Thisisnotatest (talk) 03:06, 13 December 2024 (UTC)
- What version of Firefox are you using?
- You can find it under help > About firefox. Snævar (talk) 03:35, 13 December 2024 (UTC)
- Redirects to a section require scripting to be enabled. Johnuniq (talk) 08:13, 13 December 2024 (UTC)
- Works correctly for me. Firefox 133.0.3 (64 bits) @ Windows 11 Home. --CiaPan (talk) 09:00, 13 December 2024 (UTC)
- Works for me in Firefox with JavaScript enabled, but not disabled as Johnuniq said. Does https://www.whatismybrowser.com/detect/is-javascript-enabled/ say JavaScript is enabled? What is the url in the address bar after clicking Special pages? With JavaScript enabled and working correctly it should be rewritten to https://en.wikipedia.org/wiki/MediaWiki#Installation_and_configuration and jump to the section. Without JavaScript the url remains https://en.wikipedia.org/wiki/Special_pages. It does display the MediaWiki article but doesn't jump to the section. This is an effect of MediaWiki using "pseudoredirects" and not real HTTP redirects. PrimeHunter (talk) 12:56, 13 December 2024 (UTC)
Data sorting in tables
Hi there, I've created a page List of Neo-Latin authors which has sortable lists.
In the first column, I've added data sorting via either |data-sort="Lastname, firstname"|
or with {{sortname|Firstname|Lastname}}
, or variations on these. They seem to be outputting to the table, but it doesn't always seem to sort on these values. In particular, cells which have sort values, but do not contain data, are treated as blanks.
It is necessary to have some data-less name cells, because the table contains columns for the author's original names, and their Latin names; but either of these can be absent for different authors.
I've tried adding &nbsp; to make browsers think there is content, in case that is the issue, but that doesn't seem to help. Any ideas? Jim Killock (talk) 17:12, 13 December 2024 (UTC)
- @JimKillock: It's called
data-sort-value
.[18] PrimeHunter (talk) 18:23, 13 December 2024 (UTC)
- Ah great - thanks! Jim Killock (talk) 20:29, 13 December 2024 (UTC)
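For later readers, here is a minimal sketch of the corrected markup using the data-sort-value attribute PrimeHunter names (the author names and sort keys are illustrative, not taken from the article):

```wikitext
{| class="wikitable sortable"
! Latin name !! Original name
|-
| data-sort-value="Erasmus, Desiderius" | Desiderius Erasmus
| data-sort-value="Gerritszoon, Gerrit" | Gerrit Gerritszoon
|-
| data-sort-value="Secundus, Johannes" | Johannes Secundus
| data-sort-value="Everaerts, Jan" | &nbsp;
|}
```

The last cell is visually empty but still carries a sort key; whether the table sorter honours a sort value on an empty-looking cell was the open question in this thread.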
Module editor needed, again
Another two redlinks generated by the move of template-generated maintenance categories again, this time relating to {{Infobox road}}:
- Category:Infobox road instances in Cabo Verde → Category:Infobox road instances in Cape Verde
- Category:Infobox road instances in Georgia → Category:Infobox road instances in Georgia (country)
But yet again, the template isn't directly declaring these categories itself in any place I could fix them myself, but is smuggling them in via a module I can't edit, so I need somebody with module-editing privileges to clean them up. Thanks. Bearcat (talk) 17:20, 13 December 2024 (UTC)
- This is beyond me, too. And I tested and it doesn't follow redirects. Posted at Template talk:Infobox road in the hope that one of the editors watching that knows how this works. * Pppery * it has begun... 17:30, 13 December 2024 (UTC)
- I think exceptions to ISO names need to be added at Template:Infobox road/meta/mask/category. — Jts1882 | talk 18:10, 13 December 2024 (UTC)
- I agree, it is safer than adding it to the ISO module; fewer templates use the subtemplate than the module. Snævar (talk) 20:19, 13 December 2024 (UTC)
- I've made this edit and it seems to make the change. One road that I null edited is there at the moment. — Jts1882 | talk 08:04, 14 December 2024 (UTC)
- The trace here is Template:Infobox_road > Template:Infobox road/meta/mask/category > Template:Country name > Module:ISO 3166 > Module:ISO 3166/data/National. The last module "Module:ISO 3166/data/National" mentions "Cabo Verde" as the main name and "Cape Verde" as the alternative, hence the category gets the "Cabo Verde" name. Snævar (talk) 19:40, 13 December 2024 (UTC)
- Thanks, gang. I followed up Jts's Cape Verde edit above with another one that used the same format to deal with the Georgia category, and that also worked, so that one's now clean as well. Thanks again for figuring this out. Bearcat (talk) 15:35, 14 December 2024 (UTC)
File:01 Burqa (cropped).tif
When I hover over the "reply" link on WP:VP/P policy I see File:01 Burqa (cropped).tif. Any particular reason for that? CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 23:00, 13 December 2024 (UTC)
- Not sure which "reply" link you're hovering over (there are far too many to try all of them), but neither hovering nor clicking yielded the file in question for the two I tried. – Daℤyzzos (✉️ • 📤) Please do not ping on reply. 23:29, 13 December 2024 (UTC)
- It's all the reply links. Only hovering shows the image and click on the reply link just opens the page to reply. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 04:54, 14 December 2024 (UTC)
- @CambridgeBayWeather: I guess you have enabled "Navigation popups" at Special:Preferences#mw-prefsection-gadgets. The reply links are made by "Enable quick replying" at Special:Preferences#mw-prefsection-editing. The links point to the page itself and File:01 Burqa (cropped).tif is displayed in Wikipedia:Village pump (policy)#Can we hide sensitive graphic photos? Popups can display an image outside the lead, unlike the default feature Page previews at Special:Preferences#mw-prefsection-rendering. PrimeHunter (talk) 23:36, 13 December 2024 (UTC)
- I do have the navigation popup enabled. It just seemed an odd choice of image for the VP/P page as I didn't realise that was the only image on the page. I see that File:718smiley.svg is showing at Wikipedia:Village pump (proposals). CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 05:00, 14 December 2024 (UTC)
- Maybe the icons at Wikipedia:Village pump should also be added to the top of the pages. PrimeHunter (talk) 12:12, 14 December 2024 (UTC)
- There's not really a good place to add only the relevant icon, and hovering over a link to WP:VP (no particular section) yields no image, despite the WP:VP/P one being in the header, so I'm not quite sure where at all one would put a relevant image. – Daℤyzzos (✉️ • 📤) 15:42, 14 December 2024 (UTC)
- Popups looks at the source text in Wikipedia:Village pump and doesn't discover the icons which are transcluded from {{Village pump}}. Hovering on the template link shows the first icon File:Edit-find-replace.svg. PrimeHunter (talk) 20:15, 14 December 2024 (UTC)
- Ah. Still doesn't solve the question of where one would put the WP:VP icons. – Daℤyzzos (✉️ • 📤) Please do not ping on reply. 22:52, 14 December 2024 (UTC)
- Huh. I thought that at the top might work because when I hover over my talk page link above I see File:ANEWSicon.png and on my user page, File:CambridgeBayWeather logo.svg. On PrimeHunter's I see a barnstar and his talk page link shows File:Information.svg. But for some reason hovering over the links to Daℤyzzos and his talk page show no images at all. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 00:18, 15 December 2024 (UTC)
- Actually, hovering over a link to your talk page displays File:Wikipedia Administrator.svg, but that's still provided (albeit smaller than File:ANEWSicon.png) by the Administrators' newsletter. – Daℤyzzos (✉️ • 📤) 18:35, 15 December 2024 (UTC)
- And that's what I'm seeing now. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 18:39, 15 December 2024 (UTC)
- I did some testing and I found... (drumroll please)
...that I have absolutely no idea why my talk page (or normal userpage for that matter) gets no image! But at least we know now that it can't be something to do with the image or its syntax. — Daℤyzzos (✉️ • 📤) Please do not ping on reply. 19:19, 15 December 2024 (UTC)
Cursor jumping
For a month or longer now, my cursor has been jumping to the beginning of my sentence when I'm writing a message in places like the Help Desk or an article's Talk page — but interestingly, not here at Technical Help — and try to type capital letters or certain common symbols such as colons, semicolons, parentheses, quotation marks, exclamation points, and question marks. This happens ONLY when I'm working in Wikipedia, nowhere else.
It's really maddening, because it means I waste a lot of time going back to the start of a line and copying the letter or symbol to paste back down where I was typing. Can you help me stop this? Augnablik (talk) 12:08, 14 December 2024 (UTC)
Another mystery
When I go to the talk page for a Wiki article entitled "Ramendra Kumar" and click on History, sometimes I see the entire history as I'd expect, with all messages in descending order ... other times I see selected revisions (there's a box saying "Compare selected revisions," so I'm calling what I see that same way). I never know what to expect when I click on History. I assume this would happen at other article Talk pages.
Of course I want to see the entire history. Please help me stop the selected revisions from coming up when I click on History. Augnablik (talk) 12:43, 14 December 2024 (UTC)
- Augnablik, what is the URL, in both cases? — Qwerfjkltalk 13:38, 15 December 2024 (UTC)
- It's https://en.wikipedia.org/wiki/Ramendra_Kumar, @Qwerfjkl. But now I see the history as it should look. I've noticed this has happened before with that history ... but now I've discovered this is happening with other histories as well. One day, I see selected revisions — another day, everything.
- I checked several more edits that I made to other articles and the History tab is bringing up all the revisions correctly. Let me check on this again tomorrow and see if it goes back to seeing just selected revisions. Stay tuned, please.
- I'm intrigued by your User name, as it's certainly an interesting version of the Qwerty keyboard! Augnablik (talk) 15:36, 15 December 2024 (UTC)
- Augnablik, I mean the URL when you only see certain versions, not the URL of the page.
As far as I know there is no Qwerfjkl keyboard; I just started on Qwerty and got bored halfway through. — Qwerfjkltalk 15:56, 15 December 2024 (UTC)
- Oh, sorry, that’s what I thought I’d copied for you. It’s https://en.wikipedia.org/w/index.php?title=Ramendra_Kumar&action=history .
- But again, I’ve now found that the selected version/entire version changes happen elsewhere as well as at that page. And by the way when I just checked at the RK page, I found the edits were now showing in their entirety. So, then, they changed twice in one day.
- As for your Wiki name, yes, I know there’s no keyboard that uses it. I was just having a little fun with you. Augnablik (talk) 18:26, 15 December 2024 (UTC)
Yet another mystery
When I add topics in places like the article Talk pages and the Help Desk, perhaps elsewhere too, I'm finding a lot of times that square-shaped "sticky notes" have begun to pop up with brief dictionary definitions of words. No idea why. I don't ask for them, they just seem to come on their own. They get in the way of my typing. Is there a way to stop this? Augnablik (talk) 12:46, 14 December 2024 (UTC)
- Do the "sticky notes" look something like this?
  note: A brief record of facts, topics, or thoughts, written down as an aid to memory.
  — Daℤyzzos (✉️ • 📤) 15:59, 14 December 2024 (UTC)
- Yes, except mine are square.
- By the way, @DaZyzzogetonsGotDaLastWord, please tell me how you inserted that image. That's exactly what I wanted to do in this message but didn't know how. Augnablik (talk) 16:22, 14 December 2024 (UTC)
- Okay. If the "sticky notes" look like that, you probably have some sort of dictionary extension installed. If you're using Google Chrome, check here to see if you have that installed. If you're not using Google Chrome, I doubt I can help any further. I made the diagram using the {{box}} template—it's not an image. Documentation for using the {{box}} template can be found here. Information on uploading a screenshot (image) of Wikipedia to show your problem can be found here. — Daℤyzzos (✉️ • 📤) Please do not ping on reply. 18:53, 14 December 2024 (UTC)
- 1- I am using Chrome. :) I followed your link and ended up on a page entitled Google Dictionary, so I suppose that means the dictionary is installed. Now what?
- 2- A box template, interesting. I look forward to learning about this. Augnablik (talk) 08:14, 15 December 2024 (UTC)
- @Augnablik: Everybody sees a page called Google Dictionary at [19]. The question is whether you see a button to add or remove the extension. It may be another extension. See https://support.google.com/chrome_webstore/answer/2664769#uninstall-extension. PrimeHunter (talk) 11:15, 15 December 2024 (UTC)
- @PrimeHunter, I see an Add button. Augnablik (talk) 12:17, 15 December 2024 (UTC)
- @Augnablik: Then look for another installed extension as described at my link. PrimeHunter (talk) 12:27, 15 December 2024 (UTC)
- I did what you asked, looking for another installed extension. Two came up. One was clearly an extension, and it didn't look important, so I deleted it. But the second is Acrobat! I can't imagine why that would appear as an extension. As you can guess, I didn't uninstall it.
- Perhaps for the uninstallation to work, or the sticky notes to stop (if that's supposed to happen now), I'll restart my computer and come back to see what happens. Augnablik (talk) 15:09, 15 December 2024 (UTC)
Christmas message error
Urgh I just sent out a load of Christmas messages and forgot to add a </div> at the end. So responses will spew onto the background. Can somebody use AWB or a bot to quickly fix it and add it like this, it would take an hour to do manually! ♦ Dr. Blofeld 10:29, 15 December 2024 (UTC)
- Some have been fixed already. Each one will require checking manually. @Dr. Blofeld: What is the original that you used? --Redrose64 🌹 (talk) 12:35, 15 December 2024 (UTC)
- Blowers, you are guilty of having too many wiki-friends! Looks like RedRose64 is very kindly helping you out. Martinevans123 (talk) 12:47, 15 December 2024 (UTC)
- A number of them are contributors to the challenges who deserve to be shown that they are appreciated Martin! ♦ Dr. Blofeld 12:58, 15 December 2024 (UTC)
- The only challenge I generally ever attempt is this one, and the results aren't usually very impressive. Martinevans123 (talk) 13:25, 15 December 2024 (UTC)
- Redrose64, or use AWB to alert them to add </div> at the end if they've not already fixed it! ♦ Dr. Blofeld 12:52, 15 December 2024 (UTC)
- That would be spamming. But what is the original that you used? Presumably it was a template; if I can fix the problem at source, it shouldn't occur again. It seems that every year, somebody sends out Christmas greetings with unclosed markup of some kind - in this case there were both a missing
'''''
and a missing</div>
but in the past I've seen cases of unclosed tables, or where closing tags are transposed. --Redrose64 🌹 (talk) 12:59, 15 December 2024 (UTC)
- Please let me know if I can help with this. I have a bot task approved for fixing typos and issues in mass messages. – DreamRimmer (talk) 13:21, 15 December 2024 (UTC)
- Is it possible something could be coded to fix the ones Redrose hasn't done yet? It's just it'll take over an hour to fix manually. Perhaps if this is a common problem at Christmas something could be coded to fix them? Only if it wouldn't take long to do Dream. ♦ Dr. Blofeld 16:56, 15 December 2024 (UTC)
- Yes, I can fix it. It is bedtime here where I live, so I will take care of it tomorrow. – DreamRimmer (talk) 17:14, 15 December 2024 (UTC)
- All Done now, including fixing up some half-fixes by others - do people really think that
</div style>
is valid? --Redrose64 🌹 (talk) 19:59, 15 December 2024 (UTC)
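The check such a cleanup script would run is straightforward to sketch: count opening and closing <div> tags in each recipient's talk-page wikitext and flag the pages that don't balance. This is a minimal sketch under my own assumptions (the greeting text is hypothetical); note that a malformed closer like </div style> is deliberately counted as neither open nor close, so pages "fixed" that way still get flagged.

```python
import re

def div_balance(wikitext: str) -> int:
    """Return opening-minus-closing <div> count; 0 means balanced."""
    opens = len(re.findall(r"<div\b[^>]*>", wikitext, re.IGNORECASE))
    closes = len(re.findall(r"</div\s*>", wikitext, re.IGNORECASE))
    return opens - closes

def needs_fix(wikitext: str) -> bool:
    """True if the page has more opening <div>s than closing ones."""
    return div_balance(wikitext) > 0

# Hypothetical greeting with the missing closer, and the repaired version.
broken = '<div style="border: 1px solid">Merry Christmas! Dr. Blofeld'
fixed = broken + "</div>"
```

A bot using this check would then append the missing </div> to the end of the flagged greeting section.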
URGENT - more category template mess
A mass nomination has been listed at Wikipedia:Categories for discussion/Working for processing, with hundreds of categories and hundreds of thousands of articles. However, these are generated by convoluted code in templates and it's not clear how to change WikiProject & taskforce "articles" to "pages" without causing chaos.
Can someone please URGENTLY look at the templates and sort this out. Once again we've had a mass renaming pushed through without stopping to check that it can be easily done. Timrollpickering (talk) 00:04, 16 December 2024 (UTC)
- Module talk:WikiProject banner has some discussion about the topic. Izno (talk) 00:12, 16 December 2024 (UTC)
- I see at the top of Wikipedia:Categories for discussion/Working#Bot work it states
If the category needs to be split among multiple destination categories, requires template editing, or requires editing the documentation subpage of templates, or any other special circumstances that require manual review, list it at Wikipedia:Categories for discussion/Working/Manual rather than here.
Perhaps that should be done, and the person who didn't do that in the first place informed of their mistake? Anomie⚔ 00:14, 16 December 2024 (UTC)
- I've moved the list to Wikipedia:Categories for discussion/Working/Large and will try blocking the bot for a couple of hours to see if that resets it. I have asked the editor who put the list on the main processing page to remember to fix templates at the same time. But more generally this whole renaming mess has caused chaos, not least because of the absurdly complicated way these categories are generated without being easy to amend. Timrollpickering (talk) 00:22, 16 December 2024 (UTC)
Infobox radio station issues
Many articles about radio stations in Category:CS1 errors: URL have a common problem: a citation error, in the same place each time. It's something to do with {{Infobox radio station}}.––kemel49(connect)(contri) 17:38, 16 December 2024 (UTC)
- Not a WP:VPT issue.
- I only looked at one article (WALC) but in that article there is this:
| facility_id = WALC: 72377 <br />WZLC: 173901
- The value assigned to that parameter completes an incomplete url.
- If one is to believe the template documentation, the only value that should be assigned to that parameter is the 'numeric Facility ID' – whatever that is. As currently written, the value assigned to
|facility_id=
looks like a mishmash of callsigns and facility IDs for two different radio stations. Perhaps the other radio station articles in Category:CS1 errors: URL suffer from similarly malformed input. - —Trappist the monk (talk) 18:16, 16 December 2024 (UTC)
- Is it appropriate to remove "WALC:" & "<br />" and only put one numerical value rather than two?––kemel49(connect)(contri) 18:22, 16 December 2024 (UTC)
- You should probably discuss this issue with editors at Wikipedia:WikiProject Radio Stations. Editors there should be able to tell you how to properly handle two (related) radio stations in a single article/infobox. Perhaps that discussion will result in changes to {{Infobox radio station}}.
- —Trappist the monk (talk) 18:56, 16 December 2024 (UTC)
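As a rough illustration of the malformed-input check Trappist describes, here is a sketch that splits a |facility_id= value into tokens and reports anything non-numeric. The expected single-number format follows the template documentation quoted above; the function name and splitting rules are my own assumptions.

```python
import re

def facility_id_problems(value: str) -> list[str]:
    """Return the non-numeric tokens in a |facility_id= value.

    Per the template documentation, the parameter should hold a single
    numeric facility ID; anything else breaks the generated URL.
    """
    # Split on <br> tags (any spacing, optionally self-closed) and whitespace.
    tokens = re.split(r"<br\s*/?>|\s+", value)
    return [t for t in tokens if t and not t.isdigit()]

# The malformed value from the WALC article, and a conforming one.
bad = "WALC: 72377 <br />WZLC: 173901"
good = "72377"
```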
Account creation limit for administrators
I'm trying to process WP:ACC requests and I'm getting the message that I've exceeded the "6 accounts in the last 24 hours" limit (when I tried it via the API, I got "acct_creation_throttle_hit"), despite the fact that I am an administrator and have the noratelimit userright. Reading WP:Account creator and WP:Event coordinator, it seems like admins shouldn't be subject to that limit. I've verified via the API that I am properly logged in and have noratelimit. Any idea why I'm not able to create further accounts? --Ahecht (TALK PAGE) 19:14, 16 December 2024 (UTC)
- Special:ListGroupRights#sysop confirms you should have noratelimit. You have created 9 accounts today.[20] wgAccountCreationThrottle is set to 6 in https://noc.wikimedia.org/conf/highlight.php?file=InitialiseSettings.php. If the problem started after the 9th then I really don't know why. PrimeHunter (talk) 20:55, 16 December 2024 (UTC)
- @PrimeHunter Some were created directly with the ACC tool, so they may appear to come from a toolforge IP address as opposed to my own, and others were created manually. At least that's all I can think of. --Ahecht (TALK PAGE) 21:11, 16 December 2024 (UTC)
- @PrimeHunter I just tried creating some other accounts both manually and via the tool and they both worked, but the specific username I tried before still gives me the "6 accounts" error. Does that rate limit follow the username somehow? --Ahecht (TALK PAGE) 21:18, 16 December 2024 (UTC)
- I don't know. PrimeHunter (talk) 21:35, 16 December 2024 (UTC)
- You could be hitting a special upstream mitigation, is there anything unusual about the username you are trying to create? — xaosflux Talk 22:16, 16 December 2024 (UTC)
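For readers unfamiliar with how such a throttle behaves, here is a rough model of a "6 per 24 hours" sliding-window limit with a noratelimit bypass. This is an illustrative sketch, not MediaWiki's actual implementation; in particular, keying the counter by the requesting IP is an assumption, though it matches the Toolforge-proxy theory raised above.

```python
import time
from collections import defaultdict, deque

class CreationThrottle:
    """Sliding-window throttle: at most `limit` creations per `window` seconds.

    A rough model of a wgAccountCreationThrottle-style limit; the keying
    (here, per requesting IP) is an assumption for illustration only.
    """

    def __init__(self, limit=6, window=24 * 3600):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent creations

    def allow(self, key, has_noratelimit=False, now=None):
        if has_noratelimit:
            return True  # admins and account creators bypass the throttle
        now = time.time() if now is None else now
        q = self.hits[key]
        # Drop creations that have aged out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Under this model, six creations funnelled through one proxy IP would exhaust the bucket for every later request from that IP, regardless of who makes it.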
Highlight function of Interactive Pathways Map not displaying content correctly
When you try to use the Highlight function, as in
GlycolysisGluconeogenesis_WP534|highlight=Glucose-6-phosphate_isomerase
the thumb image is not displayed correctly: it is centered on the highlighted object as intended, but not displayed, leaving a void where the highlighted object should be.
The generated markup is div style="position: relative; top: -204.445378151261px; left: -239.5px; width: {{{bSize}}}px". The problem is in width:{{{bSize}}}; it should be fit-content.
The problem affects every interactive pathways map I have seen. A.garofalo32 (talk) 20:37, 16 December 2024 (UTC)
- The highlight box when clicking on a notification linking to this post is also way oversized: it extends just past the bottom of the text in the previous post and well below the bottom of the footer. (Wait—is this reply also going to be way off to the side? Only one way to find out!) – Daℤyzzos (✉️ • 📤) Please do not ping on reply. 20:49, 16 December 2024 (UTC)
Tech News: 2024-51
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Weekly highlight
- Interested in improving event management on your home wiki? The CampaignEvents extension offers organizers features like event registration management, event/wikiproject promotion, finding potential participants, and more - all directly on-wiki. If you are an organizer or think your community would benefit from this extension, start a discussion to enable it on your wiki today. To learn more about how to enable this extension on your wiki, visit the deployment status page.
Updates for editors
- Users of the iOS Wikipedia App in Italy and Mexico on the Italian, Spanish, and English Wikipedias, can see a personalized Year in Review with insights based on their reading and editing history.
- Users of the Android Wikipedia App in Sub-Saharan Africa and South Asia can see the new Rabbit Holes feature. This feature shows a suggested search term in the Search bar based on the current article being viewed, and a suggested reading list generated from the user’s last two visited articles.
- The global reminder bot is now active and running on nearly 800 wikis. This service reminds most users holding temporary rights when they are about to expire, so that they can renew should they want to. See the technical details page for more information.
- The next issue of Tech News will be sent out on 13 January 2025 because of the end of year holidays. Thank you to all of the translators, and people who submitted content or feedback, this year.
- View all 27 community-submitted tasks that were resolved last week. For example, a bug was fixed in the Android Wikipedia App which had caused translatable SVG images to show the wrong language when they were tapped.
Updates for technical contributors
- There is no new MediaWiki version next week. The next deployments will start on 14 January. [21]
Tech news prepared by Tech News writers and posted by bot • Contribute • Translate • Get help • Give feedback • Subscribe or unsubscribe.
MediaWiki message delivery 22:21, 16 December 2024 (UTC)
Seeking bot that checks for duplicate sources
Is there any bot on Wikipedia that will check an article for sources that are used multiple times (and could be combined)? ▶ I am Grorp ◀ 22:26, 16 December 2024 (UTC)
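The core of such a check can be sketched directly: collect every <ref>…</ref> body in the wikitext and report the ones that repeat. This is a minimal sketch (self-closing refs like <ref name="x"/> are skipped, since they are already combined), not any existing bot's implementation.

```python
import re
from collections import Counter

def duplicate_refs(wikitext: str) -> list[str]:
    """Return <ref> bodies that appear more than once in an article's wikitext.

    Identical bodies are candidates for combining into one named reference.
    """
    bodies = re.findall(r"<ref[^>/]*>(.*?)</ref>", wikitext, re.DOTALL)
    counts = Counter(body.strip() for body in bodies)
    return [body for body, n in counts.items() if n > 1]

# Hypothetical article text with one repeated citation.
sample = (
    "A.<ref>Smith 2020, p. 1.</ref> B.<ref>Jones 2019.</ref> "
    "C.<ref>Smith 2020, p. 1.</ref>"
)
```

A tool built on this would then replace the later duplicates with a named reference (<ref name="..."/>) pointing at the first occurrence.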
Proposals
RfC: Extended confirmed pending changes (PCECP)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Should a new pending changes protection level - extended confirmed pending changes (hereby abbreviated as PCECP) - be added to Wikipedia? Awesome Aasim 19:58, 5 November 2024 (UTC)
Background
WP:ARBECR (from my understanding) encourages liberal use of EC protection in topic areas authorized by the community or the arbitration committee. However, some administrators refuse to protect pages unless there is recent disruption. Extended confirmed pending changes would allow non-XCON users to propose changes for them to be approved by someone extended confirmed, and can be applied preemptively to these topic areas.
It is assumed that it is technically possible to have PCECP; that is, PCECP as "[auto-accept=extended confirmed users] [review=extended confirmed users]". With the current iteration of FlaggedRevs it might not yet be possible to have extended confirmed users review pending changes under this protection, but that may change in the future.
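The assumed mechanics can be sketched as a toy model. All class and method names below are invented for illustration; this is not the FlaggedRevs API:

```python
# Toy model of the assumed PCECP behaviour: extended confirmed (EC) users
# are auto-accepted; everyone else's edits wait for review by an EC user.
# All names here are hypothetical, not actual MediaWiki/FlaggedRevs code.

class PCECPPage:
    def __init__(self, stable_text):
        self.stable_text = stable_text      # what readers see
        self.pending = []                   # queued edits awaiting review

    def edit(self, user_is_ec, new_text):
        if user_is_ec:
            self.stable_text = new_text     # auto-accept: visible at once
            return "accepted"
        self.pending.append(new_text)       # queued for EC review
        return "pending"

    def review(self, reviewer_is_ec, accept):
        if not reviewer_is_ec or not self.pending:
            return None
        text = self.pending.pop(0)
        if accept:
            self.stable_text = text
        return "accepted" if accept else "rejected"

page = PCECPPage("original")
assert page.edit(user_is_ec=False, new_text="fix typo") == "pending"
assert page.stable_text == "original"       # readers still see old text
assert page.review(reviewer_is_ec=True, accept=True) == "accepted"
assert page.stable_text == "fix typo"
```

The point of the sketch is only that, unlike full EC protection, a non-EC edit is recorded and surfaced for review rather than blocked outright.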
Survey (PCECP)
Support (PCECP)
- Support for multiple reasons: WP:ARBECR only applies to contentious topics. Correcting typos is not a contentious topic. Second, WP:ARBECR encourages the use of pending changes when protection is not used. Third, pending changes effectively serves to allow uncontroversial edit requests without needing to create a new talk page discussion. And lastly, this is within line of our protection policy, which states that protection should not be applied preemptively in most cases. Awesome Aasim 19:58, 5 November 2024 (UTC)
- Support (per... nom?) PC is the superior form of uncontroversial edit requests. Aaron Liu (talk) 20:09, 5 November 2024 (UTC)
- It's better than EC, which already restricts being the free encyclopedia more. As I've said below, the VisualEditor allows much more editing from new people than edit requesting, which forces people to use the source editor. Aaron Liu (talk) 03:52, 6 November 2024 (UTC)
- This is not somehow less or more restrictive than ECR. It's exactly the same level of protection, just implemented in a different way. I do not get the !votes from either side claiming that this will mean more restriction or more bureaucracy. I understand neither, and urge them to explain their rationales. Aaron Liu (talk) 12:32, 12 November 2024 (UTC)
- By creating a difference between what non logged-in readers (that is, the vast majority of them) see versus logged-in users, there is an extra layer of difficulty for non-confirmed and non-autoconfirmed editors, who won't see the actual page they're editing until they start the editing process. Confirmed and autoconfirmed editors may also be confused that their edits are not being seen by non-logged in readers. Because pending changes are already submitted into the linear history of the article, unwinding a rejected edit is potentially more complicated than applying successive edit requests made on the talk page. (This isn't a significant issue when there aren't many pending changes queued, which is part of the reason why one of the recommended criteria for applying pending changes protection is that the page be infrequently edited.) For better or worse, there is no deadline to process edit requests, which helps mitigate issues with merging multiple requests, but there is pressure to deal with all pending changes expediently, to reduce complications in editing. isaacl (talk) 19:54, 12 November 2024 (UTC)
- Do you think this would be fixed with "branching" (similar to GitHub branches)? In other words, instead of PC giving the latest edit, PC just gives the edit of the stable revision and when "Publish changes" is clicked it does something like put the revision in a separate namespace (something like Review:PAGENAME/#######) where ####### is the revision ID. If the edit is accepted, then that page is merged and the review deleted. If the edit is rejected the review is deleted, but can always be restored by a Pending Changes Reviewer or administrator. Awesome Aasim 21:01, 12 November 2024 (UTC)
- Technically, that would take quite a bit to implement. Aaron Liu (talk) 23:18, 12 November 2024 (UTC)
- There are a lot of programmers who struggle with branching; I'm not certain it's a great idea to make it an integral part of Wikipedia editing, at least not in a hidden, implicit manner. If an edit to an article always proceeded from the last reviewed version, editors wouldn't be able to build changes on top of their previous edits. I think at a minimum, an editor would have to be able to do the equivalent of creating a personal working branch. For example, this could be done by working on the change as a subpage of the user's page (or possibly somewhere else (perhaps in the Draft namespace?), using some standard naming hierarchy), and then submitting an edit request. That would be more like how git was designed to enable de-centralized collaboration: everyone works in their own repository, rebasing from a central repository (*), and asks an integrator to pull changes that they publish in their public repository.
- (*) Anyone's public repository can act as a central repository. It just has to be one that all the collaborators agree upon using, and thus agree with the decisions made by the integrator(s) merging changes into that repository. isaacl (talk) 23:22, 12 November 2024 (UTC)
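The personal-working-branch workflow suggested above (draft in your own copy, then have an integrator merge it) can be sketched minimally as follows; all page names and functions are invented for illustration, not a MediaWiki API:

```python
# Sketch of the "personal working branch" idea: a contributor drafts in
# their own copy (e.g. a user subpage), and an integrator merges it into
# the stable article only if the draft is still based on the current
# stable revision. All names are hypothetical.

article = {"stable": "v1"}
drafts = {}  # draft name -> (base revision, draft text)

def create_draft(name):
    # take a personal working copy of the current stable text
    drafts[name] = (article["stable"], article["stable"])

def edit_draft(name, text):
    base, _ = drafts[name]
    drafts[name] = (base, text)

def pull_request(name):
    base, text = drafts[name]
    if base != article["stable"]:
        return "rebase needed"   # stable moved since the draft was taken
    article["stable"] = text     # integrator merges the draft
    del drafts[name]
    return "merged"

create_draft("User:Alice/sandbox")
edit_draft("User:Alice/sandbox", "v2")
assert pull_request("User:Alice/sandbox") == "merged"
assert article["stable"] == "v2"
```

The "rebase needed" branch is where the complexity that worries editors above would live: a stale draft cannot be merged blindly once the stable version has moved.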
- That makes sense. This has influenced me to amend my Q2 answer slightly, but I still support the existence of this protection and the preemptive PC protecting of low-traffic pages. (Plus, it's still not more restriction.) Aaron Liu (talk) 23:20, 12 November 2024 (UTC)
- Support, functionally a more efficient form of edit requests. The volume of pending changes is still low enough for this to be dealt with, and it could encourage the pending changes reviewer right to be given to more people currently reviewing edit requests, especially in contentious topics. Chaotic Enby (talk · contribs) 20:25, 5 November 2024 (UTC)
- Support having this as an option. I particularly value the effect it has on attribution (because the change gets directly attributed to the individual who wanted it, not to the editor who processed the edit request). WhatamIdoing (talk) 20:36, 5 November 2024 (UTC)
- Support: better and more direct system than preemptive extended-confirmed protection followed by edit requests on the talk page. Cremastra (u — c) 20:42, 5 November 2024 (UTC)
- Support, Pending Changes has the capacity to take on this new task. PC is much better than the edit request system for both new editors and reviewers. It also removes the downsides of slapping ECP on everything within contentious topic areas. Toadspike [Talk] 20:53, 5 November 2024 (UTC)
- I've read the opposes below and completely disagree that this would lead to more gatekeeping. The current edit request system is extremely complicated and inaccessible to new users. I've been here for half a decade and I still don't really know how it works. The edit requests we do get are a tiny fraction of the edits people want to make to ECP pages but can't. PCECP would allow them to make those edits. And many (most?) edit requests are formatted in a way that they can't be accepted (not clear what change should be made, where, based on what source), a huge issue which would be entirely resolved by PCECP.
- The automatic EC protection of all pages in certain CTOPs is not the point of this proposal. Whether disruption is a prerequisite to protection is not altered by the existence of PCECP and has to be decided in another RfC at another venue, or by ArbCom. PCECP is solely about expanding accessibility to editing ECP pages for new and unregistered editors, which is certainly a positive move.
- I, too, hate the PC system at dewiki, and I appreciate that Kusma mentioned it. However, what we're looking at here is lowering protection levels and reducing barriers to editing, which is the opposite of dewiki's PC barriers. Toadspike [Talk] 10:24, 16 November 2024 (UTC)
- Support (Summoned by bot): per above. C F A 💬 23:34, 5 November 2024 (UTC)
- Support : Per above. PC is always at a low or very low backlog, and is therefore completely able to absorb this change. ~/Bunnypranav:<ping> 11:26, 6 November 2024 (UTC)
- Support: I would be happy to see it implemented. GrabUp - Talk 15:14, 6 November 2024 (UTC)
- Support Agree with JPxG's principle that it is better to "have drama on a living project than peace on a dead one," but this is far less restrictive than preemptively setting EC protection for all WP:ARBECR pages. From a new editor's perspective, they experience a delay in the positive experience of seeing their edit implemented, but as long as pending changes reviewers are equipped to minimize this delay, then this oversight seems like a net benefit. New users will get feedback from experienced editors on how to operate in Wikipedia's toughest content areas, rather than stumbling through. ViridianPenguin 🐧 ( 💬 ) 08:57, 8 November 2024 (UTC)
- Support * Pppery * it has begun... 05:17, 11 November 2024 (UTC)
- Support Idk what it's like in other areas, but of the edit requests I see in mine, a lot, maybe even most, are POV/not actionable/nonsense/insults, so if it is already ECR only, then yea, more filtering is a good thing. Selfstudier (talk) 18:17, 11 November 2024 (UTC)
- Support assuming this is technically possible (which I'm not entirely sure it is), it seems like a good idea, and would definitely make pending changes more useful from my eyes. Zippybonzo | talk | contribs (they/them) 20:00, 12 November 2024 (UTC)
- Strong support per @JPxG:'s reasoning—I think it's wild that we're willing to close off so many articles to so many potential editors, and even incremental liberalization of editing restrictions on these articles should be welcomed. This change would substantially expand the number of potential editors by letting non-EC contributors easily suggest edits to controversial topic areas. It would be a huge win for contributions if we managed to replace most ECP locks with this new PCECP.– Closed Limelike Curves (talk) 02:07, 14 November 2024 (UTC)
- Yes, in fact, somebody read my mind here (I was thinking about this last night, though I didn't see this VP thread...) Myrealnamm (💬Let's talk · 📜My work) 21:38, 14 November 2024 (UTC)
- Support in principle. Edit requests are a really bad interface for new users; if discouraging people from editing is the goal, we've succeeded. Flagged revisions aren't the best, but they are better than edit request templates. Toadspike's reasoning hasn't been refuted. Right now, it seems like opposers aren't aware that the status quo for many Palestine-Israel related articles is ECP. Both Israeli cuisine and Palestinian cuisine are indefinitely under WP:ECP due to gastronationalist arguments about the politics of food in the Arab–Israeli conflict (a page not protected), so editors without 500/30 status cannot add information about falafels to Wikipedia.
That being said, this proposal would benefit from more detail. For example, the current edit request policy requires the proposed change to be uncontroversial and puts the burden on the proposer to show that it is uncontroversial. On the other hand, the current review policy assumes a change is correct unless it's obvious vandalism or the like, which would be a big change to the edit request workflow. Likewise, what counts as WP:INVOLVED for reviewers? Right now, there's a big firewall between editors involved in content in an area like Israel-Palestine and admins using their powers in that area. Can reviewers edit in the area and use their tools? This needs to be clarified, as it seems like editing in PIA doesn't disqualify one from answering edit requests. Chess (talk) (please mention me on reply) 21:06, 18 November 2024 (UTC)
@Chess That's true, but reviewers are also currently expected to accept and then revert if a change passes PC review but still merits a normal revert. Below, Aasim clarified that reviewers should only reject edits that fail the existing PC review guidelines, plus "edits made in violation of an already well-established consensus".
As for INVOLVED, since there's no guidance about edit request reviewers yet either, I think that should be asked in a separate RfC. Aaron Liu (talk) 21:35, 18 November 2024 (UTC)
- Support. The number of sysops is ever decreasing and so we will need to take drastic action to ensure maintenance and vandalism prevention can keep up. Stifle (talk) 17:29, 19 November 2024 (UTC)
- Support in principle. While I understand objections from others based on the technical downsides and design of the current Flagged Revisions extension, I support making it easier for users to suggest changes with a GUI rather than a difficult-to-understand edit request template, which creates a barrier to entry. Frostly (talk) 05:24, 26 November 2024 (UTC)
- Support - It seems to be entirely preferable to ECR. It would be interesting if any current or former Arbcom members were to see it as more problematic. — Charles Stewart (talk) 04:12, 28 November 2024 (UTC)
Oppose (PCECP)
- Oppose There's a lot of history here, and I've opposed WP:FPPR/FlaggedRevs consistently since ~2011. Without reopening the old wounds over how the initial trial was implemented/ended, nothing that's happened since has changed my position. I believe that proceeding with an expansion of FlaggedRevs would be a further step away from our commitment to being the free encyclopedia that anyone can edit, without actually solving any critical problems that our existing tools aren't already handling. While the proposal cites "some administrators refuse to protect pages unless there is recent disruption" as a problem, I see that as a positive. In fact that's the entire point; protection should be preventative and there should be evidence of recent disruption. If a page is experiencing disruption, protection can handle it. If not, there's no need to limit anyone's ability to edit. The WordsmithTalk to me 03:45, 6 November 2024 (UTC)
- The Wordsmith, regarding "'some administrators refuse to protect pages unless there is recent disruption' as a problem, I see that as a positive", for interest, I see it as a negative for a number of reasons, at least in the WP:PIA topic area, mostly because it is subjective/non-deterministic.
- The WP:ARBECR rules have no dependency on subjective assessments of the quality of edits. Non-EC editors are only allowed to make edit requests. That is what we tell them.
- If it is the case that non-EC editors are only allowed to make edit requests, there is no reason to leave pages unprotected.
- If it is not the case that non-EC editors are only allowed to make edit requests, then we should not be telling them that via talk page headers and standard notification messages.
- There appears to be a culture based on an optimistic, faith-based belief that the community can see ARBECR violations, make reliable subjective judgements based on some value system, and deal with them appropriately through action or inaction. This is inconsistent with my observations.
- Many disruptive violations are missed when there are hundreds of thousands of revisions by tens of thousands of actors.
- The population size of editors/admins who try to address ARBECR violations is very small, and their sampling of the space is inevitably an example of the streetlight effect.
- The PIA topic area is largely unprotected and there are thousands of articles, templates, categories, talk pages etc. Randomness plays a large part in ARBECR enforcement for all sorts of reasons (and maybe that is good to some extent, hard to tell).
- Wikipedia's lack of tools to effectively address ban evasion in contentious topic areas means that it is not currently possible to tell whether a revision by a non-EC registered account or IP violating WP:ARBECR that resembles an okay edit (to me personally with all of my biases and unreliable subjectivity) is the product of a helpful person or a ban evading recidivist/member of an off-site activist group exploiting a backdoor.
- Sean.hoyland (talk) 08:00, 6 November 2024 (UTC)
- Oppose I am strongly opposed to the idea of getting yet another level of protection for the sole purpose of using it preemptively, which has never been OK and should not be OK. Just Step Sideways from this world ..... today 21:25, 6 November 2024 (UTC)
- Oppose, I hate pending changes. Using them widely will break the wiki. We need to be as welcoming as possible to new editors, and the instant gratification of wiki editing should be there on as many pages as possible. —Kusma (talk) 21:47, 6 November 2024 (UTC)
- @Kusma Could you elaborate on "using them widely will break the wiki", especially as we currently have the stricter and less-friendly EC protection? Aaron Liu (talk) 22:28, 6 November 2024 (UTC)
- Exhibit A is dewiki's 53-day Pending Changes backlog. —Kusma (talk) 23:03, 6 November 2024 (UTC)
- We already have a similar and larger backlog at CAT:EEP. All this does is move the backlog into an interface handled by server software that allows newcomers to use VE for their "edit requests", where currently they must use the source editor due to being confined to talk pages. Aaron Liu (talk) 23:06, 6 November 2024 (UTC)
- The dewiki backlog is over 18,000 pages. CAT:EEP has 54. The brokenness of optional systems like VE should not be a factor in how we make policy. —Kusma (talk) 09:40, 7 November 2024 (UTC)
- The backlog will not be longer than the EEP backlog. (Also, I meant that EEP's top request was over 3 months ago, sorry.) Aaron Liu (talk) 12:23, 7 November 2024 (UTC)
- ... if the number of protected pages does not increase. I expect an increase in protected pages from the proposal, even if the terrifying proposal to protect large classes of articles preemptively does not pass. —Kusma (talk) 13:08, 7 November 2024 (UTC)
- Why so? Aaron Liu (talk) 13:33, 7 November 2024 (UTC)
- Most PCECP pages would be current ECP pages (downgraded?), as they have lesser traffic/disruption. So the increase in the number of protected pages should not be that large. ~/Bunnypranav:<ping> 13:35, 7 November 2024 (UTC)
- @Kusma Isn't the loss of instant gratification of editing better than creating a request on the talk page of an ECP page and having no idea when it will be reviewed and implemented? ~/Bunnypranav:<ping> 11:25, 7 November 2024 (UTC)
- With PC you also do not know when or whether your edit will be implemented. —Kusma (talk) 13:03, 7 November 2024 (UTC)
- Oppose — Feels unnecessary and will only prevent other good faith editors from editing, not to mention the community effort required to monitor and review pending changes requests given that some areas like ARBIPA apply to hundreds of thousands of pages. Ratnahastin (talk) 01:42, 7 November 2024 (UTC)
- @Ratnahastin Similar to my above question, won't this encourage more good faith editors compared to a literal block from editing of an ECP page? ~/Bunnypranav:<ping> 11:32, 7 November 2024 (UTC)
- There is a very good reason I reference Community Resources Against Street Hoodlums in my preferred name for the protection scheme, and the answer is generally no, since the topic area we are primarily talking about is an ethno-political contentious topic, which tends to draw partisans interested only in "winning the war" on Wikipedia. This is not limited to just new users coming in, but also established editors who have strong opinions on the topic and who may be put into the position of reviewing these edits, as a quick skim of any random Eastern Europe- or Palestine-Israel-focused Arbitration case would make clear. —Jéské Couriano v^_^v threads critiques 18:21, 7 November 2024 (UTC)
- Aren't these problems that can also be seen to the same extent in edit requests if they exist? Aaron Liu (talk) 19:10, 7 November 2024 (UTC)
- A disruptive/frivolous edit request can be summarily removed, with no damage, as patently disruptive/frivolous without implicating the 1RR in the area. As long as it's not vandalism or doesn't introduce BLP violations, an edit committed to an article that isn't exactly helpful is constrained by the 1RR, with or without any sort of protection scheme. —Jéské Couriano v^_^v threads critiques 16:21, 8 November 2024 (UTC)
- Patently disruptive and frivolous edits are vandalism, emphasis on "patently". Aaron Liu (talk) 16:28, 8 November 2024 (UTC)
- POV-pushing is not prima facie vandalism. —Jéské Couriano v^_^v threads critiques 16:32, 8 November 2024 (UTC)
- POV-pushing isn't patently disruptive/frivolous, and isn't any more removable in edit requests. Aaron Liu (talk) 16:45, 10 November 2024 (UTC)
- But edit requests make it harder to actually push that POV to a live article. —Jéské Couriano v^_^v threads critiques 17:22, 11 November 2024 (UTC)
- Same with pending changes. Aaron Liu (talk) 17:36, 11 November 2024 (UTC)
- Maybe in some fantasy land where the edit didn't need to be committed to the article's history. —Jéské Couriano v^_^v threads critiques 18:08, 11 November 2024 (UTC)
- Except that is how pull requests work on GitHub. You make the edit, and someone with reviewer permissions approves it to complete the merge. Here, the "commit" happens, but the revision is not visible until reviewed and approved. Edit requests are not pull requests, they are the equivalent of "issues" on GitHub. Awesome Aasim 19:03, 11 November 2024 (UTC)
- It may come as a surprise, but Wikipedia is not GitHub. While they are both collaborative projects, they are very different in most other respects. Thryduulf (talk) 19:20, 11 November 2024 (UTC)
- With Git, submitters make a change in their own branch (which can even be in their own repository), and then request that an integrator pull that change into the main branch. So the main branch history remains clean: it only has changes that were merged in. (It's one of the guiding principles of Git: allow the history tree of any branch to be simplified to improve clarity and performance.) isaacl (talk) 22:18, 11 November 2024 (UTC)
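The history difference being debated here can be made concrete with a toy comparison; both functions are invented for illustration, not real wiki or Git tooling:

```python
# Pending changes commit every submission to the article's single linear
# history, so rejecting an edit means adding a revert on top; a Git-style
# pull request that is never merged leaves the main history untouched.

def pending_changes(history, edit, accepted):
    history = history + [edit]                  # committed immediately
    if not accepted:
        history = history + ["revert: " + edit]
    return history

def pull_request_model(history, edit, accepted):
    return history + [edit] if accepted else history

base = ["r1", "r2"]
assert pending_changes(base, "bad edit", accepted=False) == \
    ["r1", "r2", "bad edit", "revert: bad edit"]
assert pull_request_model(base, "bad edit", accepted=False) == ["r1", "r2"]
```

This is the crux of isaacl's point: in the pull-request model, only merged changes ever appear in the main history.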
- Edit requests are supposed to be pull requests. "Clearly indicate which sections or phrases should be replaced or added to, and what they should be replaced with or have added." — WP:ChangeXY Aaron Liu (talk) 22:51, 11 November 2024 (UTC)
- Yeah, that is what they are supposed to be, but in practice they are not. As anyone who has answered edit requests knows, there are often messages that look like this:
- "The reference is wrong. Please fix it." 192.0.0.1 (talk) 23:19, 11 November 2024 (UTC)
- Which is not in practice WP:CHANGEXY. Awesome Aasim 23:19, 11 November 2024 (UTC)
- I don't see how that's much of a problem, especially as edits are also committed to the talk page's history. Aaron Liu (talk) 22:50, 11 November 2024 (UTC)
- Do the words "Provoke edit wars" mean anything? Talk page posts are far less likely to be the locus of an edit war than article edits. —Jéské Couriano v^_^v threads critiques 18:05, 14 November 2024 (UTC)
- As an editor who started out processing edit requests, including ECP edit requests, I disagree. Aaron Liu (talk) 18:08, 14 November 2024 (UTC)
- Oppose, per what JSS has said. I am a little uncomfortable at the extent to which we've seemingly accepted preemptive protection of articles in contentious areas. It may be a convenient way of reducing the drama us admins and power users have to deal with... but only at the cost of giving up on the core principle that anybody can edit. I would rather have drama on a living project than peace on a dead one. jp×g🗯️ 18:16, 7 November 2024 (UTC)
- Oppose I am one of those admins who likes to see disruption before protecting. Lectonar (talk) 08:48, 8 November 2024 (UTC)
- Oppose as unnecessary, seems like a solution in search of a problem. Furthermore, this *is* Wikipedia, the encyclopedia anyone can edit; preemptively protecting pages discourages contributions from new editors. -Fastily 22:36, 8 November 2024 (UTC)
- Weak Oppose I do understand where this protection would be helpful. But I just think something is EC-protectable or not. Don't necessarily think adding another level of bureaucracy is particularly helpful. --Takipoint123 (talk) 05:14, 11 November 2024 (UTC)
- Oppose. I'm inclined to agree that the scenarios where this tool would work a benefit as a technical solution would be exceedingly niche, and that such slim benefit would probably be outweighed by the impact of having yet one more tool to further nibble away at the edges of the open spaces of the project which are available to new editors. Frankly, in the last few years we have already had an absurdly aggressive trend towards community (and ArbCom fiat) decisions which have increasingly insulated anything remotely in the vein of controversy from new editors--with predictable consequences for editor recruitment and retention past the period of early involvement, further exacerbating our workloads and other systemic issues. We honestly need to be rolling back some of these changes, not adding yet one more layer (however thin and contextual) to the bureaucratic fabric/new user obstacle course. SnowRise let's rap 11:23, 12 November 2024 (UTC)
- Oppose. The more I read this discussion, the more it seems like this wouldn't solve the majority of what it sets out to solve but would create more problems while doing so, making it on balance a net negative to the project. Thryduulf (talk) 21:43, 12 November 2024 (UTC)
- Oppose and Point of Order Oppose because pending changes is already too complicated and not very useful. I'm a pending changes reviewer and I've never rejected one on PC grounds (basically vandalism). But I often revert on normal editor grounds after accepting on PC grounds. (I suspect that many PC rejections are done for non-PC reasons instead of doing this) "Point of Order" is because the RFC is unclear on what exactly is being opposed. Sincerely, North8000 (talk) 22:15, 12 November 2024 (UTC)
- Pretty sure that what happens is that when vandals realize they will have to submit their edit for review before it goes live, that takes all the fun out of it for them because it will obviously be rejected, and they don't bother. That's pretty much how it was supposed to work. Just Step Sideways from this world ..... today 22:22, 12 November 2024 (UTC)
- This is a very good point, and I ask for @Awesome Aasim's clarification on whether reviewers will be able to reject edits on grounds for normal reverts combined with the EC restriction. I think there's enough rationale to apply this here beyond the initial rationale for PC as explained by JSS above. Aaron Liu (talk) 23:24, 12 November 2024 (UTC)
- Reviewers are given specific reasons for accepting edits (see Wikipedia:Pending changes § Reviewing pending edits) to avoid overloading them with work while processing pending changes expeditiously. If the reasons are opened up to greater evaluation of the quality of edits, then expectations may shift towards this being a norm. Thus some users are concerned this will create a hierarchy of editors, where edits by non-reviewers are gated by reviewers. isaacl (talk) 23:44, 12 November 2024 (UTC)
- I understand that and wonder how the reviewer proposes to address this. I would still support this proposal if having reviewers reject according to whether they'd revert and "ostensibly" to enforce EC is to be the norm, albeit to a lesser extent for the reasons you mentioned (though I'd replace "non-reviewers" with "all non–auto-accepted"). Aaron Liu (talk) 00:13, 13 November 2024 (UTC)
- I'm not sure to whom you are referring when you say "the reviewer" – you're the one suggesting there's a rationale to support more reasons for rejecting a pending change beyond the current set. Since any pending change in the queue will prevent subsequent changes by non-reviewers from being visible to most readers, their edits too will get evaluated by a single reviewer before being generally visible. isaacl (talk) 00:59, 13 November 2024 (UTC)
- Sorry, I meant Aasim, the nominator. I made a thinko.
Currently, reviewers can undo just the edits that aren't good and then approve the revision of their own revert. I thought that was what we were supposed to do. Aaron Liu (talk) 02:13, 13 November 2024 (UTC)
- Yes. Anything that is obvious vandalism or a violation of existing Wikipedia's policies can still be rejected. However, for edits where there is no other problem, the edit can still be accepted. In other words, a user not being extended confirmed shall not be sufficient grounds for rejecting an edit under PCECP, since the extended confirmed user takes responsibility for the edit. If the extended confirmed user accepts a bad edit, it is on them, not whoever made it. That is the whole idea.
- Of course obviously helpful changes such as fixing typos and adding up-to-date information should be accepted sooner, while more controversial changes should be discussed first. Awesome Aasim 17:37, 13 November 2024 (UTC)
- By "or a violation of existing Wikipedia's policies", do you only mean violations of BLP, copyvio, and "other obviously inappropriate content" that may be very-quickly checked, which is the current scope of what to reject? Aaron Liu (talk) 17:41, 13 November 2024 (UTC)
- Yeah, but also edits made in violation of an already well-established consensus. Edits that enforce a clearly-established consensus (proven by previous talk page discussion) are, from my understanding, exempt from all WP:EW restrictions. Awesome Aasim 18:38, 13 November 2024 (UTC)
- Oppose per Thryduulf and SnowRise. Also regardless of whether this is a good idea as a policy, FlaggedRevs has a large amount of technical debt, to the extent that deployment to any additional WMF wikis is prohibited, so it seems unwise to expand its usage. novov talk edits 19:05, 13 November 2024 (UTC)
- Oppose I have never found the current pending changes system easy to navigate as a reviewer. ~~ AirshipJungleman29 (talk) 20:50, 14 November 2024 (UTC)
- Oppose the more productive approach would be to reduce the overuse of extended-confirmed protection. We have come to rely on it too much. This would be technically difficult and complex for little real gain. —Ganesha811 (talk) 18:30, 16 November 2024 (UTC)
- That's the goal of this proposal (reducing the overuse of ECP), and it provides a plausible mechanism for that (replacing it with the much-less stringent PCECP). How would you go about reducing overuse of ECP instead? – Closed Limelike Curves (talk) 23:29, 29 November 2024 (UTC)
- Would you support a version in which the reviewers remain PC patrollers? Aaron Liu (talk) 00:58, 30 November 2024 (UTC)
- Oppose there might be a need for this but not preemptive. Andre🚐 01:31, 17 November 2024 (UTC)
- Wouldn't that be a support here for question #1, and an oppose in question #2? – Closed Limelike Curves (talk) 23:34, 29 November 2024 (UTC)
- Indeed, but as I've said below, it appears the rationale in the background section has confused many. Aaron Liu (talk) 00:58, 30 November 2024 (UTC)
- Oppose. The pending changes system is awful and this would make it awfuler (that wasn't a word but it is now). Zerotalk 05:58, 17 November 2024 (UTC)
- Oppose. How can we know that the 73,070 extended-confirmed users are capable of reviewing pending changes? I assume this is a step above normal PCP (e.g. PCP is preferred over PCECP). How can reviewing semi-protected pending changes have a higher bar (requiring a request at WP:PERM) than reviewing extended-protected pending changes? Doesn't make much sense to me. — BerryForPerpetuity (talk) 14:15, 20 November 2024 (UTC)
- I do not think that XCON users being the reviewers is fixed. This RfC is primarily about the creation of PCECP. ~/Bunnypranav:<ping> 14:21, 20 November 2024 (UTC)
- Well, they're capable of reviewing edit requests. Aaron Liu (talk) 14:39, 20 November 2024 (UTC)
- Sure, but assuming this will work the same as PCR, isn't it possible that an extended-confirmed user who doesn't want to review edits, will try to edit a PCECP page, and be required to review edits beforehand? They're not actively seeking out to review edits in the same way that a PCR or someone who handles edit requests does. Will their review be on par with the scrutiny required for this level of protection? — BerryForPerpetuity (talk) 14:55, 20 November 2024 (UTC)
- You do not need to review edits to edit the pending version of the page, which is what happens when you press save on a page with pending edits. Aaron Liu (talk) 15:02, 20 November 2024 (UTC)
- Is it not the case that reviewers need to check a page's pending changes to edit a page? Either way, the point of "what would constitute a revert" needs to be discussed and decided on before we start to implement this, which I appreciate you discussing above. — BerryForPerpetuity (talk) 15:38, 20 November 2024 (UTC)
- No. It's just that if the newest change is not reviewed, the last reviewed change is shown to readers instead of the latest change. Aaron Liu (talk) 16:00, 20 November 2024 (UTC)
How can we know that the 72,734 extended-confirmed users are capable of reviewing pending changes?
This isn't about pending changes level 1. This is about pending changes as applied to enforce ECP, with the level [auto-accept=extendedconfirmed] [review=extendedconfirmed]. As this is only intended to be used for WP:ARBECR restricted pages, it shouldn't be used for anything else.
- What might need to happen for this to work is for there to be a way to configure who can auto-accept and review changes individually (rather than bundled as is right now) with the FlaggedRevs extension. Something like this for these drop-downs:
- Auto-accept:
- All users
- Autoconfirmed
- Extended confirmed
- Template editor
- Administrators
- Review:
- Autoconfirmed
- Extended confirmed and reviewers
- Template editors and reviewers
- Administrators
- Of course, autoreview will have auto-accept perms regardless of these settings, and review will have review perms regardless of these settings. Awesome Aasim 16:36, 20 November 2024 (UTC)
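For illustration, here is a rough sketch of what such a split configuration might look like, assuming FlaggedRevs were extended to support it. The variable names mirror the existing enwiki configuration quoted in the discussion section; the 'extendedconfirmed' restriction level and the extended-confirmed review grant are hypothetical and not supported by the current extension:

```php
<?php
// Hypothetical sketch only -- FlaggedRevs does not currently support
// configuring auto-accept and review levels independently per page.
// Variable names mirror the existing enwiki config; the
// 'extendedconfirmed' entries below are assumptions.
$wgFlaggedRevsProtection = true;

// Assumed: offer both the current PC1 level and a new PCECP level
// in the protection drop-down.
$wgFlaggedRevsRestrictionLevels = [ 'autoconfirmed', 'extendedconfirmed' ];

// Assumed new behaviour: on pages protected at the 'extendedconfirmed'
// level, any extended-confirmed user's edits are auto-accepted and any
// extended-confirmed user may review pending edits, while
// 'autoconfirmed'-level pages keep the current rule that only the
// 'reviewer' group may review.
$wgGroupPermissions['extendedconfirmed']['autoreview'] = true;
$wgGroupPermissions['extendedconfirmed']['review'] = true; // hypothetical
```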
- I understand what you're saying, and I'm aware this isn't about level 1. I'm not strongly opposed to PCECP, but my original point was talking about the difference in reviewer requirements for semi-protected PC and XCON PC. If this passes, it would make reviewing semi-protected pending changes require a permission request, but reviewing extended-protected pending changes would only require being extended-confirmed. If that could be explained so I could understand it better, I'd appreciate it.
- This also relates to edit requests. XCON users are capable of reviewing edit requests, because they don't have to implement what the request was verbatim. If a user makes a request that has good substance, but has a part that doesn't adhere to some policy (MOS, NPOV, etc.), the reviewer can change it to fit policy. With pending changes, there's really no way to do that besides editing the accepted text after accepting it. The edit request reviewer can ask for clarification on something, add notes, give a reason for declining, etc.
- Especially on pages that have ARBCOM enforcement on them, the edit request system is far better than the pending changes system. This approach seems to be a solution for the problem of over-protection, which is what should actually be addressed. — BerryForPerpetuity (talk) 17:22, 22 November 2024 (UTC)
- Personally, I would also support this change if only reviewers may accept.
I think editing a change after acceptance is superior. It makes clear which parts were written by whom (and thus much easier to satisfy our CC license). Aaron Liu (talk) 17:43, 22 November 2024 (UTC)
- Identifying which specific parts were written by whom isn't necessary for the CC BY-SA license. (And since each new revision is a new derivative work, it's not that easy to isolate.) isaacl (talk) 18:50, 22 November 2024 (UTC)
- Right, but there's no need to forget the attributive edit summary, which is needed when accepting edit requests. Identifying specific parts is just cleaner this way. Aaron Liu (talk) 18:57, 22 November 2024 (UTC)
- If the change is rejected, then a user who isn't an author of the content appears in the article history. In theory that would unnecessarily entangle the user in any copyright issues that arose, or possibly defamation cases. isaacl (talk) 22:55, 22 November 2024 (UTC)
- I personally see that as a much lesser problem than the EditRequests issue. Aaron Liu (talk) 19:15, 23 November 2024 (UTC)
- We should be maximizing the number of pages that are editable by all. Protection fails massively at this task. All this does is tell editors "hey don't edit this page", which is fine for certain legal pages and the main page that no one should really be editing, but for articles? There is a reason we have this thing called "code review" on Git and "peer review" everywhere else; we should be encouraging changes but if there is disruption we should be able to hold them for review so we can remove the problematic ones.
- Since Wikipedia is not configured to have software-based RC patrol outside of new pages patrol (and RC patrol would be a problem anyway not only because of the sheer volume of edits but also because edits older than a certain timeframe are removed from the patrol queue), we have to rely on other software measures to hide revisions until they are approved. Specifically, RC patrol hiding all edits until approved (wikiHow does this) would be a problem on Wikipedia. But that is a tangent. Awesome Aasim 19:43, 22 November 2024 (UTC)
- There's also a reason why Git changes aren't pushed directly to the main code branch for review, and instead a pull request is sent to an integrator in order to integrate the changes. There's a bottleneck in processing the request (including integration testing). Also note with software development, rebasing your changes onto the latest integrated stream is your responsibility. The equivalent with pending changes would be for each person to revalidate their proposed change after a preceding change had been approved or rejected. Instead, the workload falls upon the reviewer. Side note: the term "code review" far predates git, and is widely used by many software development teams. isaacl (talk) 22:45, 22 November 2024 (UTC)
- I see I see. I do think we need better pending changes as the current flagged revs system sucks. Also just because a feature is turned on doesn't mean there is consensus to use it, as seen by WP:SUPERPROTECT and WP:PC2. Awesome Aasim 18:11, 23 November 2024 (UTC)
- Your second sentence would render everything about this to be meaningless. Plus, the community does not like unnecessarily turning features on; both of your examples have been removed. Aaron Liu (talk) 19:18, 23 November 2024 (UTC)
- I know, that is my point. We also have consensus in Vector 2022 to make unlimited width the default, which was never turned on. Awesome Aasim 19:20, 23 November 2024 (UTC)
- I don't understand your point. You're making a proposal for a new feature that has to be developed in a MediaWiki extension. If it does get developed, it won't get deployed on English Wikipedia unless there's consensus to use it. And given that the extension is not supported by the WMF right now, to the extent that it won't deploy it on new wikis, I'm not sure it has the ability to support any new version. isaacl (talk) 22:53, 23 November 2024 (UTC)
- Oppose, per JSS and others. We don't need another system just to allow the preemptive protection of pages, and allowing non-EC editors to clutter up this history in ARBECR topic areas would just create a lot of extra work with little or no real benefit. – bradv 23:10, 23 November 2024 (UTC)
- Oppose - edit requests only for non-EC users is against spirit of open wiki, but is necessary to prevent the absolute flame-wars/edit-wars on contentious topic pages. having a pending changes version of an article only moves flamewars by non-ECR users to pending changes version. Better to allow edit requests and use ARBECR to close non-productive discussions on talk page than having another venue for CTOP flamewars to occur. Bluethricecreamman (talk) 02:28, 2 December 2024 (UTC)
- In your argument, aren't flamewars still moved to the edit request's discussions? Can't editors also just reject non-productive pending changes? Aaron Liu (talk) 03:48, 2 December 2024 (UTC)
Neutral (PCECP)
- I have made my opposition to all forms of FlaggedRevisions painfully clear since 2011. I will not formally oppose this, however, so as to avoid the process being derailed by people rebutting my opposition. —Jéské Couriano v^_^v threads critiques 02:36, 6 November 2024 (UTC)
- I'm not a fan of the current pending changes, so I couldn't support this. But it also wouldn't affect my editing, so I won't oppose it if it helps others.-- LCU ActivelyDisinterested «@» °∆t° 14:32, 6 November 2024 (UTC)
Discussion (PCECP)
Someone who is an expert at configuring mw:Extension:FlaggedRevs will need to confirm that it is possible to simultaneously have our current type of pending changes protection plus this new type of pending changes protection. The current enwiki FlaggedRevs config looks something like the below and may not be easy to configure. You may want to ping Ladsgroup or post at WP:VPT for assistance.
Extended content:
// enwiki
// InitializeSettings.php
$wgFlaggedRevsOverride = false;
$wgFlaggedRevsProtection = true;
$wgSimpleFlaggedRevsUI = true;
$wgFlaggedRevsHandleIncludes = 0;
$wgFlaggedRevsAutoReview = 3;
$wgFlaggedRevsLowProfile = true;
// CommonSettings.php
$wgAvailableRights[] = 'autoreview';
$wgAvailableRights[] = 'autoreviewrestore';
$wgAvailableRights[] = 'movestable';
$wgAvailableRights[] = 'review';
$wgAvailableRights[] = 'stablesettings';
$wgAvailableRights[] = 'unreviewedpages';
$wgAvailableRights[] = 'validate';
$wgGrantPermissions['editprotected']['movestable'] = true;
// flaggedrevs.php
wfLoadExtension( 'FlaggedRevs' );
$wgFlaggedRevsAutopromote = false;
$wgHooks['MediaWikiServices'][] = static function () {
global $wgAddGroups, $wgDBname, $wgDefaultUserOptions,
$wgFlaggedRevsNamespaces, $wgFlaggedRevsRestrictionLevels,
$wgFlaggedRevsTags, $wgFlaggedRevsTagsRestrictions,
$wgGroupPermissions, $wgRemoveGroups;
$wgFlaggedRevsNamespaces[] = 828; // NS_MODULE
$wgFlaggedRevsTags = [ 'accuracy' => [ 'levels' => 2 ] ];
$wgFlaggedRevsTagsRestrictions = [
'accuracy' => [ 'review' => 1, 'autoreview' => 1 ],
];
$wgGroupPermissions['autoconfirmed']['movestable'] = true; // T16166
$wgGroupPermissions['sysop']['stablesettings'] = false; // -aaron 3/20/10
$allowSysopsAssignEditor = true;
$wgFlaggedRevsNamespaces = [ NS_MAIN, NS_PROJECT ];
# We have only one tag with one level
$wgFlaggedRevsTags = [ 'status' => [ 'levels' => 1 ] ];
# Restrict autoconfirmed to flagging semi-protected
$wgFlaggedRevsTagsRestrictions = [
'status' => [ 'review' => 1, 'autoreview' => 1 ],
];
# Restriction levels for auto-review/review rights
$wgFlaggedRevsRestrictionLevels = [ 'autoconfirmed' ];
# Group permissions for autoconfirmed
$wgGroupPermissions['autoconfirmed']['autoreview'] = true;
# Group permissions for sysops
$wgGroupPermissions['sysop']['review'] = true;
$wgGroupPermissions['sysop']['stablesettings'] = true;
# Use 'reviewer' group
$wgAddGroups['sysop'][] = 'reviewer';
$wgRemoveGroups['sysop'][] = 'reviewer';
# Remove 'editor' and 'autoreview' (T91934) user groups
unset( $wgGroupPermissions['editor'], $wgGroupPermissions['autoreview'] );
# Rights for Bureaucrats (b/c)
if ( isset( $wgGroupPermissions['reviewer'] ) ) {
if ( !in_array( 'reviewer', $wgAddGroups['bureaucrat'] ?? [] ) ) {
// promote to full reviewers
$wgAddGroups['bureaucrat'][] = 'reviewer';
}
if ( !in_array( 'reviewer', $wgRemoveGroups['bureaucrat'] ?? [] ) ) {
// demote from full reviewers
$wgRemoveGroups['bureaucrat'][] = 'reviewer';
}
}
# Rights for Sysops
if ( isset( $wgGroupPermissions['editor'] ) && $allowSysopsAssignEditor ) {
if ( !in_array( 'editor', $wgAddGroups['sysop'] ) ) {
// promote to basic reviewer (established editors)
$wgAddGroups['sysop'][] = 'editor';
}
if ( !in_array( 'editor', $wgRemoveGroups['sysop'] ) ) {
// demote from basic reviewer (established editors)
$wgRemoveGroups['sysop'][] = 'editor';
}
}
if ( isset( $wgGroupPermissions['autoreview'] ) ) {
if ( !in_array( 'autoreview', $wgAddGroups['sysop'] ) ) {
// promote to basic auto-reviewer (semi-trusted users)
$wgAddGroups['sysop'][] = 'autoreview';
}
if ( !in_array( 'autoreview', $wgRemoveGroups['sysop'] ) ) {
// demote from basic auto-reviewer (semi-trusted users)
$wgRemoveGroups['sysop'][] = 'autoreview';
}
}
};
–Novem Linguae (talk) 09:41, 6 November 2024 (UTC)
- I basically came here to ask if this is even possible or if it would need WMF devs' involvement or whatever.
- For those unfamiliar, pending changes is not the same thing as the flagged revisions used on de.wp. PC was developed by the foundation specifically for this project after we asked for it. We also used to have WP:PC2 but nobody really knew what that was supposed to be and how to use it and it was discontinued. Just Step Sideways from this world ..... today 21:21, 6 November 2024 (UTC)
- Is PC2 an indication of implementation being possible? Aaron Liu (talk) 22:27, 6 November 2024 (UTC)
- Depends on what exactly is meant by "implementation". A configuration where edits by non-extendedconfirmed users need review by reviewers would probably be similar to what was removed in gerrit:/r/334511 to implement T156448 (removal of PC2). I don't know whether a configuration where edits by non-extendedconfirmed users can be reviewed by any extendedconfirmed user while normal PC still can only be reviewed by reviewers is possible or not. Anomie⚔ 13:32, 7 November 2024 (UTC)
- Looking at the MediaWiki documentation, it is not possible atm. That said, currently the proposal assumes that it is possible and we should work with that (though I would also support allowing all extended-confirmed to review all pending changes). Aaron Liu (talk) 13:56, 7 November 2024 (UTC)
I think the RfC summary statement is a bit incomplete. My understanding is that the pending changes feature introduces a set of rights which can be assigned to corresponding user groups. I believe all the logic is based on the user rights, so there's no way to designate that one article can be autoreviewed by one user group while another article can be autoreviewed by a different user group. Thus unless the proposal is to replace autoconfirmed pending changes with extended confirmed pending changes, I don't think saying "enabled" in the summary is an adequate description. And if the proposal is to replace autoconfirmed pending changes, I think that should be explicitly stated. isaacl (talk) 22:06, 6 November 2024 (UTC)
- The proposal assumes that coexistence is technically possible. Aaron Liu (talk) 22:28, 6 November 2024 (UTC)
- The proposal did not specify if it assumed co-existence is possible, or enabling it is possible, which could mean replacement. Thus I feel the summary statement (before the timestamp, which is what shows up in the central RfC list) is incomplete. isaacl (talk) 22:31, 6 November 2024 (UTC)
- While on a re-read, "It is assumed that it is technically possible to have PCECP" does not explicitly imply co-existence, that is how I interpreted it. Anyways, it would be wonderful to hear from @Awesome Aasim about this. Aaron Liu (talk) 22:42, 6 November 2024 (UTC)
- The key question that ought to be clarified is if the proposal is to have both, or to replace the current one with a new version. (That ties back to the question of whether or not the arbitration committee's involvement is required.) Additionally, it would be more accurate not to use a word in the summary that implies the only cost is turning on a switch. isaacl (talk) 22:49, 6 November 2024 (UTC)
- It is assuming that we can have PC1 where only reviewers can approve edits and PCECP where only extended confirmed users can approve edits AND make edits without requiring approval. With the current iteration I don't know if it is technically possible. If it requires an extension rewrite or replacement, that is fine. If something is still unclear, please let me know. Awesome Aasim 23:06, 6 November 2024 (UTC)
- I suggest changing the summary statement to something like, "Should a new pending changes protection level be added to Wikipedia – extended confirmed pending changes (hereby abbreviated as PCECP)?". The subsequent paragraph can provide the further explanation on who would be autoreviewed and who would serve as reviewers with the new proposed level. isaacl (talk) 23:19, 6 November 2024 (UTC)
- Okay, done. I tweaked the wording a little. Awesome Aasim 23:40, 6 November 2024 (UTC)
- I think inclusion of the preemptive-protection part in the background statement is causing confusion. AFAIK preemptive protection and whether we should use PCECP over ECP are separate questions. Aaron Liu (talk) 19:11, 7 November 2024 (UTC)
Q2: If this proposal passes, should PCECP be applied preemptively to WP:ARBECR topics?
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Particularly on low traffic articles as well as all talk pages. WP:ECP would still remain an option to apply on top of PCECP. Awesome Aasim 19:58, 5 November 2024 (UTC)
Support (Preemptive PCECP)
- Support for my reasons in Q1. Awesome Aasim 19:58, 5 November 2024 (UTC)
- Also to add on there needs to be some enforcement measure for WP:ARBECR. No technical enforcement measures on WP:ARBECR is akin to site-banning an editor and then refusing to block them because "blocks should be preventative". Awesome Aasim 19:42, 13 November 2024 (UTC)
- Blocking a site-banned user is preventative, because if we didn't need to prevent them from editing they wouldn't have been site banned. Thryduulf (talk) 21:16, 13 November 2024 (UTC)
- Slightly ambivalent on protecting talk pages, but I guess it would bring prominence to low-traffic pages. Aaron Liu (talk) 20:13, 5 November 2024 (UTC)
- Per isaacl, I only support preemptive protection on low-traffic pages. Aaron Liu (talk) 23:21, 12 November 2024 (UTC)
- Support, including on talk pages. With edit requests mostly dealt with through pending changes, protecting the talk pages too should limit the disruption and unconstructive comments that are often commonplace there. (Changing my mind, I don't think applying PCECP on all pages would be a constructive solution. The rules of ARBECR limit participation to extended-confirmed editors, but the spirit of the rules has been to only enforce that on pages with actual disruption, not preemptively. 20:49, 7 November 2024 (UTC)) Chaotic Enby (talk · contribs) 20:21, 5 November 2024 (UTC)
- Support I'm going to disagree with the "no" argument entirely - we should be preemptively ECPing (even without pending changes). It's a perversion of logic to say "you can't (per policy) push this button", and then refuse to actually technically stop you from pushing the button even though we know you could. * Pppery * it has begun... 20:52, 5 November 2024 (UTC)
- Support (Summoned by bot): While I disagree with ECR in general, this is a better way of enforcing it as long as it exists. Constructive "edit requests" can be accepted, and edits that people disagree with can be easily reverted. I'm slightly concerned with how this could affect the pending changes backlog (which has a fairly small number of active reviewers at the moment), but I'm sure that can be figured out. C F A 💬 23:41, 5 November 2024 (UTC)
Oppose (Preemptive PCECP)
- No, I don't think this is necessary at this time. I think it should be usable there, but I don't feel like this is a necessary step at this time. We could revisit it later. WhatamIdoing (talk) 20:37, 5 November 2024 (UTC)
- No, we still shouldn't be protecting preemptively. Wait until there's disruption, and then choose between PCXC or regular XC protection (I would strongly favour the former for the reasons I gave above). Cremastra (u — c) 20:43, 5 November 2024 (UTC)
- Mu - This is a question that should be asked afterwards, not at the same time, since ArbCom will want to look at any such proposal. —Jéské Couriano v^_^v threads critiques 02:38, 6 November 2024 (UTC)
- No, I feel this would be a bad idea. Critics of Wikipedia already use the idea that it's controlled by a select group, this would only make that misconception more common. -- LCU ActivelyDisinterested «@» °∆t° 14:36, 6 November 2024 (UTC)
- Preemptive protection has always been contrary to policy, with good reason. Just Step Sideways from this world ..... today 21:26, 6 November 2024 (UTC)
- Absolutely not. No need for protection if there is no disruption. The number of protected pages should be kept low, and the number of pages that cry out "look at me!" on your watchlist (anything under pending changes) should be as close to zero as possible. —Kusma (talk) 21:44, 6 November 2024 (UTC)
No need for protection if there is no disruption.
Trouble is, the ECR restriction is enacted in response to widespread disruption, this time to the entire topic area as a whole. Disregard for POV, blatant inclusion of unverifiable or false (unreliable) information, and more all pose serious threats of disruption to the project. If WP:ARBECR were applied broadly without any protection I would agree, but WP:ARBECR is applied in response to disruption (or a serious threat of it), not preemptively. Take this one for example, a long-winded ANI discussion that ended in the WP:GS for the Russo-Ukrainian War (and the ECR restrictions). And as for the Arbitration Committee, ArbCom is a last resort when all other attempts to resolve disruption fail. See WP:ARBPIA WP:ARBPIA2 WP:ARBPIA3 WP:ARBPIA4. The earliest reference to the precursor to ARBECR in this case is in the third ArbCom case. Not protecting within a topic area that has a high risk of disruption is akin to leaving a high-risk template unprotected. The only difference is that carelessly editing a high-risk template creates technical problems, while carelessly editing a high-risk topic area creates content problems.
- Either the page is protected technically (which enforces a community or ArbCom decision that only specific editors are allowed in topic areas) or the page is not protected technically but protected socially (which then gives a chance of evasion). I see this situation as no different from banning an editor sitewide and then refusing to block them on the grounds that "blocks should only be used to prevent disruption" while ignoring the circumstances leading up to the site ban.
- What PCECP would do is allow for better enforcement of the community aspect. New editors won't be bitten: if they find something that needs fixing, like a typo, they can make an edit and it can get approved. More controversial edits will get relegated to the talk page, where editors not banned from that topic area can discuss the topic. And blatant POV pushing and whatnot would get reverted and never even be seen by readers.
- The workflow would look like this: new/anon user makes an edit → edit gets held for review → extended confirmed user approves the edit. Rather than the current workflow (and the reason why preemptive ECP is unpopular): new/anon user makes an edit → user is greeted with a "this page is protected" message → user describes what they would like to be changed but in a badly formulated way → edit request gets closed as "unclear" or something similar. Awesome Aasim 14:21, 11 November 2024 (UTC)
- Consider this POV change made to a topic that I presume is covered under WP:ARBPIA and that is not protected. The whole reason that there is WP:ARBECR is to prevent stuff like this from happening. There already is consensus either among arbitrators or among the community to enact ECR within specific contentious topic areas, so I don't see how it is productive to refuse to protect pages because of "not enough disruption" when the entire topic area has faced widespread disruption in the past. Awesome Aasim 18:18, 23 November 2024 (UTC)
- Simple, everyday vandalism is far from the levels of disruption that caused the topic to be marked Contentious. Aaron Liu (talk) 19:20, 23 November 2024 (UTC)
- That example I provided isn't vandalism. Yes it is disruptive POV pushing but it is not vandalism. Wikipedia also exists in the real world, and Wikipedia does not have the technical tools to fight armies of POV pushers and more. One example is Special:PermaLink/1197462753#Arbitration_motion_regarding_PIA_Canvassing. When the stakes are this high people feel entitled to impose their view on the project, but Wikipedia isn't the place to right great wrongs. Awesome Aasim 19:32, 23 November 2024 (UTC)
- It is vandalism, the changing of content beyond recognition. Even if it were just POV-pushing, there was no army here. Aaron Liu (talk) 19:41, 23 November 2024 (UTC)
- Per my vote above. Ratnahastin (talk) 09:00, 7 November 2024 (UTC)
- Absolutely not. Protection should only ever be preventative. Kusma puts it better than I can. Thryduulf (talk) 13:49, 7 November 2024 (UTC)
- Per my comment above. jp×g🗯️ 18:17, 7 November 2024 (UTC)
- No; see my comment above. I prefer to see disruption before protecting. Lectonar (talk) 08:51, 8 November 2024 (UTC)
- No. We should be quicker to apply protection in these topics than we would elsewhere, but not preemptively except on highly visible pages (which, in these topics, are probably ECP-protected anyway). Animal lover |666| 17:18, 11 November 2024 (UTC)
- No, that would create a huge backlog. ~~ AirshipJungleman29 (talk) 20:50, 14 November 2024 (UTC)
- Oppose per Kusma Andre🚐 01:30, 17 November 2024 (UTC)
Neutral (preemptive PCECP)
Discussion (preemptive PCECP)
- @Jéské Couriano Could you link to said ArbCom discussion? Aaron Liu (talk) 03:51, 6 November 2024 (UTC)
- I'm not saying such a discussion exists, but changes to Arbitration remedies/discretionary sanctions are something they would want to weigh in on. Arbitration policy (which includes WP:Contentious topics) is in their wheelhouse and this would have serious implications for WP:CT/A-I and any further instances where ArbCom (rather than individual editors, as a discretionary sanction) would need to resort to a 500/30 rule as an explicit remedy. —Jéské Couriano v^_^v threads critiques 04:58, 6 November 2024 (UTC)
- That is not my reading of WP:ARBECR. Specifically,
On any page where the restriction is not enforced through extended confirmed protection, this restriction may be enforced by...the use of pending changes...
(bold added by me for emphasis). But if there is consensus not to use this preemptively so be it. Awesome Aasim 05:13, 6 November 2024 (UTC)
- While I appreciate the forward thinking that PCECP may want to be used in Arb areas, this feels like a considerable muddying of the delineation between the Committee's role and the community's role. Traditionally, Contentious Topics have been the realm of ArbCom, and General Sanctions have been the realm of the Community. Part of the logic comes down to who takes the blame when things go wrong. The Community shouldn't take the blame when ArbCom makes a decision, and vice versa. Part of the logic is separation of powers. If the community wants to say "ArbCom, you will enforce this so help you God," then that should be done by amending ArbPol. Part of the logic is practical. If the community creates a process that adds to an existing Arb process, what happens when the Arbs want to change that process? Or even end it altogether? Bottom line: Adopting PCECP for ARBECR is certainly something ArbCom could do. But I'd ask the community to consider the broader structural problems that would arise if the community adopted it on behalf of ArbCom. CaptainEek Edits Ho Cap'n!⚓ 05:18, 7 November 2024 (UTC)
- Interesting. I'd say ArbCom should be able to override the community if they truly see such action fit and worthy of potential backlash. Aaron Liu (talk) 12:30, 7 November 2024 (UTC)
- Just a terminology note, although I appreciate many think of general sanctions in that way, it's defined on the Wikipedia:General sanctions page as
... a type of Wikipedia sanctions that apply to all editors working in a particular topic area. ... General sanctions are measures used by the community or the Arbitration Committee ("ArbCom") to improve the editing atmosphere of an article or topic area.
. Thus the contentious topics framework is a form of general sanctions. isaacl (talk) 15:22, 7 November 2024 (UTC)
- Regarding the general point: I agree that it is cumbersome for the community to impose a general sanction that is added on top of a specific arbitration remedy. I would prefer that the community work with the arbitration committee to amend its remedy, which would facilitate keeping the description of the sanction and logging of its enforcement together, instead of split. (I appreciate that for this specific proposal, logging of enforcement is not an issue.) isaacl (talk) 15:30, 7 November 2024 (UTC)
- Extended confirmed started off as an ArbCom concept - 500 edits/30 days - which the community then chose to adopt. ArbCom then decided to make its remedy match the community's version - such that if the community were to decide extended confirmed were 1000 edits/90 days, all ArbCom restrictions would update. I find this a healthy feedback loop between ArbCom and the community. The community could clearly choose (at least on a policy level, given some technical concerns) to enact PCECP. It could choose to apply this to some/all pages. If it is comfortable saying that it wants to delegate some of which pages this applies to to the Arbitration Committee, I think it can do so without amending ArbPol. However, I think ArbCom could decide that PCECP would not apply in some/all CTOP areas, given that the Committee is exempt from consensus for areas within its scope. And so it might ultimately make more sense to do what isaacl suggests. Best, Barkeep49 (talk) 16:02, 7 November 2024 (UTC)
- The "contentious topics" procedure does seem like something that the community should absolutely mirror and that ultimately both the community and ArbCom should work out of. If one diverges, there is probably a good reason why it diverged.
- As for the
broader structural problems that would arise if the community adopted it on behalf of ArbCom
, there are already structural problems with general sanctions because of the community's failure to adopt the new CTOP procedure for new contentious topics. Although the community has adopted the contents of WP:ARBECR for other topic areas like WP:RUSUKR, they don't adopt it by reference but by copying the whole text verbatim. Awesome Aasim 17:13, 7 November 2024 (UTC)
- That's not the same structural problem. The community hasn't had a lot of discussion about adopting the contentious topic framework for its own use (in my opinion, because it's a very process-wonky discussion that doesn't interest enough editors to generate a consensus), but that doesn't interfere with how the arbitration committee uses the contentious topic framework. This proposal is suggesting that the community automatically layer on its own general sanction on top of any time the arbitration committee decides to enact a specific sanction. Thus the committee would have to consider each time whether or not to override the community add-on, and amendment requests might have to be made both to the committee and the community. isaacl (talk) 17:33, 7 November 2024 (UTC)
- Prior to contentious topics there were discretionary sanctions. Those became very muddled, and so the committee created Contentious topics to help clarify the line between community and committee (disclosure: I helped draft much of that work). As part of that the committee also established ways for the community to tie in to contentious topics if it wanted. So far the community hasn't made that choice, which is fine. But I do think this is an area that, in general, ArbCom does better than the community, because there is more attention paid to having consistency across areas, and when a problem arises I have found (in basically this one area only) ArbCom to be more agile at addressing it. But the community is also more willing to pass a GS than ArbCom is to designate something a CT (which I think is a good thing all around), and so having the community come to consensus about how, if at all, it wants to tie in to CT (and its evolutions), or whether it would prefer to do its own thing (including just mirroring whatever happens to be in CT at the time but not subsequent changes), would probably be a good meta discussion to have. But it also doesn't seem necessary for this particular proposal. Best, Barkeep49 (talk) 17:41, 7 November 2024 (UTC)
Q3: If this proposal does not pass, should ECP be applied preemptively to articles under WP:ARBECR topics?
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Support (preemptive ECP)
- Support as a second option, but only to articles. Talk pages can be enforced solely through reverts and short protections so I see little reason why those should be protected. Awesome Aasim 19:58, 5 November 2024 (UTC) Moved to oppose. Awesome Aasim 19:10, 23 November 2024 (UTC)
- Support for articles per Aasim. Talk pages still need to be open for edit requests. (Also changing my mind, per above. If anything, we should clarify ARBECR so that the 500-30 limit is only applied in cases where it is needed, not automatically, to resolve the ambiguity. 20:52, 7 November 2024 (UTC)) Chaotic Enby (talk · contribs) 20:20, 5 November 2024 (UTC)
- Support per my comment in the previous section. * Pppery * it has begun... 20:52, 5 November 2024 (UTC)
- I agree with Chaotic Enby and Pppery above and think all CT articles should be protected. I am generally not a fan of protecting Talk pages, but it's true that many CT Talk pages are cesspools of hate, so I am not sure where I sit on protecting Talk pages. Toadspike [Talk] 20:57, 5 November 2024 (UTC)
- Under the current wording of ARBECR,
When such a restriction is in effect in a topic area, only extended-confirmed editors may make edits related to the topic area.
We should protect pages, rather than letting new editors edit and then reverting them for basically no reason. This is a waste of their time and very BITEy.
- I am not opposed to changing the wording of ARBECR to forbid reverting solely because an editor is not extended confirmed, which is a silly reason to revert otherwise good edits. However, until ArbCom changes ARBECR, we are stuck with the rules we have. We ought to make these rules clear to editors before they edit, by page protection, instead of after they edit, by reversion. Toadspike [Talk] 10:55, 16 November 2024 (UTC)
- Support preemptive ECP without PCECP (for article space only). If we have a strict policy (or ArbCom ruling) that a class of user is forbidden to edit a class of page, there is no downside whatever to implementing that policy by technical means. All it does is stop prohibited edits. The consequences would all be positive, such as removing the need for constant monitoring, reducing IP vandalism to zero, and reducing the need to template new editors who haven't learned the rules yet. What I'd like with regard to the last one, is that a non-EC editor sees an "edit" button on an ECP page but clicking it diverts them to a page that explains EC and how to get it. Zerotalk 05:53, 17 November 2024 (UTC)
Oppose (preemptive ECP)
- Oppose because I think this is a bad idea. For one thing, just making a list of all the covered articles could produce disputes that we don't need. (This article might be covered, but is it truly covered? Reasonable people could easily disagree about whether some articles are "mostly" about the restricted area vs "partly", and therefore about whether the rule applies.) Second, where a serious and obvious problem, such as blatant vandalism, is concerned, it would be better to have an IP revert it than to mindlessly follow the rules. It is important to remember that our rules exist as a means to an end. We follow them because, and to the extent that, they help overall. We expect admins and other editors to exercise discretion. It is our policy that Wikipedia:If a rule prevents you from improving or maintaining Wikipedia, ignore it. This is a proposal to declare that the IAR policy never applies to the rule about who should normally be editing these articles, and that exercising discretion is not allowed. WhatamIdoing (talk) 20:42, 5 November 2024 (UTC)
- I am neither Arb nor admin, but I think the words "broadly construed" are specifically chosen so that if a topic is "partly" about the restricted area, it is included in the CTOP. @WhatamIdoing, could you please show me an example of a case where CTOP designation or ECP was disputed? Toadspike [Talk] 10:59, 16 November 2024 (UTC)
- I avoid most of those articles, but consider "the entire set of Arab-Israeli conflict-related articles, broadly interpreted": Does that include BLPs who come from Israel/Palestine? What about BLPs who are in the news because of what they said about the Israel–Hamas war? IMO reasonable people could disagree about whether "every person living in the affected area" or "every person talking about the conflict" is part of "the entire set of Arab-Israeli conflict-related articles, broadly interpreted". WhatamIdoing (talk) 19:54, 16 November 2024 (UTC)
- David Miller is what we call a "partial" Arbpia. So while it's a BLP in general, parts of it are subject to Arbpia/CT, not a particularly unusual situation. The talkpage and edit notices should, but don't always, tell you whether it is or isn't, part of. Selfstudier (talk) 20:59, 16 November 2024 (UTC)
- WP:IAR applies to content, not to conduct. ArbCom is empowered to take action against poor conduct. You can't claim WP:IAR, for example, to reverse-engineer a script that requires specific permissions to use. Likewise a new editor cannot claim "IAR" to add unverifiable (albeit true) information to an ARBECR-protected article. Awesome Aasim 15:25, 16 November 2024 (UTC)
- IAR stands for IgnoreAllRules. The latter two cannot be claimed valid based on IgnoreAllRules because they don't have strong IgnoreAllRules arguments for what they did, not because IgnoreAllRules somehow only applies to content. Aaron Liu (talk) 16:07, 16 November 2024 (UTC)
- I meant ignore all rules applies to rules not to behavior. Point still stands as ARBPIA addresses behavior not content. Awesome Aasim 21:04, 16 November 2024 (UTC)
- I agree that "ignore all rules" applies to rules – including rules about behavior. ARBPIA is a rule about behavior. IAR therefore applies to ARBPIA.
- Of course, if breaking the rule doesn't prove helpful to Wikipedia in some way, then no matter what type of rule it is, you shouldn't break the rule. We have a rule against bad grammar in articles, and you should not break that rule. But when two rules conflict – say, the style rule of "No bad grammar" and the behavioral rule of "No editing this ARBPIA article while logged out, even if it's because you're on a public computer and can't remember your password" – IAR says you can choose to ignore the rule that prevents you from improving Wikipedia. WhatamIdoing (talk) 21:34, 16 November 2024 (UTC)
- While there's already precedent for preemptive protection at e.g. RFPP, I do not like this. For one, as talk pages (and, by extension, edit requests) cannot use the visual editor, this makes it much harder for newcomers to contribute edits, often unnecessarily on articles where there is no disruption. Aaron Liu (talk) 23:47, 5 November 2024 (UTC)
- Oppose (Summoned by bot): Too strict. C F A 💬 00:03, 6 November 2024 (UTC)
- Mu - This is basically my reading of the 500/30 rule as writ. Anything that would fall into the 500/30'd topic should be XCP'd on discovery. It's worth noting I don't view this as anywhere close to ideal but then neither did ArbCom, and given the circumstances of the real-world ethnopolitical conflict only escalating as of late (which in turn feeds the disruption) the only other - even worse - option would be full-protection across the board everywhere in the area. So why am I not arguing Support? Because just like the question above, this is putting the cart before the horse and this is better off being discussed after this RfC ends, not same time as. —Jéské Couriano v^_^v threads critiques 02:47, 6 November 2024 (UTC)
- Oppose Preemptive protection of any page where there is not a problem that needs solving. Just Step Sideways from this world ..... today 21:28, 6 November 2024 (UTC)
- Absolutely not, pages that do not experience disruption should be open to edit. Pending changes should never become widely used to avoid situations like dewiki's utterly absurd 53-day backlog. —Kusma (talk) 21:53, 6 November 2024 (UTC)
- Very strong oppose, again Kusma puts it excellently. Protection should always be the exception, not the norm. Even in the Israel-Palestine topic area most articles do not experience disruption. Thryduulf (talk) 13:50, 7 November 2024 (UTC)
- WP:RUNAWAY sums up some of the tactics used by disruptive editors: namely
Their edits are limited to a small number of pages that very few people watch
and Conversely, their edits may be distributed over a wide range of articles to make it less likely that any given user watches a sufficient number of affected articles to notice the disruptions
. If a user is really insistent on pushing their agenda and cannot push it on the big pages, they may push it on some of the smaller pages, where their edits may go unwatched for months if not years. Then, researchers digging up information will come across the POV article and blindly cite it. Although Wikipedia should never be cited as a source, it still happens. Awesome Aasim 14:35, 11 November 2024 (UTC)
- Per my comment above. jp×g🗯️ 18:18, 7 November 2024 (UTC)
- No, see my comment to the other questions. Lectonar (talk) 08:52, 8 November 2024 (UTC)
- No, we should never be preemptively protecting pages. Cremastra (u — c) 16:35, 10 November 2024 (UTC)
- No, except on the most prominent articles on each CT topic (probably already done on current CTs, but relevant for new ones). Animal lover |666| 19:47, 11 November 2024 (UTC)
- Absolutely not. See above comments for details. ~~ AirshipJungleman29 (talk) 20:50, 14 November 2024 (UTC)
- Comment - The number of revisions within the PIA topic area that violate the ARBECR rule is not measured. It is not currently possible to say anything meaningful about the amount of 'disruption' in the topic area by non-EC IPs and accounts. And the way people estimate the amount of 'disruption' subjectively depends on the timescale they choose to measure it. Nobody can see all of the revisions and the number of people looking is small. Since the ARBECR rule was introduced around the start of 2020, there have been over 71,000 revisions by IPs to articles and talk pages within the subset of the PIA topic, about 11,000 pages, used to gather statistical data (ARBPIA templated articles and articles that are members of both wikiproject Israel and wikiproject Palestine). Nobody has any idea how many of those were constructive, how many were disruptive, how many involved ban-evading disposable accounts etc. And yet, this incomplete information situation apparently has little to no impact on the credence we all assign to our views about what would work best for the PIA topic area. I personally think it is better to dispense with non-evidence-based beliefs about the state of the topic area at any given time and simply let the servers enforce the rule as written in WP:ARBECR, "only extended-confirmed editors may make edits related to the topic area, subject to the following provisions...". Sean.hoyland (talk) 17:22, 16 November 2024 (UTC)
- Makes sense, but I am not sure if this is meant to be an oppose. Personally, since there hasn't been much big outrage not solved by a simple RfPP, anecdotally I see no problem with the status quo on this question. Aaron Liu (talk) 01:24, 17 November 2024 (UTC)
- Oppose per Thryduulf and others Andre🚐 01:29, 17 November 2024 (UTC)
- Oppose. Preemptive protection is just irresponsible.—Alalch E. 23:22, 22 November 2024 (UTC)
- As OP I am actually starting to lean weak oppose unless we have a robust and new-user-friendly edit request system (which currently we don't). We already preemptively protect templates used on a lot of pages for technical reasons, and I don't think new users are at all going to be interested in templates, so our current edit request system works decently for templates, modules, code pages, etc. Protection should work the same way as blocking: it should respond to the risk of disruption on specific pages or topic areas, using previous disruption to help predict the future. Users already have a hard time submitting edit requests for pages not within contentious topic areas, so as it stands right now preemptive protection will do more harm than good. Awesome Aasim 19:10, 23 November 2024 (UTC)
- Oppose - more harm than good, too strict. Bluethricecreamman (talk) 02:30, 2 December 2024 (UTC)
Neutral (preemptive ECP)
Discussion (preemptive ECP)
I think this question should be changed to "...articles under WP:ARBECR topics?". Aaron Liu (talk) 20:11, 5 November 2024 (UTC)
- Okay, updated. Look good? Awesome Aasim 20:13, 5 November 2024 (UTC)
As I discussed in another comment, should this concept gain approval, I feel it is best for the community to work with the arbitration committee to amend its remedy. isaacl (talk) 15:34, 7 November 2024 (UTC)
- And as I discussed in another comment while I think the community could do this, I agree with isaac that it would be best to do it in a way that works with the committee. Best, Barkeep49 (talk) 16:03, 7 November 2024 (UTC)
Q4: Should there be a Git-like system for submitting and reviewing edits to protected pages?
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
This behaves a little like pending changes, but with a few different things:
- There would be an additional option entitled "allow users to submit edits for review" in the protection window. There could also be a specific user group able to accept such edits.
- Instead of the standard "protected page text" informing the user that the page is protected, when this option is enabled, the user is given a message something like "This page is currently protected, so you are currently submitting an edit request. Only when your change is approved will your edit be visible." An edit summary as well as a more detailed explanation going into the review can be provided. The same applies for title-blacklisted pages. However, the "permission error" will still show for attempting to rename the page, as well as for cases where a user cannot edit a page for a reason other than protection (like being blocked from editing).
- All the changes submitted for review end up in some namespace (like Review:1234567) with the change id. Only users with the ability to edit the page or accept the revision would be able to see these changes. There would also be the ability to discuss each change on the talk page for that change or something similar. This namespace by design will be unprotectable.
- Users with the ability to edit the page (or when a higher accept level is selected, users with that accept level) are given the ability to merge these changes in. Administrators can delete changes just like they can delete individual revisions, and these changes can also be suppressed just like individual revisions.
- Changes are not directly committed to the edit history, unlike the current pending changes system; only to the page in the Review: namespace.
This would be a major improvement over our edit request system, which only allows a user to write out what they want changed and is often prone to requests that do not follow WP:CHANGEXY. If there are merge conflicts preventing a clean merge, then the person who submitted the edit or the reviewer will have to fix them manually before the change merges cleanly. If this path is chosen we can safely retire pending changes. Awesome Aasim 18:52, 23 November 2024 (UTC)
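To make the proposal concrete, here is a toy sketch of the submit-then-merge flow described in the points above. All class and method names (`ReviewQueue`, `submit`, `merge`) are hypothetical illustrations, not part of MediaWiki or any existing extension; the conflict check stands in for the "manually fix before it merges cleanly" rule.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING = "pending"
    MERGED = "merged"


@dataclass
class ReviewChange:
    """One proposed edit held in the hypothetical Review: namespace."""
    change_id: int
    page: str
    author: str
    base_text: str  # page text at submission time, used to detect conflicts
    new_text: str
    summary: str
    status: Status = Status.PENDING


class ReviewQueue:
    """Toy model of the proposed submit-then-merge workflow."""

    def __init__(self):
        self.pages = {}    # live page text, keyed by title
        self.changes = {}  # change_id -> ReviewChange
        self._next_id = 1

    def submit(self, page, author, new_text, summary):
        """A non-EC editor submits a change instead of editing directly."""
        change = ReviewChange(self._next_id, page, author,
                              self.pages.get(page, ""), new_text, summary)
        self.changes[change.change_id] = change
        self._next_id += 1
        return change.change_id

    def merge(self, change_id, reviewer_can_edit):
        """An editor able to edit the protected page merges the change."""
        if not reviewer_can_edit:
            raise PermissionError("reviewer lacks the required access level")
        change = self.changes[change_id]
        if self.pages.get(change.page, "") != change.base_text:
            # Page moved on since submission: someone must rebase first.
            raise ValueError("merge conflict: page changed since submission")
        self.pages[change.page] = change.new_text
        change.status = Status.MERGED
```

The key design point this illustrates is that, unlike pending changes, a submitted edit never enters the page history until merged, and a stale submission is rejected rather than silently layered on top of newer revisions.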
Survey (Q4)
- Support failing Q1, as it streamlines the experience of making edit requests, especially for new users. I have had ideas for scripts to make submitting an edit request a lot easier, but none has really come to fruition. I still don't entirely agree with the arguments in Q2 and Q3, but I am starting to agree that that is putting the pen before the pig and can thus be closed as premature, unless there is an emerging consensus that pages within a topic area should not be protected merely for being within that particular topic area. Awesome Aasim 18:52, 23 November 2024 (UTC)
- Support in theory, but wait to see if this is technically possible to implement. While a clear improvement, it will likely require quite some amount of work (and workshopping) for implementation. While a non-binding poll to gauge community interest is a good thing, having a full RfC to adopt this before coding has even begun is way too premature. Chaotic Enby (talk · contribs) 21:29, 23 November 2024 (UTC)
- Too soon to know. Once it is known that it is technically possible and you have mockups of things like interfaces and details of how it would handle a range of common real-world scenarios then we can discuss whether it would make sense to implement it. Thryduulf (talk) 22:52, 23 November 2024 (UTC)
- The whole premise of this RfC is whether this is possible, and, if it is not, whether some are willing to make it possible. Awesome Aasim 22:54, 23 November 2024 (UTC)
- Before proposing something like this, first find out whether it is possible. If it isn't currently possible but could be, work out structures and how it will work, at least broadly. Then find out whether enough people want it that someone spending the time to make it will be worthwhile. You can't just assume that anything you want is technically possible and that if enough other people also want it that developers will make it for you. Some relatively simple, uncontroversial feature requests, with demonstrated demand, have been open tasks awaiting developer attention for over 15 years. Thryduulf (talk) 02:16, 24 November 2024 (UTC)
- As an actual developer, this seems like it would be possible in the technical sense, but also a sufficiently large project that it won't actually get done unless some WMF team takes the initiative to do it. This would likely amount to writing a new extension, which would have to go through the review queue, whose first step now is "Find at least one WMF team (or staff member on behalf of their team) to agree to offer basic support for the extension for when it's deployed to Wikimedia Production". And I have no idea what team would support this. Moderator Tools would be my first guess, but they refused to support Adiutor even when it was actually coded up and ready to go and is much simpler, so they definitely won't. I personally think this requirement is unnecessary (and hypocritical), and the WMF needs to stop stifling volunteers' creativity, but there's nothing I can do about it now. And all of this is despite the fact that I think there's actually some merit to the idea. * Pppery * it has begun... 04:17, 24 November 2024 (UTC)
- Provisionally support - there is the problem that this requires implementation, so a support !vote has to wait until someone comes along who has the skills needed and is sufficiently enthusiastic about the proposal to get it done. This barrier aside, I do think that this is a good idea. It is more likely to attract attention if the underlying proposal is approved. Perhaps the underlying proposal could be added as an alternate to page protection for use by Arbcom. — Charles Stewart (talk) 05:19, 28 November 2024 (UTC)
- Support - I think this would be a better way to replace edit request system, by having many potential merges, instead of a single pending changes version. If a flame warrior wants to make their own version of an article, no need to worry about the pending changes version being polluted and edit warred over, let the isolated proposed branch exist for that one user. Bluethricecreamman (talk) 02:33, 2 December 2024 (UTC)
Discussion (Q4)
If additional proposals come (seems unlikely), I wonder if this might be better split as a "pending changes review" or something similar. Awesome Aasim 18:52, 23 November 2024 (UTC)
I really think this should be straight-up implemented as whatever first instead of being asked in an RfC. Aaron Liu (talk) 19:32, 23 November 2024 (UTC)
First, please stop calling this a git-like system. The real essence of version control systems is branching history. Plus one of the key principles for git is to enable developers to keep the branching history as simple as possible, with changes merged cleanly into an integration branch, so proposed changes never show up in the history of the integration branch.
I would prefer keeping the article history clear of any edit requests. There could be a tool that would clone an article (or designated sections) to a user subpage, preserving attribution in the edit summary. The user could make their changes on that page, and then a tool could assist them in creating an edit request. Whoever processes the request will be able to review the diff on the subpage. If the current version of the article has changed significantly, they can ask the requester to rebase the page to the current version and redo their change. I think this approach simplifies both creating and reviewing a proposed change, and helps spread the workload of integrating changes when they pile up. isaacl (talk) 22:44, 23 November 2024 (UTC)
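As a rough sketch of the clone-and-rebase flow isaacl describes (not an existing tool; the function names and return strings are invented for illustration), the two pieces are an attribution-preserving edit summary for the cloned subpage and a check of whether the article moved on while the draft was being edited:

```python
def clone_summary(source_title, rev_id):
    # Attribution is preserved in the edit summary of the cloned subpage.
    return f"Copied from [[{source_title}]], revision {rev_id}, for an edit request"

def review_action(draft_base_rev, article_current_rev):
    """Decide how to process an edit request made from a cloned subpage.

    draft_base_rev: revision ID of the article when the user cloned it.
    article_current_rev: the article's current revision ID.
    """
    if draft_base_rev == article_current_rev:
        return "review the diff and merge"
    # The article changed since the clone was made; per the proposal,
    # the requester redoes their change against the current version.
    return "ask the requester to rebase onto the current version"
```

This mirrors the git notion of a fast-forwardable branch: a request made against the current revision can be reviewed as a plain diff, while anything else needs a rebase first.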
- It won't, if the change is not merged. The point of this is that the edit history remains clear up until the edit is approved. We can do some "squashing" as well as limit edits to be reviewed to the original creator. A commit on GitHub and GitLab does not show up on main until merged. It is already possible to merge two pages' histories right now; this is done after cut-and-paste moves. This just takes it to a different level. Awesome Aasim 22:53, 23 November 2024 (UTC)
- History merge isn't really the same thing, in that you can't interlace changes in the version history, but only have a "clean" merge when the two have disjoint timespans. If multiple versions of the same page are edited simultaneously before being merged, even assuming no conflicts in merging, the current histmerge system will not be able to handle it properly. Chaotic Enby (talk · contribs) 22:58, 23 November 2024 (UTC)
- If it doesn't show up in the article history, then it isn't like pending changes at all, so I suggest your summary should be updated accordingly. In which case, under the hood your proposal is similar to mine; I suggest having subpages under the user page would be easier for the user to manage. Squashing shouldn't be done with the history of public branches (commits should remain fixed once they've been made known to everyone) plus rewriting history can be confusing, so I think the change history should be preserved on the working page. If you mean that the submission into the article should be one edit, sure.
- My proposal was to layer on tools to assist with creating edit requests, while yours seeks to integrate the system with the edit function when a user is prevented from editing due to page protection. Thus from an implementation perspective, my proposal can be implemented independently of the rest of the MediaWiki code base (and could be done with gadgets), while yours would require changes to the MediaWiki code. Better integration of course offers a more cohesive user experience, but faces greater implementation and integration challenges. I suggest reaching out to the WMF development team to find a contact to discuss your ideas. isaacl (talk) 23:13, 23 November 2024 (UTC)
- I agree that for now we should have JS tools, although that itself has challenges. A modification to MediaWiki core will also have challenges but it might be worth it in the long run, as Core gets regular updates to features, but extensions not always. Awesome Aasim 01:31, 24 November 2024 (UTC)
- Okay, I took a stab at making the experience of making an edit request a bit more new-user friendly: User:Awesome Aasim/editrequestor.js.
- I did notice someone else created a similar script but it behaves quite differently. This relies largely on the MediaWiki compare API to build a result. Unfortunately it uses deprecated libraries, etc. and will definitely need rewriting, but I think it is a good first prototype.
- If something similar was loaded for every edit request with withJS, I wonder how this will change the views of users who expressed opposition. Awesome Aasim 02:35, 30 November 2024 (UTC)
- Not sure which users you're thinking of, as no one in this discussion has so far opposed changes to the edit process so it can feed an edit request system without introducing pending changes into the article history. (I can imagine opposition based on potentially swamping the edit request system, and a lack of capacity to handle requests, but I don't think the discussion is there yet.) Maybe you can create a short video to demonstrate how your prototype functions? It should be a good starting point for discussions with the appropriate WMF developers. isaacl (talk) 20:19, 30 November 2024 (UTC)
- The "similar script" I am referring to is User:NguoiDungKhongDinhDanh/FormattedEditRequest. But it works a bit differently: rather than intercepting "submit an edit request" requests, it adds a link to a portlet.
- Here is a MP4 file of my prototype. If this can be converted to a compatible format and uploaded to Wikipedia that would be nice. Awesome Aasim 20:44, 30 November 2024 (UTC)
- I wasn't wondering about the other script, but thanks for the info. isaacl (talk) 22:23, 30 November 2024 (UTC)
General discussion
Since we're assuming that PCECP is possible and the last two questions definitely deal with policy, I feel like maybe this should go to VPP instead, with the header edited to something like "Extended-confirmed pending changes and preemptive protection in contentious topics" to reflect the slightly-larger-than-advertised scope? Aaron Liu (talk) 23:53, 5 November 2024 (UTC)
- I think policy proposals are also okay here, though I see your point. There is definitely overlap, though. This is both a request for a technical change as well as establishing policy/guidelines around that technical change (or lack thereof). Awesome Aasim 00:26, 6 November 2024 (UTC)
If this proposal is accepted, my assumption is that we'd bring back the ORANGELOCK which was used for the original incarnation of Pending Changes Level 2. There's a proposed lock already at File:Pending_Changes_Protected_Level_2.svg, though it needs fixes in terms of name (should probably be something like Pending-level-2-protection-shackle.png or Extended-pending-protection-shackle.png), SVG code (the top curve is a bit cut off), and color (should probably be darker but still clearly distinguishable from REDLOCK). —pythoncoder (talk | contribs) 21:43, 8 November 2024 (UTC)
- I think light blue is a better color for this. But in any case we will probably need a lock with a checkmark and the letter "E" for extended confirmed. Awesome Aasim 22:22, 8 November 2024 (UTC)
- Light blue seems too similar to the sky-blue currently used for WP:SALT —pythoncoder (talk | contribs) 18:04, 1 December 2024 (UTC)
- I would go for either the EC lock just with the icon replaced with a checkmark or what you said but with the same color and a diagonal line down the middle. Aaron Liu (talk) 20:02, 1 December 2024 (UTC)
Courtesy ping
Courtesy ping all from the idea lab that participated in helping formulate this RfC: @Toadspike @Jéské Couriano @Aaron Liu @Mach61 @Cremastra @Anomie @SamuelRiv @Isaacl @WhatamIdoing @Ahecht @Bunnypranav. Awesome Aasim 19:58, 5 November 2024 (UTC)
Protection?
I am actually starting to wonder if "protection" is a bit of a misnomer, because technically pages under pending changes are not really "protected". Yeah, the edits are subject to review, but there are no technical measures to prevent a user from editing. It is just like recent changes review on many wikis; those hold edits for review until they are approved, but they do not "protect" the entire wiki. Awesome Aasim 23:40, 11 November 2024 (UTC)
- How about “kinder, gentler protection”? To appear in the know, you can use an acronym, such as in “TCPIP is an example of KGP”. — Charles Stewart (talk) 04:57, 28 November 2024 (UTC)
Move to close
The main proposal is basically deadlocked and has been for six days, and the sub-proposals are clearly failing. Seems like we have a result. Just Step Sideways from this world ..... today 23:09, 22 November 2024 (UTC)
- I was about to withdraw Q2 and Q3 for putting the pen before the pig, but I did realize I added a couple more comments particularly to Q2. I did add a Q4 that might be more actionable and that is about making the experience of submitting edit requests a lot better. I am starting to agree though for Q2 and Q3 everything that has needed to be said has been said so the proposals can be withdrawn.
- We do need to consider the experience of the users actually being locked out of this. I understand the opposition to Q3 (and in fact just struck my !vote because of this). But Q2? Look at the disaster that WP:V22RFC, WP:V22RFC2, and WP:V22RFC3 are. These surveys are barely representative of new users, just of experienced editors. We should absolutely be bringing new editors to the table for these discussions. Awesome Aasim 19:13, 23 November 2024 (UTC)
- Please don't pre-close. 4 of the opposers to the main proposal seem to address only Q2 instead of Q1, and I don't see anyone addressing the argument that it's less restrictive than ECP. It's up to the closer to weigh the consensus. Aaron Liu (talk) 19:30, 23 November 2024 (UTC)
RfC: Should a blackout be organized in protest of the Wikimedia Foundation's actions?
RfC: Log the use of the HistMerge tool at both the merge target and merge source
Currently, there are open phab tickets proposing that the use of the HistMerge tool be logged at the target article in addition to the source article. Several proposals have been made:
- Option 1a: When using Special:MergeHistory, a null edit should be placed in both the merge target and merge source's page's histories stating that a history merge took place.
- (phab:T341760: Special:MergeHistory should place a null edit in the page's history describing the merge, authored Jul 13 2023)
- Option 1b: When using Special:MergeHistory, add a log entry at both the HistMerge target and source recording that a history merge took place.
- (phab:T118132: Merging pages should add a log entry to the destination page, authored Nov 8 2015)
- Option 2: Do not log the use of the Special:MergeHistory tool at the merge target, maintaining the current status quo.
Should the use of the HistMerge tool be explicitly logged? If so, should the use be logged via an entry in the page history or should it instead be held in a dedicated log? — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)
Survey: Log the use of the HistMerge tool
- Option 1a/b. I am in principle in support of adding this logging functionality, since people don't typically have access to the source article title (where the histmerge is currently logged) when viewing an article in the wild. There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful. As for whether this is logged directly in the page history (as is done currently with page protection) or if this is merely in a separate log file, I don't have particularly strong feelings, but I do think that adding functionality to log histmerges at the target article would improve clarity in page histories. — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)
- Option 1a/b. No strong feelings on which way is best (I'll let the experienced histmergers comment on this), but logging a history merge definitely seems like a useful feature. Chaotic Enby (talk · contribs) 16:02, 20 November 2024 (UTC)
- Option 1a/b. Chaotic Enby has said exactly what I would have said (but more concisely) had they not said it first. Thryduulf (talk) 16:23, 20 November 2024 (UTC)
- 1b would be most important to me, but 1a would be nice too. But this is really not the place for this sort of discussion, as noted below. Graham87 (talk) 16:28, 20 November 2024 (UTC)
- Option 2 History merging done right should be seamless, leaving the page indistinguishable from one where the copy-paste move being repaired had never happened. Adding extra annotations everywhere runs counter to that goal. Prefer 1b to 1a if we have to do one of them, as the extra null edits could easily interfere with the history merge being done in more complicated situations. * Pppery * it has begun... 16:49, 20 November 2024 (UTC)
- Could you expound on why they should be indistinguishable? I don't see how this could harm any utility. A log action at the target page would not show up in the history anyways, and a null edit would have no effect on comparing revisions. Aaron Liu (talk) 17:29, 20 November 2024 (UTC)
- Why shouldn't it be indistinguishable? Why is it necessary to go out of our way to say even louder that someone did something wrong and it had to be cleaned up? * Pppery * it has begun... 17:45, 20 November 2024 (UTC)
- All cleanup actions are logged to all the pages they affect. Aaron Liu (talk) 18:32, 20 November 2024 (UTC)
- 2 History merges are already logged, so this survey name is somewhat off the mark. As someone who does this work: I do not think these should be displayed at either location. It would cause a lot of noise in history pages that people probably would not fundamentally understand (2 revisions for "please process this" and "remove tag" and a 3rd revision for the suggested log), and it would be "out of order" in that you will have merged a bunch of revisions but none of those revisions would be near the entry in the history page itself. I find protections noisy in this way as well, and when moves end up causing a need for history merging, you end up with doubled move entries in the merged history, which is also confusing. Adding history merges to that case? No thanks. History merges are more like deletions and undeletions, which already do not add displayed content to the history view. Izno (talk) 16:54, 20 November 2024 (UTC)
- They presently are logged, but only at the source article. Take for example this entry. When I search for the merge target, I get nothing. It's only when I search the merge source that I'm able to get a result, but there isn't a way to know the merge source.
- If I don't know when or if the histmerge took place, and I don't know what article the history was merged from, I'd have to look through the entirety of the merge log manually to figure that out—and that's suboptimal. — Red-tailed hawk (nest) 17:05, 20 November 2024 (UTC)
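The lookup described here could in principle be automated if merge-log entries could be filtered by destination rather than source. A hedged sketch of that filter over log rows; the dict key names (`title`, `params`, `dest-title`) follow what the merge log's API output looks like but are assumptions of this sketch, and the real shape of `action=query&list=logevents&letype=merge` responses should be checked:

```python
def merges_into(log_entries, target_title):
    """Return history-merge log entries whose destination is target_title.

    log_entries: rows shaped like MediaWiki merge-log output, e.g.
    {"title": "<merge source>", "params": {"dest-title": "<merge target>"}}.
    """
    return [entry for entry in log_entries
            if entry.get("params", {}).get("dest-title") == target_title]
```

With such a filter, "what was merged into this page, and when?" becomes a single query instead of a manual scan of the whole merge log.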
- ... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)
- But ignoring that, why is it valuable to know this information? What do you gain? And is what you gain actually valuable to your end objective? For example, let's take your "There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful." Are the revisions left behind in the page history by both the person requesting and the person performing the histmerge not enough (see {{histmerge}})? There are history merges done that don't have that request format such as the WikiProject history merge format, but those are almost always ancient revisions, so what are you gaining there? And where they are not ancient revisions, they are trivial kinds of the form "draft x -> page y, I hate that I even had to interact with this history merge it was so trivial (but also these are great because I don't have to spend significant time on them)". Izno (talk) 17:32, 20 November 2024 (UTC)
I don't think everyone would necessarily agree (see Toadspike's comment below). Chaotic Enby (talk · contribs) 17:42, 20 November 2024 (UTC)
- Page moves do leave a null edit on the page that describes where the page was moved from and was moved to. And it's easy to work backwards from there to figure out the page move history. The same cannot be said of the Special:MergeHistory tool, which doesn't make it easy to re-construct what the heck went on unless we start diving naïvely through the logs. — Red-tailed hawk (nest) 17:50, 20 November 2024 (UTC)
- It can be *possible* to find the original history merge source page without looking through the merge log, but the method for doing so is very brittle and extremely hacky. Basically, look for redirects to the page using "What links here", and find the redirect whose first edit has an unusual byte difference. This relies on the redirect being stable and not deleted or retargeted. There is also another way that relies on byte difference bugs as described in the above-linked discussion by wbm1058. Both of those are ... particularly awful. Graham87 (talk) 03:48, 21 November 2024 (UTC)
- In the given example, the history-merge occurred here. Your "log" is the edit summaries. "Created page with '..." is the edit summary left by a normal page creation. But wait, there is page history before the edit that created the page. How did it get there? Hmm, the previous edit summary "Declining submission: v - Submission is improperly sourced (AFCH)" tips you off to look for the same title in draft: namespace. Voila! Anyone looking for help with understanding a particular merge may ask me and I'll probably be able to figure it out for you. – wbm1058 (talk) 05:51, 21 November 2024 (UTC)
- Here's another example, of a merge within mainspace. The automatic edit summary (created by the MediaWiki software) of this (No difference) diff "Removed redirect to Jordan B. Acker" points you to the page that was merged at that point. Voila. Voila. Voila. – wbm1058 (talk) 13:44, 21 November 2024 (UTC)
- There are times where those traces aren't left. Aaron Liu (talk) 13:51, 21 November 2024 (UTC)
- Here's another scenario, this one from WP:WikiProject History Merge. The page history shows an edit adding +5,800 bytes, leaving the page with 5,800 bytes. But the previous edit did not leave a blank page. Some say this is a bug, but it's also a feature. That "bug" is actually your "log" reporting that a hist-merge occurred at that edit. Voila, the log for that page shows a temp delete & undelete setting the page up for a merge. The first item on the log:
- @ 20:14, 16 January 2021 Tbhotch moved page Flag of Yucatán to Flag of the Republic of Yucatán (Correct name)
- clues you in to where to look for the source of the merge. Voila, that single edit which removed −5,633 bytes tells you that previous history was merged off of that page. The log provides the details. – wbm1058 (talk) 16:03, 21 November 2024 (UTC)
- (phab:T76557: Special:MergeHistory causes incorrect byte change values in history, authored Dec 2 2014) — Preceding unsigned comment added by Wbm1058 (talk • contribs) 18:13, 21 November 2024 (UTC)
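The byte-change artifact described in this scenario (phab:T76557) can be checked for mechanically. A hedged sketch, assuming the history is represented as an oldest-first list of `(page size after the edit, byte change displayed in the history)` pairs; a spliced-in revision shows a delta inconsistent with the size of the revision directly before it:

```python
def histmerge_candidates(revisions):
    """Flag revisions whose displayed byte change is inconsistent with
    the preceding revision's page size, the telltale sign of a history
    merge described above.

    revisions: oldest-first list of (page_size_after_edit, displayed_delta).
    Returns the indices of suspicious revisions.
    """
    suspects = []
    for i in range(1, len(revisions)):
        size, delta = revisions[i]
        prev_size, _ = revisions[i - 1]
        if delta != size - prev_size:
            suspects.append(i)
    return suspects
```

In the Flag of Yucatán example, an edit showing +5,800 that leaves the page at 5,800 bytes, directly after a non-blank revision, would be flagged by exactly this check.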
- Again, there are times where the clues are much harder to find, and even in those cases, it'd be much better to have a unified and assured way of finding the source. Aaron Liu (talk) 16:11, 21 November 2024 (UTC)
- Indeed. This is a prime example of an unintended undocumented feature. Graham87 (talk) 08:50, 22 November 2024 (UTC)
- Yeah. I don't think that we can permanently rely on that, given that future versions of MediaWiki are not bound in any real way to support that workaround. — Red-tailed hawk (nest) 04:24, 3 December 2024 (UTC)
- Support 1b (log only), oppose 1a (null edit). I defer to the experienced histmergers on this, and if they say that adding null edits everywhere would be inconvenient, I believe them. However, I haven't seen any arguments against logging the histmerge at both articles, so I'll support it as a sensible idea. (On a similar note, it bothers me that page moves are only logged at one title, not both.) Toadspike [Talk] 17:10, 20 November 2024 (UTC)
- Option 2. The merges are already logged, so there's no reason to add them to page histories. While it may be useful for habitual editors, it will just confuse readers looking for an old revision, as well as occasional editors. Ships & Space(Edits) 18:33, 20 November 2024 (UTC)
- But only the source page is logged as the "target". IIRC it can currently be a bit hard to find out when and by whom history was merged into a page if you don't know the source page and the merging editor didn't leave any indication that they merged something. Aaron Liu (talk) 18:40, 20 November 2024 (UTC)
- 1B. The present situation of the action being only logged at one page is confusing and unhelpful. But so would be injecting null-edits all over the place. — SMcCandlish ☏ ¢ 😼 01:38, 21 November 2024 (UTC)
- Option 2. This exercise is dependent on finding a volunteer MediaWiki developer willing to work on this. Good luck with that. Maybe you'll find one a decade from now. – wbm1058 (talk) 05:51, 21 November 2024 (UTC)
- And, more importantly, someone in the MediaWiki group to review it. I suspect there are many people, possibly including myself, who would code this if they didn't think they were wasting their time shuffling things from one queue to another. * Pppery * it has begun... 06:03, 21 November 2024 (UTC)
- That link requires a Gerrit login/developer account to view. It was a struggle to get in to mine (I only have one because of an old Toolforge account and I'd basically forgotten about it), but for those who don't want to go through all that, that group has only 82 members (several of whose usernames I recognise) and I imagine they have a lot on their collective plate. There's more information about these groups at Gerrit/Privilege policy on MediaWiki. Graham87 (talk) 15:38, 21 November 2024 (UTC)
- Sorry, I totally forgot Gerrit behaved in that counterintuitive way and hid public information from logged out users for no reason. The things you miss if Gerrit interactions become something you do pretty much every day. If you want to count the members of the group you also have to follow the chain of included groups - it also includes https://ldap.toolforge.org/group/wmf, https://ldap.toolforge.org/group/ops and the WMDE-MediaWiki group (another login-only link), as well as a few other permission edge cases (almost all of which are redundant because the user is already in the MediaWiki group) * Pppery * it has begun... 18:07, 21 November 2024 (UTC)
- That link requires a Gerrit login/developer account to view. It was a struggle to get in to mine (I only have one because of an old Toolforge account and I'd basically forgotten about it), but for those who don't want to go through all that, that group has only 82 members (several of whose usernames I recognise) and I imagine they have a lot on their collective plate. There's more information about these groups at Gerrit/Privilege policy on MediaWiki. Graham87 (talk) 15:38, 21 November 2024 (UTC)
- And, more importantly, someone in the MediaWiki group to review it. I suspect there are many people, possibly including myself, who would code this if they didn't think they were wasting their time shuffling things from one queue to another. * Pppery * it has begun... 06:03, 21 November 2024 (UTC)
- Support 1a/b, and I would encourage the closer to disregard any opposition based solely on the chances of someone ever actually implementing it. —Compassionate727 (T·C) 12:52, 21 November 2024 (UTC)
- Fine. This stupid RfC isn't even asking the right questions. Why did I need to delete (an expensive operation) and then restore a page in order to "set up for a history merge"? Should we fix the software so that it doesn't require me to do that? Why did the page-mover resort to cut-paste because there was page history blocking their move, rather than ask an administrator for help? Why doesn't the software just let them move over that junk page history themselves, which would negate the need for a later hist-merge? (Actually in this case the offending user has only made 46 edits, so they don't have page-mover privileges. But they were able to move a page. They just couldn't move it back a day later after they changed their mind.) wbm1058 (talk) 13:44, 21 November 2024 (UTC)
- Yeah, revision move would be amazing, for a start. Graham87 (talk) 15:38, 21 November 2024 (UTC)
- Option 1b – changes to a page's history should be listed in that page's log. There's no need to make a null edit; pagemove null edits are useful because they meaningfully fit into the page's revision history, which isn't the case here. jlwoodwa (talk) 00:55, 22 November 2024 (UTC)
- Option 1b sounds best since that's what those in the know seem to agree on, but 1a would probably be OK. Abzeronow (talk) 03:44, 23 November 2024 (UTC)
- Option 1b seems like the one with the best transparency to me. Thanks. Huggums537voted! (sign🖋️|📞talk) 06:59, 25 November 2024 (UTC)
Discussion: Log the use of the HistMerge tool
- I'm noticing some commentary in the above RfC (on widening importer rights) as to whether or not this might be useful going forward. I do think that having the community weigh in one way or another here would be helpful in terms of deciding whether or not this functionality is worth building. — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)
- This is a missing feature, not a config change. Aaron Liu (talk) 15:58, 20 November 2024 (UTC)
- Indeed; it's about a feature proposal. — Red-tailed hawk (nest) 16:02, 20 November 2024 (UTC)
- As with many of the above, this is a feature request and not something that should be special for the English Wikipedia. — xaosflux Talk 16:03, 20 November 2024 (UTC)
- See phab:T341760. I'm not seeing any sort of reason this would need per-project opt-ins requiring a local discussion. — xaosflux Talk 16:05, 20 November 2024 (UTC)
- True, but I agree with Red-tailed hawk that it's good to have the English Wikipedia community weigh on whether we want that feature implemented here to begin with. Chaotic Enby (talk · contribs) 16:05, 20 November 2024 (UTC)
- Here is the Phabricator project page for MergeHistory, and the project's 11 open tasks. – wbm1058 (talk) 18:13, 21 November 2024 (UTC)
- I agree that this is an odd thing to RFC. This is about a feature in MediaWiki core, and there are a lot more users of MediaWiki core than just English Wikipedia. However, please do post the results of this RFC to both of the phab tickets. It will be a useful data point with regards to what editors would find useful. –Novem Linguae (talk) 23:16, 21 November 2024 (UTC)
CheckUser for all new users
All new users (IPs and accounts) should be subject to CheckUser against known socks. This would prevent recidivist socks from returning and save the time and energy of users who have to prove a likely case at SPI. Recidivist socks often get better at covering their "tells" each time, making detection increasingly difficult. Users should not have to make the huge effort of establishing an SPI when editing from an IP or creating a new account is so easy. We should not have to endure Wikipedia:Long-term abuse/HarveyCarter, Wikipedia:Sockpuppet investigations/Phạm Văn Rạng/Archive or Wikipedia:Sockpuppet investigations/Orchomen/Archive if CheckUser can prevent them. Mztourist (talk) 04:06, 22 November 2024 (UTC)
- I'm pretty sure that even if we had enough checkuser capacity to routinely run checks on every new user that doing so would be contrary to global policy. Thryduulf (talk) 04:14, 22 November 2024 (UTC)
- Setting aside privacy issues, the fact that the WMF wouldn't let us do it, and a few other things: Checking a single account, without any idea of who you're comparing them to, is not very effective, and the worst LTAs are the ones it would be least effective against. This has been floated several times in the much narrower context of adminship candidates, and rejected each time. It probably belongs on WP:PEREN by now. -- Tamzin[cetacean needed] (they|xe) 04:21, 22 November 2024 (UTC)
- Why can't it be automated? What are the privacy issues and what would WMF concerns be? There has to be a better system than SPI which imposes a huge burden on the filer (and often fails to catch socks) while we just leave the door open for LTAs. Mztourist (talk) 04:39, 22 November 2024 (UTC)
- How would it be automated? We can't just block everyone who even sometimes shares an IP with someone, which is most editors once you factor in mobile editing and institutional WiFi. Even if we had a system that told checkusers about all shared-IP situations and asked them to investigate, what are they investigating for? The vast majority of IP overlaps will be entirely innocent, often people who don't even know each other. There's no way for a checkuser to find any signal in all that noise. So the only way a system like this would work is if checkusers manually identified IP ranges that are being used by LTAs, and then placed blocks on those ranges to restrict them from account creation... Which is what already happens. -- Tamzin[cetacean needed] (they|xe) 04:58, 22 November 2024 (UTC)
- I would assume that IT experts can work out a way to automate CheckUser. If someone edits on a shared IP used by a previous sock that should be flagged and human CheckUsers notified so they can look at the edits and the previous sock edits and warn or block as necessary. Mztourist (talk) 05:46, 22 November 2024 (UTC)
- We already have autoblock. For cases it doesn't catch, there's an additional manual layer of blocking, where if a sock is caught on an IP that's been used before but wasn't caught by autoblock, a checkuser will block the IP if it's technically feasible, sometimes for months or years at a time. Beyond that, I don't think you can imagine just how often "someone edits on a shared IP used by a previous sock". I'm doing that right now, probably, because I'm editing through T-Mobile. Basically anyone who's ever edited in India or Nigeria has been on an IP used by a previous sock. Basically anyone who's used a large institution's WiFi. There is not any way to weed through all that noise with automation. -- Tamzin[cetacean needed] (they|xe) 05:54, 22 November 2024 (UTC)
- Addendum: An actually potentially workable innovation would be something like a system that notifies CUs if an IP is autoblocked more than once in a certain time period. That would be a software proposal for Phabricator, though, not an enwiki policy proposal, and would still have privacy implications that would need to be squared with the WMF. -- Tamzin[cetacean needed] (they|xe) 05:57, 22 November 2024 (UTC)
- I believe Tamzin has it about right, but I want to clarify a thing. If you're hypothetically using T-Mobile (and this also applies to many other ISPs and many LTAs) then the odds are very high that you're using an IP address which has never been used before. With T-Mobile, which is not unusually large by any means, you belong to at least one /32 range which contains a number of IP addresses so big that it has 30 digits. These ranges contain a huge number of users. At the other extreme you have some countries with only a handful of IPs, which everyone uses. These IPs also typically contain a huge number of users. TL;DR: if someone is using a single IP on their own then we'll probably just block it; otherwise you're talking about matching a huge number of users. -- zzuuzz (talk) 03:20, 23 November 2024 (UTC)
- As I understand it, if you're hypothetically using T-Mobile, then you're not editing, because someone range-blocked the whole network in pursuit of a vandal(s). See Wikipedia:Advice to T-Mobile IPv6 users. WhatamIdoing (talk) 03:36, 23 November 2024 (UTC)
- T-Mobile USA is a perennial favourite of many of the most despicable LTAs, but that's beside the point. New users with an account can actually edit from T-Mobile. They can also edit from Jio, Deutsche Telekom, Vodafone, or many other huge networks. -- zzuuzz (talk) 03:50, 23 November 2024 (UTC)
- Would violate the policy WP:NOTFISHING. –Novem Linguae (talk) 04:43, 22 November 2024 (UTC)
- It would apply to every new User as a protective measure against sockpuppetry, like a credit check before you get a card/overdraft. WP:NOTFISHING is archaic like the whole burdensome SPI system that forces honest users to do all the hard work of proving sockpuppetry while socks and vandals just keep being welcomed in under WP:AGF. Mztourist (talk) 05:46, 22 November 2024 (UTC)
- What you're suggesting is to just inundate checkusers with thousands of cases. The suggestion (as I understand it) removes burden from SPI filers by adding a disproportional burden on checkusers, who are already an overworked group. If you're suggesting an automated solution, then I believe IP blocks/IP range blocks and autoblock (discussed by Tamzin, above) already cover enough. It's quite hard to weigh up what you're really suggesting because it feels very vague without much detail - it sounds like you're just saying "a new SPI should be opened for every new user and IP, forever" which is not really a workable solution (for instance, 50 accounts were made in the last 15 minutes, which is about one every 18 seconds) BugGhost🦗👻 18:12, 22 November 2024 (UTC)
- And most of those accounts will make zero, one, or two edits, and then never be used again. Even if we liked this idea, doing it for every single account creation would be a waste of resources. WhatamIdoing (talk) 23:43, 22 November 2024 (UTC)
- No, they should not. voorts (talk/contributions) 17:23, 22 November 2024 (UTC)
- This, very bluntly, flies in the face of WMF policy with regards to use/protection of PII, and as noted by Tamzin this would result in frankly obscene amounts of collateral damage. You have absolutely no idea how frequently IP addresses get passed around (especially in the developing world or on T-Mobile), such that an address could feasibly have three different, unrelated people on it over the course of a day or so. —Jéské Couriano v^_^v threads critiques 18:59, 22 November 2024 (UTC)
- Just out of curiosity: If a certain case of IPs spamming at Help Desk is any indication, would a CU be able to stop that in its track? 2601AC47 (talk|contribs) Isn't a IP anon 14:29, 23 November 2024 (UTC)
- CUs use their tools to identify socks when technical proof is necessary. The problem you're linking to is caused by one particular LTA account who is extremely obvious and doesn't really require technical proof to identify - checkusers would just be able to provide evidence for something that is already easy to spot. There's an essay on the distinction over at WP:DUCK BugGhost🦗👻 14:45, 23 November 2024 (UTC)
- @2601AC47: No, and that is because the user in question's MO is to abuse VPNs. Checkuser is worthless in this case because of that (but the IPs can and should be blocked for 1yr as VPNs). —Jéské Couriano v^_^v threads critiques 19:35, 26 November 2024 (UTC)
- LTA MAB is using a peer-to-peer VPN service which is similar to TOR. Blocking peer-to-peer VPN service endpoint IP addresses carries a higher risk of collateral damage because those aren't assigned to the VPN provider but rather a third party ISP who is likely to dynamically reassign the blocked address to a completely innocent party. 216.126.35.235 (talk) 00:22, 27 November 2024 (UTC)
- I slightly oppose this idea. This is not Reddit, where socks are immediately banned or shadowbanned outright. Reddit doesn't have WP:DUCK the way wikis do. Ahri Boy (talk) 00:14, 25 November 2024 (UTC)
- How do you know this is how Reddit deals with ban and suspension evasion? They use advanced techniques such as device and IP fingerprinting to ban and suspend users in under an hour. 2600:1700:69F1:1410:5D40:53D:B27E:D147 (talk) 23:47, 28 November 2024 (UTC)
- I can see where this is coming from, but we must realise that checkuser is not magic pixie dust nor is it meant for fishing. - Ratnahastin (talk) 04:49, 27 November 2024 (UTC)
- The question I ask myself is why must we realize that it is not meant for fishing? To catch fish, you need to fish. The no-fishing rule is not fit for purpose, nor is it a rule that other organizations that actively search for ban evasion use. Machines can do the fishing. They only need to show us the fish they caught. Sean.hoyland (talk) 05:24, 27 November 2024 (UTC)
- I think for the same reason we don't want governments to be reading our mail and emails. If we checkuser everybody, then nobody has any privacy. Donald Albury 20:20, 27 November 2024 (UTC)
I sympathize with Mztourist. The current system is less effective than it needs to be. Ban evading actors make a lot of edits; they are dedicated, hard-working folk in contentious topic areas. They can make up nearly 10% of new extendedconfirmed actors some years, and the quicker an actor becomes EC the more likely they are to be blocked later for ban evasion. Their presence splits the community into two classes, the sanctionable and the unsanctionable, with completely different payoff matrices. This has many consequences in contentious topic areas and significantly impacts the dynamics. The current rules are probably not good rules. Other systems have things like a 'commitment to authenticity' and actively search for ban evasion. It's tempting to burn it all down and start again, but with what? Having said that, the SPI folks do a great job. The average time from being granted extendedconfirmed to being blocked for ban evasion seems to be going down. Sean.hoyland (talk) 18:28, 22 November 2024 (UTC)
- I confess that I am doubtful about that 10% claim. WhatamIdoing (talk) 23:43, 22 November 2024 (UTC)
- WhatamIdoing, me too. I'm doubtful about everything I say because I've noticed that the chance it is slightly to hugely wrong is quite high. The EC numbers are work in progress, but I got distracted. The description "nearly 10% of new extendedconfirmed actors" is a bit misleading, because 'new' doesn't really mean new actors. It means actors that acquired EC for a given year, so newly acquired privileges. They might have registered in previous years. Also, I don't have 100% confidence in the way I count EC grants because there are some edge cases, and I'm ignoring sysops. But anyway, the statement was based on this data of questionable precision. And the statement about a potential relationship between speed of EC acquisition and probability of being blocked is based on this data of questionable precision. And of course, currently undetected socks are not included, and there will be many. Sean.hoyland (talk) 03:39, 23 November 2024 (UTC)
- I'm not interested in clicking through to a Google file. Here's my back-of-the-envelope calculation: We have something like 120K accounts that would qualify for EXTCONF. Most of these are no longer active, and many stopped editing so long ago that they don't actually have the user right.
- Wikipedia is almost 24 years old. That makes convenient math: On average, since inception, 5K editors have achieved EXTCONF levels each year.
- If the 10% estimate is true, then 500 accounts per year – about 10 per week – are being created by banned editors and going undetected long enough for the accounts to make 500 edits and to work in CTOP areas. Do we even have enough WP:BANNED editors to make it plausible to expect banned editors to bring 500 accounts a year up to EXTCONF levels (plus however many accounts get started but are detected before then)? WhatamIdoing (talk) 03:53, 23 November 2024 (UTC)
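The back-of-the-envelope figures above can be checked mechanically. This is only a sketch of the thread's own rough estimates (120K EC-qualified accounts, a roughly 24-year-old project, the disputed 10% claim), not measured data:

```python
# Reproducing the back-of-the-envelope numbers from the discussion.
# All inputs are the thread's rough estimates, not measurements.
ec_accounts_total = 120_000  # accounts that would qualify for EXTCONF
wiki_age_years = 24          # approximate age of English Wikipedia

ec_per_year = ec_accounts_total / wiki_age_years   # average new EC accounts/year
sock_share = 0.10                                  # the disputed "nearly 10%" claim
socks_per_year = ec_per_year * sock_share
socks_per_week = socks_per_year / 52

print(f"{ec_per_year:.0f} EC accounts/year")   # 5000
print(f"{socks_per_year:.0f} socks/year")      # 500
print(f"{socks_per_week:.1f} socks/week")      # 9.6
```

The arithmetic itself checks out; the dispute in the thread is over whether the 10% input is plausible, not over the multiplication.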
- Suit yourself. I'm not interested in what interests other people or back of the envelope calculations. I'm interested in understanding the state of a system over time using evidence-based approaches by extracting data from the system itself. Let the data speak for itself. It has a lot to tell us. Then it is possible to test hypotheses and make evidence-based decisions. Sean.hoyland (talk) 04:13, 23 November 2024 (UTC)
- @WhatamIdoing, there's a sockmaster in the IPA CTOP who has made more than 100 socks. 500 new XC socks every year doesn't seem that much of a stretch in comparison. -- asilvering (talk) 19:12, 23 November 2024 (UTC)
- More than 100 XC socks? Or more than 100 detected socks, including socks with zero edits?
- Making a lot of accounts isn't super unusual, but it's a lot of work to get 100 accounts up to 500+ edits. Making 50,000 edits is a lot, even if it's your full-time job. WhatamIdoing (talk) 01:59, 24 November 2024 (UTC)
- Lots of users get it done in a couple of days, often through vandal fighting tools. It really is not that many when the edits are mostly mindless. nableezy - 00:18, 26 November 2024 (UTC)
- But that's kind of my point: "A couple of days", times 100 accounts, means 200–300 days per year. If you work five days per week and 52 weeks per year, that's 260 work days. This might be possible, but it's a full-time job.
- Since the 30-day limit is something that can't be achieved through effort, I wonder if a sudden change to, say, 6 months would produce a five-month reprieve. WhatamIdoing (talk) 02:23, 26 November 2024 (UTC)
- Who says it’s only one at a time? Icewhiz for example has had 4 plus accounts active at a time. nableezy - 02:25, 26 November 2024 (UTC)
- There is some data about ban evasion timelines for some sockmasters in PIA that show how accounts are operated in parallel. Operating multiple accounts concurrently seems to be the norm. Sean.hoyland (talk) 04:31, 26 November 2024 (UTC)
- Imagine that it takes an average of one minute to make a (convincing) edit. That means that 500 edits = 8.33 hours, i.e., more than one full work day.
- Imagine, too, that having reached this point, you actually need to spend some time using your newly EXTCONF account. This, too, takes time.
- If you operate several accounts at once, that means:
- You spend an hour editing from Account1. You spend the next hour editing from Account2. You spend another hour editing from Account3. You spend your fourth hour editing from Account4. Then you take a break for lunch, and come back to edit from Accounts 5 through 8.
- At the end of the day, you have brought 8 accounts up to 60 edits (12% of the minimum goal). And maybe one of them got blocked, too, which is lost effort. At this rate, it would take you an entire year of full-time work to get 100 EXTCONF accounts, even though you are operating multiple accounts concurrently. Doing 50 edits per day in 10 accounts is not faster than doing 500 edits in 1 account. It's the same amount of work. WhatamIdoing (talk) 05:13, 29 November 2024 (UTC)
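The effort arithmetic in the exchange above can be laid out explicitly. The one-minute-per-edit figure and the 500-edit EC threshold are the thread's assumptions, not measured values:

```python
# The effort arithmetic from the discussion, stated explicitly.
# One minute per convincing edit is an assumption from the thread.
minutes_per_edit = 1
edits_for_ec = 500           # the extended-confirmed edit threshold

hours_per_account = edits_for_ec * minutes_per_edit / 60
print(f"{hours_per_account:.2f} hours of editing per account")      # 8.33

accounts_per_day = 8         # one hour on each of 8 parallel accounts
edits_per_account_day = 60
daily_edits = accounts_per_day * edits_per_account_day
daily_progress = edits_per_account_day / edits_for_ec
print(f"{daily_edits} edits/day across all accounts")               # 480
print(f"{daily_progress:.0%} of the EC goal per account per day")   # 12%

total_edits_100_accounts = 100 * edits_for_ec
print(f"{total_edits_100_accounts} edits to bring 100 accounts to EC")  # 50000
```

As the comment notes, running accounts in parallel changes nothing about the total: 100 accounts times 500 edits is 50,000 edits of work however they are interleaved.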
- Sure it’s an effort, though it doesn’t take a minute an edit. But I’m not sure why I need to imagine something that has happened multiple times already. Icewhiz most recently had like 4-5 EC accounts active, and there are probably several more. Yes, there is an effort there. But also yes, it keeps happening. nableezy - 15:00, 29 November 2024 (UTC)
- My point is that "4-5 EC accounts" is not "100". WhatamIdoing (talk) 19:31, 30 November 2024 (UTC)
- It’s 4-5 at a time for a single sock master. Check the Icewhiz SPI for how many that adds up to over time. nableezy - 20:16, 30 November 2024 (UTC)
- Many of our frequent fliers are already adept at warehousing accounts for months or even years, so a bump in the time period probably won't make much of a difference. Additionally, and without going into detail publicly, there are several methods whereby semi- or even fully-automated editing can be used to get to 500 edits with a minimum of effort, or at least well within script-kid territory. Because so many of those are obvious on inspection some will assume that all of them are, but there are a number of rather subtle cases that have come up over the years and it would be foolish to assume that it isn't ongoing. 184.152.68.190 (talk) 17:31, 28 November 2024 (UTC)
Also, if we divide the space into contentious vs not-contentious, maybe a one size fits all CU policy doesn't make sense. Sean.hoyland (talk) 18:55, 22 November 2024 (UTC)
Terrible idea. Let's AGF that most new users are here to improve Wikipedia instead of damage it. Some1 (talk) 18:33, 22 November 2024 (UTC)
- Ban evading actors who employ deception via sockpuppetry in the WP:PIA topic area are here to improve Wikipedia, from their perspective, rather than damage it. There is no need to use faith. There are statistics. There is a probability that a 'new user' is employing ban evasion. Sean.hoyland (talk) 18:46, 22 November 2024 (UTC)
- My initial comment wasn't a direct response to yours, but new users and IPs won't be able to edit in the WP:PIA topic area anyway since they need to be extended confirmed. Some1 (talk) 20:08, 22 November 2024 (UTC)
- Let's not hold up the way PIA handles new users and IPs, in which they are allowed to post to talk pages but then have their talk page post removed if it doesn't fall within very specific parameters, as some sort of model. CMD (talk) 02:51, 23 November 2024 (UTC)
Strongly support automatically checkusering all active users (new and existing) at regular intervals. If it were automated -- e.g., a script runs that compares IPs, user agents, and other typical subscriber info -- there would be no privacy violation, because that information doesn't have to be disclosed to any human beings. Only the "hits" can be forwarded to the CU team for follow-up. I'd run that script daily. If the policy forbids it, we should change the policy to allow it. It's mind-boggling that Wikipedia doesn't do this already. It's a basic security precaution. (Also, email-required registration and get rid of IP editing.) Levivich (talk) 02:39, 23 November 2024 (UTC)
- I don't think you've been reading the comments from people who know what they are talking about. There would be hundreds, at least, of hits per day that would require human checking. The policy that prohibits this sort of massive breach of privacy is the Foundation's and so not one that en.wp could change even if it were a good idea (which it isn't). Thryduulf (talk) 03:10, 23 November 2024 (UTC)
- A computer can be programmed to check for similarities or patterns in subscriber info (IP, etc), and in editing activity (time cards, etc), and content of edits and talk page posts (like the existing language similarity tool), with various degrees of certainty in the same way the Cluebot does with ORES when it's reverting vandalism. And the threshold can be set so it only forwards matches of a certain certainty to human CUs for review, so as not to overwhelm the humans. The WMF can make this happen with just $1 million of its $180 million per year (and it wouldn't be violating its own policies if it did so). Enwiki could ask for it, other projects might join too. Levivich (talk) 05:24, 23 November 2024 (UTC)
- "Oh now I see what you mean, Levivich, good point, I guess you know what you're talking about, after all."
- "Thanks, Thryduulf!" Levivich (talk) 17:42, 23 November 2024 (UTC)
- I seem to have missed this comment, sorry. However, I am very sceptical that sockpuppet detection is meaningfully automatable. From what CUs say it is as much art as science (which is why SPI cases can result in determinations like "possilikely"). This is the sort of thing that is difficult (at best) to automate. Additionally, the only way to reliably develop such automation would be for humans to analyse and process a massive amount of data from accounts that both are and are not sockpuppets and classify results as one or the other, and that analysis would be a massive privacy violation on its own. Assuming you have developed this magic computer that can assign a likelihood of any editor being a sock of someone who has edited in the last three months (data older than that is deleted) on a percentage scale, you then have to decide what level is appropriate to send to humans to check. Say for the sake of argument it is 75%; that means roughly one in four people being accused are innocent and are having their privacy impinged unnecessarily - and how many CUs are needed to deal with this caseload? Do we have enough? SPI isn't exactly backlog free and there aren't hordes of people volunteering for the role (although unbreaking RFA might help with this in the medium to long term). The more you reduce the number sent to CUs to investigate, the less benefit there is over the status quo.
- In addition to all the above, how similar is "similar" in terms of articles edited, writing style, timecard, etc? How are you avoiding legitimate sockpuppets? Thryduulf (talk) 18:44, 23 November 2024 (UTC)
- You know this already but for anyone reading this who doesn't: when a CU "checks" somebody, it's not like they send a signal out to that person's computer to go sniffing around. In fact, all the subscriber info (IP address, etc.) is already logged on the WMF's server logs (as with any website). A CU "check" just means a volunteer CU gets to look at a portion of those logs (to look up a particular account's subscriber info). That's the privacy concern: we have rules, rightfully so, about when volunteer CUs (not WMF staff) can read the server logs (or portions of them). Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs. Privacy is only an issue when those logs are revealed to volunteer CUs.
- So... feeding the logs into software in order to train the software doesn't violate anyone's policy. It's just letting a computer read its own files. Human verification of the training outcomes also doesn't have to violate anyone's privacy -- just don't use volunteer CUs to do it, use WMF staff. Or, anonymize the training data (changing usernames to "Example1", "Example2", etc.). Or use historical data -- which would certainly be part of the training, since the most effective way would be to put known socks into the training data to see if the computer catches them.
- Anyway, training the system won't violate anyone's privacy.
- As for the hit rate -- 75% would be way, way too low. We'd be looking for definitely over 90% or 95%, and probably more like 99.something percent. Cluebot doesn't get vandalism wrong 1 out of 4 times, neither should CluebotCU. Heck, if CluebotCU can't do better than 75%, it's not worth doing. A more interesting question is whether the 99.something% hit rate would be helpful to CUs, or whether that would only catch the socks that are so obvious you don't even need CU to recognize them. Only testing in the field would tell.
- But overall, AI looking for patterns, and checking subscriber info, edit patterns, and the content of edits, would be very helpful in tamping down on socking, because the computer can make far more checks than a human (a computer can look at 1,000 accounts and a 100,000 edits no problem, which no human can do), it'll be less biased than humans, and it can do it all without violating anyone's privacy -- in fact, lowering the privacy violations by lowering the false positives, sending only high-probability (90%+, not 75%+) to humans for review. And it can all be done with existing technology, and the WMF has the money to do it. Levivich (talk) 19:38, 23 November 2024 (UTC)
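The triage idea in the comment above (only high-probability matches get forwarded to human CUs) can be sketched in a few lines. The scores and threshold here are placeholders, not output from any real model:

```python
# Illustrative triage: a model scores candidate account pairs, and only
# pairs at or above a confidence threshold are surfaced for human review.
def triage(candidates, threshold=0.90):
    """Keep only (account_a, account_b, score) pairs meeting the threshold."""
    return [c for c in candidates if c[2] >= threshold]

candidates = [
    ("A1", "A2", 0.97),
    ("B1", "B2", 0.75),  # below threshold: never shown to a human reviewer
    ("C1", "C2", 0.92),
]
print(triage(candidates))
```

Raising the threshold lowers the review workload and the number of innocent accounts whose data reaches a human, at the cost of missing some true matches.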
- The more you write the clearer you make it that you don't understand checkuser or the WMF's policies regarding privacy. It's also clear that I'm not going to convince you that this is unworkable so I'll stop trying. Thryduulf (talk) 20:42, 23 November 2024 (UTC)
- Yeah it's weird how repeatedly insulting me hasn't convinced me yet. Levivich (talk) 20:57, 23 November 2024 (UTC)
- If you are unable to distinguish between reasoned disagreement and insults, then it's not at all weird that reasoned disagreement fails to convince you. Thryduulf (talk) 22:44, 23 November 2024 (UTC)
- @Levivich: Whatever existing data set we have has too many biases to be useful for this, and this is going to be prone to false positives. AI needs lots of data to be meaningfully trained. Also, AI here would be learning a function; when the output is not in fact a function of the input, there's nothing for an AI model to target, and this is very much the case here. On Wikidata, where I am a CheckUser, almost all edit summaries are automated even for human edits (just like clicking the rollback button is, or undoing an edit is by default), and it is very hard to meaningfully tell whether someone is a sock or not without highly case-specific analysis. No AI model is better than the data it's trained on.
- Also, about the privacy policy: you are completely incorrect when you say "Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs". Staff can only access that information on a need-to-know basis, just like CheckUsers, and data privacy laws like the EU's and California's mean you cannot just do whatever random thing you want with the information you collect from users about them.--Jasper Deng (talk) 21:56, 23 November 2024 (UTC)
- So which part of the wmf:Privacy Policy would prohibit the WMF from developing an AI that looks at server logs to find socks? Do you want me to quote to you the portions that explicitly disclose that the WMF uses personal information to develop tools and improve security? Levivich (talk) 22:02, 23 November 2024 (UTC)
- I mean yeah that would probably be more productive than snarky bickering BugGhost🦗👻 22:05, 23 November 2024 (UTC)
- @Levivich: Did you read the part where I mentioned privacy laws? Also, in this industry no one is allowed unfettered usage of private data even internally; there are internal policies that govern this that are broadly similar to the privacy policy. It's one thing to test a proposed tool on an IP address like Special:Contribs/2001:db8::/32, but it's another to train an AI model on it. Arguably an equally big privacy concern is the usage of new data from new users after the model is trained and brought online. The foundation will soon be hiding IP addresses by default even for anonymous users, and they will not undermine that mission through a tool like this. Ultimately, the Board of Trustees has to assume legal responsibility and liability for such a thing; put yourself in their position and think of whether they'd like the liability of something like this.--Jasper Deng (talk) 22:13, 23 November 2024 (UTC)
- So can you quote a part of the privacy policy, or a part of privacy laws, or anything, that would prohibit feeding server logs into a "Cluebot-CU" to find socking?
- Because I can quote the part of the wmf:Privacy Policy that allows it, and it's a lot:
Yeah that's a lot. Then there's this whole FAQ that says: "We may use your public contributions, either aggregated with the public contributions of others or individually, to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ..."
Because of how browsers work, we receive some information automatically when you visit the Wikimedia Sites ... This information includes the type of device you are using (possibly including unique device identification numbers, for some beta versions of our mobile applications), the type and version of your browser, your browser's language preference, the type and version of your device's operating system, in some cases the name of your internet service provider or mobile carrier, the website that referred you to the Wikimedia Sites, which pages you request and visit, and the date and time of each request you make to the Wikimedia Sites.
Put simply, we use this information to enhance your experience with Wikimedia Sites. For example, we use this information to administer the sites, provide greater security, and fight vandalism; optimize mobile applications, customize content and set language preferences, test features to see what works, and improve performance; understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites, and analyze trends. ...
We actively collect some types of information with a variety of commonly-used technologies. These generally include tracking pixels, JavaScript, and a variety of "locally stored data" technologies, such as cookies and local storage. ... Depending on which technology we use, locally stored data may include text, Personal Information (like your IP address), and information about your use of the Wikimedia Sites (like your username or the time of your visit). ... We use this information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services. ...
We and our service providers use your information ... to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ... To fight spam, identity theft, malware and other kinds of abuse. ... To test features to see what works, understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites and analyze trends. ...
When you visit any Wikimedia Site, we automatically receive the IP address of the device (or your proxy server) you are using to access the Internet, which could be used to infer your geographical location. ... We use this location information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services. For example, we use this information to provide greater security, optimize mobile applications, and learn how to expand and better support Wikimedia communities. ...
We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use, this Privacy Policy, or any Wikimedia Foundation or user community-based policies. ... We may also disclose your Personal Information if we reasonably believe it necessary to detect, prevent, or otherwise assess and address potential spam, malware, fraud, abuse, unlawful activity, and security or technical concerns. ... To facilitate their work, we give some developers limited access to systems that contain your Personal Information, but only as reasonably necessary for them to develop and contribute to the Wikimedia Sites. ...
It is important for us to be able to make sure everyone plays by the same rules, and sometimes that means we need to investigate and share specific users' information to ensure that they are.
For example, user information may be shared when a CheckUser is investigating abuse on a Project, such as suspected use of malicious "sockpuppets" (duplicate accounts), vandalism, harassment of other users, or disruptive behavior. If a user is found to be violating our Terms of Use or other relevant policy, the user's Personal Information may be released to a service provider, carrier, or other third-party entity, for example, to assist in the targeting of IP blocks or to launch a complaint to the relevant Internet Service Provider.
- So using IP addresses, etc., to develop new tools, to test features, to fight violations of the Terms of Use, and disclosing that info to Checkusers... all explicitly permitted by the Privacy Policy. Levivich (talk) 22:22, 23 November 2024 (UTC)
- @Levivich: "We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use" – "reasonably believed to be necessary" is not going to hold up in court when it's sweepingly applied to everyone. This doesn't even take into consideration the laws I mentioned, like GDPR. I'm not a lawyer, and I'm guessing neither are you. If you want to be the one assuming the legal liability for this, contact the board today and sign the contract. Even then they would probably not agree to such an arrangement. So you're preaching to the choir: only the foundation could even consider assuming this risk. Also, it's clear that you do not have a single idea of how developing something like this works if you think it can be done for $1 million. Something this complex has to be done right, and tech salaries and computing resources are expensive.--Jasper Deng (talk) 22:28, 23 November 2024 (UTC)
- What I am suggesting does not involve sharing everyone's data with Checkusers. It's pretty obvious that looking at their own server logs is "necessary to enforce or investigate potential violations of our Terms of Use". Five people is how big the WMF's wmf:Machine Learning team is; @ $200k each, $1m/year covers it. Five people is enough for that team to improve ORES, so another five-person team dedicated to "ORES-CU" seems a reasonable place to start. They could double that, and still have like $180M left over. Levivich (talk) 22:40, 23 November 2024 (UTC)
- @Levivich: Yeah no, lol. $200k each is not a very competitive total compensation, considering that that needs to include benefits, health insurance, etc. This doesn't include their manager or the hefty hardware required to run ML workflows. It doesn't include the legal support required given the data privacy law compliance needed. Capriciously looking at the logs does not count; accessing data of users the foundation cannot reasonably have said to be likely to cause abuse is not permissible. This all aside from the bias and other data quality issues at hand here. You can delude yourself all you want, but nature cannot be fooled. I'm finished arguing with you anyways, because this proposal is either way dead on arrival.--Jasper Deng (talk) 23:45, 23 November 2024 (UTC)
- @Jasper Deng, haggling over the math here isn't really important. You could quintuple the figures @Levivich gave and the Foundation would still have millions upon millions of dollars left over. -- asilvering (talk) 23:48, 23 November 2024 (UTC)
- @Asilvering: The point I'm making is Levivich does not understand the complexity behind this kind of thing and thus his arguments are not to be given weight by the closer. Jasper Deng (talk) 23:56, 23 November 2024 (UTC)
- As a statistician/data scientist, @Levivich is correct about the technical side of this—building an ML algorithm to detect sockpuppets would be pretty easy. Duplicate user algorithms like these are common across many websites. For a basic classification task like this (basically an ML 101 homework problem), I think $1 million is about right. As a bonus, the same tools could be used to identify and correct for possible canvassing or brigading, which behaves a lot like sockpuppetry from a statistical perspective. A similar algorithm is already used by Twitter's community notes feature.
- IANAL, so I can't comment on the legal side of this, and I can't comment on whether that money would be better-spent elsewhere since I don't know what the WMF budget looks like. Overall though, the technical implementation wouldn't be a major hurdle. – Closed Limelike Curves (talk) 20:44, 24 November 2024 (UTC)
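As a rough illustration of why commenters call this a basic classification task, here is a toy pairwise scorer using two of the signals mentioned in this thread (page overlap and "timecard" similarity). The features, weights, and data are invented for the example and are not anyone's actual method:

```python
# Toy sock-pair scorer: blend page-overlap (Jaccard) with hour-of-day
# ("timecard") similarity. Weights (0.6/0.4) are illustrative assumptions.
from collections import Counter
import math

def jaccard(a, b):
    """Jaccard similarity of two collections of page titles."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def hour_cosine(hours_a, hours_b):
    """Cosine similarity of the two accounts' hour-of-day histograms."""
    ca, cb = Counter(hours_a), Counter(hours_b)
    dot = sum(ca[h] * cb[h] for h in range(24))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def sock_score(acct_a, acct_b):
    """Weighted blend of page overlap and timecard similarity."""
    return (0.6 * jaccard(acct_a["pages"], acct_b["pages"])
            + 0.4 * hour_cosine(acct_a["hours"], acct_b["hours"]))

a = {"pages": {"Foo", "Bar", "Baz"}, "hours": [2, 3, 3, 4]}
b = {"pages": {"Bar", "Baz", "Qux"}, "hours": [3, 3, 4, 5]}
print(round(sock_score(a, b), 3))
```

A production system would learn the weights from labeled data rather than hard-coding them, but the shape of the problem (features in, probability out) is the same.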
- Third-party services like Sift.com provide this kind of algorithm-based account fraud protection as an alternative to building and maintaining internally. czar 23:41, 24 November 2024 (UTC)
- Building such a model is only a small part of a real production system. If this system is to operate on all account creations, it needs to be at least as reliable as the existing systems that handle account creations. As you probably know, data scientists developing such a model need to be supported by software engineers and site reliability engineers supporting the actual system. Then you have the problem of new sockers who are not on the list of sockmasters to check against. Non-English-language speakers often would be put at a disadvantage too. It's not as trivial as you make it out to be, thus I stand by my estimate.--Jasper Deng (talk) 06:59, 25 November 2024 (UTC)
- None of you have accounted for Hofstadter's law (a task always takes longer than you expect, even when you account for Hofstadter's law).
- I don't think we need to spend more time speculating about a system that WMF Legal is extremely unlikely to accept. Even if they did, it wouldn't exist until several years from now. Instead, let's try to think of things that we can do ourselves, or with only a very little assistance. Small, lightweight projects with full community control can help us now, and if we prove that ____ works, the WMF might be willing to adopt and expand it later. WhatamIdoing (talk) 23:39, 25 November 2024 (UTC)
- That's a mistake -- doing the same thing Wikipedia has been doing for 20+ years. The mistake is in leaving it to volunteers to catch sockpuppetry, rather than insisting that the WMF devote significant resources to it. And it's a mistake because the one thing we volunteers can't do, that the WMF can do, is comb through the server logs looking for patterns. Levivich (talk) 23:44, 25 November 2024 (UTC)
- Not sure about the "building an ML algorithm to detect sockpuppets would be pretty easy" part, but I admire the optimism. It is certainly the case that it is possible, and people have done it with a surprising level of success a very long time ago in ML terms e.g. https://doi.org/10.1016/j.knosys.2018.03.002. These projects tend to rely on the category graph to distinguish sock and non-sock sets for training, the categorization of accounts as confirmed or suspected socks. However, the category graph is woefully incomplete i.e. there is information in the logs that is not reflected in the graph, so ensuring that all ban evasion accounts are properly categorized as such might help a bit. Sean.hoyland (talk) 03:58, 26 November 2024 (UTC)
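The labeling approach Sean describes (deriving sock/non-sock training labels from the category graph) might look roughly like this. The category strings are illustrative, not the exact on-wiki category names:

```python
# Sketch: build binary training labels from sockpuppet categories.
# An account in any "... sockpuppets of X" category is labeled 1, else 0.
def label_accounts(account_categories):
    """Return {account: 1 if any sock category matches, else 0}."""
    return {
        acct: int(any("sockpuppets of" in cat.lower() for cat in cats))
        for acct, cats in account_categories.items()
    }

data = {
    "UserA": ["Confirmed sockpuppets of Example"],
    "UserB": ["WikiProject Trains members"],
}
print(label_accounts(data))
```

As the comment notes, labels built this way inherit the category graph's gaps: an uncategorized ban-evading account silently becomes a "non-sock" training example.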
- Thankfully, we wouldn't have to build an ML algorithm, we can just use one of the existing ones. Some are even open source. Or WMF could use a third party service like the aforementioned sift.com. Levivich (talk) 16:17, 26 November 2024 (UTC)
- Let me guess: Essentially, you would like their machine-learning team to use Sift's "AI-Powered Fraud Protection", which from what I can glance, handles "safeguarding subscriptions to defending digital content and in-app purchases" and "helps businesses reduce friction and stop sophisticated fraud attacks that gut growth", to provide the ability for us to "automatically checkuser all active users"? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:25, 26 November 2024 (UTC)
- The WMF already has the ability to "automatically checkuser all users" (the verb "checkuser" just means "look at the server logs"), I'm suggesting they use it. And that they use it in a sophisticated way, employing (existing, open source or commercially available) AI/ML technologies, like the same kind we already use to automatically revert vandalism. Contrary to claims here, doing so would not be illegal or even expensive (comparatively, for the WMF). Levivich (talk) 16:40, 26 November 2024 (UTC)
- So, in my attempt to get things set right and steer towards a consensus that is satisfactory, I sincerely follow-up: What lies beyond that in this vast, uncharted sea? And could this mean any more in the next 5 years? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:49, 26 November 2024 (UTC)
- What lies beyond is mw:Extension:SimilarEditors. Levivich (talk) 17:26, 26 November 2024 (UTC)
- So, @2601AC47, I think the answer to your question is "tell the WMF we really, really, really would like more attention to sockpuppetry and IP abuse from the ML team". -- asilvering (talk) 17:31, 26 November 2024 (UTC)
- Which I don't suppose someone can at the next board meeting on December 11? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 18:00, 26 November 2024 (UTC)
- I may also point to this, where they mention "development in other areas, such as social media features and machine learning expertise". 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:36, 26 November 2024 (UTC)
- e.g. m:Research:Sockpuppet_detection_in_Wikimedia_projects Sean.hoyland (talk) 17:02, 26 November 2024 (UTC)
- And that mentions Socksfinder, still in beta it seems. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 17:10, 26 November 2024 (UTC)
- 3 days! When I first posted my comment and some editors responded that I didn't know what I was talking about, it can't be done, it'd violate the privacy policy and privacy laws, WMF Legal would never allow it... I was wondering how long it would take before somebody pointed out that this thing that can't be done has already been done and has been under development for at least 7 years now.
- Of course it's already under development, it's pretty obvious that the same Wikipedia that developed ClueBot, one of the world's earlier and more successful examples of ML applications, would try to employ ML to fight multiple-account abuse. I mean, I'm obviously not gonna be the first person to think of this "innovation"!
- Anyway, it took 3 days. Thanks, Sean! Levivich (talk) 17:31, 26 November 2024 (UTC)
- Unlike what is being proposed, SimilarEditors only works based on publicly available data (e.g. similarities in editing patterns), and not IP data. To quote the page Sean linked, "in the model's current form, we are only considering public data, but most saliently private data such as IP addresses or user-agent information are features currently used by checkusers that could be later (carefully) incorporated into the models". So, not only does the current model not look at IP data, the research project also acknowledges that actually using such data should only be done in a "careful" way, because of those very same privacy policy issues quoted above. On the ML side, however, this does prove that it's being worked on, and I'm honestly not surprised at all that the WMF is working on machine learning-based tools to detect sockpuppets. Chaotic Enby (talk · contribs) 17:50, 26 November 2024 (UTC)
- Right. We should ask WMF to do the "later (carefully) incorporated into the models" part (especially since it's now later). BTW, the SimilarUsers API already pulls IP and other metadata. SimilarExtensions (a tool that uses the API) doesn't release that information to CheckUsers, by design. And that's a good thing, we can't just release all IPs to CheckUsers, it does indeed have to be done carefully. But user metadata can be used. What I'm suggesting is that the WMF should proceed to develop these types of tools (including the careful use of user metadata). Levivich (talk) 17:57, 26 November 2024 (UTC)
- Not really clear that they're pulling IP data from logged-in users. The relevant section reads: "USER_METADATA (203MB): for every user in COEDIT_DATA, this contains basic metadata about them (total number of edits in data, total number of pages edited, user or IP, timestamp range of edits)." This reads like they're collecting the username or IP depending on whether they're a logged-in user or an IP user. Chaotic Enby (talk · contribs) 18:14, 26 November 2024 (UTC)
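For readers parsing the dataset description quoted above, the USER_METADATA record could be modeled like this. The field names are a paraphrase of the quoted description, not the dataset's actual schema:

```python
# Hypothetical rendering of the USER_METADATA record described in the
# quoted dataset documentation. Field names are illustrative paraphrases.
from dataclasses import dataclass

@dataclass
class UserMetadata:
    user_or_ip: str   # username for logged-in users, IP string otherwise
    total_edits: int  # total number of edits in the data
    pages_edited: int # total number of distinct pages edited
    first_edit: str   # start of the timestamp range (ISO 8601)
    last_edit: str    # end of the timestamp range

rec = UserMetadata("ExampleUser", 120, 45,
                   "2024-01-01T00:00:00Z", "2024-11-01T12:00:00Z")
print(rec.user_or_ip, rec.total_edits)
```

Note the ambiguity the comment points out: a single `user_or_ip` field means IP addresses appear only where the editor was not logged in.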
- I assumed 1 million USD/year was accounting for Hofstadter's law several times over. Otherwise it feels wildly pessimistic. – Closed Limelike Curves (talk) 15:57, 26 November 2024 (UTC)
- A computer can be programmed to check for similarities or patterns in subscriber info (IP, etc), and in editing activity (time cards, etc), and content of edits and talk page posts (like the existing language similarity tool), with various degrees of certainty in the same way the Cluebot does with ORES when it's reverting vandalism. And the threshold can be set so it only forwards matches of a certain certainty to human CUs for review, so as not to overwhelm the humans. The WMF can make this happen with just $1 million of its $180 million per year (and it wouldn't be violating its own policies if it did so). Enwiki could ask for it, other projects might join too. Levivich (talk) 05:24, 23 November 2024 (UTC)
IP range 2600:1700:69F1:1410:0:0:0:0/64 blocked by a CU
The following discussion has been closed. Please do not modify it.
- Any such system would be subject to numerous biases or be easily defeatable. Such an automated anti-abuse system would have to be exclusively a foundation initiative as only they have the resources for such a monumental undertaking. It would need its own team of developers.--Jasper Deng (talk) 18:57, 23 November 2024 (UTC)
Absolutely no chance that this would pass. WP:SNOW, even though there isn't a flood of opposes. There are two problems:
- The existing CheckUser team barely has the bandwidth for the existing SPI load. Doing this on every single new user would be impractical and would enable WP:LTA's by diverting valuable CheckUser bandwidth.
- Even if we had enough CheckUsers, this would be a severe privacy violation absolutely prohibited under the Foundation privacy policy.
The vast majority of vandals and other disruptive users don't need CU involvement to deal with. There's very little to be gained from this.--Jasper Deng (talk) 18:36, 23 November 2024 (UTC)
- It is perhaps an interesting conversation to have but I have to agree that it is unworkable, and directly contrary to foundation-level policy which we cannot make a local exemption to. En.wp, I believe, already has the largest CU team of any WMF project, but we would need hundreds more people on that team to handle something like this. In the last round of appointments, the committee approved exactly one checkuser, and that one was a returning former member of the team. And there is the very real risk that if we appointed a whole bunch of new CUs, some of them would abuse the tool. Just Step Sideways from this world ..... today 18:55, 23 November 2024 (UTC)
- And it's worth pointing out that the Committee approving too few volunteers for Checkuser (regardless of whether you think they are or aren't) is not a significant part of this issue. There simply are not tens of people who are putting themselves forward for consideration as CUs. Since 2016, 54 applications (an average of about 6 per year) have been put forward for consideration by Functionaries (the highest was 9, the lowest was 2). Note this is total applications, not applicants (more than one person has applied multiple times), and is not limited to candidates who had a realistic chance of being appointed. Thryduulf (talk) 20:40, 23 November 2024 (UTC)
- The dearth of candidates has for sure been an ongoing thing; it's worth reminding admins that they don't have to wait for the committee to call for candidates: you can put your name forward at any time by emailing the committee. Just Step Sideways from this world ..... today 23:48, 24 November 2024 (UTC)
- Generally, I tend to get the impression from those who have checkuser rights that CU should be done as a last resort, and other, less invasive methods are preferred, and it would seem that indiscriminate use of it would be a bad idea, so I would have some major misgivings about this proposal. And given the ANI case, the less user information that we retain, the better (which is also probably why temporary accounts are a necessary and prudent idea despite other potential drawbacks). Abzeronow (talk) 03:56, 23 November 2024 (UTC)
- Oppose. A lot has already been written on the unsustainable workload for the CU team this would create and the amount of collateral damage; I'll add in the fact that our most notorious sockmasters in areas like PIA already use highly sophisticated methods to evade CU detection, and based on what I've seen at the relevant SPIs most of the blocks in these cases are made with more weight given to the behaviour, and even then only after lengthy deliberations on the matter. These sorts of sockmasters seem to have been in the OP's mind when the request was made, and I do not see automated CU being of any more use than current techniques against such dedicated sockmasters. And, as has been mentioned before, most cases of sockpuppetry (such as run-of-the-mill vandals and trolls using throwaway accounts for abuse) don't need CU anyways. JavaHurricane 08:17, 24 November 2024 (UTC)
- These are, unfortunately, fair points about the limits of CU and the many experienced and dedicated ban evading actors in PIA. CU information retention policy is also a complicating factor. Sean.hoyland (talk) 08:28, 24 November 2024 (UTC)
- As I said in my original post, recidivist socks often get better at covering their "tells" each time making behavioural detection increasingly difficult and meaning the entire burden falls on the honest user to convince an Admin to take an SPI case seriously with scarce evidence. After many years I'm tired of defending various pages from sock POV edits and if WMF won't make life easier then increasingly I just won't bother, I'm sure plenty of other users feel the same way. Mztourist (talk) 05:45, 26 November 2024 (UTC)
SimilarEditors
The development of mw:Extension:SimilarEditors -- the type of tool that could be used to do what Mztourist suggests -- has been "stalled" since 2023 and downgraded to low-priority in 2024, according to its documentation page and related phab tasks (see e.g. phab:T376548, phab:T304633, phab:T291509). Anybody know why? Levivich (talk) 17:43, 26 November 2024 (UTC)
- Honestly, the main function of that sort of thing seems to be compiling data that is already available on XTools and various editor interaction analyzers, and then presenting it nicely and neatly. I think that such a page could be useful as a sanity check, and it might even be worth having that sort of thing as a standalone toolforge app, but I don't really see why the WMF would make that particular extension a high priority. — Red-tailed hawk (nest) 17:58, 26 November 2024 (UTC)
- Well, it doesn't have to be that particular extension, but it seems to me that the entire "idea" has been stalled, unless they're working on another tool that I'm unaware of (very possible). (Or, it could be because of recent changes in domestic and int'l privacy laws that derailed their previous development advances, or it could be because of advancements in ML elsewhere making in-house development no longer practical.)
As to why the WMF would make this sort of problem a high priority, I'd say because the spread of misinformation on Wikipedia by sockpuppets is a big problem. Even without getting into the use of user metadata, just look at recent SPIs I filed, like Wikipedia:Sockpuppet investigations/Icewhiz/Archive#27 August 2024 and Wikipedia:Sockpuppet investigations/Icewhiz/Archive#09 October 2024. That involved no private data at all, but a computer could have done automatically, in seconds, what took me hours to do manually, and those socks could have been uncovered before they made thousands and thousands of edits spreading misinformation. If the computer looked at private data as well as public data, it would be even more effective (and would save CUs time as well). Seems to me to be a worthy expenditure of 0.5% or 1% of the WMF's annual budget. Levivich (talk) 18:09, 26 November 2024 (UTC)
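The kind of public-data comparison described above can be sketched very simply: represent each account as a vector of per-page edit counts and score the overlap with cosine similarity. This is only an illustration of the idea, not how SimilarEditors actually works; the account names and contribution histories below are invented.

```python
import math
from collections import Counter

def page_vector(contribs):
    """Build a per-page edit-count vector from (page, timestamp) pairs."""
    return Counter(page for page, _ in contribs)

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity of two sparse edit-count vectors (0 = disjoint, 1 = proportional)."""
    dot = sum(vec_a[p] * vec_b[p] for p in set(vec_a) & set(vec_b))
    norm_a = math.sqrt(sum(c * c for c in vec_a.values()))
    norm_b = math.sqrt(sum(c * c for c in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented contribution histories for two hypothetical accounts
account_a = [("Battle of X", "2024-08-01"), ("Battle of X", "2024-08-02"),
             ("Treaty of Y", "2024-08-03")]
account_b = [("Battle of X", "2024-09-01"), ("Treaty of Y", "2024-09-02"),
             ("Unrelated stub", "2024-09-03")]

score = cosine_similarity(page_vector(account_a), page_vector(account_b))
```

A real tool would of course compare one account against millions, which is where the compute and search-strategy questions raised elsewhere in this thread come in.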
- This looks really interesting. I don't really know how extensions are rolled out to individual wikis - can anyone with knowledge about that summarise if having this tool turned on (for check users/relevant admins) for en.wp is feasible? Do we need a RFC, or is this a "maybe wait several years for a phab ticket" situation? BugGhost🦗👻 18:09, 26 November 2024 (UTC)
- I find it amusing that ~4 separate users above are arguing that automatic identification of sockpuppets is impossible, impractical, and the WMF would never do it—and meanwhile, the WMF is already doing it. – Closed Limelike Curves (talk) 19:29, 27 November 2024 (UTC)
- So, discussion is over? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 19:31, 27 November 2024 (UTC)
- I think what's happening is that people are having two simultaneous discussions – automatic identification of sockpuppets is already being done, but what people say "the WMF would never do" is using private data (e.g. IP addresses) to identify them. Which adds another level of (ethical, if not legal) complications compared to what SimilarEditors is doing (only processing data everyone can access, but in an automated way). Chaotic Enby (talk · contribs) 07:59, 28 November 2024 (UTC)
- "automatic identification of sockpuppets is already being done" is probably an overstatement, but I agree that there may be a potential legal and ethical minefield between the Similarusers service that uses public information available to anyone from the databases after redaction of private information (i.e. coarse-grained sampling of revision timestamps combined with an attempt to quantify page intersection data), and a service that has access to the private information associated with a registered account name. Sean.hoyland (talk) 11:15, 28 November 2024 (UTC)
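The "coarse-grained sampling of revision timestamps" mentioned above can be illustrated with an hour-of-day activity profile: bucket each account's public edit timestamps by hour and measure how much the two profiles overlap. This is a toy sketch of the general idea, not the Similarusers implementation.

```python
from collections import Counter

def hour_profile(timestamps):
    """Normalised hour-of-day activity profile from ISO-8601 UTC timestamps."""
    hours = Counter(int(ts[11:13]) for ts in timestamps)
    total = sum(hours.values())
    return {h: n / total for h, n in hours.items()} if total else {}

def profile_overlap(profile_a, profile_b):
    """Bhattacharyya coefficient of two activity profiles (1.0 = identical)."""
    return sum((profile_a.get(h, 0.0) * profile_b.get(h, 0.0)) ** 0.5
               for h in range(24))
```

Two accounts active in the same few hours each day score near 1.0; accounts in disjoint time zones score near 0.0. As noted later in the thread, such markers mostly indicate people from similar countries, so this is at best one weak signal among many.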
- The WMF said they're planning on incorporating IP addresses and device info as well! – Closed Limelike Curves (talk) 21:21, 29 November 2024 (UTC)
- Yes, automatic identification of (these) sockpuppets is impossible. There are many reasons for this, but the simplest one is this: These types of tools require hundreds of edits – at minimum – to return any viable data, and the sort of sockmasters who get accounts up to that volume of edits know how to evade detection by tools that analyse public information. The markers would likely indicate people from similar countries – naturally, two Cypriots would be interested in Category:Cyprus and over time similar hour and day overlaps will emerge, but what's to let you know whether these are actual socks when they're evading technical analysis? You're back to square one. There are other tools such as mediawikiwiki:User:Ladsgroup/masz which I consider equally circumstantial; an analysis of myself returns a high likelihood of me being other administrators and arbitrators, while analysing an alleged sock currently at SPI returns the filer as the third most likely sockmaster. This is not commentary on the tools themselves, but rather simply the way things are. DatGuyTalkContribs 17:42, 28 November 2024 (UTC)
- Oh, fun! Too bad it's CU-restricted, I'm quite curious to know what user I'm most stylometrically similar to. -- asilvering (talk) 17:51, 28 November 2024 (UTC)
- That would be LittlePuppers and LEvalyn. DatGuyTalkContribs 03:02, 29 November 2024 (UTC)
- Fascinating! One I've worked with, one I haven't, both AfC reviewers. Not bad. -- asilvering (talk) 06:14, 29 November 2024 (UTC)
- Idk, the half dozen ARBPIA socks I recently reported at SPI were obvious af to me, as are several others I haven't reported yet. That may be because that particular sockfarm is easy to spot by its POV pushing and a few other habits; though I bet in other topic areas it's the same. WP:ARBECR helps because it forces the socks to make 500 edits minimum before they can start POV pushing, but still we have to let them edit for a while post-XC just to generate enough diffs to support an SPI filing. Software that combines tools like Masz and SimilarEditor, and does other kinds of similar analysis, could significantly reduce the amount of editor time required to identify and report them. Levivich (talk) 18:02, 28 November 2024 (UTC)
- I think it is possible, studies have demonstrated that it is possible, but it is true that having a sufficient number of samples is critical. Samples can be aggregated in some cases. There are several other important factors too. I have tried some techniques, and sometimes they work, or let's say they can sometimes produce results consistent with SPI results, better than random, but with plenty of false positives. It is also true that there are a number of detection countermeasures (that I won't describe) that are already employed by some bad actors that make detection harder. But I think the objective should be modest, to just move a bit in the right direction by detecting more ban evading accounts than are currently detected, or at least to find ways to reduce the size of the search space by providing ban evasion candidates. Taking the human out of the detection loop might take a while. Sean.hoyland (talk) 18:39, 28 November 2024 (UTC)
- If you mean it's never going to be possible to catch some sockpuppets—the best-hidden, cleverest, etc. ones—you're completely correct. But I'm guessing we could cut the amount of time SPI has to spend dramatically with just some basic checks. – Closed Limelike Curves (talk) 02:27, 29 November 2024 (UTC)
- I disagree. Empirically, the vast majority of time spent at SPI is not on finding possible socks, nor is it using the CheckUser tool on them, but rather it's the CU completed cases (of which there are currently 14 and I should probably stop slacking and get onto some) with non-definitive technical results waiting on an administrator to make the final determination on whether they're socks or not. Extension:SimilarUsers would concentrate various information that already exists (EIA, RoySmith's SPI tools) in one place, but I wouldn't say the accessibility of these tools is a cause of SPI backlog. An AI analysis tool to give an accurate magic number for likelihood? I'm anything but a Luddite, but still believe that's wishful thinking. DatGuyTalkContribs 03:02, 29 November 2024 (UTC)
- Something seems better than nothing in this context doesn't it? EIA and the Similarusers service don't provide an estimate of the significance of page intersections. An intersection on a page with few revisions or few unique actors or few pageviews etc. is very different from a page intersection on the Donald Trump page. That kind of information is probably something that could sometimes help, even just to evaluate the importance of intersection evidence presented at SPIs. It seems to me that any kind of assistance could help. And another thing about the number of edits is that too many samples can also present challenges related to noise, with signals getting smeared out, although the type of noise in a user's data can itself be a characteristic signal in some cases it seems. And if there are too few samples, you can generate synthetic samples based on the actual samples and inject them into spaces. Search strategy matters a lot. The space of everyone vs everyone is vast, so good luck finding potential matches in that space without a lot of compute, especially for diffs. But many socks inhabit relatively small subspaces of Wikipedia, at least in the 20%-ish of time (on average in PIA) they edit(war)/POV-push etc. in their topic of interest. So, choosing the candidate search space and search strategy wisely can make the problem much more tractable for a given topic area/subspace. Targeted fishing by picking a potential sock and looking for potential matches (the strategy used by the Similarusers service and CU I guess) is obviously a very different challenge than large-scale industrial fishing for socks in general. Sean.hoyland (talk) 04:08, 29 November 2024 (UTC)
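The point above about intersection significance can be sketched with an IDF-style weighting: a shared obscure page is far more informative than a shared high-traffic one, so down-weight pages that many people edit. The editor counts below are invented for illustration.

```python
import math

def weighted_intersection(pages_a, pages_b, unique_editors, total_editors):
    """Score the page intersection of two accounts, down-weighting pages
    with many unique editors (IDF-style: rare shared pages count more)."""
    score = 0.0
    for page in set(pages_a) & set(pages_b):
        editors = unique_editors.get(page, total_editors)
        score += math.log(total_editors / editors)
    return score

# Invented numbers: a busy page vs. an obscure one
unique_editors = {"Donald Trump": 50_000, "Obscure village": 5}
busy = weighted_intersection(["Donald Trump"], ["Donald Trump"],
                             unique_editors, 100_000)
rare = weighted_intersection(["Obscure village"], ["Obscure village"],
                             unique_editors, 100_000)
```

Here the intersection on the obscure page scores more than an order of magnitude higher than the one on the Donald Trump page, which matches the intuition described above.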
- And to continue the whining about existing tools, EIA and the Similarusers service use a suboptimal strategy in my view. If the objective is page intersection information for a potential sock against a sockmaster, and a ban evasion source has employed n identified actors so far e.g. almost 50 accounts for Icewhiz, the source's revision data should be aggregated for the intersection. This is not difficult to do using the category graph and the logs. Sean.hoyland (talk) 04:25, 29 November 2024 (UTC)
- There is so much more that could be done with the software. EIA gives you page overlaps (and isn't 100% accurate at it), but it doesn't tell you:
- how many times the accounts made the same edits (tag team edit warring)
- how many times they voted in the same formal discussions (RfC, AfD, RM, etc) and whether they voted the same way or different (vote stacking)
- how many times they use the same language and whether they use unique phraseology
- whether they edit at the same times of day
- whether they edit on the same days
- whether account creation dates (or start-of-regular-editing dates) line up with when other socks were blocked
- whether they changed focus after reaching XC and to what extent (useful in any ARBECR area)
- whether they "gamed" or "rushed" to XC (same)
- All of this (and more) would be useful to see in a combined way, like a dashboard. It might make sense to restrict access to such compilations of data to CUs, and the software could also throw in metadata or subscriber info in there, too (or not), and it doesn't have to reduce it all into a single score like ORES, but just having this info compiled in one place would save editors the time of having to compile it manually. If the software auto-swept logs for this info and alerted humans to any "high scores" (however defined, eg "matches across multiple criteria"), it would probably not only reduce editor time but also increase sock discovery. Levivich (talk) 04:53, 29 November 2024 (UTC)
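The dashboard idea above, keeping each signal visible instead of collapsing everything into one ORES-style number, could be sketched as follows. The signal names, the 0.8 threshold, and the "two or more high signals" rule are all invented for the example.

```python
def signal_report(signals, threshold=0.8, min_high=2):
    """Dashboard-style report over named 0-1 similarity signals
    (page overlap, hour profile, !vote agreement, XC gaming, ...).
    Every signal stays visible; the pair is flagged for human review
    only when several signals are high at once."""
    high = sorted(name for name, value in signals.items() if value >= threshold)
    return {"signals": signals,
            "high_signals": high,
            "flag_for_review": len(high) >= min_high}
```

A sweep that computes such a report for candidate pairs and alerts humans to "matches across multiple criteria" would fit the workflow Levivich describes, with the human still making the final call.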
- This is like one of my favorite strategies for meetings. Propose multiple things, many of which are technically challenging, then just walk out of the meeting.
- The 'how many times the accounts made the same edits' is probably do-able because you can connect reverted revisions to the revisions that reverted them using json data in the database populated as part of the tagging system, look at the target state reverted to and whether the revision was an exact revert. ...or maybe not without computing diffs, having just looked at an article with a history of edit warring. Sean.hoyland (talk) 07:43, 29 November 2024 (UTC)
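The exact-revert case is indeed the tractable one: a revision that restores an earlier state has the same content hash as that earlier revision, and the MediaWiki API exposes per-revision SHA-1 hashes (rvprop=ids|user|sha1). A minimal sketch over an already-fetched history, consistent with the caveat above that partial reverts would still need diffs:

```python
def exact_reverts(history):
    """Find exact reverts in a page history: a revision whose content hash
    equals an earlier revision's restores that earlier state.
    `history` is a chronological list of (rev_id, user, sha1) tuples."""
    first_seen = {}   # sha1 -> rev_id that first produced that content
    reverts = []      # (reverting rev_id, reverting user, restored rev_id)
    for rev_id, user, sha1 in history:
        if sha1 in first_seen:
            reverts.append((rev_id, user, first_seen[sha1]))
        else:
            first_seen[sha1] = rev_id
    return reverts
```

Counting how often two accounts appear as the reverting user restoring each other's versions would give the tag-team edit-warring signal from the list above, for exact reverts at least.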
- I agree with Levivich that automated, privacy-protecting sock-detection is not a pipe dream. I proposed a system something like this in 2018, see also here, and more recently here. However, it definitely requires a bit of software development and testing. It also requires the community and the foundation devs or product folks to prioritize the idea. Andre🚐 02:27, 10 December 2024 (UTC)
- Comment. For some time I have vehemently suspected that this site is crawling with massive numbers of sockpuppets, that the community seems to be unable or unwilling to recognise probable sockpuppets for what they are, and that it is not feasible to send them to SPI one at a time. I see a large number of accounts that are sleepers, or that have low edit counts, trying to do things that are controversial or otherwise suspicious. I see them showing up at discussions in large numbers and in quick succession, offering !votes consisting of interpretations of our policies and guidelines that may not reflect consensus, or other statements that may not be factually accurate.
- I think the solution is simple: when closing community discussions, admins should look at the edit count of each !voter when determining how much weight to give his !vote. The lower the edit count, the greater the level of sleeper behaviour, and the more controversial the subject of the discussion is amongst the community, the less weight should be given to the !vote.
- For example, if an account with less than one thousand edits !votes in a discussion about 16th century Tibetan manuscripts, we may well be able to trust that !vote, because the community does not care about such manuscripts. But if the same account !votes on anything connected with "databases" or "lugstubs", we should probably give that !vote very little weight, because that was the subject of a massive dispute amongst the community, and any discussion on that subject is quite likely to be crawling with socks on both sides. The feeling is that, if you want to be taken seriously in such a controversial discussion, you need to make enough edits to prove that you are a real person, and not a sock. James500 (talk) 15:22, 12 December 2024 (UTC)
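The weighting rule proposed above could be expressed as a simple function a closer might apply informally. The 1,000-edit threshold and the linear scaling are invented for the sketch; they are not anyone's actual policy.

```python
def vote_weight(edit_count, controversial, full_weight_at=1000):
    """Illustrative !vote-weighting rule: full weight in uncontroversial
    discussions, weight scaled by edit count in controversial ones.
    Thresholds are invented for the example."""
    if not controversial or edit_count >= full_weight_at:
        return 1.0
    return edit_count / full_weight_at
```

As the reply below notes, the weakness of any such rule is that edit count says little about sock probability, since some sockmasters make thousands of edits while others burn through disposable low-edit accounts.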
- The site presumably has a large number of unidentified sockpuppets. As for the identified ban evading accounts, accounts categorized or logged as socks, if you look at 2 million randomly selected articles for the 2023-10-07 to 2024-10-06 year, just under 2% of the revisions are by ban evading actors blocked for sockpuppetry (211,546 revisions out of 10,732,361). A problem with making weight dependent on edit count is that the edit count number does not tell you anything about the probability that an account is a sock. Some people use hundreds of disposable accounts, making just a few edits with each account. Others stick around and make thousands of edits before they are detected. Also, Wikipedia provides plenty of tools that people can use to rapidly increase their edit count. Sean.hoyland (talk) 16:12, 12 December 2024 (UTC)
- I strongly oppose any idea of mass-CUing any group of users, and I'm pretty sure the WMF does too. This isn't the right way to fight sockpuppets. QuicoleJR (talk) 14:35, 15 December 2024 (UTC)
- Can I ask why? Is it a privacy-based concern? IPs are automatically collected and stored for 90 days, and maybe for years in the backups, regardless of CUs. That's a 90 day window that a machine could use to do something with them without anyone running a CU and without anyone having to see what the machine sees. Sean.hoyland (talk) 15:05, 15 December 2024 (UTC)
- @Levivich—one situation where I think we could pull a lot of data, and probably detect tons of sockpuppets, is !votes like RfAs and RfCs. Those have a lot of data, in addition to a very strong incentive for socking—you'd expect to see a bimodal distribution where most accounts have moderately-correlated views, but a handful have extremely strong-correlations (always !voting the same way), more than could plausibly happen by chance or by overlapping views. For accounts in the latter group, we'd have strong grounds to suspect collusion/canvassing or socking.
- RfAs are already in a very nice machine-readable format. RfCs aren't, but most could easily be made machine-readable (by adopting a few standardized templates). We could also build a tool for semi-automated recoding of old RfCs to get more data. – Closed Limelike Curves (talk) 18:56, 16 December 2024 (UTC)
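The bimodal-distribution idea above reduces to computing pairwise !vote agreement over shared discussions and flagging the extreme tail. A minimal sketch with invented thresholds; real cutoffs would need to be calibrated against the base rate of agreement between honest editors with overlapping views.

```python
from itertools import combinations

def agreement(votes_a, votes_b):
    """Fraction of shared discussions in which two accounts !voted alike;
    None if they never overlapped. Votes are dicts: discussion -> position."""
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return None
    return sum(votes_a[d] == votes_b[d] for d in shared) / len(shared)

def flag_pairs(all_votes, min_shared=5, min_agreement=0.95):
    """Flag account pairs whose agreement across many shared !votes is
    implausibly high (thresholds invented for the sketch)."""
    flagged = []
    for (a, va), (b, vb) in combinations(sorted(all_votes.items()), 2):
        shared = set(va) & set(vb)
        if len(shared) >= min_shared and agreement(va, vb) >= min_agreement:
            flagged.append((a, b, agreement(va, vb)))
    return flagged
```

High agreement alone cannot distinguish socking from canvassing or simply like-minded editors, so anything flagged would still need the human behavioural review discussed throughout this thread.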
- Would that data help with the general problem? If there are a lot of socks on an RfA, I'd expect that to be picked up by editors. Those are very well-attended. The same may apply to many RfCs. Perhaps the less well-attended ones might be affected, but the main challenge is article edits, which will not be similarly structured. CMD (talk) 19:13, 16 December 2024 (UTC)
Would that data help with the general problem? If there are a lot of socks on an RfA, I'd expect that to be picked up by editors.
- Given we've had situations of sockpuppets being made admins themselves, I'm not too sure of this myself. If someone did create a bunch of socks, as some people have alleged in this thread, it'd be weird of them not to use those socks to influence policy decisions. I'm pretty skeptical, but I do think investigating would be a good idea (if nothing else because of how important it is—even the possibility of substantial RfA/RfC manipulation is quite bad, because it undermines the whole idea of consensus). – Closed Limelike Curves (talk) 21:04, 16 December 2024 (UTC)
What do we do with this information?
I think we've put the cart before the horse here a bit. While we've established it's possible to detect most sockpuppets automatically—and the WMF is already working on it—it's not clear what this would actually achieve, because having multiple accounts isn't against the rules. I think we'd need to establish a set of easy-to-enforce boundaries for people using multiple accounts. My proposal is to keep it simple—two accounts controlled by the same person can't edit the same page (or participate in the same discussion) without disclosing they're the same editor.– Closed Limelike Curves (talk) 04:41, 14 December 2024 (UTC)
- This is already covered by WP:LEGITSOCK I think. Andre🚐 05:03, 14 December 2024 (UTC)
- And as there are multiple legitimate ways to disclose, not all of which are machine readable, any automatically generated list is going to need human review. Thryduulf (talk) 10:13, 14 December 2024 (UTC)
- Yes, that's definitely the case, an automatic sock detection should probably never be an autoblock, or at least not unless there is a good reason in that specific circumstance, like a well-trained filter for a specific LTA. Having the output of automatic sock detection should still be restricted to CU/OS or another limited user group who can be trusted to treat possible user-privacy-related issues with discretion, and have gone through the appropriate legal rigmarole. There could also be some false positives or unusual situations when piloting a program like this. For example, I've seen dynamic IPs get assigned to someone else after a while, which is unlikely but not impossible depending on how an ISP implements DHCP, though I guess collisions become less common with IPV6. Or if the fingerprinting is implemented with a lot of datapoints to reduce the likelihood of false positives. Andre🚐 10:31, 14 December 2024 (UTC)
- I think we are probably years away from being able to rely on autonomous agents to detect and block socks without a human in the loop. For now, people need as much help as they can get to identify ban evasion candidates. Sean.hoyland (talk) 10:51, 14 December 2024 (UTC)
or at least not unless there is a good reason in that specific circumstance,
- Yep, basically I'm saying we need to define "good reason". The obvious situation is automatically blocking socks of blocked accounts. I also think we should just automatically prevent detected socks from editing the same page (ideally make it impossible, to keep it from being done accidentally). – Closed Limelike Curves (talk) 17:29, 14 December 2024 (UTC)
Requiring registration for editing
- Note: This section was split off from "CheckUser for all new users" (permalink) and the "parenthetical comment" referred to below is:
(Also, email-required registration and get rid of IP editing.)
—03:49, 26 November 2024 (UTC)
@Levivich, about your parenthetical comment on requiring registration:
Part of the eternally unsolvable problem is that new editors are frankly bad at it. I can give examples from my own editing: Create an article citing a personal blog post as the main source? Check. Merge two articles that were actually different subjects? Been there, done that, got the revert. Misunderstand and mangle wikitext? More times than I can count. And that's after I created my account. Like about half of experienced editors, I edited as an IP first, fixing a typo here or reverting some vandalism there.
But if we don't persist through these early problems, we don't get experienced editors. And if we don't get experienced editors, Wikipedia will die.
Requiring registration ("get rid of IP editing") shrinks the number of people who edit. The Portuguese Wikipedia banned IPs only from the mainspace three years ago. Have a look at the trend. After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. The number of contributions has dropped, too. They went from 160K–210K edits per month down to 140K in most months.
Some of the experienced editors have said that they like this. No IPs means less impulsive vandalism, and the talk pages are stable if you want to talk to the editor. Fewer newbies means I don't "have to" clean up after so many mistake-makers! Fewer editors, and especially fewer inexperienced editors, is more convenient – in the short term. But I wonder whether they're going to feel the same way a decade from now, when their community keeps shrinking, and they start wondering when they will lose critical mass.
The same thing happens in the real world, by the way. Businesses want to hire someone with experience. They don't want to train the helpless newbie. And then after years of everybody deciding that training entry-level workers is Somebody else's problem, they all look around and say: Where are all the workers that I need? Why didn't someone else train the next generation while I was busy taking the easy path?
In case you're curious, there is a Wikipedia that puts all of the IP and newbie edits under "PC" type restrictions. Nobody can see the edits until they've been approved by an experienced editor. The rate of vandalism visible to ordinary readers is low. Experienced editors love the level of control they have. Have a look at what's happened to the size of their community during the last decade. Is that what you want to see here? If so, we know how to make that happen. The path to that destination even looks broad, easy, and paved with all kinds of good intentions. WhatamIdoing (talk) 04:32, 23 November 2024 (UTC)
- Size isn't everything... what happened to their output--the quality of their encyclopedias--after they made those changes? Levivich (talk) 05:24, 23 November 2024 (UTC)
- Well, I can tell you objectively that the number of edits declined, but "quality" is in the eye of the beholder. I understand that the latter community has the lowest use of inline citations of any mid-size or larger Wikipedia. What's now yesterday's TFA there wouldn't even be rated B-class here due to whole sections not having any ref tags. In terms of citation density, their FA standard is currently where ours was >15 years ago.
- But I think you have missed the point. Even if the quality has gone up according to the measure of your choice, if the number of contributors is steadily trending in the direction of zero, what will the quality be when something close to zero is reached? That community has almost halved in the last decade. How many articles are out of date, or missing, because there simply aren't enough people to write them? A decade from now, with half as many editors again, how much worse will the articles be? We're none of us idiots here. We can see the trend. We know that people die. You have doubtless seen this famous line:
All men are mortal. Socrates is a man. Therefore, Socrates is mortal.
- I say:
All Wikipedia editors are mortal. Dead editors do not maintain or improve Wikipedia articles. Therefore, maintaining and improving Wikipedia requires editors who are not dead.
- – and, memento mori, we are going to die, my friend. I am going to die. If we want Wikipedia to outlive us, we cannot be so shortsighted as to care only about the quality today, and never the quality the day after we die. WhatamIdoing (talk) 06:13, 23 November 2024 (UTC)
- Trends don't last forever. Enwiki's active user count decreased from its peak over a few years, then flattened out for over a decade. The quality increased over that period of time (by any measure). Just because these other projects have shed users doesn't mean they're doomed to have zero users at some point in the future. And I think there's too many variables to know how much any particular change made on a project affects its overall user count, nevermind the quality of its output. Levivich (talk) 06:28, 23 November 2024 (UTC)
- If the graph to the right accurately reflects the age distribution of Wikipedia users, then a large chunk of the user base will die off within the next decade or two. Not to be dramatic, but I agree that requiring registration to edit, which will discourage readers from editing in the first place, will hasten the project's decline.... Some1 (talk) 14:40, 23 November 2024 (UTC)
- 😂 Seriously? What do you suppose that chart looked like 20 years ago, and then what happened? Levivich (talk) 14:45, 23 November 2024 (UTC)
- There are significantly more barriers to entry than there were 20 years ago, and over that time the age profile has increased (quite significantly iirc). Adding more barriers to entry is not the way to solve the issues caused by barriers to entry. Thryduulf (talk) 15:50, 23 November 2024 (UTC)
- "PaperQA2 writes cited, Wikipedia style summaries of scientific topics that are significantly more accurate than existing, human-written Wikipedia articles" - maybe the demographics of the community will change. Sean.hoyland (talk) 16:30, 23 November 2024 (UTC)
- That talks about LLM usage in articles, not the users. 2601AC47 (talk|contribs) Isn't a IP anon 16:34, 23 November 2024 (UTC)
- Or you could say it's about a user called PaperQA2 that writes Wikipedia articles significantly more accurate than articles written by other users. Sean.hoyland (talk) 16:55, 23 November 2024 (UTC)
- No, it is very clearly about a language model. As far as I know, PaperQA2, or WikiCrow (the generative model using PaperQA2 for question answering), has not actually been making any edits on Wikipedia itself. Chaotic Enby (talk · contribs) 16:58, 23 November 2024 (UTC)
- That is true. It is not making any edits on Wikipedia itself. There is a barrier. But my point is that in the future that barrier may not be there. There may be users like PaperQA2 writing articles better than other users and the demographics will have changed to include new kinds of users, much younger than us. Sean.hoyland (talk) 17:33, 23 November 2024 (UTC)
- And who will never die off! Levivich (talk) 17:39, 23 November 2024 (UTC)
- But which will not be Wikipedia. WhatamIdoing (talk) 06:03, 24 November 2024 (UTC)
- In re "What do you suppose that chart looked like 20 years ago": I believe that the numbers, very roughly, are that the community has gotten about 10 years older, on average, than it was 20 years ago. That is, we are bringing in some younger people, but not at a rate that would allow us to maintain the original age distribution. (Whether the original age distribution was a good thing is a separate consideration.) WhatamIdoing (talk) 06:06, 24 November 2024 (UTC)
- I like looking at the en.wikipedia user retention graph hosted on Toolforge (for anyone who might go looking for it later, there's a link to it at Wikipedia:WikiProject Editor Retention § Resources). It shows histograms of how many editors have edited in each month, grouped by all the editors who started editing in the same month. The data is noisy, but it does seem to show an increase in editing tenure since 2020 (when the pursuit of a lot of solo hobbies picked up, of course). Prior to that, there does seem to be a bit of slow growth in tenure length since the lowest point around 2013. isaacl (talk) 17:18, 23 November 2024 (UTC)
- The trend is a bit clearer when looking at the retention graph based on those who made at least 10 edits in a month. (To see the trend when looking at the retention graph based on 100 edits in a month, the default colour range needs to be shifted to accommodate the smaller numbers.) isaacl (talk) 17:25, 23 November 2024 (UTC)
- I'd say that the story there is: Something amazing happened in 2006. Ours (since both of us registered our accounts that year) was the year from which people stuck around. I think that would be just about the time that the wall o' automated rejection really got going. There was some UPE-type commercial pressure, but it didn't feel unmanageable. It looks like an inflection point in retention. WhatamIdoing (talk) 06:12, 24 November 2024 (UTC)
- I don't think something particularly amazing happened in 2006. I think the rapid growth in articles starting in 2004 attracted a large land rush of editors as Wikipedia became established as a top search result. The cohort of editors at that time had the opportunity to cover a lot of topics for the first time on Wikipedia, requiring a lot of co-ordination, which created bonds between editors. As topic coverage grew, there were fewer articles that could be more readily created by generalists, the land rush subsided, and there was less motivation for new editors to persist in editing. Boom-bust cycles are common for a lot of popular things, particularly in tech where newer, shinier things launch all the time. isaacl (talk) 19:07, 24 November 2024 (UTC)
- Ah yes, that glorious time when we gained an article on every Pokemon character and, it seems, every actor who was ever credited in a porn movie. Unfortunately, many of the editors I bonded with then are no longer active. Some are dead, some finished school, some presumably burned out, at least one went into the ministry. Donald Albury 23:49, 26 November 2024 (UTC)
- 😂 Seriously? What do you suppose that chart looked like 20 years ago, and then what happened? Levivich (talk) 14:45, 23 November 2024 (UTC)
Have a look at what happened to the size of their community.
- I'm gonna be honest: eyeballing it, I don't actually see much (if any) difference with enwiki's activity. "Look at this graph" only convinces people when the dataset passes the interocular trauma test (e.g. the hockey stick).
- On the other hand, if there's a dataset of "when did $LANGUAGEwiki adopt universal pending changes protections", we could settle this argument once and for all using a real statistical model that can deliver precise effect sizes on activity. Maybe then we can all finally drop the stick. – Closed Limelike Curves (talk) 18:08, 26 November 2024 (UTC)
This particular idea will not pass, but the binary developing in the discussion is depressing. A bargain where we allow IPs to edit (or unregistered users generally when IPs are masked), and therefore will sit on our hands when dealing with abuse and even harassment is a grim one. Any steps taken to curtail the second half of that bargain would make the first half stronger, and I am generally glad to see thoughts about it, even if they end up being impractical. CMD (talk) 02:13, 24 November 2024 (UTC)
- I don't want us to sit on our hands when we see abuse and harassment. I think our toolset is about 20 years out of date, and I believe there are things we could do that will help (e.g., mw:Temporary accounts, cross-wiki checkuser tools for Stewards, detecting and responding to a little bit more information about devices/settings [perhaps, e.g., whether an edit is being made from a private/incognito window]). WhatamIdoing (talk) 06:39, 24 November 2024 (UTC)
- Temporary accounts will help with the casual vandalism, but they’re not going to help with abuse and harassment. If it limits the ability to see ranges, it will make issues slightly worse. CMD (talk) 07:13, 24 November 2024 (UTC)
- I'm not sure what the current story is there, but when I talked to the team last (i.e., in mid-2023), we were talking about the value of a tool that would do range-related work. For various reasons, this would probably be Toolforge instead of MediaWiki, and it would probably be restricted (e.g., to admins, or to a suitable group chosen by each community), but the goal was to make it require less manual work, particularly for cross-wiki abuse, and to be able to aggregate some information without requiring direct disclosure of some PII. WhatamIdoing (talk) 23:56, 25 November 2024 (UTC)
Oh look, misleading statistics! "The Portuguese Wikipedia banned IPs only from the mainspace three years ago. Have a look at the trend. After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. " Of course you have a spike in new registrations soon after you stop allowing IP edits, and you can't sustain that spike. That is not evidence of anything. It would have been more honest and illustrative to show the graph before and after the policy change, not only afterwards, e.g. thus. Oh look, banning IP editing has resulted in on average some 50% more registered editors than before the ban. Number of active editors is up 50% as well[22]. The number of new pages has stayed the same[23]. Number of edits is down, yes, but how much of this is due to less vandalism / vandalism reverts? A lot apparently, as the count of user edits has stayed about the same[24]. Basically, from those statistics, used properly, it is impossible to detect any issues with the Portuguese Wikipedia due to the banning of IP editing. Fram (talk) 08:55, 26 November 2024 (UTC)
- "how much of this is due to less vandalism / vandalism reverts?" That's a good question. Do we have some data on this? Jo-Jo Eumerus (talk) 09:20, 26 November 2024 (UTC)
- @Jo-Jo Eumerus:, the dashboard is here although it looks like they stopped reporting the data in late 2021. If you take "Number of reverts" as a proxy for vandalism, you can see that the block shifted the number of reverts from a higher equilibrium to a lower one, while overall non-reverted edits does not seem to have changed significantly during that period. CMD (talk) 11:44, 28 November 2024 (UTC)
- Upon thinking, it would be useful to know how many good edits are done by IP. Or as is, unreverted edits. Jo-Jo Eumerus (talk) 14:03, 30 November 2024 (UTC)
- I agree that one should expect a spike in registration. (In fact, I have suggested a strictly temporary requirement to register – a few hours, even – to push some of our regular IPs into creating accounts.) But once you look past that initial spike, the trend is downward. WhatamIdoing (talk) 05:32, 29 November 2024 (UTC)
But once you look past that initial spike, the trend is downward.
- I still don't see any evidence that this downward trend is unusual. Apparently the WMF did an analysis of ptwiki and didn't find evidence of a drop in activity. Net edits (non-revert edits standing for at least 48 hours) increased by 5.7%, although edits across other wikis increased slightly more. The impression I get is that any effects are small either way—the gains from freeing up anti-vandalism resources basically offset the cost of some IP editors not signing up.
- TBH this lines up with what I'd expect. Very few people I talk to cite issues like "creating an account" as a major barrier to editing Wikipedia. The most common barrier I've heard from people who tried editing and gave it up is "Oh, I tried, but then some random admin reverted me, linked me to MOS:OBSCURE BULLSHIT, and told me to go fuck myself but with less expletives." – Closed Limelike Curves (talk) 07:32, 29 November 2024 (UTC)
Not really obvious, and not more or even less so in Portuguese wikipedia [25] than in e.g. Enwiki[26], FRwiki[27], NLwiki[28], ESwiki[29], Svwiki[30]... So, once again, these statistics show no issue at all with disabling IP editing on Portuguese Wikipedia. Fram (talk) 10:38, 29 November 2024 (UTC)
Aside from the obvious loss of good 'IP' editors, I think there's a risk of unintended consequences from 'stopping vandalism' at all; 'vandalism' and 'disruptive editing' from IP editors (or others) isn't necessarily a bad thing, long term. Even the worst disruptive editors 'stir the pot' of articles, bringing attention to articles that need it and otherwise would have gone unnoticed. As someone who mostly just trawls through recent changes, I can remember dozens of times where an IP, or brand new, user comes along and breaks an article entirely, but their edit leads inexorably to the article being improved. Sometimes there is a glimmer of a good point in their edit that I was able to express better than they were, maybe in a more balanced or neutral way. Sometimes they make an entirely inappropriate edit, but it brings the article to the top of the list, and upon reading it I notice a number of other, previously missed, problems in the article. Sometimes, having reverted a disruptive change, I just go and add some sources or fix a few typos in the article before I go on my merry way. You might think 'Ah, but Random article would let you find those problems too.' But Random article is, well, random. IP editors are more targeted, and the fact that someone felt the need to disparage a certain person's mother brings attention to an article about someone who is, unbeknownst to us editors, particularly contentious in the world of Czech Jazz Flautists, so there is a lot there to fix. By stopping people making these edits, we risk a larger proportion of articles becoming entirely stagnant. JeffUK 15:00, 9 December 2024 (UTC)
- I feel that the glassmaker has been too clever by half here. "Ahh, but vandalism of articles stimulates improvements to those articles." If the analysis ends there, I have no objections. But if, on the other hand, you come to the conclusion that it is a good thing to vandalize articles, that it causes information to circulate, and that the encouragement of editing in general will be the result of it, you will oblige me to call out, "Halt! Your theory is confined to that which is seen; it takes no account of that which is not seen." If I were to pay a thousand people to vandalize Wikipedia articles full-time, bringing more attention to them, would I be a hero or villain? If vandalism is good, why do we ban vandals instead of thanking them? Because vandalism is bad—every hour spent cleaning up after a vandal is one not spent writing a new article or improving an existing one.
- On targeting: vandals are more targeted than a "random article", but are far more destructive than basic tools for prioritizing content, and less effective than even very basic prioritization tools like sorting articles by total views. – Closed Limelike Curves (talk) 19:11, 9 December 2024 (UTC)
- I mean, I only said Vandalism "isn't necessarily a bad thing, long term", I don't think it's completely good, but maybe I should have added 'in small doses', fixing vandalism takes one or two clicks of the mouse in most cases and it seems, based entirely on my anecdotal experience, to sometimes have surprisingly good consequences; intuitively, you wouldn't prescribe vandalism, but these things have a way of finding a natural balance, and what's intuitive isn't necessarily what's right. One wouldn't prescribe dropping asteroids on the planet you're trying to foster life upon after you finally got it going, but we can be pretty happy that it happened! - And 'vandalism' is the very worst of what unregistered (and registered!) users get up to, there are many, many more unambiguously positive contributors than unambiguously malicious. JeffUK 20:03, 9 December 2024 (UTC)
intuitively, you wouldn't prescribe vandalism
- Right, and I think this is mainly the intuition I wanted to invoke here—"more vandalism would be good" is a bit too galaxy-brained a take for me to find it compelling without some strong empirical evidence to back it up.
- Although TBH, I don't see this as a big deal either way. We already have to review and check IP edits for vandalism; the only difference is whether that content is displayed while we wait for review (with pending changes, the edit is hidden until it's reviewed; without it, the edit is visible until someone reviews and reverts it). This is unlikely to substantially affect contributions (the only difference on the IP's end is they have to wait a bit for their edit to show up) or vandalism (since we already de facto review IP edits). – Closed Limelike Curves (talk) 04:29, 14 December 2024 (UTC)
Revise Wikipedia:INACTIVITY
Point 1 of Procedural removal for inactive administrators which currently reads "Has made neither edits nor administrative actions for at least a 12-month period" should be replaced with "Has made no administrative actions for at least a 12-month period". The current wording of 1. means that an Admin who takes no admin actions keeps the tools provided they make at least a few edits every year, which really isn't the point. The whole purpose of adminship is to protect and advance the project. If an admin isn't using the tools then they don't need to have them. Mztourist (talk) 07:47, 4 December 2024 (UTC)
Endorsement/Opposition (Admin inactivity removal)
- Support as proposer. Mztourist (talk) 07:47, 4 December 2024 (UTC)
- Oppose - this would create an unnecessary barrier to admins who, for real life reasons, have limited engagement for a bit. Asking the tools back at BN can feel like a faff. Plus, logged admin activity is a poor guide to actual admin activity. In some areas, maybe half of actions aren't logged? —Femke 🐦 (talk) 19:17, 4 December 2024 (UTC)
- Oppose. First, not all admin actions are logged as such. One example which immediately comes to mind is declining an unblock request. In the logs, that's just a normal edit, but it's one only admins are permitted to make. That aside, if someone has remained at least somewhat engaged with the project, they're showing they're still interested in returning to more activity one day, even if real-life commitments prevent them from it right now. We all have things come up that take away our available time for Wikipedia from time to time, and that's just part of life. Say, for example, someone is currently engaged in a PhD program, which is a tremendously time-consuming activity, but they still make an edit here or there when they can snatch a spare moment. Do we really want to discourage that person from coming back around once they've completed it? Seraphimblade Talk to me 21:21, 4 December 2024 (UTC)
- We could declare specific types of edits which count as admin actions despite being mere edits. It should be fairly simple to write a bot which checks if an admin has added or removed specific texts in any edit, or made specific modifications to pages. Checking for protected edits can be a little harder (we need to check for protection at the time of edit, not at the time of the check), but even this can be managed. Edits to pages which match specific regular expression patterns should be trivial to detect. Animal lover |666| 11:33, 9 December 2024 (UTC)
- Oppose There's no indication that this is a problem that needs fixing. ⇒SWATJester Shoot Blues, Tell VileRat! 00:55, 5 December 2024 (UTC)
- Support Admins who don't use the tools should not have the tools. * Pppery * it has begun... 03:55, 5 December 2024 (UTC)
- Oppose While I have never accepted "not all admin actions are logged" as a realistic reason for no logged actions in an entire year, I just don't see what problematic group of admins this is in response to. Previous tweaks to the rules were in response to admins that seemed to be gaming the system, that were basically inactive and when they did use the tools they did it badly, etc. We don't need a rule that isn't pointed at a provable, ongoing problem. Just Step Sideways from this world ..... today 19:19, 8 December 2024 (UTC)
- Oppose If an admin is still editing, it's not unreasonable to assume that they are still up to date with policies, community norms etc. I see no particular risk in allowing them to keep their tools. Scribolt (talk) 19:46, 8 December 2024 (UTC)
- Oppose: It feels like some people are trying to accelerate admin attrition and I don't know why. This is a solution in search of a problem. Gnomingstuff (talk) 07:11, 10 December 2024 (UTC)
- Oppose Sure there is a problem, but the real problem I think is that it is puzzling why they are still admins. Perhaps we could get them all to make a periodic 'declaration of intent' or some such every five years that explains why they want to remain an admin. Alanscottwalker (talk) 19:01, 11 December 2024 (UTC)
- Oppose largely per scribolt. We want to take away mops from inactive accounts where there is a risk of them being compromised, or having got out of touch with community norms, this proposal rather targets the admins who are active members of the community. Also declining incorrect deletion tags and AIV reports doesn't require the use of the tools, doesn't get logged but is also an important thing for admins to do. ϢereSpielChequers 07:43, 15 December 2024 (UTC)
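One suggestion above is that a bot could detect admin-like edits (declined unblock requests, discussion closes, and so on) by matching edits against regular expression patterns. A toy sketch of that heuristic, in the userscript JavaScript this community already uses; the patterns and names below are purely illustrative, not anything an actual bot uses:

```javascript
// Illustrative only: a few regexes that *might* indicate an unlogged
// admin action in an edit summary. Real patterns would need community
// vetting; these are guesses for the sake of the sketch.
const ADMIN_LIKE_PATTERNS = [
  /declin(e|ed|ing)[^]*unblock/i,   // declining an unblock request
  /closing[^]*(AfD|RfD|TfD|MfD)/i,  // closing a deletion discussion
  /edit[ -]?filter/i,               // edit-filter maintenance
];

// Returns true if an edit summary matches any admin-like pattern.
function looksLikeAdminAction(summary) {
  return ADMIN_LIKE_PATTERNS.some((re) => re.test(summary));
}
```

A real bot would fetch summaries via the MediaWiki API (action=query, list=usercontribs) and would still miss actions with no distinctive summary, which is exactly the objection raised in the discussion below.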
Discussion (Admin inactivity removal)
- Making administrative actions can be helpful to show that the admin is still up-to-date with community norms. We could argue that if someone is active but doesn't use the tools, it isn't a big issue whether they have them or not. Still, the tools can be requested back following an inactivity desysop, if the formerly inactive admin changes their mind and wants to make admin actions again. For now, I don't see any immediate issues with this proposal. Chaotic Enby (talk · contribs) 08:13, 4 December 2024 (UTC)
- Looking back at previous RFCs, in 2011 the reasoning was to reduce the attack surface for inactive account takeover, and in 2022 it was about admins who haven't been around enough to keep up with changing community norms. What's the justification for this besides "use it or lose it"? Further, we already have a mechanism (from the 2022 RFC) to account for admins who make a few edits every year. Anomie⚔ 12:44, 4 December 2024 (UTC)
- I also note that not all admin actions are logged. Logging editing through full protection requires abusing the Edit Filter extension. Reviewing of deleted content isn't logged at all. Who will decide whether an admin's XFD "keep" closures are really WP:NACs or not? Do adminbot actions count for the operator? There are probably more examples. Currently we ignore these edge cases since the edits will probably also be there, but now if we can desysop someone who made 100,000 edits in the year we may need to consider them. Anomie⚔ 12:44, 4 December 2024 (UTC)
- I had completely forgotten that many admin actions weren't logged (and thus didn't "count" for activity levels), that's actually a good point (and stops the "community norms" arguments as healthy levels of community interaction can definitely be good evidence of that). And, since admins desysopped for inactivity can request the tools back, an admin needing the bit but not making any logged actions can just ask for it back. At this point, I'm not sure if there's a reason to go through the automated process of desysopping/asking for resysop at all, rather than just politely ask the admin if they still need the tools. I'm still very neutral on this by virtue of it being a pretty pointless and harmless process either way (as, again, there's nothing preventing an active admin desysopped for "inactivity" from requesting the tools back), but I might lean oppose just so we don't add a pointless process for the sake of it. Chaotic Enby (talk · contribs) 15:59, 4 December 2024 (UTC)
- To me this comes down to whether the community considers it problematic for an admin to have tools they aren't using. Since it's been noted that not all admin actions are logged, and an admin who isn't using their tools also isn't causing any problems, I'm not sure I see a need to actively remove the tools from an inactive admin; in a worst-case scenario, isn't this encouraging an admin to (potentially mis-)use the tools solely in the interest of keeping their bit? There also seems to be somewhat of a bad-faith assumption to the argument that an admin who isn't using their tools may also be falling behind on community norms. I'd certainly like to hope that if I was an admin who had been inactive that I would review P&G relevant to any admin action I intended to undertake before I executed it. DonIago (talk) 15:14, 4 December 2024 (UTC)
- As I have understood it, the original rationale for desysopping after no activity for a year was the perception that an inactive account was at higher danger of being hijacked. It had nothing to do with how often the tools were being used, and presumably, if the admin was still editing, even if not using the tools, the account was less likely to be hijacked. - Donald Albury 22:26, 4 December 2024 (UTC)
- And also, if the account of an active admin was hijacked, both the account owner and those they interact with regularly would be more likely to notice the hijacking. The sooner a hijacked account is identified as hijacked, the sooner it is blocked/locked which obviously minimises the damage that can be done. Thryduulf (talk) 00:42, 5 December 2024 (UTC)
- I was not aware that not all admin actions are logged, obviously they should all be correctly logged as admin actions. If you're an Admin you should be doing Admin stuff, if not then you obviously don't need the tools. If an Admin is busy IRL then they can either give up the tools voluntarily or get desysopped for inactivity. The "Asking the tools back at BN can feel like a faff." isn't a valid argument, if an Admin has been desysopped for inactivity then getting the tools back should be "a faff". Regarding the comment that "There's no indication that this is a problem that needs fixing," the problem is Admins who don't undertake admin activity, don't stay up to date with policies and norms, but don't voluntarily give up the tools. The 2022 change was about total edits over 5 years, not specifically admin actions and so didn't adequately address the issue. Mztourist (talk) 03:23, 5 December 2024 (UTC)
obviously they should all be correctly logged as admin actions
- How would you log actions that are administrative actions due to context/requiring passive use of tools (viewing deleted content, etc.) rather than active use (deleting/undeleting, blocking, and so on), or declining requests where accepting them would require tool use? (e.g. closing various discussions that really shouldn't be NAC'd, reviewing deleted content, declining page restoration) Maybe there are good ways of doing that, but I haven't seen any proposed the various times this subject came up. Unless and until "soft" admin actions are actually logged somehow, "editor has admin tools and continues to engage with the project by editing" is the closest, if very imperfect, approximation to it we have, with criterion 2 sort-of functioning to catch cases of "but these specific folks edit so little over a prolonged time that it's unlikely they're up-to-date and actively engaging in soft admin actions". (I definitely do feel criterion 2 could be significantly stricter, fwiw) AddWittyNameHere 05:30, 5 December 2024 (UTC)
- Not being an Admin I have no idea how their actions are or aren't logged, but is it a big ask that Admins perform at least a few logged Admin actions in a year? The "imperfect approximation" that "editor has admin tools and continues to engage with the project by editing" is completely inadequate to capture Admin inactivity. Mztourist (talk) 07:06, 6 December 2024 (UTC)
- Why is it "completely inadequate"? Thryduulf (talk) 10:32, 6 December 2024 (UTC)
- I've been a "hawk" regarding admin activity standards for a very long time, but this proposal comes off as half-baked. The rules we have now are the result of careful consideration and incremental changes aimed at specific, provable issues with previous standards. While I am not a proponent of "not all actions are logged" as a blanket excuse for no logged actions in several years, it is feasible that an admin could be otherwise fully engaged with the community while not having any logged actions. We haven't been having trouble with admins who would be removed by this, so where's the problem? Just Step Sideways from this world ..... today 19:15, 8 December 2024 (UTC)
"Blur all images" switch
Although I know that WP:NOTCENSORED, I propose that the Vector 2022 and Minerva Neue skins (+the Wikipedia mobile apps) have a "blur all images" toggle that blurs all the images on all pages (requiring clicking on them to view them), which simplifies the process of doing HELP:NOSEE as that means:
- You don't need to create an account to hide all images.
- You don't need any complex JavaScript or CSS installation procedures. Not even browser extensions.
- You can blur all images in the mobile apps, too.
- It's all done with one push of a button. No extra steps needed.
- Blurring all images > hiding all images. The content of a blurred image could be easily memorized, while a completely hidden image is difficult to compare to the others.
And it shouldn't be limited to just Wikipedia. This toggle should be available on all other WMF projects and MediaWiki-powered wikis, too. 67.209.128.126 (talk) 15:26, 5 December 2024 (UTC)
- Sounds good. Damon will be thrilled. Martinevans123 (talk) 15:29, 5 December 2024 (UTC)
- Sounds like something I can try to make a demo of as a userscript! Chaotic Enby (talk · contribs) 15:38, 5 December 2024 (UTC)
- User:Chaotic Enby/blur.js should do the job, although I'm not sure how to deal with the Page Previews extension's images. Chaotic Enby (talk · contribs) 16:16, 5 December 2024 (UTC)
- Will be a problem for non-registered users, as the default would clearly be to leave images unblurred for them. — Masem (t) 15:40, 5 December 2024 (UTC)
- Better show all images by default for all users. If you clear your cookies often you can simply change the toggle every time. 67.209.128.132 (talk) 00:07, 6 December 2024 (UTC)
- That's my point: if you are unregistered, you will see whatever the default setting is (which I assume will be unblurred, which might lead to more complaints). We had similar problems dealing with image thumbnail sizes, a setting that unregistered users can't adjust. Masem (t) 01:10, 6 December 2024 (UTC)
- I'm confused about how this would lead to more complaints. Right now, logged-out users see every image without obfuscation. After this toggle rolls out, logged-out users would still see every image without obfuscation. What fresh circumstance is leading to new complaints? ꧁Zanahary꧂ 07:20, 12 December 2024 (UTC)
- Well, we'd be putting in an option to censor, but not actively doing it. People will have issues with that. Lee Vilenski (talk • contribs) 10:37, 12 December 2024 (UTC)
- Isn't the page Help:Options to hide an image "an option to censor" we've put in? Gråbergs Gråa Sång (talk) 11:09, 12 December 2024 (UTC)
- I'm not opposed to this, if it can be made to work, fine. Gråbergs Gråa Sång (talk) 19:11, 5 December 2024 (UTC)
- What would be the goal of a blur all images option? It seems too tailored. But a "hide all images" could be suitable. EEpic (talk) 06:40, 11 December 2024 (UTC)
- Simply removing them might break page layout, so images could be replaced with an equally sized placeholder. JayCubby 13:46, 13 December 2024 (UTC)
Could there be an option to simply not load images for people with a low-bandwidth connection or who don't want them? Travellers & Tinkers (talk) 16:36, 5 December 2024 (UTC)
- I agree. This way, the options would be:
- Show all images
- Blur all images
- Hide all images
- It would honestly be better with your suggestion. 67.209.128.132 (talk) 00:02, 6 December 2024 (UTC)
- Of course, it will do nothing to appease the "These pics shouldn't be on WP at all" people. Gråbergs Gråa Sång (talk) 06:52, 6 December 2024 (UTC)
- “Commons be thataway” is what we should tell them Dronebogus (talk) 18:00, 11 December 2024 (UTC)
- I suggest that the "hide all images" display file name if possible. Between file name and caption (which admittedly are often similar, but not always), there should be sufficient clue whether an image will be useful (and some suggestion, but not reliably so, if it may offend a sensibility.) -- Nat Gertler (talk) 17:59, 11 December 2024 (UTC)
- For low-bandwidth or expensive bandwidth -- many folks are on mobile plans which charge for bandwidth. -- Nat Gertler (talk) 14:28, 11 December 2024 (UTC)
Regarding not limiting image management choices to Wikipedia: that's why it's better to manage this on the client side. Anyone needing to limit their bandwidth usage, or to otherwise decide individually on whether or not to load each photo, will likely want to do this generally in their web browsing. isaacl (talk) 18:43, 6 December 2024 (UTC)
- Definitely a browser issue. You can get plug-ins for Chrome right now that will do exactly this, and there's no need for Wikipedia/Mediawiki to implement anything. — The Anome (talk) 18:48, 6 December 2024 (UTC)
I propose something a bit different: all images on the bad images list can only be viewed with a user account that has been verified to be over 18 with government issued ID. I say this because in my view there is absolutely no reason for a minor to view it. Jayson (talk) 23:41, 8 December 2024 (UTC)
- Well, that means readers will be forced to not only create an account, but also disclose sensitive personal information, just to see encyclopedic images. That is pretty much the opposite of a free encyclopedia. Chaotic Enby (talk · contribs) 23:44, 8 December 2024 (UTC)
- I can support allowing users to opt to blur or hide some types of images, but this needs to be an opt-in only. By default, show all images. And I'm also opposed to any technical restriction which requires self-identification to overcome, except for cases where the Foundation deems it necessary to protect private information (checkuser, oversight-level hiding, or emails involving private information). Please also keep in mind that even if a user sends a copy of an ID which indicates the individual person's age, there is no way to verify that it was the user's own ID which had been sent. Animal lover |666| 11:25, 9 December 2024 (UTC)
- Also, the bad images list is a really terrible standard. Around 6% of it is completely harmless content that happened to be abused. And even some of the “NSFW” images are perfectly fine for children to view, for example File:UC and her minutes-old baby.jpg. Are we becoming Texas or Florida now? Dronebogus (talk) 18:00, 11 December 2024 (UTC)
- You could've chosen a much better example like dirty toilet or the flag of Hezbollah... Traumnovelle (talk) 19:38, 11 December 2024 (UTC)
- Well, yes, but I rank that as “harmless”. I don’t know why anyone would consider a woman with her newborn baby so inappropriate for children it needs to be censored like hardcore porn. Dronebogus (talk) 14:53, 12 December 2024 (UTC)
- The Hezbollah flag might be blacklisted because it's copyrighted, but placed in articles by uninformed editors (though one of JJMC89's bots automatically removes NFC files from pages). We have File:InfoboxHez.PNG for those uses. JayCubby 16:49, 13 December 2024 (UTC)
- I support this proposal. It’s a very clean compromise between the “think of the children” camp and the “freeze peach camp”. Dronebogus (talk) 17:51, 11 December 2024 (UTC)
- Let me dox myself so I can view this image. Even Google image search doesn't require something this stringent. Lee Vilenski (talk • contribs) 19:49, 11 December 2024 (UTC)
- Oppose. We should not be providing toggles to censor. ValarianB (talk) 15:15, 12 December 2024 (UTC)
- What about an option to disable images entirely? It might use significantly less data. JayCubby 02:38, 13 December 2024 (UTC)
- This is an even better idea as an opt-in toggle than the blur one. Load no images by default, and let users click a button to load individual images. That has a use beyond sensitivity. ꧁Zanahary꧂ 02:46, 13 December 2024 (UTC)
- Yes I like that idea even better. I think in any case we should use alt text to describe the image so people don’t have to play Russian roulette based on potentially vague or nonexistent descriptions, i.e. without alt text an ignorant reader would have no idea the album cover for Virgin Killer depicts a nude child in a… questionable pose. Dronebogus (talk) 11:42, 13 December 2024 (UTC)
- An option to replace images with alt text seems both much more useful and much more neutral as an option. There are technical reasons why a user might want to not load images (or only selectively load them based on the description), so that feels more like a neutral interface setting. An option to blur images by default sends a stronger message that images are dangerous.--Trystan (talk) 16:24, 13 December 2024 (UTC)
- Also it'd negate the bandwidth savings somewhat (assuming an image is displayed as a low pixel-count version). I'm of the belief that Wikipedia should have more features tailored to the reader. JayCubby 16:58, 13 December 2024 (UTC)
- At the very least, add a filter that allows you to block all images on the bad image list, specifically that list and those images. To the people who say you shouldn't have to give up personal info, I say that we should go the way Roblox does. Seems a bit random, hear me out: To play 17+ games, you need to verify with gov id, those games have blood, alcohol, unplayable gambling and "romance". I say that we do the same. Giving up personal info to view bad things doesn't seem so bad to me. Jayson (talk) 03:44, 15 December 2024 (UTC)
- Building up a database of people who have applied to view bad things on a service that's available in restrictive regimes sounds like a way of putting our users in danger. -- Nat Gertler (talk) 07:13, 15 December 2024 (UTC)
- Roblox =/= Wikipedia. I don’t know why I have to say this, nor did I ever think I would. And did you read what I already said about the “bad list”? Do you want people to have to submit their ID to look at poop, a woman with her baby, the Hezbollah flag, or graffiti? How about we age-lock articles about adult topics next? Dronebogus (talk) 15:55, 15 December 2024 (UTC)
- Ridiculous. Lee Vilenski (talk • contribs) 16:21, 15 December 2024 (UTC)
- So removing a significant thing that makes Wikipedia free is worth preventing underaged users from viewing certain images? I wouldn't say that would be a good idea if we want to make Wikipedia stay successful. If a reader wants to read an article, they should expect to see images relevant to the topic. This includes topics that are usually considered NSFW like Graphic violence, Sexual intercourse, et cetera. If a person willingly reads an article about an NSFW topic, they should acknowledge that they would see topic-related NSFW images. ZZZ'S 16:45, 15 December 2024 (UTC)
- Yes, if this happens it should be through a disable all images toggle, not an additional blur. There have been times that would have been very helpful for me. CMD (talk) 03:52, 15 December 2024 (UTC)
- Support the proposal as written. I'd imagine WMF can add a button below the already-existing accessibility options. People have different cultural, safety, age, and mental needs to block certain images. Ca talk to me! 13:04, 15 December 2024 (UTC)
- I'd support an option to replace images with the alt text, as long as all you had to do to see a hidden image was a single click/tap (we'd need some fallback for when an image has no alt text, but that's a minor issue). Blurring images doesn't provide any significant bandwidth benefits and could in some circumstances cause problems (some blurred innocent images look very similar to some blurred images that some people regard as problematic, e.g. human flesh and cooked chicken). I strongly oppose anything that requires submitting personal information of any sort in order to see images per NatGertler. Thryduulf (talk) 14:15, 15 December 2024 (UTC)
- Fallback for alt text could be filename, which is generally at least slightly descriptive. -- Nat Gertler (talk) 14:45, 15 December 2024 (UTC)
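The alt-text-with-filename fallback could look something like the sketch below in a hide-images gadget. `placeholderLabel` is a hypothetical helper name, and the file name is assumed to be the last path segment of the image URL:

```javascript
// Hypothetical helper for a hide-images gadget: label a hidden image
// with its alt text, falling back to the file name from its URL.
function placeholderLabel(altText, src) {
  if (altText && altText.trim() !== '') {
    return altText.trim();
  }
  // Last path segment, minus any query string,
  // e.g. ".../a/ab/Foo_bar.jpg" -> "Foo_bar.jpg"
  const fileName = src.split('/').pop().split('?')[0];
  return decodeURIComponent(fileName);
}
```

A gadget would then render this label in an equally sized placeholder element, which also addresses the page-layout concern raised earlier in the thread.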
- These ideas (particularly the toggle button to blur/hide all images) can be suggested at m:Community Wishlist. Some1 (talk) 15:38, 15 December 2024 (UTC)
Class icons in categories
This is something that has frequently occurred to me as a potentially useful feature when browsing categories, but I have never quite gotten around to actually proposing it until now.
Basically, I'm thinking it could be very helpful to have content-assessment class icons appear next to article entries in categories. This should be helpful not only to readers, to guide them to the more complete entries, but also to editors, to alert them to articles in the category that are in need of work. Thoughts? Gatoclass (talk) 03:02, 7 December 2024 (UTC)
- If we go with this, I think there should be only 4 levels - Stub, Average (i.e. Start, C, or B), GA, & FA.
- There are significant differences between Start, C, and B, but there's no consistent effort to grade these articles correctly and consistently, so it might be better to lump them into one group. Especially if an article goes down in quality, almost nobody will bother to demote it from B to C. ypn^2 04:42, 8 December 2024 (UTC)
- Isn't that more of an argument for consolidation of the existing levels rather than reducing their number for one particular application?
- Other than that, I think I would have to agree that there are too many levels - the difference between Start and C class, for example, seems quite arbitrary, and I'm not sure of the usefulness of A class - but the lack of consistency within levels is certainly not confined to these lower levels, as GAs can vary enormously in quality and even FAs. But the project nonetheless finds the content assessment model to be useful, and I still think their usefulness would be enhanced by addition to categories (with, perhaps, an ability to opt in or out of the feature).
- I might also add that adding content assessment class icons to categories would be a good way to draw more attention to them and encourage users to update them when appropriate. Gatoclass (talk) 14:56, 8 December 2024 (UTC)
- I believe anything visible in reader-facing namespaces needs to be more definitively accurate than in editor-facing namespaces. So I'm fine having all these levels on talk pages, but not on category pages, unless they're applied more rigorously.
- On the other hand, with FAs and GAs, although standards vary within a range, they do undergo a comprehensive, well-documented, and consistent process for promotion and demotion. So just like we have an icon at the top of those articles (and in the past, next to interwiki links), I could hear putting them in categories. [And it's usually pretty obvious whether something's a stub or not.] ypn^2 18:25, 8 December 2024 (UTC)
- Isn't the display of links on Category pages entirely dependent on the MediaWiki software? We don't even have Short descriptions displayed, which would probably be considerably more useful. Any function that has to retrieve content from member articles (much less their talk pages) is likely to be somewhat computationally expensive. Someone with more technical knowledge may have better information. Folly Mox (talk) 18:01, 8 December 2024 (UTC)
- Yes, this will definitely require MediaWiki development, but probably not so complex. And I wonder why this will be more computationally expensive than scanning articles for [ [Category: ] ] tags in the first place. ypn^2 18:27, 8 December 2024 (UTC)
And I wonder why this will be more computationally expensive than scanning articles for [ [Category: ] ] tags in the first place
my understanding is that this is not what happens. When a category is added to or removed from an article, the software adds or removes that page as a record from a database, and that database is what is read when viewing the category page. Thryduulf (talk) 20:14, 8 December 2024 (UTC)
- I think that in the short term, this could likely be implemented using a user script (displaying short descriptions would also be nice). Longer-term, if done via an extension, I suggest limiting the icons to GAs and FAs for readers without accounts, as other labels aren't currently accessible to them. (Whether this should change is a separate but useful discussion). — Frostly (talk) 23:06, 8 December 2024 (UTC)
- I'd settle for a user script. Who wants to write it? :) Gatoclass (talk) 23:57, 8 December 2024 (UTC)
- As an FYI for whoever decides to write it, Special:ApiHelp/query+pageassessments may be useful to you. Anomie⚔ 01:04, 9 December 2024 (UTC)
- @Gatoclass, the Wikipedia:Metadata gadget already exists. Go to Special:Preferences#mw-prefsection-gadgets-gadget-section-appearance and scroll about two-thirds of the way through that section.
- I strongly believe that ordinary readers don't care about this kind of inside baseball, but if you want it for yourself, then use the gadget or fork its script. Changing this old gadget from "adding text and color" to "displaying an icon" should be relatively simple. WhatamIdoing (talk) 23:43, 12 December 2024 (UTC)
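As a sketch of how such a userscript might call the API mentioned above: the `prop=pageassessments` module returns WikiProject assessment data for a batch of titles. The function name is an assumption, and real code would batch titles to stay under the API's per-request limit:

```javascript
// Sketch: build an Action API query URL asking for content-assessment
// data on a batch of pages via the PageAssessments extension.
function assessmentQueryUrl(titles) {
  const params = new URLSearchParams({
    action: 'query',
    prop: 'pageassessments',
    titles: titles.join('|'),  // the API takes pipe-separated titles
    format: 'json',
    origin: '*',               // needed for anonymous cross-site requests
  });
  return 'https://en.wikipedia.org/w/api.php?' + params.toString();
}
```

The script would fetch this URL for the articles listed on the category page and prepend an icon to each entry based on the returned class.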
Space-saving front page change
- The following discussion is an archived record of a request for comment. Please do not modify it. No further edits should be made to this discussion. A summary of the conclusions reached follows.
Right now, the front-page has a huge "Welcome to Wikipedia" message, presumably to remind readers that typing en.wikipedia.org leads to Wikipedia. This displaces about half the DYKs and "On this day" content. Given this is possibly the single most valuable piece of screen real estate on the internet, I think we should be spending it on something that provides more information.
I have two alternatives:
- Remove the banner entirely.
- Move it to the bottom of the page, replacing "Welcome to Wikipedia" with "Brought to you by Wikipedia". An example can be found at User:Closed Limelike Curves/Main Page.
This is already done on mobile, but would be extended to desktop.
Support (move to bottom of page)
- Support as proposer. Mild preference for removing the message entirely as redundant.– Closed Limelike Curves (talk) 05:32, 8 December 2024 (UTC)
- Support * Pppery * it has begun... 07:39, 8 December 2024 (UTC)
- Support option 2 - looks better without removing the banner completely. '''[[User:CanonNi]]''' (talk • contribs) 14:08, 8 December 2024 (UTC)
Oppose
- Oppose. Welcoming users and explaining what Wikipedia is is a valid purpose for the Main Page. Sdkb talk 07:36, 8 December 2024 (UTC)
- Oppose. While the message isn't information-dense like the rest of the Main Page, it is much more welcoming for a new visitor, and easier on the eyes, than immediately starting with four blocks of text. Chaotic Enby (talk · contribs) 13:09, 8 December 2024 (UTC)
- Oppose per above. C F A 13:58, 8 December 2024 (UTC)
- Oppose per Sdkb. – DreamRimmer (talk) 14:08, 8 December 2024 (UTC)
- Oppose, always good to put out a welcome mat. Reader and site friendly (note: using Monobook on a laptop I'm not aware of how the page looks on mobile). Randy Kryn (talk) 14:23, 8 December 2024 (UTC)
- Doesn't a welcome mat usually go on the floor, instead of the ceiling? – Closed Limelike Curves (talk) 17:26, 8 December 2024 (UTC)
- Yes, but it's the first thing you see. Cremastra ‹ u — c › 17:33, 8 December 2024 (UTC)
- Oppose - because it's too important. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:24, 8 December 2024 (UTC)
- And for those curious about why there isn't, say, content portals? Lookie here. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:33, 8 December 2024 (UTC)
- Oppose per Sdkb. and Randy Kryn. Thryduulf (talk) 14:59, 8 December 2024 (UTC)
- Oppose The Welcome message is valuable and it makes sense for it to be at the top; the message includes a link to Wikipedia for those unfamiliar with the site, and "anyone can edit" directs readers (and prospective editors) to Help:Introduction to Wikipedia. The article count statistic is a fun way to show how extensive the English Wikipedia has become. (My only suggestion would be to include a stat about the number of active editors in the message, preferably after the article count stat.) Some1 (talk) 15:06, 8 December 2024 (UTC)
- Oppose This proposal essentially restricts informing readers about one of Wikipedia’s core ideas: anyone can edit. The current text on the main page is important because it reminds readers that we’re a free encyclopedia where anyone can contribute. The article count also matters—it shows how much Wikipedia has grown since 2001 and how many topics it covers. Another point to consider is that moving it to the bottom isn't practical. I don't think readers typically scroll that far down—personally, I rarely do. This could lead to fewer contributions from new users. The AP (talk) 15:29, 8 December 2024 (UTC)
- Oppose (strongly). Saying welcome to Wikipedia is just basic courtesy and draws readers in. That's the least important part. Why on earth would we want to hide the fact that we're the free encyclopedia anyone can edit? We need more information about how to edit on the MP, not less! We want to say, front and centre, that we're a volunteer-run free encyclopedia. Remove it, and we end up looking like Britannica. The banner says who we are, what we do, and what we've built, in a fairly small space with the help of links that draw readers in and encourage them to contribute. Aesthetically, I also think it pleasantly frames the main content; it is a preamble, an unchanging pale grey first course. Removing or moving it for the sake of space is like ripping the dust cover off a hardcover because it takes up too much space and readers shouldn't be encumbered with reading a blurb or looking at the cover art (although cover art is often pretty bad these days...) I really don't see any benefit to tearing it off the Main Page. Cremastra ‹ u — c › 17:31, 8 December 2024 (UTC)
Why on earth would we want to hide the fact that we're the free encyclopedia anyone can edit?
We're not, it's still in the giant logo in the top-left. (Are we sure 2 banners is enough? Maybe we need a 3rd one.) – Closed Limelike Curves (talk) 17:35, 8 December 2024 (UTC)
Discussion
Do you have another good reason that the top of the MP should be taken down? Do you have an alternative banner in mind? Moreover, this needs a much wider audience: the ones on the board. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:27, 8 December 2024 (UTC)
- On which board? This is both at the village pump and at WP:CENT, so it should reach as many people as possible. Chaotic Enby (talk · contribs) 15:13, 8 December 2024 (UTC)
- Them. They may not take too kindly to this, and we all should know by now. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 15:26, 8 December 2024 (UTC)
- This is a strange concern; of course a community consensus can change the main page's content. It doesn't seem to be happening, but that has nothing to do with the WMF. ~ ToBeFree (talk) 16:16, 8 December 2024 (UTC)
Do you have an alternative banner in mind?
- I avoided specific replacements because I didn't want to get bogged down in the weeds of whether we should make other changes. The simplest use of this space would be to increase the number of DYK hooks by 50%, letting us clear out a huge chunk of the backlog. – Closed Limelike Curves (talk) 17:43, 8 December 2024 (UTC)
Cleaning up NA-class categories
We have a long-standing system of double classification of pages, by quality (stub, start, C, ...) and importance (top, high, ...). And then there are thousands of pages that don't need either of these; portals, redirects, categories, ... As a result most of these pages have a double or even triple categorization, e.g. Portal talk:American Civil War/This week in American Civil War history/38 is in Category:Portal-Class United States articles, Category:NA-importance United States articles, and Category:Portal-Class United States articles of NA-importance.
My suggestion would be to put those pages only in the "Class" category (in this case Category:Portal-Class United States articles), and only give that category a NA-rating. Doing this for all these subcats (File, Template, ...) would bring the current 276,534 (!) pages in Category:NA-importance United States articles back to near zero, only leaving the anomalies which probably need a different importance rating (and thus making it a useful cleanup category).
It is unclear why we have two systems (3 cat vs. 2 cat), the tags on Category talk:2nd millennium in South Carolina (without class or NA indication) have a different effect than the tags on e.g. Category talk:4 ft 6 in gauge railways in the United Kingdom, but my proposal is to make the behaviour the same, and in both cases to reduce it to the class category only (and make the classes themselves categorize as "NA importance"). This would only require an update in the templates/modules behind this, not on the pages directly, I think. Fram (talk) 15:15, 9 December 2024 (UTC)
- Are there any pages that don't have the default? e.g. are there any portals or Category talk: pages rated something other than N/A importance? If not then I can't see any downsides to the proposal as written. If there are exceptions, then as long as the revised behaviour allows for the default to be overwritten when desired again it would seem beneficial. Thryduulf (talk) 16:36, 9 December 2024 (UTC)
- As far as I know, there are no exceptions. And I believe that one can always override the default behaviour with a local parameter. @Tom.Reding: I guess you know these things better and/or knows who to contact for this. Fram (talk) 16:41, 9 December 2024 (UTC)
- Looking a bit further, there do seem to be exceptions, but I wonder why we would e.g. have redirects which are of high importance to a project (Category:Redirect-Class United States articles of High-importance). Certainly when one considers that in some cases, the targets have a lower importance than the redirects? E.g. Talk:List of Mississippi county name etymologies. Fram (talk) 16:46, 9 December 2024 (UTC)
- I was imagining high importance United States redirects to be things like USA but that isn't there, and what is there is a very motley collection. I only took a look at one, Talk:United States women. As far as I can make out the article was originally at this title but later moved to Women in the United States over a redirect. Both titles had independent talk pages that were neither swapped nor combined, each being rated high importance when they were the talk page of the article. It seems like a worthwhile exercise for the project to determine whether any of those redirects are actually (still?) high priority but that's independent of this proposal. Thryduulf (talk) 17:17, 9 December 2024 (UTC)
- Category:Custom importance masks of WikiProject banners (15) is where to look for projects that might use an importance other than NA for cats, or other deviations. ~ Tom.Reding (talk ⋅dgaf) 17:54, 9 December 2024 (UTC)
- Most projects don't use this double intersection (as can be seen by the amount of categories in Category:Articles by quality and importance, compared to Category:GA-Class articles). I personally feel that the bot updated page like User:WP 1.0 bot/Tables/Project/Television is enough here and requires less category maintenance (creating, moving, updating, etc.) for a system that is underused. Gonnym (talk) 17:41, 9 December 2024 (UTC)
- Support this, even if there might be a few exceptions, it will make them easier to spot and deal with rather than having large unsorted NA-importance categories. Chaotic Enby (talk · contribs) 18:04, 9 December 2024 (UTC)
- Strongly agree with this. It's bizarre having two different systems, as well as a pain in the ass sometimes. Ideally we should adopt a single consistent categorization system for importance/quality. – Closed Limelike Curves (talk) 22:56, 16 December 2024 (UTC)
Okay, does anyone know what should be changed to implement this? I presume this comes from Module:WikiProject banner, I'll inform the people there about this discussion. Fram (talk) 14:49, 13 December 2024 (UTC)
- So essentially what you are proposing is to delete Category:NA-importance articles and all its subcategories? I think it would be best to open a CfD for this, so that the full implications can be discussed and consensus assured. It is likely to have an effect on assessment tools, and tables such as User:WP 1.0 bot/Tables/Project/Africa would no longer add up to the expected number — Martin (MSGJ · talk) 22:13, 14 December 2024 (UTC)
- There was a CfD specifically for one, and the deletion of Category:Category-Class Comics articles of NA-importance doesn't seem to have broken anything so far. A CfD for the deletion of 1700+ pages seems impractical, an RfC would be better probably. Fram (talk) 08:52, 16 December 2024 (UTC)
- Well a CfD just got closed with 14,000 categories, so that is not a barrier. It is also the technically correct venue for such discussions. By the way, all of the quality/importance intersection categories check that the category exists before using it, so deleting them shouldn't break anything. — Martin (MSGJ · talk) 08:57, 16 December 2024 (UTC)
- And were all these cats tagged, or how was this handled? Fram (talk) 10:21, 16 December 2024 (UTC)
- Wikipedia:Categories for discussion/Log/2024 December 7#Category:Category-Class articles. HouseBlaster took care of listing each separate category on the working page. — Martin (MSGJ · talk) 10:43, 16 December 2024 (UTC)
- I have no idea what the "working page" is though. Fram (talk) 11:02, 16 December 2024 (UTC)
I'm going to have to oppose any more changes to class categories. Already changes are causing chaos across the system with the bots unable to process renamings and fixing redirects whilst Special:Wantedcategories is being overwhelmed by the side effects. Quite simply we must have no more changes that cannot be properly processed. Any proposal must have clear instructions posted before it is initiated, not some vague promise to fix a module later on. Timrollpickering (talk) 13:16, 16 December 2024 (UTC)
- Then I'm at an impasse. Module people tell me "start a CfD", you tell me "no CfD, first make changes at the module". No one wants the NA categories for these groups. What we can do is 1. RfC to formalize that they are unwanted, 2. Change module so they no longer get populated 3. Delete the empty cats caused by steps 1 and 2. Is that a workable plan for everybody? Fram (talk) 13:39, 16 December 2024 (UTC)
- I don't think @Timrollpickering was telling you to make the changes at the module first, rather to prepare the changes in advance so that they can be implemented as soon as the CfD reaches consensus. For example, this might be achieved by having a detailed list of all the changes prepared and published in a format that can be fed to a bot. For a change of this volume, though, I do think a discussion as well advertised as an RFC is preferable to a CfD. Thryduulf (talk) 14:43, 16 December 2024 (UTC)
- Got it in one. There are just too many problems at the moment because the modules are not being properly amended in time. We need to be firmer in requiring proponents to identify the how to change before the proposal goes live so others can enact it if necessary, not close the discussion, slap the category on the working page and let a mess pile up whilst no changes to the module are implemented. Timrollpickering (talk) 19:37, 16 December 2024 (UTC)
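One way to meet the "prepare the changes in advance" request above is to publish the full worklist in a machine-readable form before the discussion opens. A minimal Python sketch, assuming one NA-importance intersection category per WikiProject (the project names below are purely illustrative, not the actual affected set):

```python
# Generate a bot-feedable worklist of categories to delete, one per line.
# The project names are illustrative; a real list would be built from the
# actual set of affected WikiProjects.
def na_intersection_categories(projects):
    return [f"Category:Category-Class {p} articles of NA-importance"
            for p in projects]

if __name__ == "__main__":
    for cat in na_intersection_categories(["Comics", "Physics", "Novels"]):
        print(cat)
```

Publishing a generated list like this alongside the CfD/RfC would let anyone verify the scope in advance and let a bot operator act on it immediately after closure.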
Category:Current sports events
I would like to propose that sports articles should be left in the Category:Current sports events for 48 hours after these events have finished. I'm sure many Wikipedia sports fans (including me) open CAT:CSE first and then click on a sporting event in that list. And we would like to do so in the coming days after the event ends to see the final standings and results.
Currently, this category is being removed from articles too early, sometimes even before the event ends. Just like yesterday. AnishaShar, what do you say about that?
So I would like to ask you to consider my proposal. Or, if you have a better suggestion, please comment. Thanks, Maiō T. (talk) 16:25, 9 December 2024 (UTC)
- Thank you for bringing up this point. I agree that leaving articles in the Category:Current sports events for a short grace period after the event concludes—such as 48 hours—would benefit readers who want to catch up on the final standings and outcomes. AnishaShar (talk) 18:19, 9 December 2024 (UTC)
- Sounds reasonable on its face. Gatoclass (talk) 23:24, 9 December 2024 (UTC)
- How would this be policed though? Usually that category is populated by the {{current sport event}} template, which every user is going to want to remove immediately after it finishes. Lee Vilenski (talk • contribs) 19:51, 11 December 2024 (UTC)
- @Lee Vilenski: First of all, the Category:Current sports events has nothing to do with the Template:Current sport; articles are added to that category in the usual way.
- You ask how it would be policed. Simply, we will teach editors to do it that way – to leave an article in that category for another 48 hours. AnishaShar has already expressed their opinion above. WL Pro for life is also known for removing 'CAT:CSE's from articles. I think we could put some kind of notice in that category so other editors can notice it. We could set up a vote here. Maybe someone else will have a better idea. Maiō T. (talk) 20:25, 14 December 2024 (UTC)
- Would it not be more suitable for a "recently completed sports event" category. It's pretty inaccurate to say it's current when the event finished over a day ago. Lee Vilenski (talk • contribs) 21:03, 14 December 2024 (UTC)
Okay Lee, that's also a good idea. We have these two sports event categories:
- Category:Scheduled sports events
- Category:Current sports events
- Category:Recent sports events can be a suitable addition to those two. Edin75, you are also interested in categories and sporting events; what is your opinion? Maiō T. (talk) 18:14, 16 December 2024 (UTC)
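The timing rule being discussed can be stated precisely. A small sketch, assuming the three category names above and the proposed 48-hour grace period (the function and its structure are hypothetical, not an existing bot):

```python
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(hours=48)  # grace period proposed above

def sports_event_category(end_time, now):
    """Return which category an event article belongs in.

    end_time is None while the event is still running.
    """
    if end_time is None or now < end_time:
        return "Category:Current sports events"
    if now - end_time <= GRACE_PERIOD:
        return "Category:Recent sports events"
    return None  # neither; the event ended more than 48 hours ago
```

Under this rule an article would move from "Current" to "Recent" at the final whistle, and drop out of both exactly 48 hours later.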
User-generated conflict maps
In a number of articles we have (or had) user-generated conflict maps. I think the main ones at the moment are Syrian civil war and Russian invasion of Ukraine. The war in Afghanistan had one until it was removed as poorly-sourced in early 2021. As you can see from a brief review of Talk:Syrian civil war the map has become quite controversial there too.
My personal position is that sourcing conflict maps entirely from reports of occupation by one side or another of individual towns at various times, typically from Twitter accounts of dubious reliability, to produce a map of the current situation in an entire country (which is the process described here), is WP:SYNTH/WP:OR. I also don't see liveuamap.com as necessarily being a highly reliable source either, since it basically is a WP:SPS/Wiki-style user-generated source, and when it was discussed at RSN editors there generally agreed with that. I can understand it if a reliable source produces a map that we can use, but that isn't what's happening here.
Part of the reason this flies under the radar on Wikipedia is it ultimately isn't information hosted on EN WP but instead on Commons, where reliable sourcing etc. is not a requirement. However, it is being used on Wikipedia to present information to users and therefore should fall within our PAGs.
I think these maps should be deprecated unless they can be shown to be sourced entirely to a reliable source, and not assembled out of individual reports including unreliable WP:SPS sources. FOARP (talk) 16:57, 11 December 2024 (UTC)
- A lot of the maps seem like they run into SYNTH issues because if they're based on single sources they're likely running into copyright issue as derivative works. I would agree though that if an image does not have clear sourcing it shouldn't be used as running into primary/synth issues. Der Wohltemperierte Fuchs talk 17:09, 11 December 2024 (UTC)
- Though simple information isn't copyrightable, if it's sufficiently visually similar I suppose that might constitute a copyvio. JayCubby 02:32, 13 December 2024 (UTC)
- I agree these violate OR and at least the spirit of NOTNEWS and should be deprecated. I remember during the Wagner rebellion we had to fix one that incorrectly depicted Wagner as controlling a swath of Russia. Levivich (talk) 05:47, 13 December 2024 (UTC)
Google Maps: Maps, Places and Routes
Google Maps have the following categories: Maps, Places and Routes
for example: https://www.google.com/maps/place/Sheats+Apartments/@34.0678041,-118.4494914,3a,75y,90t/data=!...........
most significant locations have a www.google.com/maps/place/___ URL
these should be acknowledged and used somehow, perhaps geohack
69.181.17.113 (talk) 00:22, 12 December 2024 (UTC)
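For what it's worth, the viewport coordinates in such URLs sit in the @lat,lng,... segment and can be extracted mechanically, e.g. for building a GeoHack link. A rough Python sketch (the GeoHack parameter string shown is a simplified assumption, not the full spec):

```python
import re

def parse_maps_coords(url):
    """Extract (lat, lon) from the '@lat,lng,...' segment of a Google Maps URL."""
    m = re.search(r"@(-?\d+\.\d+),(-?\d+\.\d+)", url)
    return (float(m.group(1)), float(m.group(2))) if m else None

def geohack_params(lat, lon):
    # Simplified decimal-degrees form; GeoHack accepts several formats.
    ns = "N" if lat >= 0 else "S"
    ew = "E" if lon >= 0 else "W"
    return f"{abs(lat)}_{ns}_{abs(lon)}_{ew}"

url = ("https://www.google.com/maps/place/Sheats+Apartments/"
       "@34.0678041,-118.4494914,3a,75y,90t/data=...")
print(geohack_params(*parse_maps_coords(url)))  # → 34.0678041_N_118.4494914_W
```

Note the @ coordinates are the map viewport, not necessarily the exact place marker, so any automated use would need a sanity check.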
Allowing page movers to enable two-factor authentication
I would like to propose that members of the page mover user group be granted the oathauth-enable permission. This would allow them to use Special:OATH to enable two-factor authentication on their accounts.
Rationale (2FA for page movers)
The page mover guideline already obligates people in that group to have a strong password, and failing to follow proper account security processes is grounds for revocation of the right. This is because the group allows its members to (a) move pages along with up to 100 subpages, (b) override the title blacklist, and (c) have an increased rate limit for moving pages. In the hands of a vandal, these permissions could allow significant damage to be done very quickly, which is likely to be difficult to reverse.
Additionally, there is precedent for granting 2FA access to users with rights that could be extremely dangerous in the event of account compromise, for instance, template editors, importers, and transwiki importers have the ability to enable this access, as do most administrator-level permissions (sysop, checkuser, oversight, bureaucrat, steward, interface admin).
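For background, the second factor discussed here is a standard TOTP code (RFC 6238), which is just an HMAC-based one-time password (RFC 4226) computed over a 30-second time counter. A self-contained Python sketch of the algorithm, for illustration only (the OATHAuth extension implements this server-side):

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, step=30, now=None):
    """Time-based OTP (RFC 6238): HOTP over a moving time counter."""
    t = int((time.time() if now is None else now) // step)
    return hotp(key, t)

# RFC 4226 test vector: counter 0 with the ASCII key "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because the code changes every 30 seconds and requires the shared secret, a stolen password alone is no longer enough to compromise the account.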
Discussion (2FA for page movers)
- Support as proposer. JJPMaster (she/they) 20:29, 12 December 2024 (UTC)
- Support (but if you really want 2FA you can just request permission to enable it on Meta) * Pppery * it has begun... 20:41, 12 December 2024 (UTC)
- For the record, I do have 2FA enabled. JJPMaster (she/they) 21:47, 12 December 2024 (UTC)
- Oops, that says you are member of "Two-factor authentication testers" (testers = good luck with that). Johnuniq (talk) 23:52, 14 December 2024 (UTC)
- A group name which is IMO seriously misleading - 2FA is not being tested, it's being actively used to protect accounts. * Pppery * it has begun... 23:53, 14 December 2024 (UTC)
- meta:Help:Two-factor authentication still says "currently in production testing with administrators (and users with admin-like permissions like interface editors), bureaucrats, checkusers, oversighters, stewards, edit filter managers and the OATH-testers global group." Hawkeye7 (discuss) 09:42, 15 December 2024 (UTC)
- Support as a pagemover myself, given the potential risks and need for increased security. I haven't requested it yet as I wasn't sure I qualified and didn't want to bother the stewards, but having oathauth-enable by default would make the process a lot more practical. Chaotic Enby (talk · contribs) 22:30, 12 December 2024 (UTC)
- Anyone is qualified - the filter for stewards granting 2FA is just "do you know what you're doing". * Pppery * it has begun... 22:46, 12 December 2024 (UTC)
- Question When's the last time a page mover has had their account compromised and used for pagemove vandalism? Edit 14:35 UTC: I'm not doubting the nom, rather I'm curious and can't think of a better way to phrase things. JayCubby 02:30, 13 December 2024 (UTC)
- Why isn't everybody allowed to enable 2FA? I've never heard of any other website where users have to go request someone's (pro forma, rubber-stamp) permission if they want to use 2FA. And is it accurate that 2FA, after eight years, is still "experimental" and "in production testing"? I guess my overall first impression didn't inspire me with confidence in the reliability and maintenance. Adumbrativus (talk) 06:34, 14 December 2024 (UTC)
- Because the recovery process if you lose access to your device and recovery codes is still "contact WMF Trust and Safety", which doesn't scale. See also phab:T166622#4802579. Anomie⚔ 15:34, 14 December 2024 (UTC)
- We should probably consult with WMF T&S before we create more work for them on what they might view as very low-risk accounts. Courtesy ping @JSutherland (WMF). –Novem Linguae (talk) 16:55, 14 December 2024 (UTC)
- No update comment since 2020 doesn't fill me with hope. I like 2FA, but it needs to be developed into a usable solution for all. Lee Vilenski (talk • contribs) 00:09, 15 December 2024 (UTC)
- I ain't a technical person, but could a less secure version of 2fa be introduced, where an email is sent for any login on new devices? JayCubby 01:13, 15 December 2024 (UTC)
- Because the recovery process if you lose access to your device and recovery codes is still "contact WMF Trust and Safety", which doesn't scale. See also phab:T166622#4802579. Anomie⚔ 15:34, 14 December 2024 (UTC)
- Support per nom. PMV is a high-trust role (suppressredirect is the ability to make a blue link turn red), and thus this makes sense. As a side note, I have changed this to bulleted discussion; # is used when we have separate sections for support and oppose. HouseBlaster (talk • he/they) 07:19, 14 December 2024 (UTC)
- Oppose As a pagemover myself, I find pagemover extremely useful and do not wish to lose it. It is nowhere near the same class as template editor. You can already ask the stewards for 2FA, although I would recommend creating a separate account for the purpose. After all these years, 2FA remains experimental, buggy and cumbersome. Incompatible with the Microsoft Authenticator app on my iPhone. Hawkeye7 (discuss) 23:59, 14 December 2024 (UTC)
- The proposal (as I read it) isn't "you must have 2FA", rather "you have the option to add it". Lee Vilenski (talk • contribs) 00:06, 15 December 2024 (UTC)
- @Hawkeye7, Lee Vilenski is correct. This would merely provide page movers with the option to enable it. JJPMaster (she/they) 00:28, 15 December 2024 (UTC)
- Understood, but I do not want it associated with an administrator-level permission, which would mean I am not permitted to use it, as I am not an admin. Hawkeye7 (discuss) 09:44, 15 December 2024 (UTC)
- It's not really that. It would be an opt-in to allow users (in the group) to put 2FA on their account - at their own discretion.
- The main reason why 2FA is currently restricted to admins and the like is that they are more likely to be targeted for compromise and are also more experienced. The 2FA flag doesn't require any admin skills/tools and is only incidentally linked. Lee Vilenski (talk • contribs) 12:58, 15 December 2024 (UTC)
- It probably won't make a huge difference because those who really desire 2FA can already request the permission to enable it for their account, and because no page mover will be required to do so. However, there will be page movers who wouldn't request a global permission for 2FA yet would enable it in their preferences if it was a simple option. And these page movers might benefit from 2FA even more than those who already care very strongly about the security of their account. ~ ToBeFree (talk) 03:18, 15 December 2024 (UTC)
- Support and I can't think of any argument against something not only opt-in but already able to be opted into. Gnomingstuff (talk) 08:09, 15 December 2024 (UTC)
Photographs by Peter Klashorst
Back in 2023 I unsuccessfully nominated a group of nude photographs by Peter Klashorst for deletion on Commons. I was concerned that the people depicted might not have been of age or consented to publication. Klashorst described himself as a "painting sex-tourist"[31] because he would travel to third-world countries to have sex with women in brothels, and also paint pictures of them[32][33]. On his Flickr account, he posted various nude photographs of African and Asian women, some of which appear to have been taken without the subjects' knowledge. Over the years, other Commons contributors have raised concerns about the Klashorst photographs (e.g. [34][35][36]).
I noticed recently that several of the Klashorst images had disappeared from Commons but the deletions hadn't been logged. I believe this happens when the WMF takes an office action to remove files. I don't know for sure whether that's the case, or why only a small number of the photographs were removed this way.
My proposal is that we stop using nude or explicit photographs by Klashorst in all namespaces of the English Wikipedia. This would affect about thirty pages, including high-traffic anatomy articles such as Buttocks and Vulva. gnu57 18:29, 16 December 2024 (UTC)
- @Genericusername57: This seems as if it's essentially a request for a community sanction, and thus probably belongs better on the administrators' noticeboard. Please tell me if I am mistaken. JJPMaster (she/they) 23:12, 16 December 2024 (UTC)
Idea lab
Toward helping readers understand what Wiki is/isn’t
I’ve often noticed confusion on the part of both general readers and editors about what Wikipedia articles are AND aren’t. Truth be told, I suspect all of us editors probably had it not only before becoming editors but also well into our Wiki work.
So I got thinking that perhaps a cute (but not overly so!) little information box that would fly in or otherwise attract attention upon accessing a new article could help halt some common misunderstandings or lack of awareness of general readers. Because I think most editors here at the Pump would be aware of many such examples, I hope you’ll forgive my not providing e.g.’s.
(Of course if such an info box were put in place, there’d also need to be a way for readers not to see it again if they so wish.)
I started to check elsewhere at the Pump to see if a similar idea had ever been submitted before, but I couldn’t figure out a relevant search term. And I didn’t want to suggest an outright proposal if anything similar had in fact ever been proposed. So IDEA LAB just seemed a good place to start the ball rolling. Looking forward to seeing where it leads. Augnablik (talk) 10:58, 17 November 2024 (UTC)
- I'm a strong supporter of providing more information about how Wikipedia works for readers, especially if it helps them get more comfortable with the idea of editing. Readers are editors and editors are readers—this line should be intentionally blurred. I don't know if a pop up or anything similar to that is the right way to go, but I do think there's something worth considering here. One thing I've floated before was an information panel featured prominently on the main page that briefly explains how every reader is an editor and gives some basic resources. Thebiguglyalien (talk) 17:49, 17 November 2024 (UTC)
- The problem with putting stuff on the main page is that many (probably most) readers get to Wikipedia articles from a search engine, rather than via the main page. Phil Bridger (talk) 17:57, 17 November 2024 (UTC)
- Another issue is a large number of these users tend to be on mobile devices, which have known bugs with regards to things like this. —Jéské Couriano v^_^v threads critiques 20:45, 17 November 2024 (UTC)
- The main page gets 4 to 5 million page views each day. And even so, I would guess that people who go out of their way to read the main page are better candidates to become frequent editors than people who treat Wikipedia like it's part of Google. Thebiguglyalien (talk) 15:12, 18 November 2024 (UTC)
- I wasn't thinking of the main page. What I had in mind was that whenever someone requests to go to an article — irrespective of how he or she entered Wikipedia — the information box would fly in or otherwise appear. Augnablik (talk) 17:30, 18 November 2024 (UTC)
- I know you weren't thinking of the main page. My reply was to Thebiguglyalien. Phil Bridger (talk) 20:23, 18 November 2024 (UTC)
- So I see now. Sorry. Augnablik (talk) 09:44, 20 November 2024 (UTC)
- What sort of confusion are you seeking to dispel? Looking over WP:NOT, basically everything on there strikes me as "well, DUH!". I honestly can't understand why most of it has had to be spelled out. --User:Khajidha (talk) (contributions) 13:04, 18 November 2024 (UTC)
- @Khajidha, I don't see the box as ONLY to dispel confusion but ALSO to point out some strengths of Wikipedia that probably readers wouldn't have been aware of.
- A few things that came to my mind: although Wikipedia is now one of the world's most consulted information sources, articles should be considered works in progress because ... however, there are stringent requirements for articles to be published, including the use of strong sources to back up information and seasoned editors to eagle-eye them; writing that is objective and transparent about any connection between writers and subjects of articles ... and (this last could be controversial but I think it would be helpful for readers in academia) although not all universities and academic circles accept Wiki articles as references, they can serve as excellent pointers toward other sources.
- If the idea of presenting an information box including the above (and more) is adopted, a project team could work on exactly what it would say and look like. Augnablik (talk) 18:58, 18 November 2024 (UTC)
- I think that considerably overstates reality (the requirements are not stringent, sources do not have to be strong, many things are not checked by anyone, much less by seasoned editors, hiding COIs is moderately common...).
- BTW, there has been some professional research on helping people understand Wikipedia in the past, and the net result is that when people understand Wikipedia's process, they trust it less. This might be a case of Careful What You Wish For. WhatamIdoing (talk) 19:20, 18 November 2024 (UTC)
- Ooops. Well, if stringent requirements, etc., overstate reality, then official Wiki guidance and many Teahouse discussions are needlessly scaring many a fledgling editor! 😱 Augnablik (talk) 19:40, 18 November 2024 (UTC)
- All of these points also fall into the "well, DUH!" category. I did, however, want to respond to your statement that "not all universities and academic circles accept Wiki articles as references". I would be very surprised if any university or serious academic project would accept Wikipedia as a reference. Tertiary sources like encyclopedias have always been considered inappropriate at that level, as far as I know. --User:Khajidha (talk) (contributions) 19:38, 18 November 2024 (UTC)
- Point taken about encyclopedias being generally unacceptable in academic writing.
- But as we’re having this discussion in an idea lab, this is the perfect place to toss the ball back to you, Khajidha, and ask how you would describe Wikipedia for new readers so they know how it can be advantageous and how it can’t?
- As I see it, that sort of information is a real need for those who consult Wikipedia — just as customers appreciate quick summaries or reviews of products they’re considering purchasing — to get a better handle on “what’s in it for me.” Augnablik (talk) 20:05, 18 November 2024 (UTC)
- I think the logo at the top left already does a pretty good job: "Wikipedia: The Free Encyclopedia". Especially if you look at the expanded form we use elsewhere: "Welcome to Wikipedia, the free encyclopedia that anyone can edit."--User:Khajidha (talk) (contributions) 12:39, 19 November 2024 (UTC)
- @Khajidha, a mere tag saying "The Free Encyclopedia" seems to me just a start in the right direction. The addition of "that anyone can edit" adds a little more specificity, although you didn't mention anything about writing as well as editing. Still, I think these tags are too vague as far as what readers need more insight about.
- I'm working on a list of things I'd like to bring to readers' attention, but I'd like to put it away tonight and finish tomorrow. At that point, I'll humbly request you to "de-DUH" your evaluation of my idea. Augnablik (talk) 17:01, 20 November 2024 (UTC)
- Seems to me the problem is that people don't understand what an encyclopedia is. That's a "them" problem, not an "us" problem. And what exactly do these readers think editing the encyclopedia would be that doesn't include writing it? User:Khajidha (talk) (contributions) 17:45, 20 November 2024 (UTC)
- Wikipedia is very different from the historical concept of encyclopedia. The open editing expands the pool of editors, at the expense of accuracy. -- Shmuel (Seymour J.) Metz Username:Chatul (talk)
- Wikipedia may have put traditional general encyclopedias out of business, or at least made them change their business model drastically, but it does not define what an encyclopedia is. One example is that Wikipedia relies largely on secondary sources, but traditional encyclopedias, at least for the most important articles, employed subject matter experts who wrote largely on the basis of primary sources. It is our job to explain the difference. Phil Bridger (talk) 20:03, 20 November 2024 (UTC)
- After a longer gap than I thought it would take to create a list of things I believe all readers need to be aware of from the get-go about what Wikipedia is and isn't, due to some challenges in other departments of life, here's what I came up with. It would be in sections, similar to what you see below, each surrounded by a clip art loop, perhaps golden brown, and perhaps a few other pieces of clip art to set it off visually. I wish I knew how to separate paragraphs with line spacing ... I know this looks a little squished.
- _____________________________________
- New to reading Wikipedia articles? Here are some helpful things for you to be aware of about Wikipedia. They'll help you get clearer ideas of how you can use the articles to best advantage.
- If you'd like to go into more depth about all this, and more, just go to the article in Wikipedia about itself by typing WIKIPEDIA in the Wikipedia search field.
- Wikipedia is a different kind of encyclopedia.
- — Its articles can be written and edited by anyone.
- — They’re supposed to be based completely on reliable outside sources.
- — They can be updated at any time, thus allowing for quick corrections or additions if needed.
- — Wikipedia is free.
- That’s the main difference between Wikipedia and traditional encyclopedias.
- BUT:
- All encyclopedias serve as starting points where readers can find out about information — especially the main thinking about particular subjects — then follow up as they wish.
- Students and researchers: keep in mind that schools and professional research journals don’t accept encyclopedias as references for written papers, but do encourage using them to get some ideas with which to go forward.
- Wikipedia has become popular for good reason.
- — Wikipedia is the world’s largest-ever encyclopedia.
- — It’s consistently ranked among the ten websites people visit most.
- — Because it’s all online, it’s easy to access.
- — Because it’s highly interactive, it’s easy to move around from topic to topic.
- Quality standards for writing articles are in place and in action behind the scenes.
- — Wikipedia has high standards for choosing the subjects of articles.
- — Wikipedia also has high standards for writing articles, especially freedom from bias.
- — Certain editors are assigned to ensure that articles follow Wikipedia standards.
- — Although differences of opinions naturally arise about whether a particular article does so, there are sets of procedures to work them out and arbiters to step in as needed. Augnablik (talk) 10:18, 25 November 2024 (UTC)
- The <br /> tag should take care of line spacing. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:49, 25 November 2024 (UTC)
- Is this possible to do in Visual Editor instead (I hope)? Augnablik (talk) 13:52, 25 November 2024 (UTC)
- Why would you put information about "reading Wikipedia articles" in an editing environment?
- Also, several things you've written are just wrong. Wikipedia is not considered a "highly interactive" website. "Certain editors" are not "assigned to ensure" anything. Wikipedia does not have "high standards for writing articles", and quite a lot of readers and editors think we're seriously failing in the "freedom from bias" problem. We might do okay-ish on some subjects (e.g., US political elections) but we do fairly poorly on other subjects (e.g., acknowledging the existence of any POV that isn't widely discussed in English-language sources). WhatamIdoing (talk) 20:14, 28 November 2024 (UTC)
- Actually, I think a more magnetic format for this tool I'm hoping can one day be used on Wikipedia would be a short series of animated "fly-ins" rather than a static series of points with a loop around each set thereof. Augnablik (talk) 13:51, 25 November 2024 (UTC)
- @Augnablik, personally, I think your idea would be great and would help bring new editors to the project, especially with these messages, which seem more focused on article maintenance (more important nowadays imo) than article creation.
- JuxtaposedJacob (talk) | :) | he/him | 02:32, 5 December 2024 (UTC)
- Idea Labmates …
- Because I had such high hopes of being on the trail of something practical to help prevent some of the main misunderstandings with which readers come to Wikipedia — and at the same time to foster awareness of how to use it to better advantage — I wonder if a little spark could get the discussion going again. Or does the idea not seem worth pursuing further? Augnablik (talk) 11:05, 30 November 2024 (UTC)
- I guess not.
- At least for now.
- 📦 Archive time. Augnablik (talk) 02:53, 3 December 2024 (UTC)
- I hope you won't be disheartened by this experience, and if you have any other good ideas will share them with us. There are two stages to getting an idea implemented in a volunteer organisation:
- Getting others to accept that it is a good idea.
- Persuading someone to implement it.
- You have got past stage 1 with me, and maybe others, but I'm afraid that, even if I knew how to implement it, it wouldn't be near the top of my list of priorities. Phil Bridger (talk) 09:17, 3 December 2024 (UTC)
- Thank you, Phil. No, not disheartened … I think of it as an idea whose time has not yet come. I’m in full agreement about the two stages of idea implementation, plus a couple more in between to lead from one to the other.
- When we in the creative fields recognize that continuum and get our egos out of the way, great things begin to happen. Mine is hopefully drying out on the line.😅 Augnablik (talk) 09:41, 3 December 2024 (UTC)
New users, lack of citation on significant expansion popup confirmation before publishing
There are many edits where a large amount of information is added without citations. For new users, wouldn't it be good if an edit that adds a large amount of text without citations were detected when they go to publish it, and a popup of some sort appeared directing them to WP:NOR and asking them to confirm whether they wish to make the edit? I think you should be able to turn it off easily (as in ticking "don't remind me again" within the popup), but my impression is that many make edits without being familiar with the concept of rules such as WP:NOR. 𝙏𝙚𝙧𝙧𝙖𝙞𝙣𝙢𝙖𝙣地形人 (talk) 01:36, 19 November 2024 (UTC)
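As a rough illustration of the proposal, here is a hypothetical sketch of the kind of heuristic such a confirmation popup could use. This is not the actual mw:Edit check implementation; the function name and the character threshold are invented for illustration only.

```python
# Hypothetical heuristic: flag an edit that adds substantial prose
# without adding any <ref> citation tags.

def needs_citation_prompt(old_text: str, new_text: str,
                          min_added_chars: int = 500) -> bool:
    """Return True if the edit adds a large amount of text but no new reference."""
    added_chars = len(new_text) - len(old_text)
    added_refs = new_text.count("<ref") - old_text.count("<ref")
    return added_chars >= min_added_chars and added_refs <= 0
```

A real implementation would diff the revisions properly rather than compare raw lengths, and would honour a per-user "don't remind me again" preference as suggested above.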
- You're describing mw:Edit check. Aaron Liu (talk) 02:15, 19 November 2024 (UTC)
- We can deploy it. Trizek_(WMF) (talk) 13:15, 19 November 2024 (UTC)
- Ooh, I didn't know we and dewiki didn't get deployment. Is there a reason? Aaron Liu (talk) 14:18, 19 November 2024 (UTC)
- If I'm thinking of the right tool, there was a discussion (at one of the village pumps I think) that saw significant opposition to deployment here, although I can't immediately find it. I seem to remember the opposition was a combination of errors and it being bundled with VE? I think Fram and WhatamIdoing were vocal in that discussion so may remember where it was (or alternatively what I'm mixing it up with). Thryduulf (talk) 15:21, 19 November 2024 (UTC)
- @Aaron Liu, Edit check is available on the visual editor. Having it on wikitext won't make sense as the goal is to teach users to add citations, not to teach them both about citations and wikitext. Let's reduce complexity. :)
- And the visual editor is still not the default editor at de.wp or en.wp. I advised working on deploying both in parallel so that newcomers would have a better editing experience all at once (less wikitext, more guidance). Why am I not working on it now? Because it would take time. Now that the visual editor has been used for years at all other wikis to make millions of edits, we should consider making it the default editor at English Wikipedia for new accounts. It could be a progressive deployment. I've not yet explored the past reasons why English Wikipedia didn't want the visual editor deployed, again for time reasons. But we would support any community initiative regarding VE deployment for sure.
- We could deploy Edit check without VE, but I'm afraid of a low impact on newcomers: they are less likely to be helped as long as VE remains the second editor.
- @Thryduulf, there was a discussion about Edit check in the past, you are correct. It actually covered multiple topics. I'll let you re-read it if you like; I didn't find "significant opposition" there, more questions about Edit Check, VE, citations and more, concerns about Edit Check and VE integration, and support for a better experience for newcomers (as long as it doesn't impact existing personal experiences).
- Trizek_(WMF) (talk) 15:37, 19 November 2024 (UTC)
- If you didn't see "significant opposition" there, then perhaps reread it? The tool you deployed elsewhere had no measurable positive impact (when tested on Simple or French Wikipedia). As for past reasons why enwiki didn't want VE deployed, let's give you one: because when VE was deployed here, it was extremely buggy (as in, making most articles worse when used, not better), but the WMF refused to undo their installation here (despite previous assurances) and used enwiki as a testing ground, wasting tons of editor hours and creating lots of frustration and distrust towards the WMF. This was only strengthened by later issues (Flow, Gather, Wikidata short descriptions) which all followed the same pattern, although in those cases we eventually, after lots of acrimonious debates and broken WMF promises, succeeded in getting them shut down. As shown in the linked discussion, here again we have two instances of WMF deployments supported by test results where in the first instance reality showed that these were probably fabricated or completely misunderstood, as in reality the results were disastrous; and in the second instance, Edit Check, reality showed that the tool wasn't an improvement over the current situation, but even when spelled out this was "misread" by the WMF. Please stop it. Fram (talk) 15:50, 19 November 2024 (UTC)
- Could you provide a couple of links to comments from people other than yourself, and which specifically opposed EditCheck (not the 'make the visual editor the default' or 'Citoid has some problems' sub-threads)? I just skimmed through the 81 comments from 19 editors in the proposal that Robertsky made, and while I might have missed something, your first comment, which was the 69th comment in the list, was the first one to oppose the idea of using software to recommend that new editors add more citations.
- Most of the discussion is not about EditCheck or encouraging refs. Most of it is about whether first-time editors should be put straight into the visual editor vs asking them to choose. The responses there begin this way:
- "I thought Visual Editor is already the default for new accounts and unregistered editors?" [37]
- "In theory, this sounds like a great idea. I'm eager to try it out..." [38]
- "I'd support making Visual Editor the default..." [39]
- "Agree 100%." [40]
- "I totally agree that VE should be the default for new users." [41]
- which is mostly not about whether to use software to encourage newbies to add more citations (the second quotation is directly about EditCheck; not quoted are comments, including mine, about whether it's technically necessary to make the visual editor 'the default' before deploying EditCheck [answer: no]).
- Then the thread shifts to @Chipmunkdavis wanting the citation templates to have "an easily accessed and obvious use of an
|author=
field, instead forcing all authors into|last
and|first
", which is about how the English Wikipedia chooses to organize its CS1 templates, and @Thryduulf wanting automatic ref names that are "human-friendly" (to take the wording RoySmith used), both of which entirely unrelated to whether to use software to encourage new editors to add more citations. - I see some opposition to putting new editors into the visual editor, and I see lots of complaints about automated refs, but I don't see any opposition from anyone except you to EditCheck itself. Please provide a couple of examples, so I can see what I missed? WhatamIdoing (talk) 17:57, 19 November 2024 (UTC)
- "which is about how the English Wikipedia chooses to organize its CS1 templates" is perhaps one way to say that the VE has no functionality to accept the synonyms, which I discovered in a few disparate conversations following that thread. I still have a tab open to remind me to put a note about phab on this, it's really not ideal have VE editors shackled with the inability to properly record author names. CMD (talk) 01:42, 20 November 2024 (UTC)
- VisualEditor is perfectly capable of accepting actual aliases such as
|author=
, and even non-existent parameters such as|fljstu249=
if you want (though I believe the citation templates, unlike most templates, will emit error messages for unknown parameters). It just isn't going to "suggest" them when the CS1 maintainers have told it not to do so. WhatamIdoing (talk) 05:12, 20 November 2024 (UTC)- If you know how to solve the problem, please solve the problem. Per Help talk:Citation Style 1/Archive 95, "The solution to the ve-can't/won't-use-already-existing-alias-list problem lies with MediaWiki, not with editors adding yet more parameters to TemplateData". As it stands, VE doesn't do it, and I've seen no indication that they consider it an issue. CMD (talk) 12:00, 20 November 2024 (UTC)
- If you want this wikitext:
{{cite news |author=Alice Expert |date=November 20, 2024 |title=This is the title of my news story |work=The Daily Whatever}}
- which will produce this citation:
- Alice Expert (November 20, 2024). "This is the title of my news story". The Daily Whatever.
- then (a) I just did that in the Reply tool's visual mode, so it obviously can be done without any further coding in MediaWiki, VisualEditor, or anything else, and (b) you need to convince editors that they want "Alice Expert" at the start of citations instead of "Expert, Alice". WhatamIdoing (talk) 21:07, 20 November 2024 (UTC)
- No, I don't have to convince editors that they want "Alice Expert" instead of "Expert, Alice". The issue is, as covered in the original discussion with some good input from others, non-western name formats. It is a cultural blindspot to assume all names fall into "Expert, Alice" configurations, and it seems that it is a blindspot perpetuated by the current VE expectations. CMD (talk) 01:39, 21 November 2024 (UTC)
- More precise link to the conversation: Help talk:Citation Style 1/Archive 95#Allowing Visual Editor/Citoid Citation tool to use more than one name format Trizek_(WMF) (talk) 11:02, 21 November 2024 (UTC)
- @Chipmunkdavis, I guess I'm having trouble understanding what you want.
- You said in the linked discussion that "My understanding is that the VE tool does not allow for the use of aliases". I'm telling you: Your understanding is wrong. It's obviously possible to get
|author=
in the visual editor, because I did that. Either this diff is a lie, or your understanding is mistaken. I'm going with the latter. |author=Mononym
is already possible. So what change are you actually asking for? - The linked discussion seems, to my eyes, to be a long list of people telling you that if you don't like the description used in the TemplateData (NB: not MediaWiki and not VisualEditor), then you should change the description in the TemplateData (NB: not MediaWiki and not VisualEditor) yourself. You say the devs told you that, and I count at least two other tech-savvy editors who told you to WP:SOFIXIT already. Neither the part that says "Last name" nor the part that says "The surname of the author; don't wikilink, use 'author-link'; can suffix with a numeral to add additional authors" is part of either the visual editor or MediaWiki. Any editor who can edit Template:Cite news/doc can change those words to anything they want. WhatamIdoing (talk) 20:22, 21 November 2024 (UTC)
- Having to type source wikitext completely defeats the purpose of the visual editor; why not just type in the wikitext editor? This "solution" is a blaring technicality. Perhaps you should read the last four replies in the linked discussion. Aaron Liu (talk) 00:00, 22 November 2024 (UTC)
- Right, this is the sort of odd reply this topic inexplicably gets. You can just type in source code in the visual editor, I mean, why have visual editor at all. Just change the description so people can pretend someone's name is their last name, now being suggested yet again as a simple SOFIXIT, and no I'm not going to deliberately and formally codify that we should mislabel people's names, for what I did think before these various discussions were obvious reasons. CMD (talk) 02:08, 22 November 2024 (UTC)
- @Chipmunkdavis, what I'd like to clarify is:
- If I type
|author=Sting
vs|last=Sting
, will this make any difference to anyone (human or machine) that is not looking at the wikitext? That last bit about not seeing the wikitext is the most important part. If the complaint is entirely about what's in the wikitext, then Wikipedia editors should treat it as the equivalent of a whitespacing 'problem': it's okay to clean it up to your preferred style if you're otherwise doing something useful, but it's not okay to force your preferred style on others just for the sake of having it be 'the right way'.
- Those two are used as exact synonyms by the CS1 code, in which case it makes no practical difference which alias is used, or
- Those two are handled differently by the CS1 code (e.g. emitting different microformatting information), in which case the CS1 code should not declare them to be aliases. AIUI aliases are only supposed to be used for exact substitutes.
- So which is it? WhatamIdoing (talk) 20:25, 28 November 2024 (UTC)
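For reference, a minimal wikitext sketch of the two spellings under discussion (the parameter values are illustrative; per the Template:Cite news documentation, |author= is listed as an alias of |last=):

```wikitext
<!-- Alias spelling: the name renders exactly as typed -->
{{cite news |author=Sting |title=Example story |work=The Daily Whatever |date=November 20, 2024}}

<!-- Last-name spelling: also renders as "Sting" here, since no |first= is given -->
{{cite news |last=Sting |title=Example story |work=The Daily Whatever |date=November 20, 2024}}
```

If the two are exact synonyms in the CS1 code, both lines should produce the same rendered citation and the same COinS metadata; any divergence would indicate they are not true aliases.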
- Misnaming someone is not a style choice. (It is literally an item explicitly mentioned in the UCOC.) Even if it wasn't, your professed solution is that a new editor open up the visual editor and see "Last name: The surname of the author; don't wikilink, use 'author-link' instead; can suffix with a numeral to add additional authors. Please also use this field for names which don't have a first name last name structure."? That doesn't seem sensible or effective. CMD (talk) 00:12, 29 November 2024 (UTC)
- Where does the "misnaming" happen? To be clear, I'm expecting an answer that either sounds like one of these two:
- "Only in the wikitext, but that's still very bad".
- "In a reader/user-facing location, namely _____" (where the blank might be filled in with something like "in the COinS microformatting").
- Which is it? WhatamIdoing (talk) 07:49, 30 November 2024 (UTC)
- I would refer to the previous discussions above and elsewhere where it has already been extensively covered that both of those options are true. It would be in the wikitext, and is currently in the visual editor citation creator. CMD (talk) 08:34, 30 November 2024 (UTC)
- "Both of those options are true" is not possible, when you name as the two locations:
- a place readers do not see ("in the wikitext") and
- another place readers do not see ("in the visual editor citation creator").
- So again: Where is the place readers see this "misnaming"? WhatamIdoing (talk) 19:06, 30 November 2024 (UTC)
- It feels deeply uncivil to say "So again" for a question you haven't asked before. It is really surprising to see "misnaming" quoted as if it's something incorrect; it's hard to word this but that comes off as a shocking level of continued cultural insensitivity. At this point the various questions at hand have been answered multiple times in the different discussions, and we're wandering again towards odd red herrings that have little relation to the fact that VE source generator users are forced into a single naming system, something long solved by the non-VE source generator. I recommend the link RoySmith provided in the previous discussions if you haven't already[42], and remain hopeful one day that others will try to care about the non Alice Experts of the world. CMD (talk) 02:13, 1 December 2024 (UTC)
- Sorry, I thought I had already been quite clear about that point:
- Are we now agreed that no readers and no actual article content are affected by this? WhatamIdoing (talk) 00:39, 4 December 2024 (UTC)
- This is coming off as deliberately obtuse. The issue is for the person using the visual editor, the new editors we are trying to cultivate. CMD (talk) 15:58, 13 December 2024 (UTC)
- New editors see the VE citation creator, and that is the concern. Aaron Liu (talk) 03:56, 1 December 2024 (UTC)
- People using the visual editor's template editor never see
|last=
on the CS1 templates. That is only visible to people using wikitext. - People using the visual editor's template editing tools see the locally defined TemplateData label "Last name", which CMD is free to change at any time to anything he can get consensus for, e.g., "Last name, sole name, or non-Western style name". WhatamIdoing (talk) 00:44, 4 December 2024 (UTC)
- Editing the templatedata for |last= has been verily rejected in the discussion CMD already linked. Aaron Liu (talk) 17:10, 4 December 2024 (UTC)
- So we want text that is defined in TemplateData to say something different, but the method of changing that must not involve changing the text that is defined in TemplateData.
- I don't think that is a solvable problem, sorry. WhatamIdoing (talk) 23:11, 4 December 2024 (UTC)
- It's an eminently solvable problem; the radio button idea has already been raised. It just takes a bit of actually thinking that getting people's names right is an issue, and not changing the actual question at hand. CMD (talk) 15:57, 13 December 2024 (UTC)
- 1. How did you do that?
2. The author parameter is useful and used iff the author has no last name; e.g., byline being an organization, mononymous person, no author stated, etc. This is documented at the citation-style help pages. Aaron Liu (talk) 22:11, 20 November 2024 (UTC)- The
|author=
parameter behaves the same as the|last=
parameter, so there's little point in changing the wikitext to say|author=
. - (In this case, I took the quick and dirty approach of typing out the template by hand, and pasting it in. The Reply tool's visual mode normally won't let you insert a template at all, because block-formatted templates completely screw up the discussion format. Normally, if there's no TemplateData to provide you with the options, then you'd click on the "+Add undocumented parameter" button and type in whatever you wanted. If there is TemplateData, then see my earlier comment that "It just isn't going to "suggest" them when the CS1 maintainers have told it not to do so.") WhatamIdoing (talk) 23:08, 20 November 2024 (UTC)
- It's semantically different, like the em tag vs italicizing and whatnot. And as I've said before, the documentation doesn't suggest it so that the clueless will not use both |last and |author. Aaron Liu (talk) 23:57, 20 November 2024 (UTC)
- I've never had much sympathy for prioritizing COinS. If it's an area that interests you, then I suggest watching Wikipedia:WikiProject Microformats. WhatamIdoing (talk) 00:30, 21 November 2024 (UTC)
If someone adds |authorn= as a separate parameter, I fear that we will see an increase in the number of articles that populate Category:CS1 errors: redundant parameter because OMG!-there's-an-empty-box-in-the-form;-I-must-fill-it. This is why I suggested radio buttons for aliases; something that MediaWiki would need to implement. Aaron Liu (talk) 12:22, 20 November 2024 (UTC)
- You missed that none of them tested it or checked it on other wikipedia versions, and that no support came along after I had tested it and posted my results? No surprise here... Fram (talk) 19:17, 19 November 2024 (UTC)
- No comments came along after that either, so we don't really know. Aaron Liu (talk) 19:18, 19 November 2024 (UTC)
- There's a big gap between "The discussion stopped" and "There was significant opposition in this discussion".
- In terms of EditCheck, I found most of the discussion to be off-topic, but I can honestly only find one editor (you) who opposed it in that discussion. I assume your failure to provide links to any other statement of opposition means you also honestly can't find a single comment in that discussion from anyone who agreed with you – just an absence of further comments, and an unprovable assumption on your part that it's due to everyone agreeing with you. WhatamIdoing (talk) 19:28, 19 November 2024 (UTC)
- Didn't stop you from making any assumptions or presenting things in the most WMF-favorable light. Seems like VE all over again, only then you had the excuse of being paid by the WMF. Fram (talk) 19:45, 19 November 2024 (UTC)
- I don't think I presented the discussion in the most WMF-favorable light. The discussion started off pretty enthusiastic, but it was mostly enthusiastic about something other than EditCheck itself. It then turned into a long digression into something completely unrelated.
- (My own contributions to that discussion were technical in nature: It doesn't require the visual editor as the default; code may already exist for an unrelated change that someone wants; stats may already exist for something close to the numbers someone else wants.) WhatamIdoing (talk) 19:56, 19 November 2024 (UTC)
- (ec) Fram, it is precisely because I reread the conversation that I wrote my previous message. We have the right to disagree, but it should remain civil and not convey accusations of bad faith. The way you try to depict me as a dishonest person is not acceptable at all.
- I'll let other participants have a look at the previous discussion we linked, take a look at the data we provided, and form their own opinion. We aren't the two people who will decide on a deployment here: I'm just the messenger, and you are not the person who has the final word on behalf of everyone. Trizek_(WMF) (talk) 18:00, 19 November 2024 (UTC)
- Tough luck. You posted a dishonest reply last time we had this discussion. If it had been a genuine error in that previous discussion, you should have just said so. Instead, you not only let your error stand, but then come here and claim that there was no significant opposition to Edit Check in that previous discussion, ignoring the one person who tested it and posted results. And like I said in that discussion, the data the WMF provides is not to be trusted at all, as seen from other deployments. Which I already stated and you again ignore completely. But, like I said, the WMF (and previous WMF employees like Whatamidoing) are very good at civil bullshit, while I am not so good at the civil part but rather good at cutting through the bullshit. Fram (talk) 19:17, 19 November 2024 (UTC)
- Since there are non-native English speakers in this discussion, I'd like to clarify that "dishonest", in English, means that the person deliberately told the opposite of the truth. For example, it is dishonest to say "I love Windows ME", when you actually hate it.
- However:
- Having incorrect or outdated information is not "dishonest".
- Caring about a particular benefit more than a different problem is not "dishonest".
- Disagreeing with you, or with a hypothetical average reasonable person, is not "dishonest".
- There's a reason that English has an idiom about an "honest mistake": It's because it's possible to be factually wrong without being dishonest. For example, if you say "Oh, User:Example said something yesterday", but upon further inspection, it was a different user, or a different day. Or even if you say "The previous discussion shows significant opposition to EditCheck", but upon further inspection, nobody except you publicly opposed it. Such a sentence is only dishonest if the speaker believes, at the time of speaking, that the statement is factually wrong. Unless the speaker believes themselves to be speaking falsehoods, it's not actually dishonest; it's only a mistake or an error.
- Additionally, I think it would be a good idea to review Wikipedia:No personal attacks#What is considered to be a personal attack?. I suggest paying specific attention to these two points:
- "Using someone's affiliations as an ad hominem means of dismissing or discrediting their views" – Claiming, or even implying, that WMF staff have a tendency to be dishonest is probably a violation of this point in the policy.
- "Insulting or disparaging an editor is a personal attack regardless of the manner in which it is done." – Claiming that anyone is "dishonest", especially when the difference between your view and theirs is a matter of opinion, is very likely a violation of this policy. It doesn't officially matter if the manner in which you say this is "you are dishonest" or "your replies are dishonest"; it's still insulting and disparaging another editor.
- WhatamIdoing (talk) 19:45, 19 November 2024 (UTC)
- Like I said, one can post all untruths one wants as long as one does it civilly. Reminds me of the discussions we had about VE when it was disastrously deployed but all you did as a liaison was defend the WMF no matter what. And I didn't say their replies were dishonest because they are a WMF employee, just that it is typical behaviour for many of them apparently. Perhaps reread the breakdown of the Gather discussions I gave below, or reread the countless discussions about Flow, VE, descriptions, ... There are some good apples among them, but not too many. Fram (talk) 19:51, 19 November 2024 (UTC)
- I believe you'll find my view of visual editor circa July 2023 right here in the barnstar I gave you. I wouldn't describe it as "defend the WMF no matter what", but perhaps you will look at it and refresh your memory of the time. WhatamIdoing (talk) 20:00, 19 November 2024 (UTC)
- 2013, not 2023. July was early days in VE testing, when I still thought you were helpful. A few months later I had become wiser. Fram (talk) 20:20, 19 November 2024 (UTC)
- If you need a reminder, here is just one of many examples from that terrible period: Wikipedia:VisualEditor/Feedback/Archive 2013 13#Diligent testing by Fram, my comment of 08:03 12 December.
- For what its worth, I do think a RfC can be made once the proposed details of the deployment is firmed up:
- Do we make VE as the default for new editors?
- Do we enable EditCheck as it is?
- Aside, if we retain the current arrangement, i.e. letting new/anon editors select their preferred editor, can we change the buttons to be more balanced in colours and sizing? These do affect one's preference in choosing which button to click. – robertsky (talk) 18:16, 19 November 2024 (UTC)
- robertsky, that's two RFCs, and – respectfully – conflating the two questions was a primary contributor to how far off the rails this conversation got last time. The UX alterations are probably best brought up at meta or mw for the skins devs to consider. Folly Mox (talk) 18:55, 19 November 2024 (UTC)
- Gather was dropped after 3 months (without any "broken WMF promises" nor any time for them to have given such promises or to have acrimoniously debated), and Wikidata SDs seem to be deployed and working completely fine. Aaron Liu (talk) 18:25, 19 November 2024 (UTC)
- Gather was deployed in March 2015 and immediately got severe backlash at the announcement: Wikipedia:Administrators' noticeboard/Archive270#Extension:Gather launching on beta. No good answers followed. So three weeks later we get Wikipedia:Administrators' noticeboard/Archive270#Moderation of Collections?, where we get (laughable) promises of what the WMF will do to solve some of the most basic problems of this tool they rolled out on enwiki but hadn't really thought about at all it seems. Instead, they created a new Flow page on enwiki for this tool (Wikipedia:Miscellany for deletion/Wikipedia:Gather/User Feedback) despite Flow being removed from enwiki long before this. So in January 2016 (hey, that's already 10 months, not 3), Wikipedia:Village pump (proposals)/Archive 130#Disabling Gather? was started. On 22 January 2016, an answer was promised by the WMF "next week" (section "A WMF reply next week"): "by next week, the Gather team will have a major update to share about the feature". Things escalated, so another WMF person came along 6 days later to promise "we will be putting together this analysis starting now with the intention of sharing publicly next week with a decision the week after." (section "A Response from the WMF"). So instead of some great announcement after 1 week, we are now 6 days further and will get big news 2 weeks later... So, more than 2 weeks later, 12 February, we get "the analysis has taken longer than I anticipated. I'll post the results as soon as I can." So, on the 19th, they posted a "proposal" to which others replied "that proposal is an insult to the community." and "this smacks of yet more stalling tactics and an attempt to save face". Only when the RfC was closed with truly overwhelming support to disable it did they finally relent.
- Do you really need a similar runthrough of Wikidata short descriptions, which are (or should be) disabled everywhere on enwiki and replaced by local descriptions instead? Or will you admit that perhaps you didn't remember details correctly? Fram (talk) 19:41, 19 November 2024 (UTC)
- Yeah man I don't remember anything well, I wasn't there. I'm just reading random things I can find to see what you're talking about, such as the MediaWiki page that states development was suspended by July 2015, but as you've pointed out, that is different from disabling, and thank you for helping me find that. Thanks for your links on Gather.
By no fault of its own, Shortdesc helper made me conflate WD descriptions and SDs. Aaron Liu (talk) 20:42, 19 November 2024 (UTC)
- I never suggested deploying it on the source editor. Having not fully read the above discussion yet, it currently seems unreasonable that it's not deployed in the visual editor on enwiki and dewiki (while preserving the current "level of defaultness" of the visual editor itself instead of increasing the defaultness). Aaron Liu (talk) 16:30, 19 November 2024 (UTC)
- @Aaron Liu, I never implied you suggested it, I was just one step ahead telling you that it is not available on source editor. :) We can deploy Edit check without changing the "level of defaultness" of the visual editor itself, but the impact might not be the same. Trizek_(WMF) (talk) 18:09, 19 November 2024 (UTC)
- If you didn't see "significant opposition" there, then perhaps reread it? The tool you deployed elsewhere had no measurable positive impact (when tested on Simple or French Wikipedia). As for past reasons why enwiki didn't want VE deployed, let's give you one: because when VE was deployed here, it was extremely buggy (as in, making most articles worse when used, not better), but the WMF refused to undo their installation here (despite previous assurances) and used enwiki as a testing ground, wasting tons of editor hours and creating lots of frustration and distrust towards the WMF. This was only strengthened by later issues (Flow, Gather, Wikidata short descriptions) which all followed the same pattern, although in those cases we eventually, after lots of acrimonious debates and broken WMF promises, succeeded in getting them shut down. As shown in the linked discussion, here again we have two instances of WMF deployments supported by test results where in the first instance reality showed that these were probably fabricated or completely misunderstood, as in reality the results were disastrous; and in the second instance, Edit Check, reality showed that the tool wasn't an improvement over the current situation, but even when spelled out this was "misread" by the WMF. Please stop it. Fram (talk) 15:50, 19 November 2024 (UTC)
- (ec) Probably Wikipedia:Village pump (proposals)/Archive_213#Deploying_Edit Check on this wiki. Having reread that thread, it combines all WMF rollout issues into one it seems, from starting with false requirements over a testing environment which isn't up-to-date at all to completely misreading everything that is said into something supposedly positive, ignoring the stuff that contradicts their "this must be pushed no matter what" view. But all in a very civil way, there's that I suppose... Fram (talk) 15:39, 19 November 2024 (UTC)
- What an utterly weird objective for that tool "Newcomers and Junior Contributors from Sub-Saharan Africa will feel safe and confident enough while editing to publish changes they are proud of and that experienced volunteers consider useful." Very neocolonial. Fram (talk) 15:25, 19 November 2024 (UTC)
- Indeed. I provided some detailed feedback about this, based on my experience of African editors and topics – see Dark Continent. Andrew🐉(talk) 16:02, 19 November 2024 (UTC)
- Different parts of the world have different responses to UX changes. A change that is encouraging in a high-resource setting (or an individualistic culture, or various other things) may be discouraging in others. It is therefore important to test different regions separately. The Editing team, with the strong encouragement of several affiliates, decided to test sub-Saharan Africa first. WhatamIdoing (talk) 19:50, 19 November 2024 (UTC)
- I can't help it if you don't see how insulting and patronizing it is to write "Junior Contributors from Sub-Saharan Africa will feel safe and confident enough while editing". Fram (talk) 20:26, 19 November 2024 (UTC)
- The experienced contributors from sub-Saharan Africa who helped write that goal did not feel it was insulting or patronizing. WhatamIdoing (talk) 21:11, 19 November 2024 (UTC)
- If I'm thinking of the right tool, there was a discussion (at one one of the village pumps I think) that saw significant opposition to deployment here, although I can't immediately find it. I seem to remember the opposition was a combination of errors and it being bundled with VE? I think Fram and WhatamIdoing were vocal in that discussion so may remember where it was (or alternatively what I'm mixing it up with). Thryduulf (talk) 15:21, 19 November 2024 (UTC)
- Ooh, I didn't know we and dewiki didn't get deployment. Is there a reason? Aaron Liu (talk) 14:18, 19 November 2024 (UTC)
- We can deploy it. Trizek_(WMF) (talk) 13:15, 19 November 2024 (UTC)
I've redone my check at Simple wiki, looking at the most recent edits which automatically triggered this tool[43]. 39 instances were automatically indicated as "declined", the other 11 contain 3 edits which don't add a reference anyway[44][45][46] and 6 edits which actually add a reference[47][48][49][50][51][52] (though 3 of these 6 are fandom, youtube and enwiki). And then there is this and this, which technically add a source as well I suppose... Still, 3 probably good ones, 3 probably good faith bad ones, 3 false positives, and 2 vandal ref additions. Amazingly, this is almost the exact same result as during the previous discussion[53]. Fram (talk) 16:21, 19 November 2024 (UTC)
- I think just creating one good source addition is enough cause for deployment (without making VE the default editor), especially since it doesn't appear to be causing additional harm. Aaron Liu (talk) 16:59, 19 November 2024 (UTC)
- If it doesn't create more good source additions than we had before the tool, then there is no reason to deploy something which adds a popup which people usually don't use anyway. Without the popup, there also were new editors adding sources, it's not as if we came from zero. No benefit + additional "noise" for new editors => additional harm. Fram (talk) 17:10, 19 November 2024 (UTC)
- Editors who got a popup did not originally give a source when attempting to publish. That is more good source additions. Aaron Liu (talk) 17:11, 19 November 2024 (UTC)
- @Aaron Liu, have you had a read at the data we gathered around Edit check? Trizek_(WMF) (talk) 18:12, 19 November 2024 (UTC)
- I'm not sure what that has to do with my reply. Fram was disputing that the source additions were good and useful, and I was replying to him that some of them were good, hence edit check should be deployed (plus I'm fairly sure there's another check in the works to check reference URLs against the local RSP) Aaron Liu (talk) 18:21, 19 November 2024 (UTC)
- What you observed (Editors who got a popup did not originally give a source when attempting to publish) is shown in the data we shared.
- We already deployed checks to verify if a link added is not listed in rejection lists and make it more actionable by newcomers. Some users at other wikis expressed a need to have a list of accepted links (the ones that match RSP), but others said that it could prevent new good sources from being added. Thoughts?
- Trizek_(WMF) (talk) 18:37, 19 November 2024 (UTC)
- Isn't that the programmed heuristic for when the popup appears? I don't get what this has to do with any stats. Only URLs in the spamlist are blocked. Edit check should strongly warn against adding sources found generally unreliable by consensus summarized at RSP. Aaron Liu (talk) 18:59, 19 November 2024 (UTC)
- I'm not sure I understand, sorry. Stats are about users adding a citation when asked, compared to when not asked. It is not connected to RSP.
- I take note that you are in favor of expanding reliability information when the user adds a link. Trizek_(WMF) (talk) 20:01, 19 November 2024 (UTC)
- Also, I wonder what you think of the lower revert rate from WMF's study. Aaron Liu (talk) 19:19, 19 November 2024 (UTC)
- Like I said, of the 11 supposed additions, 5 need reverting (as far as the source goes) and 3 didn't add a source. I don't trust WMF numbers at all, but 5/8 needing a revert is hardly an overwhelming success. Even assuming that the 3 good ones wouldn't have added a source otherwise, one then has to make the same conclusion for the others, and the 5 bad ones wouldn't have been included otherwise either. So where is the net benefit and the no harm? Fram (talk) 19:54, 19 November 2024 (UTC)
"I don't trust WMF numbers at all": I'm new to all this, could you elaborate on why?
"the 5 bad ones wouldn't have been included otherwise either": The 5 bad ones would have included no source at all if Edit Check wasn't there. I don't see how adding a blatantly terrible source is worse than adding text without a source at all. Both are checked the exact same way: eye-scanning.
So there you go, net benefit and no harm. Aaron Liu (talk) 20:11, 19 November 2024 (UTC)
- No. I explained it already in the previous discussion. You have made false claims about Gather and so on, but can't be bothered to reply when I take the time to give a detailed answer; but now you are apparently "new to all this" suddenly and want me to again take some time to enlighten you. No. And an unsourced statement is obvious to see, a statement sourced to a bad source is much less obvious. Fram (talk) 20:24, 19 November 2024 (UTC)
- @Aaron Liu, I think this is a "reasonable people can disagree" thing. Some RecentChanges patrollers just revert any new unsourced claim, so if it's unsourced, it's quick to get out of the queue. Faster reverting means success to them, whereas encouraging people to add sources is like whispering a reminder to someone during a game of Mother, May I?: It removes an easy 'win' for the reverter.
- OTOH, having a source attached to bad information has other advantages. It's easy to determine whether it's a copyvio if you have the source, and if you're looking at an article you know something about (e.g., your own watchlist rather than the flood in Special:RecentChanges), then having the source often means that you can evaluate it that much faster ("This is a superficially plausible claim, but I wouldn't trust that website if it said the Sun usually rises in the East").
- For content that shouldn't be reverted, then IMO encouraging a source is always a good thing. For content that should be reverted, there are tradeoffs. WhatamIdoing (talk) 21:22, 19 November 2024 (UTC)
- I miss things, especially on a workday. Sorry about that.
I think the mobile short-descriptions thing is believable, as users . This is a case of the methodology being technically correct but misleading, which I don't see for the edit check study, unless you're willing to provide an argument.
"an unsourced statement is obvious to see, a statement sourced to a bad source is much less obvious": IMO, only slightly. Often, only experienced users patrol pages when reading them. (The unacquainted are also sometimes able to realize something's probably wrong with a swath of unsourced text, hence they make up part of the aforementioned "slightly".) And blatantly bad sources jump out at the experienced from the references section. Sources in the middle ground can often link to good sources, though there is a debate on how good it is to have both middle-ground and bad sources vs. no sources at all. Personally, I think it's better. Aaron Liu (talk) 21:47, 19 November 2024 (UTC)
- Now that a number of people have spoken out on the subject (a few not against it, one other strictly against), what's the next step? Trizek_(WMF) (talk) 11:06, 21 November 2024 (UTC)
- To make a specific proposal then the next step would be a formal Request for Comment. Andrew🐉(talk) 11:40, 21 November 2024 (UTC)
- This is not something I can lead at the moment, but I can assist anyone who would like to start the process. Trizek_(WMF) (talk) 10:03, 22 November 2024 (UTC)
Workshopping the RfC question
Given that there are several editors here interested in turning the feature on, I would like to propose the following question and a brief/neutral backgrounder for the RfC:
Should mw:Edit check be turned on?
Background: Edit Check is the Wikimedia Foundation's product to encourage new editors to add citations to their edits by prompting them with pop-ups before publishing. The pop-ups will appear under the following default conditions (points 2–4 can be configured further):
- If editing is done through Visual Editor.
- ≥40 consecutive characters added.
- All accounts with < 100 edits
- All sections*
For point 4, I also propose to modify the configuration to exclude this feature from the following sections:
- lead section, as we do not require leads to have citations
- Notes section, usually handled by {{efn}} in content body, etc.
- References section, no citation required
- External links section, no citation required
- See also section, no citation required
- Further reading section, no citation required (thanks, Chipmunkdavis)
- And any other sections that do not require citations (that I have missed, or that are added in the future).
For future changes of the configuration settings, this can be done on-wiki by modifying the MediaWiki:Editcheck-config.json file after discussing in an appropriate venue. This also means that other than the initial activation, we do not require further changes in the backend (and if we want to roll back before deactivation in a server update, we can set the max edit count to 1 as a temporary measure).
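For illustration only, such a local configuration might look something like the sketch below. The maximumEditcount key is named elsewhere in this thread; the other key names here (addReference, minimumCharacters, ignoreLeadSection, ignoreSections) are assumptions for the sketch and would need to be checked against the extension's actual schema:

```json
{
    "addReference": {
        "minimumCharacters": 40,
        "maximumEditcount": 100,
        "ignoreLeadSection": true,
        "ignoreSections": [
            "Notes",
            "References",
            "External links",
            "See also",
            "Further reading"
        ]
    }
}
```

Under this sketch, a temporary rollback would just mean lowering maximumEditcount to 1, per the measure described above.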
Prior discussions about this feature can be found at Village pump (idea lab) and Wikipedia:Village_pump_(proposals)/Archive_213#Deploying_Edit_Check_on_this_wiki.
@Trizek (WMF): do correct the above if there's anything that I have stated incorrectly. Also, with regards to the configuration settings, can mw:Community_Configuration be utilised as well? – robertsky (talk) 14:24, 30 November 2024 (UTC)
- @Robertsky, all is correct. Also, at the moment, Edit check has not been integrated into Community Configuration but, as you mention, the json file attached to Edit check allows your community to decide on de/activation. Trizek_(WMF) (talk) 09:11, 2 December 2024 (UTC)
- Further reading section. Idly thinking, is the 100 edits threshold namespace configurable? Further, just to check, "≥40 consecutive characters added" means "≥40 consecutive characters added without a ref tag" or similar? CMD (talk) 09:24, 2 December 2024 (UTC)
- @Chipmunkdavis
- The 100 edits threshold is not namespace configurable. From the code, it is checked against the wgUserEditCount JavaScript variable. There are no JavaScript variables for a breakdown of edit counts by namespace at the moment, going by this documentation.
- I suppose so as well.
- – robertsky (talk) 11:55, 2 December 2024 (UTC)
- 1. Correct. We can have “only activate this check in this namespace” though.
- 2. Correct as well. Any type of reference tag or any template that uses <ref> at some point is detected. Trizek_(WMF) (talk) 17:31, 2 December 2024 (UTC)
- @Robertsky, some minor details, as we apparently both looked at the example rather than the actual default:
- The default is ≥50 consecutive characters added, which can be configured to 40.
- maximumEditcount is [number of edits or fewer]. If set at 100, it is ≤100 edits, rather than <100 edits. (It is really a detail.)
- Trizek_(WMF) (talk) 14:35, 5 December 2024 (UTC)
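To make the ≤ vs. < detail concrete, here is a toy sketch of how a maximumEditcount of 100 behaves as "100 edits or fewer". The function name is hypothetical, for illustration only, and is not the extension's actual code; in practice the user's count would come from the wgUserEditCount variable mentioned above.

```javascript
// Toy model of the Edit check eligibility threshold.
// shouldOfferEditCheck is a hypothetical name, not a real function in the
// extension; userEditCount stands in for the wgUserEditCount JS variable.
function shouldOfferEditCheck(userEditCount, maximumEditcount) {
  // Inclusive comparison: "this number of edits or fewer".
  return userEditCount <= maximumEditcount;
}

console.log(shouldOfferEditCheck(100, 100)); // a user with exactly 100 edits is still prompted
console.log(shouldOfferEditCheck(101, 100)); // a user with 101 edits is not
```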
- @Chipmunkdavis
- Take a look at the possibilities under Heading names in Wikipedia:Manual of Style/Layout#Notes and references. Whether or not to exclude some heading names will often depend on where they occur in the article. Donald Albury 16:34, 2 December 2024 (UTC)
Independent Politicians
Where possible, add a section with general info for independent politicians indicating what political position they hold, i.e. centre, left, right, etc. 2001:BB6:514B:A300:D35B:58F7:1327:A55 (talk) 00:39, 22 November 2024 (UTC)
- I don’t know what that means Dronebogus (talk) 10:59, 24 November 2024 (UTC)
- {{Infobox officeholder}} already has a parameter for "Other political affiliations", which might be what you are looking for. Otherwise, yes, a section in the article text can be written if there are enough sources to position the person on the political spectrum, but there shouldn't be a strict guideline mandating it to be present, especially since these affiliations can be controversial or contested. Chaotic Enby (talk · contribs) 12:02, 24 November 2024 (UTC)
- There are also many politicians whose views do not neatly fit into a simple left-centre-right box, especially as right-of-centre UK politics is roughly equivalent to the left wing of mainstream US politics. Thryduulf (talk) 00:49, 5 December 2024 (UTC)
- The Nolan chart would be slightly better, but as you say, would have to be adjusted for different countries. Donald Albury 02:14, 5 December 2024 (UTC)
- There are also many politicians whose views do not neatly fit into a simple left-centre-right box, especially as right-of-centre UK politics is roughly equivalent to the left wing of mainstream US politics. Thryduulf (talk) 00:49, 5 December 2024 (UTC)
Dead pixels, an expansion to WP:CITEWATCH, a new noticeboard?
Bit of a long one... more of an essay at this point really, but IMO, it might be worth it to prevent editor burnout and bring in new users, so here goes: You know how once one spots a dead pixel, they can't seem to ignore it? Then one starts wondering whether the monitor vendor has either: gotten sloppy with their work... or if they just got unlucky given the volume of monitors that get put out by the vendor. Then the dread of calling the warranty department...
Just like the analogy above, the news and research output by reliable sources is generally problem free. But because of the volume of information, occasionally errors will get in. Sometimes even unscrupulous outlets get in. But unless one is motivated or knowledgeable enough, few will go through the effort of comparing what the reference says to its references (reference-in-reference). This is the dead pixel problem I'm talking about, and just like a dead pixel, it annoys the crap out of the person who sees it, for better or worse. Then comes the process of "fixing" it: currently, original research issues and reference-in-reference issues are handled in science by PubPeer, whose extension is used by a paltry number of users. Response times by authors take days, maybe years, even with relentless journalism. At any rate, most people who feel compelled to edit Wikipedia due to accuracy problems have probably never heard of PubPeer. And as for issues with regular journalists, I suppose one could turn to opinions by third parties like NewsGuard? And meanwhile, they can usually get away with publishing contradictory health news without being called out for it.
All of this is made more worrying given the impact Wikipedia has on real-world non-Wikipedians, like judges and scientists. Recent political developments, as well as LLM usage (see WP:CNET), mean that once-reliable sources could suddenly hallucinate or contradict other sources on a whim. Maybe the errors made daily won't be indicative of LLM usage... but they could be. In any case, we don't currently track these issues, so who's to say what patterns unreliable sources follow?
Mistake or no mistake, when the inaccuracy is inevitably spotted (probably by us, I wonder why...), an attempt will probably be made to re-balance the article or add footnotes following WP:Inaccuracy. This works great... if you are the only editor of a given article. For everyone else, because not everyone will necessarily see a dead pixel as a big deal, the actions may seem disproportionate and/or violate certain consensus policies, and the talk page discussion will go on and on, maybe then to WP:DRN, driving away casual but knowledgeable editors, all of which will be seen by hardly anyone, let alone the original author of the source. One could then go to WP:RSN, but that noticeboard is really only equipped to handle the most problematic and fringe sources, not the daily errors that get published by sources day to day. We burn out, and the world, by and large, hardly notices the dispute.
To solve this, I propose some sort of objective-ish tracking in WP:CITEWATCH of reference-in-reference accuracy (in line with Wikipedia's policy of WP:NOR), as well as other issues like typos, linking issues, citogenesis, copyright violations, notable omissions, and most importantly, corrections (a sure sign of a RS) and the time elapsed from error spotting to correction for refs: all heuristics that, when aggregated, could be indicative of sloppy copyediting or cursory peer review. Editors could put in a template with the relevant issue, hidden by default until patrolled. If there is a dispute, it would go to a new reference-in-reference noticeboard, split into categories (typos, copyright violations, etc.).
Bonus benefits - we might finally:
- Know which MDPI journals have decent peer review, allowing them to build a reputation?
- Create an easy place to show that the consensus on WP:ALJAZEERA is justified?
- Create an incentive to keep sources more accountable (especially on health-related topics)? As well as obscure sources on obscure topics that may only be read by the Wikipedian.
- Reduce biting and attrition by creating an easy place for sub-WP:RSN issues to be reported, counted and easily exportable to PubPeer or elsewhere?
- Problematically high counts could then easily be reported to WP:RSN without the need for extensive, hard-to-read discussion.
Other references: Retraction Watch (basis for "notable omissions").
Unfinished ideas, subject to change:
- Older pre-internet sources might be less affected by ref-in-ref errors, since the reader could be reasonably expected to check the sources out of necessity.
- Where ref-in-ref notices go for the reader: probably inside ref tags, after the chosen citation template?
- This proposal could involve multiple changes to various guideline pages. WP:Inaccuracy will probably be changed the most by this proposal.
- Patrolling: mostly in anticipation of misunderstandings of policy and WP:NOR.
- Noticeboard name: RRN, reference-in-reference noticeboard? To avoid flooding the noticeboard, require discussion on the talk page first? Split the noticeboard into categories?
- Categories: separated into errors that will be reported to readers when patrolled, and those that will just be tracked by an expanded SOURCEWATCH table, for later discussion on RSN.
- Perverse incentives? Less citing of sources overall? Counterpoints: existing incentives to cite to increase impact or whatever. Could be solved with another category.
Rolling this out might take an extended period of time, and will probably involve the WMF as well as new templates, modules, instructions, etc. Thoughts on this, as well as how improvements could be broken up or rolled out? ⸺(Random)staplers 03:56, 25 November 2024 (UTC)
- This seems to have had a lot of thought put into it, but frankly I have no idea what this proposal is actually proposing. Ca talk to me! 07:30, 1 December 2024 (UTC)
- Is the proposal about placing discussions of reliability about the cited source inside the citations themselves? Ca talk to me! 07:32, 1 December 2024 (UTC)
- I must be tired I think, but I do not think I understood anything at all about the idea, whether it is the how or the why. The entire way the reliability of sources is approached is that no matter how trusted they are, no publication ever gets a blank check on any subject, and to me it does not seem like there is an issue of under-reporting perceived inaccuracies or bias either, so I am not sure I see the point. Choucas Bleutalkcontribs 12:51, 5 December 2024 (UTC)
New main page section: Wikipedia tips
I think a page informing readers of Wikipedia features would be helpful, since the public largely does not know much about Wikipedia's backend even though billions visit this site. Topics featured could include looking at a page's history, talk page discussions, WP:Who Wrote That?, etc. I imagine it would be placed under Today's featured picture, since we want to showcase quality work first. I've made a demo here: User:Ca/sadbox. Ca talk to me! 13:11, 29 November 2024 (UTC)
- Looks good. And it's fine if we recycle them fairly rapidly, since these are things that can be easily reused – in fact, I suggest cycling this weekly instead of daily. Cremastra ‹ u — c › 15:56, 29 November 2024 (UTC)
- Perhaps we could do something like {{Wikipedia ads}} and simply post a new random tip upon a purge. Ca talk to me! 16:07, 29 November 2024 (UTC)
- The Main Page is deliberately aimed at readers, not editors. Its purpose is to direct readers to interesting encyclopaedic content, not show them how to edit pages. The Main Page is also very full already, so adding anything would require removing something else. I think it's highly unlikely that this idea would achieve consensus at T:MP. However I'm sure there's a place for something like this in Wikipedia: space. Modest Genius talk 12:42, 3 December 2024 (UTC)
- To be fair, the whole point of Wikipedia is that readers are potential editors. Helping readers take that step would definitely help us keep a steady, or even growing, user base. Chaotic Enby (talk · contribs) 12:52, 3 December 2024 (UTC)
- I am not sure what you mean by "The Main Page is also very full". There isn't a size limit to Internet pages? In any case, I want the content of the tips to be reader-focused, not editor-focused. Things like creating an account to change website display, identifying who-wrote-what, etc. Ca talk to me! 13:14, 3 December 2024 (UTC)
- There has been a Tip of the Day project since 2004. You can use the {{totd}} template to display the day's tip, as follows. Perhaps there should be a link to this in the Other areas of Wikipedia section of the main page? Or it might go in the top banner, where the portals used to be, as that looks quite empty now. Andrew🐉(talk) 17:34, 3 December 2024 (UTC)
- This is perfect! It seems like people have already done the work for me. However, there is some need to retheme the banner so that it fits in with the rest of the main page. Ca talk to me! 23:49, 4 December 2024 (UTC)
- I think the OP's plan is a terrific idea. The vast majority of readers never even think about actually editing a page (despite the ubiquitous edit links). Having a big, NOTICEABLE "tip of the day" seems a great way of changing this.
- An example of a good place for this would be just above "In the News", to the right of "Welcome to Wikipedia", about two inches wide and one inch high. Obviously just one possibility out of many.
- But just having another small link to some variation of Help:How to edit seems futile and unnecessary.
- I would strongly recommend having a two-week trial of the OP's suggestion, and then check the metrics to see whether to continue or not. ——— ypn^2 21:33, 4 December 2024 (UTC)
- Considering that most other WMF projects have something on their main page about contributing, there is a distinct lack of it on en.wiki. This could be a page-spanning box with the usual links on how to get started, along with the tip of the day floating right in that box. Whether that box leads or ends the page is up for debate, but it would make sense to have something for that. Masem (t) 00:03, 5 December 2024 (UTC)
- Concur. I'm averse to directly using the WP Tip of the Day (as suggested above), since that's directed at people who are *already* editors, albeit novice ones. What we really want is for people to hit the "edit" button for the first time. I suggest cycling through a few messages, along the lines of:
- See a typo in one of our articles? Fix it! Learn how to edit Wikipedia.
- This is your encyclopedia, too. Learn how to edit Wikipedia.
- Want to lend a hand? Join an international volunteer effort, whether for a day or for a decade – learn how to edit Wikipedia.
- Obviously these will need some fine-tuning, since I'd really rather not have something as cringy as "for a day or for a decade" on the Main Page, but I think the idea is there. These one-liners should be prominently displayed at the top. Cremastra ‹ u — c › 00:13, 5 December 2024 (UTC)
- I agree that the messaging needs to be directed toward not-yet-editors, but perhaps the messages can be more specific? e.g.:
- Did you know that you can italicize words by surrounding them with two apostrophes? For example,
The ''Titanic'' hit an iceberg and sank in 1912.
appears as: The Titanic hit an iceberg and sank in 1912.
- See something that needs a source? Just add
{{citation needed}}
after the questionable sentence, or better yet, add a source yourself using <ref>www.website.com/page</ref>! - ypn^2 00:24, 5 December 2024 (UTC)
- Not sure we want to be showing people how to make bare URL references. Cremastra ‹ u — c › 00:28, 5 December 2024 (UTC)
- I'd rather see editors include material sourced to a bare url than add material without any source, or even give up on adding something because the ref system is hard to learn. We have bots that can do basic url-to-ref formatting, so that is less of a concern. Masem (t) 00:30, 5 December 2024 (UTC)
Adding a timeline of level 3 vital people
I suggested this on the other page, but there was no reply, perhaps their chats are inactive. You can find the draft I made of the timeline here: User:Wikieditor662/Vital sandbox.
- Note: I believe that the names for the time periods are not perfect (biased towards the West), and there are other areas to improve before publishing, but I think it's best to see whether it should be included before going further.
What do you guys think? Is this something worth adding? Wikieditor662 (talk) 04:56, 4 December 2024 (UTC)
- @Wikieditor662 Definitely looks cool and is well-designed. Perhaps you can clarify what exactly we're trying to accomplish with this (e.g., where would you like to have this displayed)? Is the purpose to identify potential changes to the Vital list, or to find vital articles to improve, or just to graphically illustrate the "more exciting" and "less exciting" periods of human history? Or something else? ——— ypn^2 21:22, 4 December 2024 (UTC)
- @Ypn^2 I'm very glad you like the design! It's meant to give a visual representation of the people on there, and to show when these people existed and how they could have interacted with each other. Now that you bring it up, this could also be a useful way for editors to see where there can be some improvements.
- As for the location, the timeline could be its own page, and perhaps we could copy and paste a part of it (such as the overview) under the "People" section of vital articles level 3.
- Also, if this turns out to be a good idea, we could also create more specific timelines like this to help visualize other areas, for example level 4 / 5 philosophers, and perhaps put a part of that timeline under the History of philosophy page.
- Thanks again and feel free to let me know what you think! Wikieditor662 (talk) 22:32, 4 December 2024 (UTC)
- When it is so broad, I wonder whether the inclusion criteria would be considered original research. JuxtaposedJacob (talk) | :) | he/him | 01:21, 5 December 2024 (UTC)
- I understand this concern, and I think it's important to keep in mind that the vital articles' levels are structured to help define the priority levels for articles. Changes for who's included onto here require deep discussions and reliable reasons as to why they should be included or excluded. Wikieditor662 (talk) 03:43, 5 December 2024 (UTC)
- @Wikieditor662: One tiny point of criticism on the design front: in the overview and ancient history sections, the blue names on dark brown are really hard to read due to very low contrast. AddWittyNameHere 04:23, 5 December 2024 (UTC)
- Thanks, I've tried for a while and couldn't figure out how to change the blue text to a different color. If you know how, please let me know. I did, however, make the border of the text more black, so it should be a little easier to see now, although it may not be perfect. Wikieditor662 (talk) 06:02, 5 December 2024 (UTC)
- @Ypn^2 @Fram @Chaotic Enby @Folly Mox (tagging relevant people, if you wish to be tagged / not be tagged please let me know) I've come up with a way we could conceptualize the eras outside of just Europe, which will work sort of like this:
- For every era, we come up with one global era and one regional era (per continent). If a person matches a regional era, then they'll go there, even if it's outside the bounds of the global era. However, if we can't find their regional era, then they'll go within the bounds of the global era. The global and regional eras will have the same colors. Here are the eras:
- Within these eras, in the individual timelines (unlike the overview, which could be broader), we can also break each era down into periods and color them slightly differently, blending both the current era and the era that it's closest to. The eras / periods may slightly differ depending on things like location and profession.
- Here are the color codes for the overview, which should be more specific within individual timelines, and a person spanning across two eras will be colored in between these two eras. These are the colors which are (for the most part) currently in the timeline sandbox.
- Prehistory: Black
- Ancient: Brown
- Post-classical history: Gold
- Early modern: Blue
- Middle modern: Green
- Late modern: Yellow
- Long nineteenth: Dark pink
- Early 20th century: Orange
- Contemporary: Bright pink
- Some examples of transitional era colors:
- Code for Postclassical: PCH
- Code for Renaissance: Ren
- Transren: colored exactly in between Ren and PCH
- mostlyren: colored between Ren and transren
- lateren: colored between Ren and mostlyren
- (the same will work for the rest of the eras, for the most part)
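The transitional shades described above are effectively linear interpolation between two era colors: "transren" sits exactly halfway between the Renaissance and Postclassical colors, "mostlyren" halfway between the Renaissance color and "transren", and so on. A minimal sketch of how this could be computed (the hex values are placeholders chosen for illustration, not the sandbox timeline's actual colors):

```python
def blend(c1: str, c2: str, t: float = 0.5) -> str:
    """Linearly interpolate between two "#rrggbb" colors.

    t=0 returns c1, t=1 returns c2, t=0.5 the exact midpoint.
    """
    a = [int(c1[i:i + 2], 16) for i in (1, 3, 5)]
    b = [int(c2[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join(f"{round(x + (y - x) * t):02x}" for x, y in zip(a, b))

# Placeholder era colors (assumptions, not the sandbox's real values):
REN = "#4169e1"  # Renaissance / early modern "blue"
PCH = "#ffd700"  # post-classical "gold"

transren = blend(REN, PCH)        # exactly between Ren and PCH
mostlyren = blend(REN, transren)  # between Ren and transren
lateren = blend(REN, mostlyren)   # between Ren and mostlyren
```

Each successive shade halves the distance to the pure Renaissance color, which matches the "mostly X, less Y" wording used throughout the era list.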
- Global era: Prehistory (3 million BC - 3,000 BC) - Black - Nobody is currently on there, but this could be in case someone gets added one day - between the emergence of humans and the invention of writing. This can extend much later in some regions; for example, Australia's prehistoric period extends until 1788, when it was first colonized (unless you include the age of discovery, which started in 1400).
- - Prehistoric Libya: before 600 BC
- Global era: Ancient - Brown - (3000 BC - 500 AD)
- Time periods:
- Neolithic / pre-early ancient - 10,000 BC - 2,000 BC (can also be a part of prehistory) - color: mostly prehistoric, less ancient
- Bronze age / early ancient - 3,300 BC - 1,200 BC (can also be a part of prehistory) - color: prehistoric-ancient
- - For Bronze Age Europe this is 3,000 BC - 1,050 BC - color:
- - Iran: Kura–Araxes culture - 3,400 BC - 2,000 BC
- - India: 3,300 BC - 1,800 BC
- Iron age / middle ancient - 1,200 BC - 550 BC (can also be a part of prehistory) - mostly ancient, less prehistoric
- - For Iron Age Europe this is 1,050 - 776 BC (for consistency)
- - Iran (for them this is still a part of pre-history): 2,000 BC - 1,000 BC
- - India: 1,800 BC - 200 BC
- Late ancient (or sometimes late iron ages) - 550 BC - 476 AD (every established era during this time, such as late antiquity is not global) - color: ancient
- - For Ancient Egypt this is 664 BC - 900 AD
- - For Europe this is 776 BC - 476 AD
- - Iran: 1,000 BC - 651 AD
- - Classical India: 200 BC - 500 AD
- Regional eras:
- - Classical antiquity for Ancient Greece and Ancient Rome (8th century BC - 5th century AD) - color: ancient
- Late antiquity - 3rd century AD - 8th century AD (can include areas other than Greece and Rome, such as Europe) color: ancient-postclassical
- - Early Libya:
- Carthaginian Libya - 600 BC - 200 BC - color: mostly ancient, less pre-historic
- Roman Libya: 200 BC - 487 AD color: ancient
- - Mesoamerica:
- Archaic period - 8000 BC - 2600 BC (can include prehistory) - color: mostly prehistoric, less ancient
- Mesoamerican Preclassic period - 2000 BC - 250 AD - color:ancient
- Mesoamerican Classic period - 250 AD - 900 AD - color:ancient-postclassical
- - Ancient China
- Xia Dynasty era - 2070 BC - 1600 BC color: prehistoric-ancient
- Shang Dynasty era - 1600 BC - 1046 BC color: mostly ancient, less prehistoric
- Middle ancient - 1046 BC - 220 AD color: ancient
- Three kingdoms era - 220 AD - 580 AD color: ancient-postclassical
- Archaic Japan:
- Jōmon period - 13,000 BC - 300 BC color: prehistoric-ancient
- Yayoi period - 450 BC - 250 AD color: ancient
- Kofun period - 250 AD - 538 AD color: ancient-postclassical
- Archaic Mesopotamia:
- Early Dynastic Period (Mesopotamia) - 2900 BC - 2270 BC color: mostly prehistoric, less ancient
- Middle Archaic Period - 2270 BC - 1178 BC color: ancient
- Late Archaic Period - 1177 BC - 549 BC color: mostly ancient, less prehistoric
- Imperial Period - 549 BC - 651 AD color: ancient-postclassical
- Global era: post-classical history - Gold - (500 AD - 1500 AD) - abbreviated PCH
- Time periods:
- Early Postclassical - 476 - 800 - color: Early PCH (abbreviated EPCH)
- - This is still ancient for Egypt
- - For countries affected by the Byzantine Empire this starts at 330 AD
- - Iran: Muslim conquest of Persia era - 651 - 820 AD
- - Vandal Libya: 487 - 600
- Middle Postclassical - 800 - 1200 - color: PCH (PCH)
- - For Egypt this starts at 868
- - Iran: 820 - 1219
- - Islamic Libya: 600 - 1200
- Late Postclassical - 1200 - 1500 - color: Late PCH (abbreviated LPCH)
- - For Egypt this ends at 1517
- - For Mongolia, this is replaced by the Mongol Empire era - 1206 - 1380
- - For the Byzantine Empire this ends at 1453
- - Iran: 1219 - 1501
- Regional eras:
- Postclassic Period - 900 - 1521 AD (Mesoamerica)
- Time periods:
- Early Postclassic - 900 - 1200 - color: EPCH
- Late Postclassic - 1200 - 1521 - color: LPCH
- Imperial China:
- Early Imperial China - 580 - 960 - color: EPCH
- Middle Imperial China - 960 - 1271 - color: PCH
- Yuan Dynasty era / Late Imperial China - 1271 - 1368 - color: LPCH
- Middle ages - 476 - 1500 (Europe)
- Europe Time periods:
- Early middle ages - late 5th century - 10th century - color: EPCH
- - For Scandinavia this is the Viking age - 793 - 1066
- High middle ages - 1,000 - 1,300 color: MPCH
- Late middle ages - 1,300 - 1,500 color: LPCH
- Feudal Japan:
- Asuka and Nara period - 643 - 794 - color: EPCH
- Heian period - 795 - 1185 - color: PCH
- Kamakura period - 1185 - 1333 - color: LPCH
- Global era: Early modern - 1400 - 1600 - Blue (time period ended early to add the "middle modern" and have it be more specific)
- First early modern: 1400 - 1500 - color: first early modern (abbreviated FEM)
- Second early modern: 1500 - 1550 - color: early modern (abbreviated EM)
- Third early modern: 1550 - 1600 - color: third early modern (abbreviated LEM)
- Regional eras:
- Ming Dynasty era - 1368 - 1644 (China) color: EM
- Age of exploration - 1418 - 1620 (For explorers) color: EM
- Renaissance - 1400 - 1600 (Europe)
- Time periods:
- Early Renaissance - 1400 - 1490 - color: FEM
- - For England this is still the middle ages - color: LPCH
- High Renaissance - 1490 - 1527 - color: EM
- - For England this is the Tudor period - 1485 - 1558 in this case
- Late Renaissance - 1527 - 1600 - color: LEM
- - For Poland, this is the Polish Golden Age - 1507 - 1572
- - For England, this is the Elizabethan era - 1558 - 1603
- Samurai Japan
- Muromachi period: 1333 - 1573 - color: EM
- Azuchi–Momoyama period - 1573 - 1603 - color: LEM
- Global era: Middle modern - 1600 - 1750 - Green
- First middle modern - 1600 - 1650 - color: first middle modern (abbreviated FMM)
- Second middle modern - 1650 - 1700 - color: second middle modern (abbreviated MM)
- Third middle modern - 1700 - 1750 - color: third middle modern (abbreviated TMM)
- Regional eras:
- Baroque - 1600 - 1750 - Europe
- Time periods:
- Early Baroque - 1600 - 1650 - color: FMM
- - For the British Isles this is the Jacobean era - 1603 - 1625
- Middle Baroque - 1650 - 1730 - color: MM
- - British Isles: Caroline era - 1625 - 1649
- Rococo / Late Baroque - 1730 - 1769 color: TMM
- - British Isles: British Interregnum and Stuart restoration - 1649 - 1714
- - Iran: Afsharid Iran - 1736 - 1750
- Global era: Late Modern - 1750 - 1800 - color: yellow (abbreviated LM)
- Regional eras:
- Age of Revolution - 1765 - 1848 - Europe and the Americas - color: LM-LNC
- Neoclassicism - 1730 - 1830 - Europe - color:LM
- - For the United Kingdom this is the Georgian era - 1714 - 1830
- Convict era - 1788 - 1868 - Australia - color: LM-LNC
- Zand Iran - 1750 - 1794 - LM
- Global era: Long nineteenth century - 1789 - 1914 - Color: Dark Pink (abbreviated LNC)
- Time periods:
- Early LNC: 1789 - 1830 color: Depends, usually either Early LNC (ELNC) - TMM, or with one more than the other
- Middle LNC: 1830 - 1860 color: LNC
- Late LNC: 1860 - 1900 color: Late LNC (LLNC)
- Post-Late LNC (PLLNC) - 1900 - 1914 color: PLLNC - Early 20th century
- Regional eras:
- Federation of Australia - 1890 - 1918 color: LNC
- Qajar Iran - 1794 - 1925 color: LNC
- Europe time periods:
- Early Romantic era - 1770 - 1799 - TMM-ELNC
- Napoleonic era / Middle romantic - 1799 - 1815 color: LNC
- Late Romantic era - 1815 - 1850 -
- Post-Romantic - 1850 - 1900 - PLLNC
- - In the British Empire this would be replaced by the Victorian era - 1837 - 1901
- - For Egypt this is the Khedivate of Egypt - 1867 - 1914
- - For classical music this would be replaced by the late Romantic, and the years will slightly differ
- - For France this is the Belle Époque - 1871 - 1914
- - Japan: Meiji period - 1868 - 1912
- Mexico:
- Independence era: 1810 - 1846 - ELNC
- Liberal Mexico: 1846 - 1911 - LNC
- Global era: Early 20th century (E20) - 1900 - 1945 - Orange
- - For Egypt this ends in 1953
- Regional eras:
- Colonial Libya: 1900 - 1950
- Pahlavi Iran - 1925 - 1979
- Republic of China (1912–1949) era
- Modernism - Europe - 1874 - 1960
- Global era: Contemporary History
- Time periods:
- Late 20th century (L20) - 1945 - 2000 - Bright Orange (these colors will be slightly different than the overview because of the background)
- -Modern Mexico: 1910 - 2000 - color: E20-L20
- 21st century (color: 21) - 2000 - today - Bright Pink (this will help out more in the future)
- Regional time periods:
- -Contemporary Mexico: 2000 - Present - color: 21
- Postmodernism - The west - 1960 - Today (exact end date unclear, 20th century still applies for the individual timeline) - color: depends; some combination of L20 and 21
- People's Republic of China - since 1949 - L20-21
- Islamic Republic of Iran era - 1979 - present - L20-21
- Indian Independence era: 1947 - present - L20-21
- Contemporary Japan:
- Shōwa era - 1926 - 1989 - E20-L20
- Heisei period - 1989 - 2019 - L20-21
- Reiwa period - 2019 - present - 21
- Contemporary Libya - 2011 - present - 21
- Contemporary United States - 2008 - present - 21
- Hopefully these changes make the timeline more inclusive to people outside of Europe. Please share your thoughts! Wikieditor662 (talk) 03:54, 16 December 2024 (UTC)
- Don't know why you posted this in the middle of the discussion, and not clear what you want to do with it. In any case, this doesn't belong in the mainspace. If some project wants to use this in projectspace then why not, but "vital-3" articles or any variation thereof are not a notable group. Fram (talk) 08:44, 16 December 2024 (UTC)
- Yeah, this sounds more like a projectspace endeavor. Also, with that amount of subdivisions, I'm not even sure each of them will contain someone. Chaotic Enby (talk · contribs) 10:44, 16 December 2024 (UTC)
- @Fram Yeah I know, it's supposed to be a part of the project space.
- @Chaotic Enby That's fine, a lot of it could be guidelines in case we decide to add someone who's not in a previous category... It could also help in case someone decides to add vitality 4 to the individual timelines one day.
- Do you guys like the categories though overall? Wikieditor662 (talk) 11:26, 16 December 2024 (UTC)
- Still not clear what you really want to do with it, but it definitely does not belong in the mainspace (as a separate article or as part of other articles), if that was your intention. "Level 3 vitality figures" is pure inner Wikipedia talk, not a reliably sourced definition. If you want to use it in other namespaces, then indeed the colours need changing: blue on purple on grey is not readable at all. The names displayed are also weird. "Miguel" for Cervantes? "Joan" for Joan of Arc? Fram (talk) 15:31, 5 December 2024 (UTC)
- I concur with Fram on this point: "vital articles" are only a (more or less effective) classification of which articles are a priority for the encyclopedia; it doesn't correspond to anything in use by sources. Even with "deep discussions and reliable reasons", having it as a criterion would be original research. Same for any other "homemade" ranking of important people. Chaotic Enby (talk · contribs) 15:36, 5 December 2024 (UTC)
- @Fram @Chaotic Enby Are you guys opposed to having this timeline completely, or just parts of it? Also, it's not based on how important people are, but on the level of prioritization, which is the reason the vitality levels exist in the first place. Wikieditor662 (talk) 22:36, 5 December 2024 (UTC)
- I'd be opposed to having it in mainspace, as "prioritization in what we should write about" is not in itself encyclopedic information. However, it could be interesting to have it as part of Wikipedia:WikiProject Vital Articles, if you want to go for it. Chaotic Enby (talk · contribs) 22:38, 5 December 2024 (UTC)
- Okay, I see. By the way, does this problem also exist with the currently existing article List of classical music composers by era? Wikieditor662 (talk) 03:01, 6 December 2024 (UTC)
- That's in many ways a pretty bad list, yes. Fram (talk) 08:33, 6 December 2024 (UTC)
- For the unfamiliar, this follows from Wikipedia:Village pump (idea lab)/Archive 62 § Timeline of significant figures, where advice was heeded, to the OP's credit. Wikieditor662, thanks for updating your visualisation to use an inclusion criterion that will not lead to as much arguing. I still think that this is not appropriate for mainspace and will not become appropriate, since the basis is fundamentally OR, even though the original research is distributed amongst the Wikipedia community rather than your own personally. I notice you've brought this up twice at WT:PVITAL, but not at the much more active WT:VA or WT:V3. You could probably just move it to a WikiProject subpage. I concede that your project is not terribly different from List of classical music composers by era, which I also don't think is a great thing to have in mainspace, but it's twenty-one years old, and predates most of our content guidelines. As an aside, it's probable that most articles in Category:Graphical timelines are problematic: Graphical timeline of the Stelliferous Era is pretty bad; Timeline of three longest supported deck arch bridge spans is also a questionable choice. None of these articles are as contentious as the one proposed here. MOS:NOSECTIONLINKS non-compliances remain, and calling out the Western bias in the chronological taxonomy is not an adequate substitute for addressing them to conform with the periodisation used by WP:VA (which they would probably want for consistency). Folly Mox (talk) 12:10, 6 December 2024 (UTC)
- I still don't think old articles should be "grandfathered in" despite not fitting our more recent content guidelines, and the subjective and nearly unsourced List of classical music composers by era (whose selection is only based on the personal choices of editors, rather than any analysis of sources) shouldn't really be kept in mainspace just because of its age. Chaotic Enby (talk · contribs) 12:31, 6 December 2024 (UTC)
- Valid, and agreed. My intention was to communicate that being kept in mainspace is a lower bar to clear than introducing into mainspace. Thanks for pointing out the unclear bit I ought to have explicated. That said, I don't think I'd be interested in participating at Wikipedia:Articles for deletion/List of classical music composers by era. Folly Mox (talk) 13:29, 6 December 2024 (UTC)
Photo gathering drive for town, village, and city halls
Like how Wikipedia and Wikimedia Commons have the National Register of Historic Places drives for pictures, there should be an effort put into getting pictures of town halls, village halls, and city halls. Every town, every village, every borough, every city, and every county has a Wikipedia page, and I think they should all have a picture of the administrative building. Wikideas1 (talk) 08:21, 4 December 2024 (UTC)
- One consideration is that shorter articles have limited space for images, and a photo of the building housing administrative offices of a politically defined place may not be the best representation of that place. It is fine to upload such pictures to Commons, but their use may not be justified in every article about a place. Donald Albury 15:44, 4 December 2024 (UTC)
- I like the concept, but I feel the drive would be better if any picture of a populated place would be admissible. Places like unincorporated communities and ghost towns don't have municipal buildings, but still would be bettered with a picture. Roasted (talk) 18:30, 7 December 2024 (UTC)
- This sounds similar to Wikipedia:Wiki Loves Monuments. @Wikideas1, if you want to pursue this, then you should probably look at similar campaigns in c:Category:Wiki Loves and see if there's one that overlaps with your goal. WhatamIdoing (talk) 03:31, 11 December 2024 (UTC)
- It's all fun and games until the photos get deleted due to a lack of freedom of panorama. If you think somewhere in Wikipedia consistently lacks building photos, there's a good chance it's a copyright issue. CMD (talk) 03:38, 11 December 2024 (UTC)
- I'm not sure if the law of every country would apply. EEpic (talk) 04:50, 11 December 2024 (UTC)
- @EEpic: See Wikipedia:Image use policy#Photographs. We generally respect all copyrights, even if the material would not be copyrightable in every country. Donald Albury 16:56, 11 December 2024 (UTC)
Essay on Funding sections
There is a systemic problem: "Funding" sections for non-profit organizations are often disinformation. For example, if an organization is partly funded by USAID, the organization will be framed as a proxy of the US Federal Government. Or, if an organization is funded by the Koch Brothers, it will be framed in a suitably FUD way. This framing is often done through emphasis on certain donors, word choices, and so on. Sometimes it's explicit, other times subtle. I can show many examples, but prefer not to make it into a single case. The problem is systemic, and has existed since the beginning of Wikipedia.
What we need is an essay about Funding sections. Best practices, things to avoid. A link to WP:FUNDING. And some effort to go through these articles and apply the best practices described. -- GreenC 18:31, 4 December 2024 (UTC)
- I'm not sure that we need a separate essay on this, though perhaps a paragraph (or a couple of examples?) at Wikipedia:WikiProject Organizations/Guidelines would be helpful. Generally, the sorts of things you would expect to find in an encyclopedic summary are broad generalities ("The Wikimedia Foundation is largely funded by small donors" vs "The Met is largely funded by large donors and ticket sales") plus sometimes a 'highlights reel' ("The largest donation in the organization's history was..." or "In 2012, there was a controversy over...").
- It's possible that the section should be something like ==Finances== instead of ==Funding==, as financial information about (e.g.,) whether they're going into debt would also be relevant.
- BTW, if you're interested in adding information about organization finances, you might be interested in the idea I describe at Wikipedia:Village pump (technical)#Simple math in template. WhatamIdoing (talk) 03:37, 11 December 2024 (UTC)
Linking years for specific topics
- Per MOS:UNLINKDATES, years are not linked in a large majority of articles. However, many articles do link to "xxxx in ____" articles (e.g. 2000 in television or 1900 in baseball). I do not feel that these types of articles should be linked to. The topics are broad, and in some cases there are better articles to link to. Roasted (talk) 18:43, 7 December 2024 (UTC)
- We had a discussion about this recently, although I'm unable to immediately find it. IIRC the consensus was that the links add value in some cases (and thus should be retained) and don't in others (and thus should be removed). If my memory is correct, then this is something that can only be determined at the level of the individual article or small groups of articles. In general you can be WP:BOLD, especially if a single more specific relevant article exists, but explain why you think a change is beneficial and be prepared to discuss if others disagree. I don't believe there is (or should be) a default preference either for or against these links. Thryduulf (talk) 23:32, 7 December 2024 (UTC)
- I remember that discussion, and my recollection is the same as yours. WhatamIdoing (talk) 03:38, 11 December 2024 (UTC)
"Sensitive content" labels (only for media that is nonessential or unexpected for an article's subject)
- You see, many Wikipedia articles contain images or other media that are related to the article's subject, but that readers might not want to see, and have no way of avoiding, if they are reading the article without prior knowledge of its contents.
For instance, the article Human includes an image which contains nudity. This image is helpful to illustrate the article's subject, but many people who read this seemingly innocuous article would not expect to see such an image, and may have a problem with it.
Of course, if someone decides to read the article Penis and sees an image of a penis, they really can't complain, since the image would just be an (arguably, essential) illustration of the article's subject, and its presence can easily be known by the reader ahead-of-time.
- My solution to this is to have editors look for media or sections of an article that could be seen as having a different level of maturity compared to the rest of the article's content, then ensure that the reader must take additional action in order to see this content. That way, readers of a seemingly innocuous article would not have to see content that could be considered "shocking" or "inappropriate" compared to the rest of the article's content, unless they specifically choose to do so.
I posted this idea here so other people could tell me what they think of it, and hopefully offer some suggestions or improvements. -A Fluffy Kitteh | FluffyKittehz User Profile Page 15:56, 10 December 2024 (UTC)
- As with just about every other proposal related to "sensitive" or "shocking" content it fails to account for the absolutely massive cultural, political, philosophical and other differences in what is meant by those and similar terms. On the human article, at least File:Lucy Skeleton.jpg, File:Anterior view of human female and male, with labels 2.png, File:Tubal Pregnancy with embryo.jpg, File:Baby playing with yellow paint. Work by Dutch artist Peter Klashorst entitled "Experimental".jpg, File:Pataxo001.jpg, File:HappyPensioneer.jpg, File:An old age.JPG, File:Human.svg and quite possibly others are likely to be seen as "shocking" or "sensitive" by some people - and this is not counting those who regard all depictions of living and/or deceased people as problematic. Who gets to decide what content gets labelled and what doesn't? Thryduulf (talk) 16:18, 10 December 2024 (UTC)
- Who gets to decide? Editors, by consensus, just like everything else.
- But more pointfully, @FluffyKittehz, our usual advice is not to do this, and (importantly) to be thoughtful about image placement. For example, decide whether a nude photo is better than a nude line drawing. Decide whether the nude image really needs to be right at the top, or whether it could be a bit lower down, in a more specific section. For example, the nude photos in Human are in Human#Anatomy and physiology, which is less surprising, seen by fewer users (because most people don't scroll down) and more understandable (even people who dislike it can understand that it's relevant to the subject of anatomy).
- BTW, the people in that particular nude photo are paid professional models. They were specifically hired, about a dozen or so years ago, to make non-photoshopped photos in the non-sexualized Standard anatomical position (used by medical textbooks for hundreds of years). I have heard that it was really difficult for the modeling agency to find anyone who would take the job. WhatamIdoing (talk) 03:53, 11 December 2024 (UTC)
Changes to welcome banner
I've copied and restructured content from [RfC]. My initial proposal was to remove this content entirely, but consensus seems to be against that, so I've moved most of the discussion here.
"Anyone can edit"
Welcoming users and explaining what Wikipedia is is a valid purpose for the Main Page. Sdkb talk 07:36, 8 December 2024 (UTC)
- The Welcome message is valuable and it makes sense for it to be at the top; the message includes a link to Wikipedia for those unfamiliar with the site, and "anyone can edit" directs readers (and prospective editors) to Help:Introduction to Wikipedia. The article count statistic is a fun way to show how extensive the English Wikipedia has become. (My only suggestion would be to include a stat about the number of active editors in the message, preferably after the article count stat.) Some1 (talk) 15:06, 8 December 2024 (UTC)
- I think so too. EEpic (talk) 04:46, 11 December 2024 (UTC)
- This proposal essentially restricts informing readers about one of Wikipedia’s core ideas: anyone can edit. The current text on the main page is important because it reminds readers that we’re a free encyclopedia where anyone can contribute. The article count also matters—it shows how much Wikipedia has grown since 2001 and how many topics it covers. Another point to consider is that moving it to the bottom isn't practical. I don't think readers typically scroll that far down—personally, I rarely do. This could lead to fewer contributions from new users. The AP (talk) 15:29, 8 December 2024 (UTC)
- Why on earth would we want to hide the fact that we're the free encyclopedia anyone can edit? We need more information about how to edit on the MP, not less! We want to say, front and centre, that we're a volunteer-run free encyclopedia. Remove it, and we end up looking like Britannica. The banner says who we are, what we do, and what we've built, in a fairly small space with the help of links that draw readers in and encourage them to contribute. Cremastra ‹ u — c › 17:31, 8 December 2024 (UTC)
- I strongly agree with the comments above about the importance of encouraging new readers to edit. However, I'm a bit skeptical that the current approach (a banner taking up a quarter of the screen with some easter egg links) is the most effective way to achieve this—how often do people click on any of them? Anyone have ideas for other ways to accomplish this better while using the same amount of space?– Closed Limelike Curves (talk) 00:05, 11 December 2024 (UTC)
Aesthetic concerns
While the message isn't information-dense like the rest of the Main Page, it is much more welcoming for a new visitor, and easier on the eyes, than immediately starting with four blocks of text. Chaotic Enby (talk · contribs) 13:09, 8 December 2024 (UTC)
- Quick question: what skin do you use? Because on V22 (99% of readers), how much more #$%!ing whitespace do you need?!/joke There's literally no content left!– Closed Limelike Curves (talk) 00:05, 11 December 2024 (UTC)
- Oh, I use V10. Didn't expect V22 to be that drastically different, especially since the previous screenshot didn't seem to show that much of a difference. Chaotic Enby (talk · contribs) 00:21, 11 December 2024 (UTC)
- About 70% of total traffic is mobile, so 99% of readers using Vector 2022 may be an overestimate. Folly Mox (talk) 02:59, 11 December 2024 (UTC)
- That's because of the large donation notice. EEpic (talk) 04:51, 11 December 2024 (UTC)
- We don't control the donation notice, though. – Closed Limelike Curves (talk) 21:45, 16 December 2024 (UTC)
- I use V22, and even with safemode on (which disables my CSS customizations), and then logging out, and then looking at the screenshot on imgur and at the top of this section, I see no problems. Aaron Liu (talk) 14:25, 11 December 2024 (UTC)
What to do with space
Do you have another good reason that the top of the MP should be taken down? Do you have an alternative banner in mind? Moreover, this needs a much wider audience: the ones on the board. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 14:27, 8 December 2024 (UTC)
- On which board? This is both at the village pump and at WP:CENT, so it should reach as many people as possible. Chaotic Enby (talk · contribs) 15:13, 8 December 2024 (UTC)
- Them. They may not take too kindly to this, and we all should know by now. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 15:26, 8 December 2024 (UTC)
- This is a strange concern; of course a community consensus can change the main page's content. It doesn't seem to be happening, but that has nothing to do with the WMF. ~ ToBeFree (talk) 16:16, 8 December 2024 (UTC)
- The WMF board does not need (and is not invited) to sign off on community consensus to change the front page. ꧁Zanahary꧂ 06:23, 14 December 2024 (UTC)
Do you have an alternative banner in mind?
- I avoided discussing specific replacements because I didn't want to get bogged down in the weeds of whether we should make other changes. The simplest use of this space would be to increase the number of DYK hooks by 50%, letting us clear out a huge chunk of the backlog. – Closed Limelike Curves (talk) 17:43, 8 December 2024 (UTC)
Opt-in content warnings and image hiding
A recent discussion about sensitive images at VPP became quite heated, for reasons, but there actually appears to be little to no opposition to developing opt-in features to help readers avoid images that they don't want to see. Currently the options are very limited: there are user scripts that will hide all images, but you have to know where to find them, how to use them, and there's no granularity; or you can hide specific images by page or filename, which has obvious limitations. I therefore thought I'd bring it here to discuss ideas for improving these options.
My idea would be to implement a template system for tagging images that people might not want to see, e.g. {{Content warning|Violence|[[Image:Man getting his head chopped off.jpg|thumb|right|A man getting his head chopped off]]}} or {{Content warning|Sex|[[Image:Blowjob.jpg|thumb|right|A blowjob]]}}. This would add some markup to the image that is invisible by default. Users could then opt-in to either hiding all marked images behind a content warning or just hiding certain categories. We could develop a guideline on what categories of content warning should exist and what kind of images they should be applied to.
A good thing about a system like this is that the community can do almost all of the work ourselves: the tagging is a simple template that adds a CSS class, and the filtering can be implemented through user scripts/gadgets. WMF involvement on e.g. integrating this into the default preferences screen or doing the warning/hiding on the server side would be a nice-to-have, not a must-have. – Joe (talk) 07:34, 11 December 2024 (UTC)
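The tagging-plus-gadget division of labour described above could be sketched roughly as follows. This is an illustrative assumption only: the `content-warning` class names and the shape of the preferences object are invented here for the sketch, not an existing template or gadget convention.

```javascript
// Sketch of the filtering logic a user script/gadget might use.
// Class names ('content-warning', 'content-warning-<category>') and
// the prefs shape are hypothetical, chosen for illustration.

function hiddenCategories(prefs) {
  // prefs: e.g. { hideAll: false, hide: ['sex', 'violence'] }
  return new Set(prefs.hide || []);
}

function shouldHide(wrapperClasses, prefs) {
  // wrapperClasses: CSS classes the template added to the image wrapper,
  // e.g. ['content-warning', 'content-warning-sex'].
  // Untagged images are never hidden, even in hideAll mode, matching the
  // proposal that only marked images are affected.
  if (wrapperClasses.indexOf('content-warning') === -1) return false;
  if (prefs.hideAll) return true;
  var hidden = hiddenCategories(prefs);
  return wrapperClasses.some(function (c) {
    return c.indexOf('content-warning-') === 0 &&
           hidden.has(c.slice('content-warning-'.length));
  });
}
```

A gadget would then run this over each tagged wrapper element and toggle something like `display: none` plus a click-to-reveal notice, while the opt-in itself would live in a preferences screen or user-script settings.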
- Oh also, I suggest we strictly limit discussion here to opt-in systems—nothing that will change the current default of all images always being visible as-is—because experience shows that, not only is consensus on this unlikely to change, but even mentioning it has a tendency to heat up and derail discussions. – Joe (talk) 07:36, 11 December 2024 (UTC)
- Would there be a way to tag or list the images themselves, rather than needing to recreate new template coding for each use? CMD (talk) 08:32, 11 December 2024 (UTC)
- That would make sense, but since the images are (mostly) on Commons I couldn't figure out a way of doing it off the top of my head. It would also mean that control of what and how things were tagged would be on another project, which always tends to be controversial on enwiki. – Joe (talk) 08:56, 11 December 2024 (UTC)
- From the experience with spoiler warnings, these things tend to proliferate if they exist at all. I would rather stay with the clean policy of no warnings whatsoever than discuss whether to introduce warnings for certain classes of offensive things. I am personally offended by the use of "His Royal Highness" or similar words when referring to citizens of Germany like Mr Prinz von Preussen, but I think it is better not to have a category of pictures offending German anti-monarchists. Even if we do not do the censoring ourselves, I oppose spending volunteer time on implementing something that can be used as a censorship infrastructure. —Kusma (talk) 09:33, 11 December 2024 (UTC)
- This would retain the policy of no warnings because they would be invisible to anybody who didn't opt-in. Similarly, only volunteers who want to use their time in maintaining this system would do so. – Joe (talk) 10:45, 11 December 2024 (UTC)
- I also was reminded of the spoiler tag fiasco. Only at least we can agree spoiler tags would be on any and all plot summaries. Dronebogus (talk) 17:31, 11 December 2024 (UTC)
- Another recent discussion at Wikipedia:Village_pump_(proposals)#"Blur_all_images"_switch. Gråbergs Gråa Sång (talk) 10:04, 11 December 2024 (UTC)
- Strongest oppose to tagging system, for which there was pretty clear consensus against in the previous discussion. It is against the spirit of Wikipedia and would be a huge headache for an end that goes against the spirit of Wikipedia. This project should not be helping people hide from information. ꧁Zanahary꧂ 15:33, 11 December 2024 (UTC)
- Support: I don't see why anyone would oppose it. And since I have little knowledge on technical stuff, I don't have anything to add to this idea.
- ☆SuperNinja2☆ TALK! 17:59, 11 December 2024 (UTC)
- @Super ninja2: you don’t vote at the Idea Lab. Zanahary is admittedly falling foul of this rule too but I’ll give it a pass as “I am so passionate about this I will vote rhetorically”. Dronebogus (talk) 18:06, 11 December 2024 (UTC)
- Sorry, I didn’t realize we don’t vote here. How are we supposed to voice opposition to an idea? Just exclude the bolded vote? ꧁Zanahary꧂ 18:36, 11 December 2024 (UTC)
- You don't. You criticize and give your opinion to fix. ☆SuperNinja2☆ TALK! 18:49, 11 December 2024 (UTC)
- I don't voice opposition to an idea? Here's my criticism: tagging to appeal to sensitivities that would have certain types of information and imagery hidden is validating those sensitivities, which is not the place of Wikipedia (and is against its spirit), and enables the concealment of information, which is diametrically opposed to the spirit of Wikipedia. My proposed "fix" is to not pursue this content-tagging idea. ꧁Zanahary꧂ 19:23, 11 December 2024 (UTC)
- I actually thought so. Saw Zanahary voting and thought maybe I was wrong. ☆SuperNinja2☆ TALK! 18:48, 11 December 2024 (UTC)
- I haven’t seen anyone bring this up, but this clearly goes against WP:No disclaimers. Please consider this a constructive note about the obstacles you will face if you try to add content warnings to Wikipedia. ꧁Zanahary꧂ 17:23, 16 December 2024 (UTC)
Having a general opt-in system of blurring or hiding all images would be no problem. Having one based on tags, content, categories... would be largely unmaintainable. If you create an "opt-in here to hide all sexual images", then you have to be very, very sure that you actually can do this and not give false promises to readers. But as there is no agreement on where to draw the line of what is or isn't sexual, nudity, violence, disturbing, ... this will only lead to endless edit wars without possible resolution. Are the images on Breastfeeding sexual? L'Origine du monde? Liberty Leading the People (ooh, violence as well!)? Putto? Pavilion of Human Passions? Fram (talk) 10:03, 11 December 2024 (UTC)
- Exactly. One of the issues is that some people think there is a thing such as non-sexual nudity, while others think that nudity is always sexual. —Kusma (talk) 10:14, 11 December 2024 (UTC)
- So we could have a category "nudity" instead of or in addition to "sex". Part of the proposal here is coming to a consensus on which categories should exist and on guidelines for their use. I don't see how we can conclude that this is an impossible or impractical task before even trying. We manage to draw lines through grey areas all the time. – Joe (talk) 10:44, 11 December 2024 (UTC)
- "Trying" would be a massive task, so deciding whether it seems feasible or not before we start on it seems the wisest course of action. We get endless discussions and RfC about whether something is a WP:RS or not all the time, to have this kind of discussion about which tags we should have and then which images should be part of it will multiply this kind of discussions endlessly. Should The Adoration of the Magi (Geertgen tot Sint Jans) be tagged as nudity? Buttocks? Is File:Nipple of male human.jpg nudity? File:African Breast SG.jpg? If male nipples are nudity, then File:Michael Phelps wins 8th gold medal.jpg is nudity. If male nipples aren't nudity, but female nipples are nudity, then why one but not the other? Fram (talk) 11:04, 11 December 2024 (UTC)
- TRADITION!! Gråbergs Gråa Sång (talk) 11:07, 11 December 2024 (UTC)
- As with everything, we'd have to reach a consensus about such edge cases either in general or on a case-by-case basis. It's not for me to say how that would go with these examples, but I'd suggest as a general principle we should be descriptive rather than normative, e.g. if there is a dispute about what constitutes male nudity, then break the category down until the labels are uncontroversial – "male nudity (upper body)" and so on. – Joe (talk) 13:50, 11 December 2024 (UTC)
- These aren't edge cases though. The more you have to break it down, the more work it creates, and the disputes will still continue. Will we label all images of women/men/children/other? All images of women showing any flesh or hair at all? Basically, we will need to tag every image in every article with an endless series of tags, and then create a system to let people choose between these endless tags which ones they want to hide, even things most of us might find deeply unsettling to even offer as an option? Do we want people to be able to use Wikipedia but hide all images of transgenders? All images of women? All images of Jews? Everything that isn't halal? In the 4 images shown below, the one in the bathtub is much more sexual than the one in the shower, but the one in the shower shows a nipple, and the other one doesn't. Even to only make meaningful categories to indicate the difference between those two images would be quite a task, and then you get e.g. the other image showing an artwork, which again needs a different indication. It seems like madness to me. Fram (talk) 14:05, 11 December 2024 (UTC)
- There are just so many things that some people don't want to see... Dead Australians or Baháʼu'lláh are among the easier ones that might look near harmless to tag. However, people will also demand more difficult things like "images not appropriate for 12 year olds" that have no neutral definition (and where Europeans and Americans have widely differing opinions: just look for typical film ratings where European censors think sex, nudity, drug use and swearing are ok but violence is not, and American censors will think the opposite). There are also things some people find offensive that I am not at all ok with providing a censorship infrastructure for: images depicting mixed-race couples, images depicting trans people, images depicting same-sex couples. I do not think Wikipedia should help people avoid seeing such images, so I do not want us to participate in building a censorship infrastructure that allows it. —Kusma (talk) 11:18, 11 December 2024 (UTC)
- Alternatives like Hamichlol exists. Gråbergs Gråa Sång (talk) 11:21, 11 December 2024 (UTC)
- The English Wikipedia community would control which categories are used for this system and I am confident they would reject all of these examples. "People will make unreasonable demands" does not sound like a good reason not to do something. – Joe (talk) 13:44, 11 December 2024 (UTC)
I am confident they would reject all of these examples
Why? On what objective grounds are you labelling those examples as "unreasonable"? Why are your preferences "reasonable"? Thryduulf (talk) 14:14, 11 December 2024 (UTC)
- Because if there's one thing the English Wikipedia community is known for, it's always agreeing on everything?
- This project already has enough things for ongoing arguments over. Making lists of what people may want to avoid and ranking every image on whether it falls into that list is a tremendous effort that is bound to fail. (The thread calling for such categorization on the policy page is an excellent example.... a user felt they were harmed by an image of a dead man smiling... only it seems not to be a dead man, we were supposed to police that image based on how they would misinterpret it.) I'm also wondering if we risk civil litigation if we tell people that we're protecting against image-type-X and then someone who opted out of seeing such images views something that they consider X.
- This is just one more impediment to people adding information to the encyclopedia. I can't see that this censorship system would make more people enthusiastic to edit here (and if it did, I'm not sure we'd really want the sort of editor it would encourage.) -- Nat Gertler (talk) 14:39, 11 December 2024 (UTC)
One more general problem with the proposal is that you do not know whether people will be forced to "opt in" by "well meaning" system administrators trying to censor what can be accessed from their system. Having machine readable tags on images makes it very easy to do so and also easy to remove people's ability to click through and see the content. We should not encourage volunteer efforts on supporting such censorship infrastructures. —Kusma (talk) 11:46, 11 December 2024 (UTC)
I don't think the specific proposal here, placing templates in articles (even if they default to not obscuring any images), would be workable. It's too big of an opportunity for activist editors to go on mass-article-editing sprees and for people to edit war over a particular instance of the template. You'd also have to deal with templates where simply wrapping the image in a template isn't currently possible, such as Template:Speciesbox. If people really want to pursue this, I think it'd be better to figure out how to tag the images themselves; people will still probably fight over the classifications, but at least it's less likely to spill over into disrupting articles. Anomie⚔ 12:45, 11 December 2024 (UTC)
- The idea was that, since these templates would have no effect unless someone has opted in to hiding that specific category of image, people who do not want images to be hidden would be less likely to fight over it or be worried about what "activist editors" are doing. The idea that Wikipedia should not be censored for everyone has solid consensus behind it, but the position some are taking here, that other people should not be allowed an informed choice of what not to see, strikes me as quite extreme. – Joe (talk) 13:40, 11 December 2024 (UTC)
- You were given all the information you need by the very fact that this is an encyclopedia. There WILL be things here to upset you. --User:Khajidha (talk) (contributions) 15:06, 11 December 2024 (UTC)
- I dispute your good-faith but naive assertion that these templates would have "no effect on people who have not opted in". If you tag images systematically, you make it easy to build proxies (or just censored forks) that allow high schools in Florida to ensure their students won't be able to click through to the photo explaining how to use contraceptives. There is no innocent "only opt-in" tagging; any such metadata can and will be used for censorship. Do you really want us to be in the business of enabling censorship? —Kusma (talk) 15:14, 11 December 2024 (UTC)
- Well yes, the proposal is literally to enable censorship. For those who want it. It may be that it is used by network administrators as you suggest, we can't stop that, but that's between them and their users. I agree that censorship should not affect what editors include in our content but I find the idea that we can enforce our ideal of Zero Sensitivity Free Speech on a global readership also very naive (and frankly a little creepy; I keep picturing a stereotypical Wikipedian standing in front of a Muslim child screaming "no you WILL look at what we show you, because censorship is bad and also what about Renaissance art"). A silver lining could be that the option of controlling access to our content in a fine grained way may convince some networks to allow partial access to Wikipedia where they would otherwise completely block it. – Joe (talk) 16:58, 12 December 2024 (UTC)
- We are not in the business of enabling censorship, voluntary or otherwise, because voluntary censorship very quickly becomes involuntary censorship. We are in the business of providing access to information, not inhibiting access to information. Thryduulf (talk) 17:07, 12 December 2024 (UTC)
- "We're not in the business of leaving the phrase 'rimjob' to your imagination, Timmy, we're in the business of providing access to artistic depictions of bunny sex!" he screamed, and screamed, and screamed... you guys are really silly sometimes. – Joe (talk) 17:31, 12 December 2024 (UTC)
- I've seen enough arguments over people doing mass edits and otherwise fighting over invisible stuff in articles, including complaints of watchlist flooding, to think this would be any different. Anomie⚔ 00:17, 12 December 2024 (UTC)
* I would support an opt-in that turned off or blurred all images and made them viewable with a click. I would absolutely object to anything that used some categorization system to decide which images were potentially offensive to someone somewhere. There would be systemic sexism in such categorization because of different cultural norms. Valereee (talk) 12:56, 11 December 2024 (UTC)
- Here are four images of adult women touching their own breasts. Do we categorize all of them as potentially offensive? Valereee (talk) 13:10, 11 December 2024 (UTC)
- Yes, or at least the three photographs. I'm standing on a crowded subway car and just scrolled past three pics of boobs. Totally unexpected, totally would have minimized/blurred/hidden those if I could, just for the other people around me. It has nothing to do with being offensive, I'm just in a place where pictures of boobs are not really OK to have on my phone right now. And I live in a free country, I can only imagine what it might be like for others. Levivich (talk) 15:16, 11 December 2024 (UTC)
- If you are in a place where images of boobs are not ok to have on your phone, you should turn off or blur images on wikis in general as you can never guarantee there will be a warning. (As an aside, these images are not far from some that I have seen on ads in subway stations). —Kusma (talk) 16:15, 11 December 2024 (UTC)
- Levivich, I sympathize with the desire not to encounter NSFW content while “at work”. But your standard here is “not safe for a crowded American or British public space”, which admittedly is the default for the Internet as a whole. But on Wikimedia we at least try to respect the fact that not everyone has that standard. Dronebogus (talk) 17:49, 11 December 2024 (UTC)
- It really doesn't feel like we're trying to respect anyone, based on this and related discussions. We seem to be saying to anybody who has personal or cultural sensitivities about any kind of image (so the majority of humankind) that they can either accept our standard of WP:NOTCENSORED or to not see any images at all. We're saying we can't possibly let your kids have the full experience of our educational images while also avoiding photos of dead bodies or graphic depictions of penetrative sex, because what about male nipples? – Joe (talk) 17:04, 12 December 2024 (UTC)
- I don't think anyone is saying that people should not see images at all... simply that if they are concerned about seeing images, they get to be the ones to decide which images they should see by clicking on that image. For them to make it our responsibility to guess which pictures they'll want and be the baddies when we're wrong is not respecting them and their ability to make decisions for themselves. (And I'm not sure that you can say we're giving anyone the "full experience of our educational images" when you are hiding some of them.) -- Nat Gertler (talk) 21:10, 12 December 2024 (UTC)
- Yes because what about male nipples. Because what about female nipples? Lots of more liberal-minded legal guardians wouldn’t oppose children seeing those. Or even full nudity. Or even dead bodies and penetrative sex! And then we have to go the whole opposite direction ad absurdum with women in bikinis, and Venus de Milo, and unveiled females, or female humans in general, and Mohammad, and dead aboriginal Australians and spiders and raw meat and Hindu swastikas and poop. Dronebogus (talk) 11:27, 13 December 2024 (UTC)
- If a stranger is offended by an image on your phone, remind them that they are being very rude by looking at it. --User:Khajidha (talk) (contributions) 20:57, 11 December 2024 (UTC)
- Try that with the policeman looking over your shoulder in the country where accessing "indecent" images gets you imprisoned. – Joe (talk) 17:06, 12 December 2024 (UTC)
- Pretty much every image of a human being (and plenty of other subjects) has the potential to be regarded as indecent somewhere. This means there are exactly two options that can achieve your desired outcome: censor all images, or assign every image, individually, to one or more extremely fine-grained categories. The first already exists, the second is completely impractical. Thryduulf (talk) 17:11, 12 December 2024 (UTC)
- Then DON'T GO TO A WEBSITE THAT YOU SHOULD REASONABLY EXPECT TO HAVE SUCH CONTENT. Such as an encyclopedia.--User:Khajidha (talk) (contributions) 00:11, 13 December 2024 (UTC)
- Someone on the subway asked me to stop looking at pictures of naked people on my phone and I said "WHAT?! I'M READING AN ENCYCLOPEDIA!" Levivich (talk) 00:22, 13 December 2024 (UTC)
- I really don’t see why Wikipedia should work around the subway-goer looking at your phone and your ability to appease them. Look at another website if you want something censored and safe for onlookers. ꧁Zanahary꧂ 00:28, 13 December 2024 (UTC)
- I don't really see why you (or anyone) would be opposed to me having a script that lets me turn off those pictures if I want to. Levivich (talk) 00:36, 13 December 2024 (UTC)
- You can have your own script to toggle off every image. You can have a script that runs on an off-wiki index of images you don’t want to see. But to tag images as potentially offensive, I have an issue with, and I hope you understand why even if you don’t agree. ꧁Zanahary꧂ 02:44, 13 December 2024 (UTC)
- I’m sorry but your situation is just weird. You should know Wikipedia is generally NSFW at this point if you’re complaining about it right now. Dronebogus (talk) 11:45, 13 December 2024 (UTC)
- Seems that the problematic behavior here isn't us having the images or you looking at them, it is the random person looking at someone else's screen. We should not be required to modify our behavior because other people behave badly. --User:Khajidha (talk) (contributions) 15:49, 13 December 2024 (UTC)
- You can look at other websites if you're in public and an uncensored one would disturb people who might glance at your phone! ꧁Zanahary꧂ 21:06, 11 December 2024 (UTC)
- And how do we categorize these in order to allow "offensive" images to be blurred, @Levivich? Valereee (talk) 22:42, 16 December 2024 (UTC)
- I'd be ok with such an opt-in too, if it can be made. Perhaps such a link/button could be placed in the main menu or floating header. The hamburger too perhaps, for the mobile readers. Gråbergs Gråa Sång (talk) 13:27, 11 December 2024 (UTC)
- The idea is not to decide what is and isn't potentially offensive, but to add descriptive labels and then let readers decide what they do and do not want to be warned about. So for example we would not categorise any of your examples as "potentially offensive", but as containing "nudity" or "nude women" or whatever level of granularity was agreed upon. This idea is a reaction to the proposal to obscure all images (which is being discussed elsewhere) because a) letting users choose whether to see an image is only useful if they have some indication of what's behind the blurring and b) quite frankly, I doubt anyone will ever use such an indiscriminate option. – Joe (talk) 13:33, 11 December 2024 (UTC)
- One generally does have indications of what is being blurred, both some sense in a blurred image but more importantly by caption. Some ways of hiding all images would present not a blurred image but a filename, and image filenames are largely descriptive. -- Nat Gertler (talk) 15:59, 11 December 2024 (UTC)
- Use alt text, the explicit purpose of which is to present a description of the picture for those that cannot see it, rather than file names which can be completely descriptive without describing anything relevant to why someone might or might not want to view it, e.g. the photo of the statue here is File:Antonin Carlès (1851-1919) - La Jeunesse (1883) (12387743075).png. Thryduulf (talk) 18:22, 11 December 2024 (UTC)
- That is actually a much better idea than blurring, thanks! Having a "see alt text instead of images" option would not only be more practical for people wanting to know if images are sensitive before seeing them, it would also give more of an incentive to add alt text to begin with. Chaotic Enby (talk · contribs) 18:31, 11 December 2024 (UTC)
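A minimal sketch of the "show alt text instead of the image" idea discussed above. The function name is hypothetical; a real gadget would run this over every image on the page and swap the element for a text node. Note the fallback to the file name, which, as pointed out above, may describe nothing relevant:

```javascript
// Build a text placeholder for an image, preferring alt text over the
// file name (which can be technical and uninformative, e.g.
// "Antonin_Carlès_..._(12387743075).png").
function placeholderFor(img) {
  if (img.alt && img.alt.trim() !== "") {
    return "[Image: " + img.alt.trim() + "]";
  }
  // No alt text: fall back to the last path segment of the source URL.
  const file = decodeURIComponent(img.src.split("/").pop());
  return "[Image: " + file + "]";
}
```

A side benefit noted in the thread: making alt text user-visible in this way would create an incentive to write it in the first place.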
- I would also support an opt-in to blur all images (in fact, User:Chaotic Enby/blur.js does about that). However, categorizing images with labels whose only purpose is for readers to decide whether they are offensive is, by definition, flagging these images as "potentially offensive", as I doubt a completely innocuous image would be flagged that way. And any such categorization can easily be exploited, as above. Also, there are the ethical concerns: if some people find homosexuality offensive, does that mean Wikipedia should tag all images of gay couples that way? What is the message we bring if gay people have a tag for blurring, but not straight people? Chaotic Enby (talk · contribs) 14:04, 11 December 2024 (UTC)
You might be able to do it using categories, even Commons categories. Instead of (or in addition to) adding images one by one to special maintenance categories, add entire image categories to the maintenance categories. Keep in mind this isn't the kind of thing that needs consensus to do (until/unless it becomes a gadget or preference)--anyone can just write the script. Even the list of categories/images can be maintained separately (e.g. a list of Commons categories can be kept on enwiki or meta wiki or wherever, so no editing of anything on Commons would be needed). It could be done as an expansion of an existing hide-all-images script, where users can hide-some-images. The user can even be allowed to determine which categories/images are hidden. If anyone wants to write such a script, they'd have my support, hmu if you want a tester. Levivich (talk) 15:30, 11 December 2024 (UTC)
- As I commented at Wikipedia:Village pump (proposals)/Archive 214#Censor NSFW/ NSFL content last month unless you get really fine-grained, Commons categories don't work. For example all these images are in subcategories of Commons:Category:Sex:
- To get any sort of useful granularity you have to go multiple levels deep, and that means there are literally thousands (possibly tens of thousands) of categories you need to examine individually and get agreement on. And then hope that the images are never recategorised (or miscategorised), new images added to categories previously declared "safe" (or whatever term you choose) or new categories created. Thryduulf (talk) 15:43, 11 December 2024 (UTC)
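The granularity problem described here is easy to see in code: useful filtering means expanding a category tree several levels deep, and the number of subcategories to vet grows quickly with each level. A toy sketch, with a made-up tree standing in for what the Commons API would return (real trees also contain cycles and re-parented categories, which this ignores):

```javascript
// Collect all categories reachable from `root` within `maxDepth` levels,
// given a parent -> subcategories map. Each extra level of depth can
// multiply the number of categories that need individual review.
function expand(tree, root, maxDepth) {
  const seen = new Set([root]);
  let frontier = [root];
  for (let d = 0; d < maxDepth; d++) {
    const next = [];
    for (const cat of frontier) {
      for (const sub of tree[cat] || []) {
        if (!seen.has(sub)) {
          seen.add(sub);
          next.push(sub);
        }
      }
    }
    frontier = next;
  }
  return seen;
}
```

Every category in the returned set would need to be examined and agreed on, and the set goes stale whenever anything is recategorised — which is the maintenance burden raised above.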
- c:Category:Penis. If someone wrote a script that auto-hid images in that category (and sub-cats), I'd install it. We don't need agreement on what the categories are, people can just make lists of categories. The script can allow users to choose whatever lists of categories they want, or make/edit their own list of categories. One thing I agree about: the work is in compiling the lists of categories. Nudity categories are easy; I suspect the violence categories would be tougher to identify, if they even exist. But if they don't, maintenance categories could be created. (Lists of individual images could even be created, but that is probably too much work to attempt.) Levivich (talk) 15:53, 11 December 2024 (UTC)
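The per-user hiding logic described here could be sketched as below. The block list contents and helper name are hypothetical; a real user script would fetch each image's categories from the MediaWiki API and then apply CSS (e.g. a blur filter) to matches:

```javascript
// A user's own block list, e.g. maintained on a personal subpage or
// copied from a shared list. Entries are category names, chosen
// entirely by the user.
const blockedCategories = new Set([
  "Category:Penis",
  "Category:Nudity",
]);

// Decide whether to hide an image, given the categories the wiki
// reports for it. A match on any blocked category hides the image.
function shouldHide(imageCategories, blocked) {
  return imageCategories.some(function (cat) {
    return blocked.has(cat);
  });
}
```

Since both the list and the script are opt-in and user-side, nothing here decides for anyone else what counts as objectionable — which is the distinction being drawn in this subthread.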
- Going that private script route, you could also use the category of the article in which it appears in some cases. But I'd worry that folks would try to build categories for the specific reason of serving this script, which would be sliding from choice to policy. -- Nat Gertler (talk) 16:02, 11 December 2024 (UTC)
- Nah, still choice. One option is to create new maintenance categories for the script. Another option is for the script to just use its own list of images/categories, without having to add images to new maintenance categories. Levivich (talk) 16:21, 11 December 2024 (UTC)
- Allowing maintenance categories designed to hide images is very much a policy issue, no matter how many times you say "nah". The moment that "pictures which include Jews" category goes up, we're endorsing special tools for antisemitism. -- Nat Gertler (talk) 17:04, 11 December 2024 (UTC)
- Nah. See, while we have a categories policy, new maintenance categories are not something we "allow" or don't allow -- they're already allowed -- and they don't create a "policy issue" because we already have a policy that covers it. People create new maintenance categories all the time for various reasons -- it's not like we have to have an RFC to make a new template or make a new maintenance category. This is a wiki, have you forgotten? We need consensus to delete stuff, not create stuff.
- And you're totally ignoring the part that I've now said multiple times, which is that no new maintenance categories are required. That's one way to skin this cat, but it can also be done by -- pay attention please -- creating lists of categories and images. See? No maintenance category, no policy issue.
- Anybody creating a list of "pictures which include Jews" would be violating multiple site policies and the UCOC and TOS. This is a wiki, remember? Did we not have Wikipedia because someone might create an antisemitic article? No! We still had a Wikipedia, knowing full well that some people will abuse it. So "somebody might abuse it!" is a really terrible argument against any new feature or script or anything on Wikipedia.
- What are you even opposing here? You have a problem with someone creating a script to hide images? Really? Maybe just ... not ... try to imagine reasons against it? Maybe just let the people who think it's a good idea discuss the implementation, and the people who don't think it's a good idea can just... not participate in the discussion about implementation? Just a thought. It's hard to have a discussion on this website sometimes. Levivich (talk) 17:09, 11 December 2024 (UTC)
- Creating a script to hide images is fine. Curating/categorising images to make them easier to hide is not. You are free to do the first in any way you like, but the second should not be done on Wikipedia or any Wikimedia project. —Kusma (talk) 17:30, 11 December 2024 (UTC)
- Why yes, I can understand why having people who disagree with you about both intent and effect in this matter would be a disruption to the discussion you want to have, with all agreeing with you and not foreseeing any problems nor offering any alternate suggestions. I'm not seeing that that would be particularly in the spirit of Wikipedia nor helpful to the project, however. "Someone might abuse it and it might require more editorial effort to work it out, all of which could be a big distraction that does not actually advance the goals of the project" is a genuine concern, no matter how many times you say "nah". -- Nat Gertler (talk) 17:40, 11 December 2024 (UTC)
- How would hiding pictures of Jews be an abuse? ꧁Zanahary꧂ 18:40, 11 December 2024 (UTC)
- If not categories then perhaps that image tagging system commons has? (Where it asks you what is depicted when you upload something). Not sure how much that is actually used though. – Joe (talk) 17:18, 12 December 2024 (UTC)
- Using the sub-cats, you would hide e.g. the image on the right side (which is in use on enwiki). Fram (talk) 16:14, 11 December 2024 (UTC)
- Yeah, given how Wikipedia categorization works (it's really labeling, not categorization), it's well known that if you go deep enough into sub-cats you emerge somewhere far away from the category you started at.
- If the cost of muting the Penis category is having the bunny picture hidden, I'd still install the script. False positives are nbd. Levivich (talk) 16:23, 11 December 2024 (UTC)
- This is a bad example. It is only used on the article about the objectionable painting it is extracted from. Aaron Liu (talk) 20:53, 13 December 2024 (UTC)
- And...? I thought we were hiding objectionable images (and considering that painting as "objectionable" is dubious to start with), not all images on a page where one image is objectionable? Plus, an image that is only used on page X today may be used on page Y tomorrow ("rabbits in art"?). So no, this is not a bad example at all. Fram (talk) 22:54, 13 December 2024 (UTC)
- This is no better than the discussion running at the other VP and is borderline forum shopping. I’m disappointed in the number (i.e. non-zero) of competent users vehemently defending a bad idea that’s been talked to death. I keep saying that the only way (no hyperbole) this will ever work is an “all or nothing” opt-in to hide all images without prejudice. Which should be discussed at the technical VP IMO. Dronebogus (talk) 17:37, 11 December 2024 (UTC)
- Reactivating the sensitive content tagging idea here feels like forum-shopping to me too. ꧁Zanahary꧂ 18:41, 11 December 2024 (UTC)
oppose as forum-shopping for yet another attempt to try to introduce censorship into the wikipedia. ValarianB (talk) 18:51, 11 December 2024 (UTC)
- If people really want a censored Wikipedia, aren't they allowed to copy the whole thing and make their own site? One WITHOUT blackjack and hookers?--User:Khajidha (talk) (contributions) 21:02, 11 December 2024 (UTC)
- Yes, we even provide basic information on how to do it at Wikipedia:FAQ/Forking. Thryduulf (talk) 21:26, 11 December 2024 (UTC)
- Actually forget the Wikipedia and the blackjack! Dronebogus (talk) 14:55, 12 December 2024 (UTC)
- Maybe you missed it, ValarianB, but this is the idea lab, so a) as it says at the top of the page, bold !votes are discouraged and b) the whole point is to develop ideas that are not yet ready for consensus-forming in other forums. – Joe (talk) 17:10, 12 December 2024 (UTC)
- Maybe you missed it, @Joe, but forum shopping, spending time developing ideas that have no realistic chance of gaining consensus in any form, and ignoring all the feedback you are getting and insisting that, no matter how many times and how many ways this exact same thing has been proposed previously, this time it won't be rejected by the community on both philosophical and practical grounds are also discouraged. Thryduulf (talk) 17:16, 12 December 2024 (UTC)
- ...you realise you don't have to participate in this discussion, right? – Joe (talk) 17:20, 12 December 2024 (UTC)
- Why shouldn't they? They strongly oppose the idea. ꧁Zanahary꧂ 18:07, 12 December 2024 (UTC)
- Yes, that's exactly the problem with forum shopping. If you keep starting new discussions and refusing to accept consensus, you might exhaust people until you can force your deeply unpopular idea through. 135.180.197.73 (talk) 18:31, 12 December 2024 (UTC)
- Because Thryduulf apparently thinks it's a waste of time to do so. And since the purpose of the idea lab is to develop an idea, not propose or build consensus for anything, I tend to agree that chiming in here just to say you oppose something is a waste of (everyone's) time. – Joe (talk) 18:43, 12 December 2024 (UTC)
- How? If I were workshopping an idea to make Wikipedia cause laptops to explode, a discussion that omits opposition to that idea would be useless and not revealing. ꧁Zanahary꧂ 19:56, 12 December 2024 (UTC)
- Because you're not participating to help develop the idea, you're participating to stop other people from developing the idea. Brainstorming is not a debate. Brainstorming an idea does not involve people making arguments for why everyone should stop brainstorming the idea.
- To use an analogy, imagine a meeting of people who want to develop a proposal to build a building. People who do not think the building should be built at all would not ordinarily be invited to such a meeting. If most of the meeting were spent talking about whether or not to build the building at all, there would be no progress towards a proposal to build the building.
- Sometimes, what's needed (especially in the early stages of brainstorming) is for people who want to develop a proposal to build a building, to have the space that they need to develop the best proposal they can, before anybody challenges the proposal or makes the argument that no building should be built at all. Levivich (talk) 20:13, 12 December 2024 (UTC)
- The issue here is that image filtering for this purpose is a PEREN proposal, with many of the faults in such a system already identified. Not many new ideas are being proposed here from past discussions. Masem (t) 20:27, 12 December 2024 (UTC)
- I don't think this model works for a wiki. There's no committee presenting to the public. This project is all of ours, and if there's so much opposition to a proposal that it cannot be discussed without being overwhelmed by opposition, then I don't see it as a problem that the unpopular idea can't get on its feet. ꧁Zanahary꧂ 20:43, 12 December 2024 (UTC)
- Heh. So if three or four people can disrupt an idea lab thread, then that means it was a bad idea... is what you're saying? Levivich (talk) 21:22, 12 December 2024 (UTC)
- Sure. Write up the worst interpretation of my comment and I’ll sign it. ꧁Zanahary꧂ 21:43, 12 December 2024 (UTC)
- There's no problem with users voluntarily discussing an idea and how it might be implemented. They should, of course, remain aware that just because everyone interested in an idea comes up with a way to proceed doesn't mean there's a community consensus to do so. But if they can come up with a plan to implement an add-on feature such as a gadget, for example, that doesn't impose any additional costs or otherwise affect the work of any other editor who isn't volunteering to be involved, then they're free to spend their own time on it. isaacl (talk) 19:33, 12 December 2024 (UTC)
- My personal thought on how this should work is image sorting by category; the onus is completely on the user using the opt-in tool to select categories of images they don't want to see. We don't need to decide for anybody, they can completely make their own decisions, and there's no need for upkeep of a "possibly offensive image list." Just Step Sideways from this world ..... today 02:48, 13 December 2024 (UTC)
- It’s interesting but I don’t support it. People don’t necessarily get how categories work. “Sex” isn’t about sexual intercourse, but it’ll be at the top of everyone’s block lists. And blocking a huge over-category like violence will block a lot of totally inoffensive images. In other words, this is too technical for most people and will satisfy no-one while catching mostly false positives. Which is actually worse than all-or-nothing. Dronebogus (talk) 11:19, 13 December 2024 (UTC)
- A problem with this is that the tail may begin to wag the dog, with inclusion on block lists becoming a consideration in categorizing images and discussions on categorizations. ꧁Zanahary꧂ 15:00, 13 December 2024 (UTC)
- I can see that happening, becoming a WP:ETHNICGALLERY-like timesink. Gråbergs Gråa Sång (talk) 15:07, 13 December 2024 (UTC)
- I say let stupid people who don't understand what words mean make their own mistakes. It might even teach them something. So long as it is opt-in only it won't affect anyone else. El Beeblerino if you're not into the whole brevity thing 07:28, 15 December 2024 (UTC)
Suggestion: we let those who think this is a good idea waste hours of their time devising a plan, and then we oppose it once they bring it to WP:VPPR. I guess they have received enough feedback and can look through the archives to see why this is a bad idea which has been rejected again and again. It's their choice if they want to add one more instance of this perennial proposal, if they believe that either the opposes here are a minority and they represent the silent majority somehow, or if they somehow can find a proposal which sidesteps the objections raised here. Fram (talk) 11:46, 13 December 2024 (UTC)
- That'd be great, thanks. – Joe (talk) 11:49, 13 December 2024 (UTC)
Arbitrary break
So to summarise the constructive feedback so far:
- It'd be better for labels to be attached to images and not to inclusions of them
- It'd be better to use an existing labelling (e.g. categories, captions) rather than a new system
- However it's doubtful if it's feasible to use categories or if they are sufficiently consistent
- An alternative could be to maintain a central list of labels
This suggests to me three, not mutually exclusive, approaches: obscure everything and rely on captions and other existing context to convey what's shown (which is being discussed at Wikipedia:Village_pump_(proposals)#"Blur_all_images"_switch); develop a gadget that uses categories (possibly more technically complex); develop a gadget that uses a central list (less technically complex, could build lists from categories). – Joe (talk) 12:11, 13 December 2024 (UTC)
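The "central list" approach summarised above could be as simple as a plain wiki page that the gadget fetches and parses, one entry per line. The line format here is an assumption for illustration, not an existing convention:

```javascript
// Parse a user-maintained block-list page into category and file
// entries. Lines that are neither (comments, notes) are ignored.
function parseBlockList(pageText) {
  const categories = [];
  const files = [];
  for (const raw of pageText.split("\n")) {
    const line = raw.replace(/^\*\s*/, "").trim(); // strip "* " bullets
    if (line.startsWith("Category:")) {
      categories.push(line);
    } else if (line.startsWith("File:")) {
      files.push(line);
    }
  }
  return { categories: categories, files: files };
}
```

Keeping the list on a plain page (on enwiki, meta, or a user subpage) means it can be shared, forked, and edited without touching Commons at all, as suggested earlier in the thread.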
- Ah, the dreaded “arbitrary break”. Dronebogus (talk) 14:09, 13 December 2024 (UTC)
- …this is your summary of feedback so far? How about "many editors believe that marking content as potentially sensitive violates WP:NOTCENSORED and the spirit of an encyclopedia?" ꧁Zanahary꧂ 14:58, 13 December 2024 (UTC)
- Seriously could you two stop? Levivich (talk) 15:10, 13 December 2024 (UTC)
- That viewpoint has been well-heard and understood, and any actual implementation plan that develops will have to take it into account. isaacl (talk) 17:23, 13 December 2024 (UTC)
- My main questions would be what the criteria are for deciding what labels to have, and what steps would be taken to minimize the prejudicial effects of those labels (see Question 7 in this ALA Q&A)? (Asking in good faith to foster discussion, but please feel free to disregard if this is too contrarian to be constructive.)--Trystan (talk) 16:49, 13 December 2024 (UTC)
- That is an excellent link. —Kusma (talk) 17:33, 13 December 2024 (UTC)
- I think it'd be best if the user sets their own exclusion list, and then they can label it however they want. Anyone who wants to could make a list. Lists could be shared by users if they want. Levivich (talk) 18:20, 13 December 2024 (UTC)
- One option would be to start with an existing system from an authoritative source. Many universities and publishers have guidelines on when to give content warnings, for example.[58] – Joe (talk) 19:23, 13 December 2024 (UTC)
- This is a review of what content warnings and trigger warnings exist, not guidelines on when they should be used. It examined
electronic databases covering multiple sectors (n = 19), table of contents from multi-sectoral journals (n = 5), traditional and social media websites (n = 53 spanning 36 countries), forward and backward citation tracking, and expert consultation (n = 15)
, and no encyclopedia. ꧁Zanahary꧂ 19:46, 13 December 2024 (UTC)
- Yep, that's why I linked it; to show that we have at least 136 potential models. Though if you read further they do also come up with their own "NEON content warning typology" which might not be a bad starting point either. – Joe (talk) 20:02, 13 December 2024 (UTC)
- Do you want to apply it to sensitive articles, too? That seems more in line with the NEON system. ꧁Zanahary꧂ 20:21, 13 December 2024 (UTC)
- No. – Joe (talk) 05:58, 14 December 2024 (UTC)
- @Joe Roe: and why not? Dronebogus (talk) 15:45, 15 December 2024 (UTC)
- It seems like getting something running for images is enough of a challenge, both technically and w.r.t to community consensus. – Joe (talk) 07:59, 16 December 2024 (UTC)
- Since it included NO encyclopedias, it looks to me like we have NO models. Possibly because such things are fundamentally incompatible with the nature of an encyclopedia.--User:Khajidha (talk) (contributions) 23:44, 13 December 2024 (UTC)
- Bet you can't name three encyclopedias that contain a picture of anal sex. Britannica, World Book, and Encarta don't, in any edition. Seems that not having pictures of anal sex is quite compatible with the nature of an encyclopedia. Wikipedia might be the first and only encyclopedia in history that contains graphic images. Levivich (talk) 00:42, 14 December 2024 (UTC)
- Sounds like the problem is with those others.--User:Khajidha (talk) (contributions) 00:55, 14 December 2024 (UTC)
- But it does make me wonder whether anything that appears only in Wikipedia and not in other general-purpose encyclopedias is accurately described as "the nature of an encyclopedia". That sounds more like "the nature of (the English) Wikipedia". WhatamIdoing (talk) 01:18, 14 December 2024 (UTC)
- Wikipedia has long ago stopped being similar to old general purpose encyclopaedias; it is a sui generis entity constrained only by WP:NOT. We do have massive amounts of specialist topics (equivalent to thousands of specialist encyclopaedias) and try to illustrate them all, from TV episodes to individual Biblical manuscripts to sex positions. —Kusma (talk) 07:40, 14 December 2024 (UTC)
- Or those other encyclopedias are deficient. --User:Khajidha (talk) (contributions) 22:33, 14 December 2024 (UTC)
- feel free to argue on the anal sex page that we shouldn’t have any images of anal sex. We do. ꧁Zanahary꧂ 01:19, 14 December 2024 (UTC)
- I believe that the argument is that since Wikipedia is the only (known) general-purpose encyclopedia to include such photos, then their absence could not be "fundamentally incompatible with the nature of an encyclopedia". If the absence of such photos were "fundamentally incompatible with the nature of an encyclopedia", then Wikipedia is the only general-purpose encyclopedia that has ever existed. WhatamIdoing (talk) 02:10, 14 December 2024 (UTC)
- Why shouldn’t we operate from the idea that Wikipedia is the ideal encyclopedia? To me it clearly is. The spirit of an encyclopedia is obviously better served with photos on the article for anal sex than with a lack of them. ꧁Zanahary꧂ 03:09, 14 December 2024 (UTC)
- Because, as people who have a significant say in what Wikipedia looks like, that would be incredibly solipsistic and automatically lead to the conclusion that all change is bad. – Joe (talk) 06:00, 14 December 2024 (UTC)
- Taken to extremes, all philosophies would pitfall into pointlessness. If we exclude illustrating images because Britannica and World Book do too, then we may as well just fuse with either of those, or shut down Wiki because those others have it covered. Photos of an article subject are educational illustrations, and encyclopedias that lack such photos are weaker for it. ꧁Zanahary꧂ 06:20, 14 December 2024 (UTC)
- The point is that you shouldn't take an outlier and declare that unusual trait to be True™ Nature of the whole group. One does not look at a family of yellow flowers, with a single species that's white, and say "This one has white petals, and I think it's the best one, so yellow petals are 'fundamentally incompatible with the nature of' this type of flower". You can prize the unusual trait without declaring that the others don't belong to the group because they're not also unusual. WhatamIdoing (talk) 22:47, 16 December 2024 (UTC)
- A good reference work/encyclopedia on human sexuality probably does, though I haven’t gone and checked. ꧁Zanahary꧂ 03:11, 14 December 2024 (UTC)
- Well one obvious example would be the Kama Sutra. Nobody complains about that. Dronebogus (talk) 15:42, 15 December 2024 (UTC)
- The right approach to take here is to use the depicts statement on Commons images (see also c:Commons:Structured data). This should have a fairly high true positive ratio (compared either to picking out specific images or using categories) as the intention of the property is to be pretty concrete about what's appearing in the file (see also c:Commons:Depicts and/or c:Commons:Structured data/Modeling/Depiction - it's not obvious to me which is the Commons preference for how to depict things). You'll need to figure out which Wikidata items you want to offer which indicate a screened image, but that can start in the penis, Muhammad, internal organ, and sex directions and go from there. The gadget will probably want to support querying the subclass chain of the Wikidata item (property P279) so that you can catch the distinction between any penis and the human penis. My impression of the problem in using depicts statements is that the structured data work on Commons is much younger than the categories work is and so you're probably going to end up with more false negatives than not. It's a wiki though, so the right way to improve those cases should be obvious, and can perhaps even start with a database query today tracking which images used in our articles do not yet have depicts statements. The other problem this direction is that it doesn't take into account images hosted locally since those don't have structured data, but I anticipate the vast majority of the kinds of images this discussion entertains are free images. Izno (talk) 10:09, 14 December 2024 (UTC)
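The subclass-chain query Izno describes can be sketched against the public Wikidata SPARQL endpoint (query.wikidata.org/sparql). This is only an illustrative outline: the QIDs, the User-Agent string, and the idea of running it from Python are assumptions for demonstration — an actual gadget would be on-wiki JavaScript and would use community-chosen screened items.

```python
# Sketch: test whether a "depicts" (P180) value falls under a screened
# concept by walking the P279 ("subclass of") chain on Wikidata.
# QIDs below are placeholders; a real gadget would use the items the
# community picks (penis, Muhammad, internal organ, sex, ...).
import json
import urllib.parse
import urllib.request

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"


def subclass_ask_query(candidate_qid: str, screened_qid: str) -> str:
    """Build a SPARQL ASK query: is candidate_qid equal to, or a
    transitive P279-subclass of, screened_qid?"""
    return f"ASK {{ wd:{candidate_qid} wdt:P279* wd:{screened_qid} }}"


def is_screened(candidate_qid: str, screened_qid: str) -> bool:
    """Run the ASK query against the public endpoint (network required)."""
    url = WDQS_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": subclass_ask_query(candidate_qid, screened_qid),
         "format": "json"}
    )
    # WDQS requires a descriptive User-Agent; this one is hypothetical.
    req = urllib.request.Request(
        url, headers={"User-Agent": "depicts-gadget-sketch/0.1"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["boolean"]
```

The `wdt:P279*` path operator is what catches the distinction Izno mentions between any penis and the human penis: an ASK on the more specific item returns true whenever it sits anywhere below the screened item in the subclass hierarchy.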
- Nobody maintains those things. They’re almost as useless as captions. Dronebogus (talk) 15:43, 15 December 2024 (UTC)
- This sounds like a very promising approach from my POV, thanks. I have to say I also had the strong impression that the "depicts" feature was abandonware, but then again maybe having a concrete use for the labels will prompt people to create more of them. – Joe (talk) 08:06, 16 December 2024 (UTC)
- It seems to get used a lot by people using c:Special:UploadWizard – half of uploads? I have the impression that using it might increase the likelihood of the tagged images being found in relevant searches, but I don't know why I believe that. But since I believe it, I'd encourage people to use it, at least for images that they believe people would want to find. WhatamIdoing (talk) 22:50, 16 December 2024 (UTC)
- I don't see consensus in this discussion to create a new tagging/labelling system or to use existing Commons categories to hide images. People can argue until they're blue in the face, but the proposal(s) will ultimately be rejected at a community-wide RfC. That aside, I don't believe anyone here is opposed to having a toggle button that blurs or hides all images, right? The toggle switch could be placed in the Settings menu (on mobile view) or Appearance menu (on desktop view), and it would be switched off by default (meaning if editors want to blur/hide all images, they would have to manually switch it on). Only the WMF team has the ability to create such a feature, so that logged-out users can use it and logged-in users won't need to install user scripts. That idea could be suggested at the m:Community Wishlist. Some1 (talk) 15:31, 15 December 2024 (UTC)
- At the VPPro discussion this was forked from opposition has been expressed. Thryduulf (talk) 15:46, 15 December 2024 (UTC)
- @Some1: This is the idea lab. Discussions here are explicitly not about developing consensus one way or another (see the notice at the top of this page). The blur all images approach is being discussed elsewhere (linked several times above) and I would prefer to keep this on the original topic of labelled content warnings. – Joe (talk) 08:01, 16 December 2024 (UTC)
- I feel like this section is trying to give false legitimacy to a widely opposed idea by saying the longstanding consensus that “content warnings and censorship are bad” (and by extension the opinions of anyone supporting that position) is illegitimate because it’s not “constructive”. People have a right to not help you “construct” an idea that’s against policy and been rejected time and time again. If you don’t want negativity don’t make a controversial proposal. Dronebogus (talk) 15:40, 15 December 2024 (UTC)
- Nobody is asking you to help. Several of us have politely tried to get you to stop bludgeoning the discussion by stating your opposition over and over again. – Joe (talk) 08:04, 16 December 2024 (UTC)
- It's not happening here. You have been told where to go to copy the entire site and modify it to fit your ideas. --User:Khajidha (talk) (contributions) 13:07, 16 December 2024 (UTC)
Making voluntary "reconfirmation" RFAs less controversial
Recently, there have been two "reconfirmation" RFA's from ex-admin candidates whose resignations weren't under a cloud. The RFA's received quite a few comments about the utility of the RFA's themselves. These are Worm That Turned's recent RFA and the ongoing RFA from Hog Farm. In both, there are multiple recurring comments, such as:
- The candidate could/should have just gone to WP:BN to request the tools back
- The reconfirmations were/are a "waste of community time"
- The reconfirmations are a good thing, in order to increase transparency and give feedback to the candidate
I'm opening the topic here so that we can hash out ideas of making these situations less controversial, as this was a big talking point in both RFA's, and both sides are (in my view) making good points.
My initial proposal to improve this situation would be enacting the following:
- Admins who resigned of their own volition (not under a cloud) who want the role back should be discouraged from opening formal RFA's and instead encouraged to open a request at WP:BN
- The standard holding period between a resysop request being posted on WP:BN and it being enacted should be increased from 24 hours to 5 days.
- Whenever there is a resysop request, a short notice should be posted to WP:AN and in WP:CENT. This notice does not explicitly ask for public input, or encourage anyone to support or oppose - it merely makes the request more visible. Anyone is free to comment on the topic at WP:BN, if they feel it necessary.
- The request at WP:BN is enacted at the discretion of the bureaucrats, per the process they currently use, taking any comments that arise into account. It is explicitly not a vote.
This proposal would allow resysoppings to be more open and allow discussion when necessary, without being as public and time-demanding as a full RFA. Any thoughts on this? BugGhost 🦗👻 15:18, 15 December 2024 (UTC)
- Please note: there is now a RFC on a very similar topic happening over at WP:VPP#RfC: Voluntary RfA after resignation BugGhost 🦗👻 23:23, 15 December 2024 (UTC)
- Oppose the first bullet. This seems to presuppose that reconfirmation RFAs are a "waste of community time" or similar, a position I cannot agree with. Reconfirmation RFAs definitively show whether someone does or does not have the trust of the community to be an admin, this is a Good Thing and they should be encouraged not discouraged. RFA is not overloaded (far from it), and nobody is compelled to participate - if you don't have anything useful to say, and don't want to spend any time investigating whether they are trustworthy or not then don't: just trust your fellow community members in the same way that you trust the 'crats. I don't oppose the other points, but absent evidence of a problem that needs to be solved, I don't see any particular benefit in them. Thryduulf (talk) 15:43, 15 December 2024 (UTC)
- The first bullet wasn't intended to concede that they're "a waste of community time" - I personally don't think they're that useful, but I think calling them a waste of time is a bit far, as I do agree with their intended purpose. The reason why it was in quotes was because it's the phrase being debated at the current RFA's comments. The first bullet is simply intended to just say "the venue should be WP:BN, not RFA", and the subsequent bullets are just to make BN more accommodating for that purpose, and attempts to draw the attention of those that do have something to say. This proposal isn't to stop the general concept of reconfirmation or public scrutiny when resyssoping, just to alleviate the concerns that have been raised by a significant number of people in both RFAs. BugGhost 🦗👻 15:59, 15 December 2024 (UTC)
- To further clarify: one intent of this proposal is to make the BN request route more transparent and accountable - less (as Hog Farm put it) "back-doorsy" - in order to put all resysoppings under a public lens, so that ex-admins don't feel like they should go through a full RFA to be fairly reapproved. If ex-admins are opening RFAs because they think the BN route doesn't give enough accountability or visibility, we should bake more accountability and visibility in. BugGhost 🦗👻 17:00, 15 December 2024 (UTC)
- If a request needs more accountability and visibility than BN, then RFA is the correct venue to achieve that. Instead of making BN more like RFA, we should be encouraging editors to use RFA instead. This will, as others have pointed out, hopefully have the side effect of decreasing the problems at first-time RFAs. Thryduulf (talk) 21:50, 15 December 2024 (UTC)
- I don't think there's anything here that needs to be fixed. Perhaps over time, the RfA route will become more popular, in which case we may choose to do away with the BN route. Or the opposite will happen, in which case no changes are necessary. Either way, this is much ado about nothing at the moment. – bradv 15:53, 15 December 2024 (UTC)
- I personally don’t see a major problem with re-RFA’s remaining an occasional thing where a former admin prefers it, but if a large number of editors do I think your proposal is a nice way to solve that while providing a slightly more deliberative process for returning admins who feel uncomfortable presuming that there is still consensus for their continued use of the tools.
- Alternatively, we could do a bit of a petition process like with recall for editors who have been gone for more than a short, planned, absence. If few editors oppose it, the bureaucrat-led process can take place, but if more than some threshold of editors call for it, a re-RFA is required to confirm the return of tools.
- That seems kinda potentially unpleasant though, so I’d support the status quo as my first choice, and your proposal as a second choice, and something like what I mentioned as a distant third.
- I do think a humility before the will of the community is laudable in admins, and that the occasional easy-confirm re-RFAs would probably contribute to reducing the temperature of RFAs generally if they weren’t getting bogged down with arguments about the process. — penultimate_supper 🚀 (talk • contribs) 18:09, 15 December 2024 (UTC)
- Personally, I think RFA would be less toxic in general if it was less of a special occasion, and so I don't see any reason to limit these. The people who are upset by these RFAs are people whose opinions I usually both respect and understand, and in this case I can respect them but continue to not understand them. Maybe this is my problem; I'm open to being convinced. -- asilvering (talk) 18:40, 15 December 2024 (UTC)
- I follow Asilvering on this point – if we make RfAs less of a special occasion, it will, down the line, have a positive effect for everyone involved: prospective new admins, admins going through a RRfA, and regular editors now having less pressure to !vote in every single RfA. Chaotic Enby (talk · contribs) 22:08, 15 December 2024 (UTC)
- What if we fast-track them? Uncontroversial reconfirmations don't need to be a week; let's just let the 'crats snowclose them after 48 hours if they can be snowclosed and have right of resysop. theleekycauldron (talk • she/her) 18:52, 15 December 2024 (UTC)
- I like this idea - would still allow community feedback, but would alleviate some of the community time concerns. BugGhost 🦗👻 19:31, 15 December 2024 (UTC)
- Let them redo RfA if they want. Editors need to chill out. For those worrying about "straining editor time" or whatever, there's no need to participate in an RfA. You don't have to follow it. It doesn't have to take any significant portion of your time at all. The 'crats are good enough to know how to handle whatever arguments are made by those who give them and come to a decision. Plus, it's not like this is a super common thing. We just happened to have a couple re-admins in a row. Toxic behavior at RfA is definitely a thing and worrying about re-RfAs contributes a bit to this problem. Jason Quinn (talk) 21:26, 15 December 2024 (UTC)
- I don't see what the controversy is. Requesting the bit back at RfA has always been an option, and I applaud anyone who is willing to go through that again. There are very few people interested in going through RfA, so it is not overloaded and is far from a "waste of time." Anyone who believes it is a waste of their time is free to ignore it, just like everything else on Wikipedia. The only thing making these "reconfirmations" controversial is that a very loud minority is saying they are. WP:BROKE is something those people really should read and take to heart. — Jkudlick ⚓ (talk) 21:57, 15 December 2024 (UTC)
- I agree with a lot of the comments that have already been made. I don't think that this has become enough of a trend that we need to fix anything now, and I very much like Asilvering's comment that we should try to make RfA less of a special occasion. I've been having a kind of "meh" reaction to the complaints about wasting the community's time. I'm ambivalent about allowing snow closes. On the one hand, it might make things easier, but on the other hand, once a candidate decides that they want community feedback, we might as well let the community feed back. I also want to say that I'm against the bullet point about increasing the amount of time at BN: I think that would be counterproductive. --Tryptofish (talk) 21:59, 15 December 2024 (UTC)
- I don't agree with turning the discussion at the bureaucrats' noticeboard from one that examines if the administrator resigned in order to avoid scrutiny into one where the general community discusses if it trusts the editor in question to regain administrative privileges. (The first question is narrowly focused on the sequence of events leading to resignation, while the second is broad, covering all activity both before and after resignation.) While it would be nice if every administrator had a perfect sense of the level of community trust that they hold, in practice I can understand administrators having doubts. I agree with Barkeep49's remarks on their talk page that we should be looking for lower costs ways for the admin to have a better idea of the degree of trust the community has in them. isaacl (talk) 00:46, 16 December 2024 (UTC)
Encouraging reconfirmation RFAs
At Wikipedia:Village pump (policy)#RfC: Voluntary RfA after resignation I commented that reconfirmation RFAs shouldn't be made mandatory, but they should be encouraged where the time since desysop and/or the last RFA has been lengthy. Barkeep49 suggested that this is something that would be worthy of discussion here (VPI) and I agree with that. If there is enthusiasm for this suggestion, an RFC to modify Wikipedia:Administrators#Restoration of the admin tools to include the encouragement can be drafted (unless this discussion shows the addition to be uncontroversial, in which case it can just be added). I do not propose to explicitly define "lengthy", that should be left entirely to the judgment of the administrator concerned, nor to make the statement stronger than "encouraged". Thryduulf (talk) 22:03, 15 December 2024 (UTC)
- I definitely agree with the idea. I don't think an exact time period should be specified (as it isn't mandatory either way), but something in the ballpark of "several years" could be a good benchmark. Chaotic Enby (talk · contribs) 22:06, 15 December 2024 (UTC)
- For the same reasons that editors are saying, above the section break, that this probably doesn't need to be fixed at this time, I see this, too, as something that probably does not need to be fixed. --Tryptofish (talk) 22:12, 15 December 2024 (UTC)
- I have no problem with this proposal, nor would I attempt to define "lengthy" as it draws a relatively hard line where someone could complain that Former Administrator Example resigned the bit x+1 days ago and shouldn't be allowed to go through BN, or resigned x-1 days ago so shouldn't "waste the community's time." If we are to require an RfA after less time than is already prescribed at WP:ADMIN, that would require a separate RFC because it would be changing a policy and would absolutely be controversial. — Jkudlick ⚓ (talk) 02:15, 16 December 2024 (UTC)
- As Thryduulf noted I support this concept and think the generic, intentionally non-prescriptive, "length" is the right way to do it. Best, Barkeep49 (talk) 16:28, 16 December 2024 (UTC)
- I support this. I feel like we're asking people to walk a tightrope when we complain that adminship is a life appointment but also criticize people for confirming that they still have community support/confidence. Valereee (talk) 22:12, 16 December 2024 (UTC)
WMF
Will you be moving operations overseas?
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Trump has a tendency to cause disruptions in a number of different ways. He seriously interfered with a government directed radio station of some sort when he was in office last time (https://www.npr.org/2020/06/18/879873926/trumps-new-foreign-broadcasting-ceo-fires-news-chiefs-raising-fears-of-meddling). Will it be necessary for you to move Wikipedia operations overseas or is it already handled in some other way? I'm sorry to voice my concern this directly, but: I'd rather this didn't turn into conservapedia mkII and have Trump attempt to re-write history. 75.142.254.3 (talk) 19:15, 10 November 2024 (UTC)
- The Wikimedia community is editorially independent of the foundation and has remained so during Trump's first presidency, so I see no reason to be worried. * Pppery * it has begun... 19:22, 10 November 2024 (UTC)
- Do you mean the users or a part of the body of Wikipedia itself? As in, could Trump take over the website or otherwise exert significant pressure that would otherwise be alleviated by relocation? If not, then I guess no action necessary. 75.142.254.3 (talk) 19:35, 10 November 2024 (UTC)
- The only thing he could do is hire a troll farm of some sort, which I don't expect us to have much trouble defending against. Aaron Liu (talk) 19:58, 10 November 2024 (UTC)
- Are the servers located in the United States? It's looking like the answer is no, and I'm sorry for being paranoid, it's just that he has done things in this country that we didn't anticipate, because we didn't expect anyone with the sort of character that would make it a problem to be in that position. 75.142.254.3 (talk) 20:01, 10 November 2024 (UTC)
- The primary Wikimedia data centers are located in the U.S., with caching centers distributed around the globe. I think you'd be hard pressed to find a country with better legal protections for online free speech, but as you note, it shouldn't be taken for granted. Legoktm (talk) 20:13, 10 November 2024 (UTC)
- Yeah, the 1st amendment provides stronger protections than almost all countries have; even if Trump tried he'd be hard pressed to find a court that would agree with Wikipedia censorship (unlike in India...). Galobtter (talk) 04:34, 11 November 2024 (UTC)
- You are correct about the strength of free speech protections in the US being more robust than just about anywhere else in the world, from a perspective of well-enshrined constitutional protections and the historical jurisprudence and respect from institutions. That said, if there were to be a concerted push by the incoming president and his allies to suppress certain information streams and target free speech that aligns against him, it would not be the first time that he sent shockwaves through the legal world by finding success in overturning long-established doctrines that were until recently thought iron-clad and inviolable, by appearing before a federal judiciary that is now showing the influence of decades of concerted efforts by the GOP and the Federalist Society to pack those courts to the gills with ideologically-aligned and personally loyal jurists. In short, nothing is certain in the current political and institutional landscape. I just don't think a whole-sale move of the organization and its technical infrastructure is either feasible or likely to substantially obviate the risks. The only answer is to take up the fight when and where it occurs. SnowRise let's rap 20:19, 17 November 2024 (UTC)
- I'd just like to add that the Federalist Society is not opposed to the First Amendment, and indeed has been staunchly supportive of what it is and what it means in terms of campaign finance. Unlike with Roe v Wade, where there was in fact a decades long campaign to overturn it, there's no similar movement to overturn key First Amendment precedents. Having said that, I do worry about Section 230's protections for user generated content, which is very important. Jimbo Wales (talk) 11:52, 22 November 2024 (UTC)
- Well said Jimbo Wales, and yes, 230 is a concern. I'd request and suggest that you arrange a meeting with Donald Trump and Elon Musk at Mar-a-Lago to discuss how it would affect Wikipedia and other online projects. They both seem open to such meetings, and my guess is that it would be beneficial for the project in several ways. Randy Kryn (talk) 12:18, 22 November 2024 (UTC)
- "They both seem open to such meetings." They do? Are you sure it's that easy to get a meeting with the president-elect and the richest man in the world? –Novem Linguae (talk) 12:23, 22 November 2024 (UTC)
- For Jimbo, pretty sure. Trump takes many meetings, both formal and informal, and I would hope that Musk would be interested in sitting in on their conversation(s). Many things happen in Trump's meetings, and I would assume that a Wales-Trump-Musk sit-down would veer into some interesting directions. Randy Kryn (talk) 13:08, 22 November 2024 (UTC)
- I would not afford either of those an ounce of credibility in any statement they make. Both have shown a willingness to say one thing and do another to an extreme extent, and risking something like this to the whims of people like that is not something I'd personally advise. Though, Trump doesn't appear to be looking too good these days: https://www.youtube.com/watch?app=desktop&v=ir3ULEvRqBU
- I'm speaking somewhat plainly, but trying to be appropriate. As for Musk, when he sent his submarine to go rescue some people from a cave somewhere... his response to some of the events was... notable (not for Wikipedia standards maybe though).
- For Trump, there are too many examples (saying that he doesn't know anything about Project 2025, and so many others).
- A discussion with him and Musk could be attempted, but whether it would deliver anything, and whether to believe him? I couldn't say. 75.142.254.3 (talk) 04:04, 23 November 2024 (UTC)
- The Wikimedia Foundation, which hosts Wikipedia, is based in the United States, and has to comply with US laws. Unless a relevant law is passed or legal action is taken, there isn't much Trump can do. ARandomName123 (talk)Ping me! 20:17, 10 November 2024 (UTC)
- If Trump goes authoritarian, which at this point I'm not going to rule out, US Law could be changed on a whim. But, I'm going to try to not be paranoid as much on this and WMF may already have evaluated appropriate courses of action given how they've managed to handle a wide variety of different kinds of disruption already. 75.142.254.3 (talk) 20:20, 10 November 2024 (UTC)
- The bottom line is, we just don't know. I'm sure the WMF has contingencies in place for if US law ever becomes prejudicial to the project. Until he actually becomes president, we don't know what will happen. We just have to wait and see. TheLegendofGanon (talk) 20:22, 10 November 2024 (UTC)
- I might have agreed with you a month ago, but considering the current crisis over the ANI matter, I am not at all confident that the WMF does have a proper contingency plan for a concerted litigation campaign from a Trump presidential administration or aligned parties. And actually, in that case, I could forgive their not having one: it's hard to predict once-bedrock civil and constitutional principles flying out the window, or know the exact combination of legal angle of attack and political pressure which may lead to such outcomes. Unlike certain other recent scenarios where the manner in which things have played out was mostly predictable, there is a lot that could very much be up in the air. The Justice Department will certainly be headed by a political loyalist for the next four years, and SCOTUS and many of the other federal courts are incredibly friendly to right wing causes, but the MAGA movement as a whole has not tended to attract the sharpest of legal minds for advocates, and notwithstanding the election results, there is a lot of cultural attachment remaining in the U.S. for robust free speech protections--which, after all, conservative politicians are typically as happy to invoke and benefit from as anyone. So it's very difficult to know how concerned to be, or what angle to expect the erosion of expression rights to set in from, if it does occur. In this case, I would sympathize if the WMF felt as much in a holding pattern as the rest of us. SnowRise let's rap 20:34, 17 November 2024 (UTC)
- It's about moderation: https://www.wired.com/story/brendan-carr-fcc-trump-speech-social-media-moderation/. Thus it would mean invoking free speech against the free speech of a Trumper wanting to use his Infowars.com episode as a trusted source. As a first step, moving operations wouldn't be needed, just the legal entity, given the new Federal regulations. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:33, 24 November 2024 (UTC)
- That argument only really applies to social media. We aren't a social media platform. Also, I definitely think you're overreacting. QuicoleJR (talk) 01:49, 24 November 2024 (UTC)
- Elon Musk's tweets highlight that he sees Wikipedia as a social media site whose alleged censorship should be legally fought. At that point, what matters isn't what things are but how they are perceived by the ruling party. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 03:24, 24 November 2024 (UTC)
- We know what will happen. Everything is written and Elon is tweeting about it specifically about wikipedia. https://www.wired.com/story/brendan-carr-fcc-trump-speech-social-media-moderation/. It won t be possible tjrough Executive order, but things laws can be changed by Congress.
- We should not act like Sigmund Freud's sisters, who thought they could survive in 1939. I hope Wikimedia is seriously thinking about moving overseas, several times if needed, in order to gain some years, rather than being turned into a Darwin Award recipient. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:19, 24 November 2024 (UTC)
- I fear such contingencies would be to fight legally and then abide after losing, even if this results in Wikipedia being turned into another Twitter. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:42, 24 November 2024 (UTC)
- The Constitution of the United States provides protections that would be very hard for Trump or any other president to circumvent, and the consent of 2/3 of both houses of Congress and 3/4 of the states is required to amend it, so I'm not too worried yet. QuicoleJR (talk) 15:24, 11 November 2024 (UTC)
- Not only that, but we already handle edits from Congress itself. Gaismagorm (talk) 14:15, 12 November 2024 (UTC)
- Disagree; it would be invoking free speech against the free speech rights of the Trump supporter (https://www.wired.com/story/brendan-carr-fcc-trump-speech-social-media-moderation/), though things can be done with Congress's approval. Clarence Thomas and another justice are apparently waiting for Trump before stepping down/retiring. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:28, 24 November 2024 (UTC)
- The bottom line is, we just don't know. I'm sure the WMF has contingencies in place for if US law ever becomes prejudicial to the project. Until he actually becomes president, we don't know what will happen. We just have to wait and see. TheLegendofGanon (talk) 20:22, 10 November 2024 (UTC)
- Thanks to a recent bill, the President may now strip the WMF of its non-profit status as long as it supports "terrorism". Aaron Liu (talk) 19:36, 23 November 2024 (UTC)
- Not quite yet. The House passed HR 9495 yesterday, but for it to actually become law there are a few more steps that would need to happen. Anomie⚔ 00:09, 24 November 2024 (UTC)
- It probably won’t pass the Senate this session, and the democrats could also filibuster it when the GOP takes a very slim majority next time. And if it did pass, the main targets would be Palestinian rights groups, which the US already treats inexcusably because it shamelessly supports Israeli war crimes as part of the US-Israel-Iran proxy war. The long game that is international geopolitics makes both Wikipedia and the current office holder’s grievance politics look small. Dronebogus (talk) 10:42, 24 November 2024 (UTC)
- And changing laws is indeed the plan: https://www.wired.com/story/brendan-carr-fcc-trump-speech-social-media-moderation/. The article talks about executive orders, but I think it would be easy to get the FCC's power enlarged by Congress. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:26, 24 November 2024 (UTC)
- If Trump goes authoritarian, which at this point I'm not going to rule out, US Law could be changed on a whim. But, I'm going to try to not be paranoid as much on this and WMF may already have evaluated appropriate courses of action given how they've managed to handle a wide variety of different kinds of disruption already. 75.142.254.3 (talk) 20:20, 10 November 2024 (UTC)
- The primary Wikimedia data centers are located in the U.S., with caching centers distributed around the globe. I think you'd be hard pressed to find a country with better legal protections for online free speech, but as you note, it shouldn't be taken for granted. Legoktm (talk) 20:13, 10 November 2024 (UTC)
- Strongly disagree. He hired the guy who plans to enact laws allowing a crackdown on moderation on the project. The framework would give the FCC the power to prevent any kind of moderation by platforms as long as it's not death threats. Wikipedia articles would be legally compelled to accept Breitbart News or Infowars as a trusted source. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:05, 23 November 2024 (UTC)
- What? What laws? Aaron Liu (talk) 19:37, 23 November 2024 (UTC)
- Project 2025: https://www.wired.com/story/brendan-carr-fcc-trump-speech-social-media-moderation/. Though as the article suggests, this would require a vote from Congress. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:25, 24 November 2024 (UTC)
- Nah, because someone's going to use it for extreme left-leaning content eventually, and they will go back. Also, I'm sure it will be such a big screwup in countless other ways that they will be forced to go back. Gaismagorm (talk) 02:05, 24 November 2024 (UTC)
- Look at Twitter. It's not the extreme left who won but the far right. Indeed, we can notice the strange marriage between healthy food and anti-regulationists. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:08, 24 November 2024 (UTC)
- What? Gaismagorm (talk) 02:10, 24 November 2024 (UTC)
- Trump supporters now promote fewer pesticides with Robert Kennedy Jr. In my European country, the far right still boasts that non-poisoned food is for the rich, who have enough to eat anything. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:19, 24 November 2024 (UTC)
- Ah Okay. Gaismagorm (talk) 02:20, 24 November 2024 (UTC)
- Anyway, so much for your wish of Wikipedia not going in the wrong direction as a result of Trump. Moving legally is a lengthy operation that should be studied in order to be ready when it becomes required. We can keep the WMF as the hardware operator in the United States while the data is legally managed from the new country the WMF has moved to. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:32, 24 November 2024 (UTC)
- Are the servers located in the United States? It's looking like the answer is no, and I'm sorry for being paranoid; it's just that he has done things in this country that we didn't anticipate, because we didn't expect anyone with that sort of character to hold that position. 75.142.254.3 (talk) 20:01, 10 November 2024 (UTC)
- The only thing he could do is hire a troll farm of some sort, which I don't expect us to have much trouble defending against. Aaron Liu (talk) 19:58, 10 November 2024 (UTC)
- As a basic precaution there should be a Wikipedia mirror with daily backups hosted on a server geolocated in a country with a higher democracy index and a higher internet freedom index than the US. I'd suggest Iceland, personally.—S Marshall T/C 04:23, 13 November 2024 (UTC)
- Honestly, it's unneeded. Look, I get worrying about this situation, but I doubt it will get so bad that Wikipedia needs to move overseas. As stated above, Wikimedia also likely already has a plan for if this happens. Gaismagorm (talk) 11:40, 13 November 2024 (UTC)
- In any event, I do believe the backups at least are already quite robust in that respect. I'm less certain about the current situation for the mirrors, but I'm sure that information is probably transparently located somewhere on-site or on Meta. SnowRise let's rap 20:39, 17 November 2024 (UTC)
- Data dumps are public, but password hashes are not. We can clone, but admins would be unable to log in. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:07, 23 November 2024 (UTC)
- What’s so great about Iceland? I don’t like the idea of being subject to the whims of a country with the population of a small city that’s floated the idea of banning internet pornography at least once. The most obvious choice would be Switzerland. Dronebogus (talk) 01:04, 20 November 2024 (UTC)
- Iceland's a fantastic place, and everyone needs to go on a night out in Reykjavik before they die, although some people might need to extend their mortgages to do it. It's true that pornography is technically illegal in Iceland, so in that scenario, if the worst should happen, some of your more worrisome drawings on Wikimedia Commons might be lost; but I understand that the antipornography laws are rarely enforced.—S Marshall T/C 17:05, 20 November 2024 (UTC)
- I have spent a night in Reykjavik (well, it was aboard ship, but we did stay overnight), but I will note that Iceland has no army or navy and only a small coast guard. I'm not sure how well the country could resist pressure from the US (or Russia, for that matter, if the US were looking the other way) to interfere with any entity operating there. I used to have hopes that the EU would get its collective defense act together, but even if it did, Iceland hasn't joined, yet. Donald Albury 18:26, 20 November 2024 (UTC)
- I really don't think we need to worry about the US or Russia invading Iceland or something. Besides, they have allies that could protect them. Gaismagorm (talk) 18:31, 20 November 2024 (UTC)
- But since we’re pretending this is actually a viable idea: Switzerland has a formidable military for the express purpose of defending its neutrality. Dronebogus (talk) 06:26, 21 November 2024 (UTC)
- Okay, I have the perfect one: Vatican City. They'd first have to get through Italy, then the elite Swiss Guard. Gaismagorm (talk) 11:35, 21 November 2024 (UTC)
- Not only that, but it would look really bad if anyone invaded the Vatican. Gaismagorm (talk) 11:36, 21 November 2024 (UTC)
- Wikimedia starts its own nation. The Bir Tawil is always available. Dronebogus (talk) 21:35, 21 November 2024 (UTC)
- @S Marshall: I’m actually thinking of stuff like the Internet Watch Foundation and Wikipedia or Seedfeeder. Plus a country with a tiny, homogeneous population (even a very friendly one) is more likely and able to legally force its weird idiosyncratic opinions onto Wikimedia, especially if it thinks the biggest nonprofit website on Earth has done something to damage its reputation (because in this hypothetical scenario Wikimedia would quickly become synonymous with Iceland by virtue of being its biggest export besides maybe Bjork) Dronebogus (talk) 06:39, 21 November 2024 (UTC)
- Socialists are expected to win the next Icelandic elections this month, so we would have at least 5 years without worrying. Many organizations had to move to Paris, then to London, then to the United States in WWII. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:23, 24 November 2024 (UTC)
- The moon. We will move to the moon. Gaismagorm (talk) 02:06, 24 November 2024 (UTC)
- No. Latency would be terrible, and it wouldn't mean much more than moving into the ocean, as legally everything would need to be attached to an Earth nation. However, speaking of time, Elon is planning 2 Starship launches per week under Trump. If he moves to Mars, in less than a decade he'll be cut off from Internet access. That's why gaining time is useful. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:15, 24 November 2024 (UTC)
- Just a thought, but if the WMF does have or in the future creates contingency plans for moving operations in response to political developments, publicly revealing such plans in advance might make it harder to carry them out. It would be like a business announcing that they will build a factory in a given location without having at least an option to buy the land they will build on. Donald Albury 16:11, 13 November 2024 (UTC)
- They don't have to reveal which plan, only whether they have a plan to move, and if not, to build one. Moving operations isn't required; just move legally. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:38, 24 November 2024 (UTC)
- Stop worrying too much; I doubt Trump is going to do anything against Wikipedia. Attacking and threatening to block Wikipedia would only infuriate centrist voters, which I don't think anyone would want to do. Some of the editors here are Trump supporters as well! What is concerning for Wikipedia today is the above case in India, where the WMF HAD agreed to disclose the editor's information because of a defamation suit. ✠ SunDawn ✠ (contact) 06:01, 14 November 2024 (UTC)
- This is also an important part of the analysis: we are hardly the most vulnerable collective entity in existence: for obvious reasons, we are meant to be apolitical, unaligned, and disinterested in directly influencing public perception of any matter (beyond the core mission of providing information, of course). But the one time this community was willing to flex its muscles to head off a legislative outcome that it felt was a danger to the fundamental viability of the project, the latent power of the project's reach, through the site/encyclopedia was made pretty obvious--and that strength was not trivial, utterly crushing legislation that had been sailing through congress. If pushed into a corner and forced to abandon its apolitical role, this movement is capable of coming back with potent counter-punches in terms of grassroots mobilization, and I think there is some perception of that fact out there now.
- There's also the massive legal warchest of the WMF to contend with (which so many on this project have groused about over recent years, but which was well-advised to build up, for exactly this moment in time). Of course, the current ANI situation raises significant concerns about the ability of the WMF and the community to row together, which is one of the most concerning things about that situation. But the WMF will not have the same onerous sub judice principles giving it both legitimate and illegitimate reasons not to communicate clearly with us (at least nowhere near to the same degree) with regard to suits before U.S. courts. SnowRise let's rap 20:51, 17 November 2024 (UTC)
- Strongly disagree. He is attempting to appoint to the FCC the guy who plans to enact laws allowing a crackdown on moderation, as part of Project 2025, which he wrote. The framework would give the FCC the power to prevent any kind of moderation by platforms as long as it's not death threats. Wikipedia articles would be legally compelled to accept Breitbart News or Infowars as a trusted source. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:14, 23 November 2024 (UTC)
- Realistically, I doubt anything in particular will happen to Wikipedia. But if you want to prepare for the worst, as it were, and you have a machine with some extra disk space, consider periodically keeping an updated copy of the Wikipedia database dump. I get one periodically, just in case, since I've got plenty of spare space on this machine anyway. If worst ever came to worst, plenty of volunteers have the technical skill to get a DB dump up and working on a MediaWiki instance elsewhere, and run it at least while things are sorted out. I doubt it'll ever come to that, but if you want to be prepared just in case, well, the more widely copies of those are available, the better. Just remember that Wikipedia was completely run by volunteers once, from software development to sysadmins, and we could do it again if we had to. Seraphimblade Talk to me 06:12, 14 November 2024 (UTC)
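- For anyone following Seraphimblade's advice above, the fetch-and-keep step is easy to automate. A minimal sketch of the verification half, assuming you have already downloaded a dump file and its expected checksum from the md5sums list Wikimedia publishes alongside each dump (the local file paths here are hypothetical):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 of a file in 1 MiB chunks, so a multi-gigabyte dump
    never has to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dump(dump_path, expected_md5):
    """Return True if the local dump file matches the published checksum,
    i.e. the download was not truncated or corrupted."""
    return file_md5(dump_path) == expected_md5.lower()

# Hypothetical usage after downloading, e.g.:
#   verify_dump("enwiki-latest-pages-articles.xml.bz2",
#               "<hash copied from enwiki-latest-md5sums.txt>")
```

Wrapping this in a cron job that re-downloads the latest dump and verifies it before overwriting the previous copy would keep a trustworthy local snapshot with no manual effort.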
- The biggest problem would be providing sufficient server capacity to handle the traffic. Anybody can put up a static mirror of WP as it was on the download date (Lord knows there are a lot of those on the Internet), but providing an editable version that would be used by a large proportion of current editors would be pretty expensive. And if there were more than one editable version out there, it would be very difficult to ever merge the changes back into a single database, with some clones becoming permanent forks, perhaps sponsored by governments and other large entities. Donald Albury 18:19, 14 November 2024 (UTC)
- I've thought of the technical feasibility of a forked encyclopedia more in the last few weeks than I have in the last ten years. Not as a serious exercise in making any plans, but just as a consequence of thinking about the relationship between the project and the WMF, and about what actually keeps volunteers invested in this particular, traditional, and only mode of building the encyclopedia. Aside from the obvious organizational and cultural ties, there's the obvious cost of maintaining ongoing access and development that you talk about, but then there are also the liabilities and legal fees. If circumstances were drastic enough to take Wikipedia itself down, it would be hard to shield any project with a big enough profile to be able to afford the access and tools for readers and editors from whatever legal forces had compromised Wikipedia's viability in the first place. Even redundancy across different jurisdictions wouldn't necessarily obviate the kinds of threats that would be sufficient to take the original Wikipedia out of the picture. SnowRise let's rap 07:49, 18 November 2024 (UTC)
- You know, unless it's a case of tearing itself apart, I suppose... SnowRise let's rap 07:50, 18 November 2024 (UTC)
- I hadn't thought about the legal side. Trying to fork Wikipedia may well cause more problems than it could ever solve. I think the best chance of preserving Wikipedia in anything like its current form is to let the foundation do its job. If the foundation cannot protect Wikipedia in the US, there is little hope of Wikipedia surviving somewhere else. Donald Albury 15:08, 18 November 2024 (UTC)
- I'm thinking about WWII, when many organizations had to move to Paris, then to London, then to the United States. Moving should be studied; the foundation wouldn't be able to protect Wikipedia as well as in the US, but it would be able to do better than abide by https://www.wired.com/story/brendan-carr-fcc-trump-speech-social-media-moderation/. We might even gain 10 years by acting like that. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:12, 24 November 2024 (UTC)
- I do own a 200 TB server with 1 TiB of RAM on a 10 Gb/s connection. Enough to power all wikipedia.org websites in read-only mode. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:15, 23 November 2024 (UTC)
- Unless it's someone who owns the hardware personally. No; from what I've looked at, most of the traffic is static page loads, so the load numbers aren't that important. The problem is having proper physical backups, but this would give the WMF time to organize a move overseas.
- However, as a matter of risk mitigation, password hashes aren't part of the data dumps. Until they are dumped, admins wouldn't be able to log back in. Asking for them to be dumped would be an important step. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:01, 24 November 2024 (UTC)
- Yes, I have the entirety of the English Wikipedia as of a few months ago downloaded onto my laptop, plus a few other Wikimedia projects. TheLegendofGanon (talk) 21:08, 16 November 2024 (UTC)
- Worst comes to worst, execute WP:TERMINAL. 2400:79E0:8071:5888:1808:B3D7:3BC1:B010 (talk) 08:43, 17 November 2024 (UTC)
- In case of emergency, one should always know how to use the terminal. Seraphimblade Talk to me 23:07, 17 November 2024 (UTC)
- But if we have dumps of the password hashes, we can just relocate to another country. Telegram itself is completely unrestricted by being based in Dubai. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 20:27, 23 November 2024 (UTC)
- FYI, the US House narrowly stopped legislation that would give Trump the keys to revoke the non-profit status of any non-profit organisation in the US. [59], [60]. – robertsky (talk) 01:43, 17 November 2024 (UTC)
- To be frank, I am greatly surprised by the faith you put in the US Constitution. Many of you seem unaware that the threats you are facing are unprecedented. Trump attempted a coup in 2020, and during his campaign he actually said he wants to be a dictator. Or how else are we to interpret such things as "If you vote for me, you don't have to vote at all in four years"? He didn't say all this back in 2016. Neither did he employ such rascals in his government as he is planning to do now. Therefore I find the argument that we lived through Trump's first presidency unharmed very unconvincing.
- He and his loyal servants have expressed their contempt of science on numerous occasions, most recently J.D. Vance by saying "professors are the enemy". With both houses of Congress and the Supreme Court in Republican hands, checks and balances aren't worth much, especially since the party has shown an unfaltering loyalty to its Great Leader over the past few years. A major Gleichschaltung operation is to be expected. What matters most in situations like this is not the law but the sentiment of the people. And that sentiment seems to be strongly in favour of an authoritarian dictatorship. Under such conditions, laws are easily interpreted in whatever way best fits the regime.
- So for goodness' sake, move! Not just the servers, but also the WMF as a legal entity. I am well aware that no country on Earth is entirely safe from a populist threat, but the situation isn't as dire everywhere as it is in the US. Canada could be an option. Or Spain, one of the few European countries that still welcomes immigration of some sort. Do it, before it's too late! Don't let yourselves and our work be ground among the cogwheels of this vile, narcissistic despotism! Steinbach (talk) 10:56, 17 November 2024 (UTC)
- Steinbach, you write that the sentiment of the people "seems to be strongly in favour of an authoritarian dictatorship", and yet the current popular vote count has Trump at 50.1% and dropping as California votes continue to be counted. So, the sentiment is not as strong as you portray it. I too am deeply concerned about the path that the United States is on, but we should not overstate public sentiment for dictatorship. Cullen328 (talk) 22:55, 17 November 2024 (UTC)
- We should rather say that enough people want to go authoritarian that it doesn't matter. Clearly, things like Dark MAGA couldn't have been elected several years ago. An ideological shift happened. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:48, 24 November 2024 (UTC)
- Billions of people rely on Wikipedia. Trump won't be able to do anything without the world going against him. Tons of his very voters shame his fake news big lie narrative. Aaron Liu (talk) 17:20, 17 November 2024 (UTC)
- Ah! You say that, but look how it ended for Twitter. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:19, 23 November 2024 (UTC)
- How is that related? Aaron Liu (talk) 19:38, 23 November 2024 (UTC)
- In 2023, you could have said: billions of people rely on Twitter; Elon won't be able to trick its algorithms into promoting disinfo and gender hate speech, since the platform rules disallow such things (and in fact promoting gender discrimination is still against x.com's terms of use, but of course the owner now does it all day long, and his 206 million followers boost his content).
- There's been an exodus, of course, but it's not massive, and x.com largely keeps the original twitter.com userbase. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:54, 24 November 2024 (UTC)
- Yes, but it's important to note that the Twitter changes were due to Elon buying Twitter, not due to new laws being formed. Elon Musk (no matter how much he wants to try) can't buy Wikimedia. Gaismagorm (talk) 02:08, 24 November 2024 (UTC)
- What's the difference between Elon buying Twitter and Congress weaponizing the FCC with a conservative court? I'd rather say none! 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:11, 24 November 2024 (UTC)
- The difference is that one is just a poor business strategy, and the other is mostly unfeasible (at least to the level that some are wanting, or dreading). Besides, Wikipedia isn't a social media site. It is an encyclopedia. Gaismagorm (talk) 02:19, 24 November 2024 (UTC)
- Elon Musk's tweets show that he sees no difference between speech regulation on Wikipedia and on YouTube/Facebook. I might agree the biggest risk is getting the foundation's non-profit status revoked. McCarthyism shows how little the Constitution can do for free speech protections. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:24, 24 November 2024 (UTC)
- And McCarthy didn’t last either, because eventually someone called his BS and he crumbled Dronebogus (talk) 10:14, 24 November 2024 (UTC)
- With Elon's planned purchase of MSNBC, things will go as they have in Russia, where rich men who support the executive exploit conflicts of interest to purchase and control the media. It's science and evidence that won't last. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 10:32, 24 November 2024 (UTC)
- And McCarthy didn’t last either, because eventually someone called his BS and he crumbled Dronebogus (talk) 10:14, 24 November 2024 (UTC)
- Elon musk tweets claims highlights that he sees no difference between speech regulation on wikipedia and Youtube/Facebook. I might agree the biggest risk is gettting the fundation non profit status revoked. McCartysm shows how the constitution can little free speech protections. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:24, 24 November 2024 (UTC)
- The difference is one is just a poor business strategy, and the other is mostly unfeasable (at least to the level that some are wanting, or dreading). Besides, wikipedia isn't a social media site. It is a encyclopedia. Gaismagorm (talk) 02:19, 24 November 2024 (UTC)
- What s the difference between Elon buying twitter and Congress weaponizing the Fcc with a conservative court? I d rather says none! 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 02:11, 24 November 2024 (UTC)
- Yes, but it's important to note that the twitter changes were due to elon buying twitter, not due to new laws being formed. Elon Musk (no matter how much he wants to try) can't buy wikimedia. Gaismagorm (talk) 02:08, 24 November 2024 (UTC)
- How is that related? Aaron Liu (talk) 19:38, 23 November 2024 (UTC)
- Ah! You say that, but look how it ended for Twitter. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:19, 23 November 2024 (UTC)
- What you are urging is not really feasible, at least not in the short term, and if the fight you fear is coming, it will go best for the movement on the ground that a U.S. base provides. If you think that moving to Spain and putting the project even further under the auspices of EU law will lead to greater free speech protections, I have bad news for you: a substantial portion of the content on this site would be much more amenable to exclusion and state interference under petition by private parties under GDPR principles than it would under U.S. jurisprudence. This is one area of civil and human rights where the EU is much more laissez-faire about suppression than is the U.S., especially when you consider "right to be forgotten" stances. SnowRise let's rap 21:02, 17 November 2024 (UTC)
- Exactly, but we don't have to do it in the short term. We have time before things change, and that's why we must be prepared to move instead of realizing we have to move within two weeks. We can move in damage-control mode. For example, if we chose Qatar, we would only have to remove content that criticizes the country; otherwise they have strong journalism and allow criticism of anything else, including Saudi Arabia. Plus there are no elections there (so it's stable). There would be no such thing as Trumpers refusing to accept climate change and vaccines. The United States might have been the best place, but now it risks becoming worse than Russia. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:21, 23 November 2024 (UTC)
- We are not moving to any country that would make us remove all content critical of said country. QuicoleJR (talk) 22:14, 23 November 2024 (UTC)
- It's about a tradeoff. Would you prefer not only letting Trumpers remove anti-Trump content, but letting them change all the science articles at a massive scale? No info is better than conspiracism and disinfo. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:04, 24 November 2024 (UTC)
- This would take longer than two weeks since the WMF would have to legally establish themselves in a new country, and study their laws so they are in compliance with them. So years, not two weeks. Also Qatar would want to delete articles and media of human sexuality and possibly some other highly contentious topics, so that would appear to be a nonstarter for WMF. Abzeronow (talk) 23:47, 23 November 2024 (UTC)
- I'm noticing that Telegram, by being in Dubai, was allowed to let gender discussions happen, in addition to outright advertising of the illegal drug trade. Otherwise, exactly! As passing laws through Congress takes time, we do have time. That's why it has to be studied now, so that when, rather than if, it becomes required, everything would be ready. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 01:08, 24 November 2024 (UTC)
- Cross that bridge if we get there. I don't imagine this would be seriously considered at the current time. –Novem Linguae (talk) 22:39, 17 November 2024 (UTC)
- Last I heard the WMF keeps both the main site and the backup site in the US. Now might be a good time to reevaluate this and move one of them to another country. The WMF is quite good at employing a diverse multinational workforce scattered across the planet, but it is very centralised when it comes to fundraising, a more distributed model where funds raised in particular countries were controlled by affiliate charities or chapters in those countries would in my view be stronger. At least it wouldn't have a single point of failure. ϢereSpielChequers 15:02, 18 November 2024 (UTC)
- The problem is Wikimedia being subject to the incoming FCC laws. 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 18:23, 23 November 2024 (UTC)
- I don't think the WMF has contingency plans for any potential authoritarian steps Trump may take, and as seen with the ANI case, may obey any legal demands the Trump Administration makes of them. WMF does have some flexibility not to do some things since they are not a publisher (that is they don't have editorial control over Wikipedia), and WMF does not want such control. I don't think the WMF would share their contingency plans if they have them though, and by the time Trump or his Administration took extreme authoritarian measures against WMF and its Board, it would probably be too late to do anything. Abzeronow (talk) 19:55, 23 November 2024 (UTC)
- The point is to ask them to establish such plans for moving overseas. They don't have to tell us what the plan is, just whether they have one.
- Under Project 2025, they would compel the WMF to accept any kind of source as trusted (and thus require it to exercise some control over Wikipedia). 2A01:E0A:401:A7C0:64A1:A0FD:CDDA:2E99 (talk) 20:04, 23 November 2024 (UTC)
- WMF moving its servers to Switzerland has its own tradeoffs (no PD-Art; possibly different fair use/fair dealing laws, some PD-US works would have to be deleted), and such a process would take years so it would not be helpful against a Trump Administration. Abzeronow (talk) 21:39, 23 November 2024 (UTC)
- Moving servers isn't needed, just the legal entity. I'm also noticing that by choosing Dubai, Telegram was allowed to have no moderation at all, to the point of being allowed to host opioid-advertising posts. The United States is clearly the best country, but things can become worse than in Russia, and we would then have to legally move to a place where things wouldn't be ideal, but would be better than the United States. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:16, 24 November 2024 (UTC)
- What if we hosted some content in some countries and other content in others? I know, I know, that’s probably just the insane troll logic talking Dronebogus (talk) 10:48, 24 November 2024 (UTC)
As an alternative, would it be possible to have dumps of password hashes for each user? I know it's a bit of a security threat, but it would be a good thing in current times. As there are data dumps of everything else, this would allow anyone to resume operations (without physically separated backups, though). In my case, I personally own what's required for a quarter of the traffic. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 00:42, 24 November 2024 (UTC)
- Is this thread a good use of time? WMF will not be moving out of the United States, Elon Musk and Donald Trump will not be meeting with anyone from WMF (nor would it be wise for us to do anything to get on their radar), and WMF is not going to publicly release our password hashes. This thread is full of the most hypothetical of hypotheticals. –Novem Linguae (talk) 10:35, 24 November 2024 (UTC)
- It’s not. But it a) helps Wikimedians cope with the uncertainty of the present moment and b) leads to amusing tangents about relocating to Iceland/Switzerland/Spain/the Moon. Dronebogus (talk) 10:45, 24 November 2024 (UTC)
- Well said, Novem Linguae. Phil Bridger (talk) 11:17, 24 November 2024 (UTC)
- Password hashes say little about the underlying password; basically, that's what things like Bitcoin's security are based on. I'm suggesting it as an alternative to moving somewhere better if the United States turns from the best place into the worst, so as to let other people take over hosting in other countries. Personally, I created an account in 2013, and wouldn't mind having my password hash released for the greater good.
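- The one-way property invoked above can be sketched in a few lines of Python. This is only an illustration of salted key derivation in general; the function choice and parameters below are assumptions for the example, not MediaWiki's actual password-storage scheme:

```python
import hashlib
import os

def hash_password(password: bytes, salt: bytes, rounds: int = 100_000) -> str:
    """Derive a fixed-length hex digest from a password.

    The digest cannot be reversed to recover the password; an
    attacker can only guess candidate passwords and compare digests.
    """
    return hashlib.pbkdf2_hmac("sha256", password, salt, rounds).hex()

salt = os.urandom(16)  # a fresh random salt per user
digest = hash_password(b"correct horse battery staple", salt)

print(len(digest))  # 64 hex characters, regardless of password length
```

That said, even salted, slow hashes still permit offline dictionary attacks against weak passwords, which is why sites generally treat any leak of a hash dump as a breach.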
- Ok, guys, make sure not to have debates https://x.com/DemocraticWins/status/1835668071773581413. But I'm sure enough to bet something, and I could open a Polymarket on this: within 11 months you will have lost all your trials by desperately trying to stay in the United States at all costs, and every language edition of Wikipedia will have turned to promoting conspiracy theories, even in maths, or wikipedia.org will be shut down. Such passivity in the face of the obvious will be remembered in history like Sigmund Freud's sisters thinking that something like the Shoah wouldn't happen. 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2 (talk) 11:47, 24 November 2024 (UTC)
- 2A01:E0A:401:A7C0:7829:35FD:7F37:21A2, stop WP:BLUDGEONing the debate with your sensational doomerism. You have made fewer than 50 edits and they’re exclusively to this thread. This is WP:SPA behavior and it’s growing tedious. If you are WP:NOTHERE to build an encyclopedia then I see good reason to report you to an admin. Dronebogus (talk) 12:59, 24 November 2024 (UTC)
Wikimedia Foundation Bulletin November Issue 2
Upcoming and current events and conversations
Talking: 2024 continues
- Conversation with the trustees: Speak directly with the Wikimedia Foundation trustees about their work at the next Conversation with the Trustees on 27 November from 12:00 – 13:30 UTC.
- Wikimedia Hackathon: Registration is now open for the 2025 Wikimedia Hackathon which will be held in Istanbul, Turkey, May 2–4, 2025.
- Language Community: The next language community meeting will be held on November 29 at 16:00 UTC.
- Wikimania 2025: Applications for scholarships to attend Wikimania 2025 in Nairobi are open until the end of December 8.
- Central Asian WikiCon: The Central Asian WikiCon 2025 will take place on April 19–20, 2025, in Tashkent, Uzbekistan. Applications to be part of the Program and Scholarship Committee are open until November 30.
Annual Goals Progress on Infrastructure
See also newsletters: Wikimedia Apps · Growth · Research · Web · Wikifunctions & Abstract Wikipedia · Tech News · Language and Internationalization · other newsletters on MediaWiki.org
- Tech News: Admins and users of the Wikimedia projects where Automoderator is enabled can now monitor and evaluate important metrics related to Automoderator's actions; Stewards can now make global account blocks cause global autoblocks. Learn about the latest tech updates from tech news 45, 46, and 47.
- Wikifunctions: Wikifunctions now has a new Type: rational numbers. They expand the ability to deal with numbers considerably, allowing us to work with fractions and decimals, and not just whole numbers anymore. More status updates.
- Temporary accounts: We are rolling out temporary accounts for unregistered (logged-out) editors for more wikis including Romanian, Serbian, Danish, and Norwegian Bokmål.
Annual Goals Progress on Equity
See also a list of all movement events: on Meta-Wiki
- Language & Internationalization: The fifth edition of the Language & Internationalization newsletter is available. Some key highlights: Mooré Wikipedia is live; Keyboard Layouts for Multiple Languages Added; New Projects Added to Translatewiki.net.
- Wikimedia Research Showcase: Watch the latest showcase which looked at external factors that help different language versions of Wikipedia thrive.
- Wikipedia Library: What's new in the Wikipedia Library?
- Tulu Wikisource: Welcoming Tulu Wikisource.
- CEE Meeting 2024: Highlights from Central Asian community members at the CEE Meeting 2024.
- Let's Connect: Let's Connect Learning clinic on Gender Sensitivity Training within Wikimedia communities was held on November 22.
Annual Goals Progress on Effectiveness
See also: quarterly Metrics Reports
- Audit reports 2023-24: Highlights from the fiscal year 2023–2024 Wikimedia Foundation and Wikimedia Endowment audit reports.
- Wikimedia Enterprise: Financial report of Wikimedia Enterprise for the fiscal year 2023–2024.
Board and Board committee updates
See Wikimedia Foundation Board noticeboard · Affiliations Committee Newsletter
- Board Updates: The Board met in Katowice, Poland on August 5 and held its quarterly business meeting before Wikimania. Learn more about the outcomes of the meeting.
- AffCom: The Affiliates Committee has resumed User Group recognition work after a pause to improve the User Group recognition process.
Other Movement curated newsletters & news
See also: Diff blog · Goings-on · Planet Wikimedia · Signpost (en) · Kurier (de) · Actualités du Wiktionnaire (fr) · Regards sur l’actualité de la Wikimedia (fr) · Wikimag (fr) · other newsletters:
- Topics: Education · GLAM · The Wikipedia Library
- Wikimedia Projects: Milestones · Wikidata
- Regions: Central and Eastern Europe
Subscribe or unsubscribe · Help translate
Previous editions of this bulletin are on Meta-Wiki. Let askcac@wikimedia.org know if you have any feedback or suggestions for improvement!
MediaWiki message delivery 18:18, 25 November 2024 (UTC)
Wikimedia Foundation banner fundraising campaign in Australia, Canada, Ireland, New Zealand, the UK, and the US starts next week
Dear all,
As mentioned previously, the WMF is running its annual banner fundraising campaign for non-logged-in users in Australia, Canada, Ireland, New Zealand, the UK, and the US from the 2nd to the 31st of December 2024.
You can find more information around the campaign on the community collaboration page.
Generally, before and during the campaign, you can contact us:
- On the talk page of the fundraising team or on the community collaboration page
- If you need to report a bug or technical issue, please create a phabricator ticket
- If you see a donor on a talk page, VRT or social media having difficulties in donating, please refer them to donate@wikimedia.org
Thank you and regards, JBrungs (WMF) (talk) 05:54, 27 November 2024 (UTC)
- If it starts next week, then why have I been seeing it for several weeks already? 216.147.127.204 (talk) 17:39, 28 November 2024 (UTC)
The future of US government web sites as sources
I am posting this here because it has very broad implications for the project and may require foundation help in the coming weeks. Wikipedia articles on energy, the environment, and many other subjects rely on data from US government web sites, which are generally regarded as authoritative. There is a significant likelihood that many or all of these sites will be taken offline after January 20, 2025, when the US administration changes over. Is the foundation participating in any organized effort to back this material up? Can we just rely on the Internet Archive? What happens if the new administration puts up conflicting data? Will editors be free to "correct" articles based on what newer government websites say, regardless of scientific backing? We do not have a lot of time to think this through.--agr (talk) 19:02, 1 December 2024 (UTC)
- I understand (and share) your concern, but deciding which sources are reliable is an editorial decision which the WMF does not get involved in. Sources that were once considered reliable can have their reputation reevaluated if conditions warrant, and even sources that are generally considered reliable should always be examined with a critical eye to ensure that any particular statement holds up to the general reputation.
- This is an important issue, but it's just not one that the WMF has any input on. I would suggest asking this at WT:RS or perhaps WP:RSN. RoySmith (talk) 19:44, 1 December 2024 (UTC)
- As far as I know, whenever something is cited on Wikipedia, the Internet Archive automatically takes a snapshot of it. You can contact someone like GreenC to confirm this.
- The rest of your post seems like it would be a good fit for WP:RSN. Reliable sources have become unreliable before, and RSN can handle reducing a source's ranking on the WP:RSPSOURCES list when that situation comes to pass. A note will even be added to the entry stating that it used to be reliable, and after what date it became unreliable. However, it might be jumping the gun to post about this before it actually happens. There's not really anything to do yet. –Novem Linguae (talk) 00:27, 2 December 2024 (UTC)
- Do you have a specific source for the allegations that many or all of these sites will be taken offline after January 20, 2025? Yes, the Dept. of Ed website's not going to be up anymore if that agency is axed, but this isn't the first post that I've seen here predicting that the administration change will be the end of America as we know it. Yes, if the energy/climate/public health sites go downhill we can/will revisit how we handle those sources. But all of this doom and gloom is overwrought, like when people I knew thought Obama was the antichrist or that Hillary was going to put Christians into death camps. This is Wikipedia, not Reddit. I thought we were a little more level-headed here. Hog Farm Talk 02:01, 3 December 2024 (UTC)
- We had a nice four years where the main agitators in AMPOL were right-wing nuts. These are pretty easy to take care of, since they have virtually zero social capital on Wikipedia. They can be overruled, and the community is ready to ban them at the drop of a hat if they get frustrated and lash out. Now we can look forward to four years where the main agitators will be left-wing nuts and #Resistance. This is harder to deal with because these people do have social capital on Wikipedia and have wikifriends (including several established editors and admins) to back them up in disputes or tilt consensus. I suspect we can also look forward to more anti-American bigotry toward subjects and editors as well. Thebiguglyalien (talk) 19:43, 3 December 2024 (UTC)
Recent WMF update on ANI case
Noting that the WMF has posted an update on the ANI case here on 2 December, for those interested. —Ganesha811 (talk) 12:37, 4 December 2024 (UTC)
I can’t upload Auferstanden aus Ruinen
You see, the East German anthem doesn't have an audio file because when I tried to upload it, it didn't work. It keeps telling me it is unconstructive, but there is no other file. Same thing for the Chechen anthem, even though the file doesn't work on mobile. 197.167.245.218 (talk) 11:27, 6 December 2024 (UTC)
- Have you tried uploading it to https://commons.wikimedia.org? If that doesn't work, maybe post on their commons:Commons:Help desk. –Novem Linguae (talk) 18:46, 6 December 2024 (UTC)
Wikimedia Foundation Bulletin December Issue
Upcoming and current events and conversations
Talking: 2024 continues
- Wikimania: The open call to host Wikimania 2027 and beyond is open until the end of January 27, anywhere on Earth.
Annual Goals Progress on Infrastructure
See also newsletters: Wikimedia Apps · Growth · Research · Web · Wikifunctions & Abstract Wikipedia · Tech News · Language and Internationalization · other newsletters on MediaWiki.org
- Tech News: Chart extension is now available on Commons and Testwiki; a new version of the standard wikitext editor-mode syntax highlighter will be available as a beta feature; Edit Check will be relocated to a sidebar on desktop. More updates from tech news 50, 49, and 48.
- Wikifunctions: WordGraph dataset is released, which is particularly useful for abstract descriptions for people in Wikidata. More status updates.
- Wikipedia 2024 Year in Review: Wikipedia 2024 Year in Review launched, showcasing the collective impact of Wikipedia and Wikipedia contributors in the last calendar year. The iOS App also released a personalized Year in Review to Italy and Mexico, with insights based on reading, editing, and donation history.
- Wikipedia Android App: The Android team has launched the Rabbit Holes feature in the final release of the year as part of Wiki Experiences 3.1. Currently being tested in Sub-Saharan Africa and South Asia, this feature suggests a search term and a reading list based on the user's last two visited articles. For more details or to share feedback, visit the project page.
Annual Goals Progress on Equity
See also a list of all movement events: on Meta-Wiki
- WikiCelebrate: From Challenges to Change-Making: We Wikicelebrate Chabota Isaac Kanguya, a passionate contributor from Zambia, whose journey through the Wikimedia movement embodies resilience, collaboration, and a commitment to representing underrepresented voices.
- Conference: Announcing Central Asian WikiCon 2025 which will be hosted at Diplomat International School on April 19–20, 2025, in Tashkent, Uzbekistan.
- Campaigns and topical collaboration: The Campaign Product and Programs teams published research on the needs of WikiProject and other topical collaborations.
- Wikisource: The journey so far and looking ahead with Wikisource Loves Manuscripts (WiLMa).
- CEE Meeting: Experiences and Highlights by Central Asian Community Members.
- Partnership: Wikimedia Indonesia and Google Join Forces for Wikipedia Content Enrichment in Indonesia.
- Wikimedia Research Showcase: Watch the latest showcase which discussed AI for Wikipedia.
Annual Goals Progress on Safety & Integrity
See also blogs: Global Advocacy blog · Global Advocacy Newsletter · Policy blog
- Ongoing litigation: Update on litigation in India.
Board and Board committee updates
See Wikimedia Foundation Board noticeboard · Affiliations Committee Newsletter
- Board Elections: The Board’s Executive Committee shared some thoughts on the 2024 Wikimedia Foundation Board of Trustees elections.
External media releases & coverage
- Most popular articles: Announcing English Wikipedia’s most popular articles of 2024.
- Interview: Jimmy Wales on Why Wikipedia Is Still So Good.
Other Movement curated newsletters & news
See also: Diff blog · Goings-on · Planet Wikimedia · Signpost (en) · Kurier (de) · Actualités du Wiktionnaire (fr) · Regards sur l’actualité de la Wikimedia (fr) · Wikimag (fr) · other newsletters:
- Topics: Education · GLAM · The Wikipedia Library
- Wikimedia Projects: Milestones · Wikidata
- Regions: Central and Eastern Europe
Subscribe or unsubscribe · Help translate
For information about the Bulletin and to read previous editions, see the project page on Meta-Wiki. Let askcac@wikimedia.org know if you have any feedback or suggestions for improvement!
MediaWiki message delivery 18:03, 16 December 2024 (UTC)
Miscellaneous
How do you choose which articles to work on ?
Greetings! My question is this: how do you choose the articles you want to work on?
In my case, it's simple. I read articles on topics that interest me, and I follow the related articles (for example, via internal links).
If I don't have time to work on an article, I write a note on my user page to work on it later. Anatole-berthe (talk) 01:57, 5 December 2024 (UTC)
- I think that really depends on who you ask. Polygnotus (talk) 22:29, 5 December 2024 (UTC)
- Everybody's different. Some people are on a mission to document every professional cricket player, every TV station, every species of reptile, every politician in their home country, etc, etc. I like to explore the history of where I live and as often as not, my interest in a topic is sparked by going past some building or park and wondering if there's more there than meets the eye. And, just like Anatole-berthe, my user space is littered with stubs of future articles that never went anywhere. RoySmith (talk) 22:39, 5 December 2024 (UTC)
- Aye, I figure that everyone will have different motives. I've ceased article writing because this list of articles I have worked on is also a list of articles I need to maintain, and it's gotten too long. Every year I do maintain that list. Jo-Jo Eumerus (talk) 10:51, 6 December 2024 (UTC)
- Thanks to everybody for your answers! Anatole-berthe (talk) 13:27, 6 December 2024 (UTC)
- I'm a WikiSloth: I work on whatever catches my eye, most often merely to untangle awkward wording; though I pay more attention to areas where I think I know something, like heraldry and polytopes. —Tamfang (talk) 23:08, 11 December 2024 (UTC)
Can we please change the wording of protected {{ambox}}es?
I previously left a message elsewhere, but got no response.
If you visit the page of many current events, you'll see:
- This article documents a current event... Feel free to improve this article or discuss changes on the talk page, but please note that updates without valid and reliable references will be removed. [Emphasis added]
But many of these articles are semi- or extended-protected, so most readers can't actually edit the article, despite the kind (or teasing?) invitation. And the experienced editors who can edit them probably don't need to be reminded to add reliable sources.
So can we change {{current}}, and all the similar {{ambox}}es, to remove the invitation to edit from quasi-protected articles? ypn^2 18:46, 8 December 2024 (UTC)
Question about the meaning of political spectrum terms in the infobox of political parties.
Hello. I have been wondering about something lately, namely some ambiguity around political spectrum terms. We say, for instance, that the Democratic Party of the US is center-left on the page for that party. But where on the spectrum does this lie? Is it, for instance, between the center and the far left? Or between the center and that aforementioned point? Really, I am curious, and I think we need some consensus to clear this up. I am also confused by other parties, such as the Republican Party of the US. Is it the case, as specified in the infobox, that the party RANGES from center-right to right-wing? Or that it's mostly in BETWEEN those points? I feel like it is not at all consistent. Thank you for reading this. Jayson (talk) 23:49, 8 December 2024 (UTC)
- For the past few years, this has been my mental model of what it means to be on a certain portion of the political spectrum, based on the descriptions in Wikipedia infoboxes, and I can't find any guidelines that standardise the meaning; it's not consistent. Jayson (talk) 00:01, 9 December 2024 (UTC)
- My personal choice would be something at least two-dimensional, such as the Nolan Chart or The Political Compass. Donald Albury 01:23, 9 December 2024 (UTC)
- Yes, I have done that. On the page for the Irish political party Aontu, I made such a change in the same section using information already cited. Unfortunately it was reverted. Jayson (talk) 01:44, 9 December 2024 (UTC)
- I think you're overthinking this.
- This is not Discrete mathematics. This is not the sort of thing in which you can meaningfully ask whether "1 to 2" means "includes 2.0" or "asymptotically approaching 2, but never getting any closer than 1.999999999...". This is a fuzzy spectrum with approximate signposts stuck in it. "Center-right to right-wing" means stuff that's anywhere between or around those two points. WhatamIdoing (talk) 03:37, 9 December 2024 (UTC)
- Not to mention the fact that parties given the same description can have many differences. A good descriptive section on their major policy positions would seem to be much more useful than these tags. --User:Khajidha (talk) (contributions) 13:01, 9 December 2024 (UTC)
- And… let’s not forget that the meanings of the terms “left wing” and “right wing” have shifted and changed over time. Stances that were considered “left wing” in 1900 might be considered “right wing” today, and vice versa.
- Also, these terms have different meanings when talking European politics vs American politics.
- These nuances make such terms awkward to use as an infobox data point. They require context. Blueboar (talk) 13:19, 9 December 2024 (UTC)
- Sometimes they require context. But if you're just doing a quick look up ("Who's this TLABBQ in my news feed again? I keep mixing up the political parties in that little country"), then "Ah, they're the lefties" may be all you want or need. WhatamIdoing (talk) 06:56, 10 December 2024 (UTC)
- In theory, the right answer should be to follow the consensus of reliable sources. But that would require going through literally hundreds of thousands of newspaper articles, political journals, reports, books, etc. etc., and then weighing them per date published, reliability, POV concerns, and then figuring out how exactly to count them, and then tabulating and summarizing them, and then repeat this process every few years for each party.
- Since that's not going to happen, I would recommend going with the least common denominator - i.e., what everyone agrees to. Since everyone in 2024 calls U.S. Democrats some form of "left", and Republicans some form of "right", we should probably leave it at that, and not try to decide between "center-left" or "left" or "center-left to left" ad infinitum. ypn^2 19:09, 9 December 2024 (UTC)
- I think of "center-left" and "center-right" merely as labels indicating parties that can govern without having to form coalitions, as both the Democrats and Republicans can. Within those spectrums are shifts in sentiment (Ds voting for Rs and vice versa) as well as contradictions (people who declare themselves fiscally conservative but socially liberal, those who might be religious but are concerned more about fair distribution of wealth rather than efficient creation of it, etc.). So, spending much time focusing on subdividing such political categories might just be a waste of effort, especially in articles meant as mere summaries of political activity. Dhtwiki (talk) 06:25, 10 December 2024 (UTC) (edited 22:22, 10 December 2024 (UTC))
- As Donald Albury has pointed out, a single axis is overly simplistic: if you know how authoritarian someone is, you may not know where they stand on economics. I much prefer a two-axis system, but that can work out very differently depending on whether you use your second axis to measure how egalitarian/redistributive a party is, as opposed to how much it believes in central planning and state intervention in or control of business. Personally, I also think it important to know where a party stands on the grey/green spectrum and on the issue of how big its tribe is. But here we are writing a general-interest encyclopaedia, and doing so as a global community covering many different nations' political setups, so there is a case for not overcomplicating things, while accepting that this is much more complex for an encyclopaedia that covers both the present day and the past, and that writes one article that has to cater both to the local audience for whom this is their political milieu and to the curious foreigner who probably doesn't know how different the meaning of the word "Republican" is in a Belfast setting as opposed to a Brooklyn one. At the heart of this issue is the question of our audience. US sources operating in a two-party system and describing the US system for a US audience will of course default to a blue/red two-party framing, just as anyone writing about Belgian politics has to be aware of the Flemish/Walloon divide. But if we are writing about politics for a global audience, we need to explain the very different politics of different countries in ways that inform both a local audience and a global one. ϢereSpielChequers 09:11, 10 December 2024 (UTC)
- And how complicated that can be is illustrated by this attempt to map many countries on a pair of axes that are different from the usual left-right (economic) vs authoritarian-libertarian axes. We would need to find quite a few reliable sources for each country to do anything like that in Wikipedia. Personally, I think political space is multidimensional, but the more dimensions you incorporate, the harder it is to present the space in visual form. So, a one-dimensional chart of political position will always give an incomplete, and possibly misleading, view, of what distinguishes one party or politician from another, while a two-dimensional chart will take up more page space, and beyond two dimensions becomes impractical. In any case, any presentation of a party or politician's place in political space will require reliable sources. Despite being a visually-oriented person, I think any explanation of political positions should be in prose. Donald Albury 15:39, 10 December 2024 (UTC)
- I agree. A political spectrum is on two axes (or more). But for the sake of simplification you can approximate both to one axis. A multi-axis designation for something intended as a broad overview wouldn't be necessary in most cases. If it happens that two axes go in different directions, then for scoring on one axis you'd probably have to broadly estimate which direction is more extreme and then put lean-left, moderate, or lean-right. In other words, I'm not really seeing the problem. EEpic (talk) 06:56, 11 December 2024 (UTC)
- WP:SYNTH says you can't take one source that says far-left and one source that says far-right and move it to the center. That means that, for the Libertarians in the US, you can't just approximate as "center". Jayson (talk) 15:35, 11 December 2024 (UTC)
- It is certainly something we should not be saying in Wikivoice. We should apply political labels only if they are supported by most reliable sources that address political orientation. Donald Albury 16:30, 11 December 2024 (UTC)
Introducing Let's Connect!
Hello everyone,
I hope that you are in good spirits. My name is Serine Ben Brahim and I am a part of the Let's Connect working group - a team of movement contributors/organizers and liaisons for 7 regions: MENA | South Asia | East, South East Asia, Pacific | Sub-Saharan Africa | Central & Eastern Europe | Northern & Western Europe | Latin America.
Why are we reaching out to you?
Wikimedia has 18 projects, 17 of which are run solely by the community rather than the Wikimedia Foundation. We want to hear from sister projects that some of us in the movement are not too familiar with and would like to know more about. We always want to hear from Wikipedia, but we also want to meet and hear from the community members in other sister projects too. We would like to hear your story and learn about the work you and your community do. You can review our past learning clinics here.
We want to invite community members who are:
- Part of an organized group, official or not
- A formally recognized affiliate or not
- An individual who will bring their knowledge back to their community
- An individual who wants to train others in their community on the learnings they received from the learning clinics.
To participate as a sharer and become a member of the Let’s Connect community you can sign up through this registration form.
Once you have registered, if you are interested, you can get to know the team via Google Meet or Zoom to brainstorm an idea for a potential learning clinic about this project, or just say hello and meet the team. Please email us at Letsconnectteam@wikimedia.org. We look forward to hearing from you :)
Many thanks and warm regards,
Let’s Connect Working Group Member
Serine Ben Brahim (talk) 11:21, 9 December 2024 (UTC)
MOS article title discrepancy
I recently learned that Wikipedia:Manual of Style/Visual arts includes the article title guidance "If the title is not very specific, or refers to a common subject, add the surname of the artist in parentheses afterwards". I encountered this when Peeling Onions was moved to Peeling Onions (Lilly Martin Spencer) for this reason by User:SilverLocust. This seems to be contrary to the general rule of not using disambiguation unless necessary, and is also not in sync with other comparable guidelines like Wikipedia:Naming conventions (music) which follow the general rule. Is there a reason for this local consensus overriding the global one that I am missing? Fram (talk) 08:37, 12 December 2024 (UTC)
- To be clear, I moved it from Peeling Onions(Lilly Martin Spencer) to Peeling Onions (Lilly Martin Spencer) after another user had objected to renaming it just Peeling Onions. But as noted at WP:MISPLACED#Other exceptions, there are some naming conventions that call for unnecessary disambiguation. The other thing people usually point to when disagreeing with WP:MISPLACED is WP:ASTONISH. Also, MOS:ART isn't a local consensus. SilverLocust 💬 08:46, 12 December 2024 (UTC)
- Yeah, "local consensus" was not the right choice of words, I meant a more specific guideline overruling the general one and not being in sync with most other ones. Fram (talk) 09:08, 12 December 2024 (UTC)
But anyway, the question is, is there a good reason why the band, movie, album, book, .... "Peeling Onions" would all be at the title "Peeling Onions", but for the painting we need to add the name of the artist? Fram (talk) 09:39, 13 December 2024 (UTC)
- If there were two or more notable paintings called "Peeling Onions", disambiguating by artist would be helpful.
- Otherwise, we don't need to be so specific. We can disambiguate as "Peeling Onions (painting)" to distinguish it from the book, album, etc. of the same title. Blueboar (talk) 13:57, 13 December 2024 (UTC)
Talk:Nikolai Rimsky-Korsakov has an RFC
Talk:Nikolai Rimsky-Korsakov has an RFC for possible consensus. A discussion is taking place. If you would like to participate in the discussion, you are invited to add your comments on the discussion page. Thank you. Nemov (talk) 14:35, 12 December 2024 (UTC)
Suggestion to rename many criticism/controversies articles to include both concepts in name
Ok. First, I am posting this here, because I can't figure out a better forum (this is a cross-WikiProject issue). Second, sure, criticism and controversies are separate concepts. But consider for example Criticism of Facebook, CNN controversies/Controversies of Nestlé and the existence of categories like Category:Facebook criticisms and controversies. Having looked at several articles on criticism/controversies about companies/organizations, I am really hard pressed to find a difference between them, and we already have several categories grouping those concepts together (like the mentioned Facebook one). (Rarely, we have two articles about this: consider Criticism of Wikipedia vs List of Wikipedia controversies - but this is pretty exceptional, and perhaps a case of navel-gazing asking for a merge).
For those who care about category trees, a few points:
- we do not have Category:Controversies by company (a subcat of Category:Controversies by organizations); only Category:Criticisms of companies (a subcat of Category:Criticism of organizations; and yes, the plural vs singular is another, if minor, issue to fix). (The mentioned Wikipedia list of controversies is just in Category:Internet-related controversies.) Creating them, of course, is not hard, and should be done, but that won't solve the problem of conceptually similar articles with different names (Criticism of Company A vs Controversies about Company B).
- We do, however, have Category:Corporate controversies...
- according to our category structure, Category:Controversies is a subcategory of Category:Criticisms. Whether this is correct, I am not sure, trying to make a hierarchy for such content is challenging - here, I am just noting how they are related at present in our structure
- however, note that the only entity I noticed that has both controversies and criticism categories has the reverse order here: Category:Criticism of Donald Trump is a subcategory of Category:Donald Trump controversies
Before you tell me to take this to (probably inactive anyway) WT:COMPANIES, let me point out similar issues with, for example, Category:Controversies by person vs Category:Criticism of individuals (hey, BLP-caring folks, have fun :P; and hey, US-politics-caring folks, did you know Trump is the only person to have both a criticism and a controversy category? Have more fun :P). Anyway, Criticism of Franklin D. Roosevelt or Criticism of Jesus are again hard to conceptually distinguish from Controversies related to Sheikh Hasina or Controversies surrounding Silvio Berlusconi. Oh, and if you think you can tell the difference between them, then try to tackle this weirdly named stuff arbitrarily spread between those categories: Commentary about Julian Assange, Donald Trump's comments on John McCain, Historical assessment of Klemens von Metternich, Reception and legacy of Muhammad Khatami, Commentary on Edward Snowden's disclosure and Jack Abramoff scandals (I think this is the only scandal-page in the biographies; BLP folks - you may want to rename this, together with its category... Update: I've started an RM for that one)
Then of course we have the rest of this can of worms - for example Scouting controversy and conflict (why conflict??).
To make it simple, we can probably retain only criticism for ideologies and concepts (Category:Criticisms by ideology; Category:Criticism of science).
And I am not feeling like folding Category:Scandals by type (a subcat of controversies by type) into this.
But I'd like to suggest that we rename all articles and categories for criticism and controversies of organizations/companies to follow Category:Facebook criticisms and controversies and the few others named in this fashion.
For people, I suggest "Criticism and controversies related to Person X" or just rename all controversies to criticisms (because Criticism and controversies related to Jesus, for example, sounds a bit weird). That said, again, conceptually, criticism of Jesus and Controversies surrounding Silvio Berlusconi are pretty much the same (Jesus is a controversial figure to some; Berlusconi has been criticized, and those pages cover all those aspects).
Really, almost all of those articles are pretty much conceptually identical, so even if you think you have a handle on how to draw the line between controversies of foo and criticism of foo, please note that enforcing this will be next to impossible. Rather than having multiple names and two category trees for conceptually identical articles, I think standardizing them to one is going to be best. Piotr Konieczny aka Prokonsul Piotrus| reply here 09:34, 14 December 2024 (UTC)
- I think a first step is to go through these controversy articles to make them more of a summary style rather than listing every single incident where the topic came under controversy. Criticism of Christianity is maybe halfway there, but it still dissects too many specific incidents.
- A second step would be to strip legal aspects like lawsuits to separate articles, e.g. like Litigation involving Apple Inc., which generally stays more factual to actual things that happen in courts of law, rather than the commentary and criticism from a range of sources. This might not be a possible step for several of these, but we should not try to mix criticism and litigation. Masem (t) 03:59, 15 December 2024 (UTC)
Or how about we don't have either? I don't think that we should have standalone criticism/controversies articles or sections (for aren't we advised to integrate such standalone content into the article? Aren't they simply relics of a less rigorous era, doomed to be eventually disassembled?) Horse Eye's Back (talk) 15:46, 14 December 2024 (UTC)
- +1. By their nature, these articles are either POV forks or so close to it that the end result is the same. Controversies and criticisms shouldn't be made standalone solely for being controversies or criticisms, whether it be as a separate article or a section within an article. They should be incorporated into the article like any other facts, and if they don't fit, then they're probably not due. Thebiguglyalien (talk) 22:19, 14 December 2024 (UTC)
- I don't know about that. For some large subjects, we can expect many subtopics/subarticles. If there's room in Wikipedia for a fairly niche article like History of religion in the Netherlands, then there's probably room in Wikipedia for a general article like Criticism of religion. WhatamIdoing (talk) 02:34, 15 December 2024 (UTC)
- Also, this wouldn't be the line of reasoning as regards more discretized controversies, e.g. Chinese Rites controversy. Remsense ‥ 论 02:39, 15 December 2024 (UTC)
- I agree. A one-size-fits-all approach might not be appropriate. WhatamIdoing (talk) 03:04, 15 December 2024 (UTC)
- FWIW, Controversy does a good job at outlining its distinction as a state of prolonged public dispute—ergo, controversy is properly subcategorized under criticism, requiring additional narrative and intersocial characteristics. Remsense ‥ 论 02:12, 15 December 2024 (UTC)
- Per Wikipedia:Neutral point of view#Naming, I'd rather look for names that avoid both "Criticisms of" and "Controversies of" when possible, especially when the subject is kind of narrow. Criticism of Walmart could be divided into a couple of less POV-ish-ly titled articles, like Labor relations at Walmart (currently a redirect). Other parts, like Criticism of Walmart#Midtown Walmart (500+ words on the construction of a single store) could either be blanked or merged to a more relevant article (e.g., Midtown Miami). WhatamIdoing (talk) 02:41, 15 December 2024 (UTC)
- I agree; this likely should be done with many of these articles. I'm not sure that all of them should be liquidated, though I'm not immediately hitting upon a specific counterexample. Remsense ‥ 论 02:45, 15 December 2024 (UTC)
- @Horse Eye's Back Some of them are certainly due (Criticism of capitalism or Criticism of Marxism, etc.), although I am sure we can find a few that wouldn't survive AfD. Criticisms of particular individuals is probably the most problematic aspect and we should really look at all articles there carefully, although for historical figures it is less of an issue (and if my post here results in clearing of some BLP-violating detritus, great). Piotr Konieczny aka Prokonsul Piotrus| reply here 03:44, 15 December 2024 (UTC)
- Yes, I was talking in terms of individuals. My understanding is that criticism in the context of Criticism of capitalism or Criticism of Marxism is referring to scholarly criticism and not general negative feelings (we have separate pages after all for Anti-capitalism and Anti-communism). I would expect for example that an article "Criticism of Hegel" would note Critique of Hegel's Philosophy of Right but not that his mom thought he was a jerk. Horse Eye's Back (talk) 04:33, 15 December 2024 (UTC)
- We have these FAs on "controversies":
- I didn't see any with "criticism" in the title. We have five FAs on "scandals":
- At a glance, I don't think that a one-size-fits-all renaming to "Criticism and controversies related to _____" would be appropriate for any of these. WhatamIdoing (talk) 06:28, 15 December 2024 (UTC)
- I think that such an approach tailored to natural persons would be appropriate. I also see a major difference between an event which has controversy, criticism, or scandal in its proper name and the use as a descriptive title. Horse Eye's Back (talk) 00:49, 16 December 2024 (UTC)
How to handle plagiarism from Wikipedia?
Hey all, hope everyone here is doing well. Today I woke up to discover that a podcaster I follow had plagiarised part of an article I wrote, as well as parts of some other articles (some of which I had contributed to, others not). The podcaster did not cite their sources, nor did they make it clear that they were pulling whole paragraphs from Wikipedia, but they ran advertisements and plugged their Patreon anyway. This is not the first time an article I wrote for Wikipedia has been plagiarised and profited from (earlier this year I noticed a youtuber had plagiarised an entire article I had written; I've also noticed journalists ripping off bits and pieces of other articles). Nor is this limited to articles, as I often see original maps people make for Wikimedia Commons reused without credit.
Obviously I'm not against people reusing and adapting the work we do here, as it's freely licensed under creative commons. But it bugs me that no attribution is provided, especially when it is required by the license; attribution is literally the least that is required. I would like attribution of Wikipedia to become more common and normalised, but I don't know how to push for people off-wiki to be more considerate of this. In my own case, the 'content creators' in question don't provide contact details, so I have no way of privately getting in touch with them. Cases in which I have been able to contact an organisation about their unattributed use of Wikipedia/Wikimedia content often get ignored, and the unattributed use continues. But I also have no interest in publicly naming and shaming these people, as I don't think it's constructive.
Does anyone here have advice for how to handle plagiarism from Wikipedia? Is there something we can do to push for more attribution? --Grnrchst (talk) 13:59, 16 December 2024 (UTC)
- Sadly there are plenty of lazy sods who think that copying directly from Wikipedia is "research". This has happened with some of the articles that I have been involved with. It's rude, but hard to stop.--♦IanMacM♦ (talk to me) 14:13, 16 December 2024 (UTC)
- I would start by writing to the podcaster and politely explaining to them that they are welcome to use the material but are required to provide attribution. They may simply be unaware of this and might be willing to comply if properly educated. Failing that, I assume the podcast was being streamed from some content delivery service like YouTube. You might have better luck writing to the service provider demanding that the offending material be taken down.
- Realistically, crap like this happens all the time, and there's probably not a whole bunch we can do to prevent it. RoySmith (talk) 14:37, 16 December 2024 (UTC)
- To support RoySmith's point, for those who may not have seen it, here is a very long youtube video about youtube and plagiarism [61]. (Works just having it on as background audio.) CMD (talk) 14:59, 16 December 2024 (UTC)
- Funnily enough, plagiarism from Wikipedia comes up a couple times in that video. MJL also made a very good response video, which I think was a useful addition in the conversation of crediting Wikipedians. --Grnrchst (talk) 15:10, 16 December 2024 (UTC)
- Thanks, I'll give that a listen. CMD (talk) 15:18, 16 December 2024 (UTC)
- Aye, I figured it'd be an uphill battle trying to accomplish even minor changes on this front. As I can't find a way to contact the creator directly, sending an email to the hosting company may be the best I can do, but even then I doubt it'll lead to anything. Thanks for the advice, anyhow. --Grnrchst (talk) 15:12, 16 December 2024 (UTC)