
Wikipedia talk:WikiProject AI Cleanup

From Wikipedia, the free encyclopedia

I wanted to share a helpful tip for spotting AI generated articles on Wikipedia


If you look up several buzzwords associated with ChatGPT and limit the results to Wikipedia, it will bring up articles with AI-generated text. For example, I looked up "vibrant" "unique" "tapestry" "dynamic" site:en.wikipedia.org and found some (mostly) low-effort articles. I'm actually surprised most of these are articles about cultures (see Culture of Indonesia or Culture of Qatar). 95.18.76.205 (talk) 01:54, 2 September 2024 (UTC)[reply]

Thanks! That matches with Wikipedia:WikiProject AI Cleanup/AI Catchphrases, feel free to add any new buzzwords you find! Chaotic Enby (talk · contribs) 02:00, 2 September 2024 (UTC)[reply]
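The search trick above can be sketched programmatically. A minimal helper (hypothetical, not an existing project tool) that assembles a site-restricted query string from a list of suspected catchphrases:

```python
# Toy sketch: build a site-restricted web search query from suspected AI catchphrases.
# The phrase list below just echoes examples from this thread; it is not a vetted list.
CATCHPHRASES = ["vibrant", "unique", "tapestry", "dynamic"]

def build_query(phrases, site="en.wikipedia.org"):
    """Quote each phrase and append a site: restriction, e.g. for a search engine."""
    quoted = " ".join(f'"{p}"' for p in phrases)
    return f"{quoted} site:{site}"

print(build_query(CATCHPHRASES))
# e.g. "vibrant" "unique" "tapestry" "dynamic" site:en.wikipedia.org
```

The real catchphrase list lives at Wikipedia:WikiProject AI Cleanup/AI Catchphrases; this only automates the query assembly.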

A new WMF thing


Y'all might be interested in m:Future Audiences/Experiment:Add a Fact. Charlotte (Queen of Heartstalk) 21:46, 26 September 2024 (UTC)[reply]

Is it possible to specifically tell LLM-written text from encyclopedically written articles?


The WikiProject page says "Automatic AI detectors like GPTZero are unreliable and should not be used." Those detectors are full of false positives because LLM-written text stylistically overlaps with human-written text. But Wikipedia doesn't seek to cover the full breadth of human writing, only a very narrow strand (encyclopedic writing) that is very far from natural conversation. Is it possible to specifically train a model on (high-quality) Wikipedia text vs. average LLM output? Any false positive would likely be unencyclopedic and in need of fixing regardless. MatriceJacobine (talk) 13:29, 10 October 2024 (UTC)[reply]

That would definitely be a possibility, as the two output styles are stylistically different enough to be reliably distinguished most of the time. If we can make a good corpus of both (from output of the most common LLMs on Wikipedia-related prompts on one side, and Wikipedia articles on the other), which should definitely be feasible, we could indeed train such a detector. I'd be more than happy to help work on this! Chaotic Enby (talk · contribs) 14:50, 10 October 2024 (UTC)[reply]
That is entirely possible, a corpus of both "Genuine" Articles and articles generated by LLMs would be better though, as the writing style of for example ChatGPT can still vary depending on prompting. Someone should collect/archive articles found to be certainly generated by Language Models and open-source it so the community can contribute. 92.105.144.184 (talk) 15:10, 10 October 2024 (UTC)[reply]
We do have Wikipedia:WikiProject AI Cleanup/List of uses of ChatGPT at Wikipedia and User:JPxG/LLM dungeon which could serve as a baseline, although it is still quite small for a corpus. A way to scale it would be to find the kind of prompts being used and use variations of them to generate more samples. Chaotic Enby (talk · contribs) 15:24, 10 October 2024 (UTC)[reply]
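As a toy illustration of the classifier idea discussed above (emphatically not a production detector), here is a minimal bag-of-words naive Bayes sketch trained on a handful of hand-labeled snippets. The corpus, labels, and class names are all invented for the example; a real effort would need the large curated corpora the thread describes:

```python
import math
from collections import Counter

def tokenize(text):
    return [t for t in (w.strip(".,!?\"'()").lower() for w in text.split()) if t]

class NaiveBayes:
    """Tiny Laplace-smoothed naive Bayes text classifier (illustrative only)."""
    def __init__(self, labels=("llm", "wiki")):
        self.counts = {lab: Counter() for lab in labels}
        self.totals = {lab: 0 for lab in labels}
        self.docs = {lab: 0 for lab in labels}
        self.vocab = set()

    def train(self, label, text):
        toks = tokenize(text)
        self.counts[label].update(toks)
        self.totals[label] += len(toks)
        self.docs[label] += 1
        self.vocab.update(toks)

    def score(self, label, text):
        # log P(label) + sum of log P(token | label), with add-one smoothing
        logp = math.log(self.docs[label] / sum(self.docs.values()))
        v = len(self.vocab)
        for tok in tokenize(text):
            logp += math.log((self.counts[label][tok] + 1) / (self.totals[label] + v))
        return logp

    def classify(self, text):
        return max(self.counts, key=lambda lab: self.score(lab, text))

# Invented two-document-per-class "corpus", far too small for real use:
nb = NaiveBayes()
nb.train("llm", "A vibrant tapestry of unique and dynamic traditions.")
nb.train("llm", "This rich tapestry showcases a vibrant cultural landscape.")
nb.train("wiki", "The city was founded in 1862 and incorporated in 1901.")
nb.train("wiki", "The species was first described by Linnaeus in 1758.")
```

With a corpus this small the model only keys on the buzzwords themselves; the point of the thread is that scaling the corpus (varied prompts, many LLMs, many article genres) is what would make such a detector meaningful.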

GPTZero etc


I have never used an automatic AI detector, but I would be interested to know why the advice is "Automatic AI detectors like GPTZero are unreliable and should not be used."

Obviously, we shouldn't just tag/delete any article that GPTZero flags, but I would have thought it could be useful to highlight to us articles that might need our attention. I can even imagine a system like WP:STiki that has a backlog of edits sorted by likelihood to be LLM-generated and then feeds those edits to trusted editors for review.

Yaris678 (talk) 14:30, 11 October 2024 (UTC)[reply]

It could indeed be useful to flag potential articles, assuming we keep in mind the risk that editors might over-rely on the flagging as a definitive indicator, given the risk of both false positives and false negatives. I would definitely support brainstorming such a backlog system, but with the usual caveats – notably, that even a relatively small false positive rate can be enough to drown out the true positives. That means it should be emphasized that editorial judgement shouldn't be based primarily on GPTZero's assessment.
Regarding the advice as currently written, the issue is that AI detectors will lag behind the latest LLMs themselves, and will often only be accurate on older models on which they have been trained. Indeed, their inaccuracy has been repeatedly pointed out. Chaotic Enby (talk · contribs) 14:54, 11 October 2024 (UTC)[reply]
How would you feel about changing the text to something like "Automatic AI detectors like GPTZero are unreliable and should only ever be used with caution. Given the high rate of false positives, automatically deleting or tagging content flagged by an automatic AI detector is not acceptable." Yaris678 (talk) 19:27, 15 October 2024 (UTC)[reply]
That would be fine with me! As the "automatically" might be a bit too restricted in scope, we could word it as "Given the high rate of false positives, deleting or tagging content only because it was flagged by an automatic AI detector is not acceptable." instead. Chaotic Enby (talk · contribs) 19:46, 15 October 2024 (UTC)[reply]
I'd argue that's an automatic WP:MEATBOT, but there's no harm in being clearer. jlwoodwa (talk) 16:21, 16 October 2024 (UTC)[reply]
I support that wording. I use GPTZero frequently, after I already suspect that something is AI-generated. It's helped me avoid some false positives (human-generated text that I thought was AI), so it's pretty useful. But I'd never trust it or rely on it. jlwoodwa (talk) 16:15, 16 October 2024 (UTC)[reply]

I have edited the wording based on my suggestion and Chaotic Enby's improvement. Yaris678 (talk) 06:54, 19 October 2024 (UTC)[reply]
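The STiki-like backlog idea floated above could be sketched as a queue that surfaces high-scoring edits for human review, never acting on them automatically. The edit IDs, scores, and threshold here are all hypothetical:

```python
# Toy sketch of a review backlog: detector scores are advisory inputs only,
# used to prioritize human review, never to delete or tag automatically.
def build_review_queue(edits, min_score=0.8):
    """edits: list of (edit_id, detector_score) pairs, score in [0, 1].

    Returns flagged edits sorted most-suspicious-first, for trusted
    editors to review by hand. The threshold is an arbitrary example.
    """
    flagged = [e for e in edits if e[1] >= min_score]
    return sorted(flagged, key=lambda e: e[1], reverse=True)
```

Even with a good detector, a small false-positive rate over millions of edits would swamp the queue, which is why the thread insists the final call stays with editorial judgement.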

404 Media article


https://www.404media.co/email/d516cf7f-3b5f-4bf4-93da-325d9522dd79/?ref=daily-stories-newsletter Seananony (talk) 00:44, 12 October 2024 (UTC)[reply]

Question about To-Do List


I went to 3 of the articles listed, Petite size, I Ching, and Pension, and couldn't find any templates in the articles about AI generation. Is the list outdated? Seananony (talk) 02:21, 12 October 2024 (UTC)[reply]

Seananony: The to-do list page hasn't been updated since January. Wikipedia:WikiProject AI Cleanup § Categories automatically catches articles with the {{AI-generated}} tag. Chaotic Enby, Queen of Hearts: any objections to unlinking the outdated to-do list? — ClaudineChionh (she/her · talk · contribs · email) 07:08, 12 October 2024 (UTC)[reply]
Fine with me! Chaotic Enby (talk · contribs) 11:14, 12 October 2024 (UTC)[reply]

What to do with OK-ish LLM-generated content added by new users in good faith?


After opening article Butene, I noticed the headline formatting was broken. Then I read the text and it sounded very GPT-y but contained no apparent mistakes. I assume it has been proofread by the human editor, Datagenius Mahaveer who registered in June and added the text in July.

I could just fix the formatting and remove the unnecessary conclusion, but decided to get advice from more experienced users here. I would appreciate it if you put some kind of brief guide for such cases (which, I assume, are common) somewhere, BTW! Thanks in advance 5.178.188.143 (talk) 13:57, 19 October 2024 (UTC)[reply]

Hi! In that case, it is probably best to deal with it the same way you would deal with any other content, although you shouldn't necessarily assume that it has been proofread and/or verified. In this case, it was completely unsourced, so an editor ended up removing it. Even if it had been kept, GPT has a tendency to write very vague descriptions, such as polybutene finds its niche in more specialized applications where its unique properties justify the additional expense, without specifying anything. These should always be reworded and clarified, or, if there are no sources supporting them, removed. Chaotic Enby (talk · contribs) 15:24, 19 October 2024 (UTC)[reply]
I very much agree with the idea of putting up a guide, by the way! Thanks a lot! Chaotic Enby (talk · contribs) 15:26, 19 October 2024 (UTC)[reply]
I already have two guides on my to-do list, so I'll pass this to someone else, but I made a skeleton of a guide at Wikipedia:WikiProject AI Cleanup/Guide and threw in some stuff from the main page of this project, in an attempt to guilt someone else (@Chaotic Enby?) into creating one. -- asilvering (talk) 19:08, 19 October 2024 (UTC)[reply]
Great, now I've been guilt-tripped and can't refuse! I'll go at it, should be fun – and thanks for setting up the skeleton! (I was thinking of also having a kind of flow diagram like the NPP one) Chaotic Enby (talk · contribs) 19:10, 19 October 2024 (UTC)[reply]
Oh, that would be a great idea! I just can't really guilt you into it by making a half-finished svg. -- asilvering (talk) 19:19, 19 October 2024 (UTC)[reply]
Do we need a separate guide page? A lot of the content currently in Wikipedia:WikiProject AI Cleanup/Guide is copied from or paraphrasing Wikipedia:WikiProject AI Cleanup#Editing advice. I think it would make sense to not have a separate page for now (usual issues with forking) and instead expand Wikipedia:WikiProject AI Cleanup#Editing advice. If that section gets too big for the main page of this WikiProject, then we can copy it to Wikipedia:WikiProject AI Cleanup/Guide and leave a link and summary at Wikipedia:WikiProject AI Cleanup#Editing advice. Yaris678 (talk) 12:51, 23 October 2024 (UTC)[reply]
For now, the "guide" is mostly just the skeleton that Asilvering set up, I haven't gotten to actually writing the bulk of the guide yet. Chaotic Enby (talk · contribs) 13:54, 23 October 2024 (UTC)[reply]
Sure. But what I am saying is, rather than expand on that skeleton, expand on Wikipedia:WikiProject AI Cleanup#Editing advice. Yaris678 (talk) 16:55, 23 October 2024 (UTC)[reply]
These links were useful. Thanks!
I suggest centralizing them all under Wikipedia:WikiProject AI Cleanup/Guide and simply linking it from the WikiProject page. Symphony Regalia (talk) 17:33, 23 October 2024 (UTC)[reply]
Expanding the guides a bit for corner cases would be useful. Symphony Regalia (talk) 17:31, 23 October 2024 (UTC)[reply]

Flagging articles up for examination


Hi Folks!! I'm looking to catch up to the current state. I reviewed an article during the last NPP sprint that an IP editor had flagged with an LLM tag. I couldn't say for sure if it was generated or not, so I'm behind. I sought advice and was pointed here. It was generated, in fact. So I'm looking for any flagged articles that you happen to come across, so I can take a look, learn the trade, chat about them and so on, so to speak. I've joined the group as well. Thanks. scope_creepTalk 14:01, 24 October 2024 (UTC)[reply]

The pre-ChatGPT era


We may want to be more explicit that text from before ChatGPT was publicly released is almost certainly not the product of an LLM. For example, an IP editor had tagged Hockey Rules Board as being potentially AI-generated when nearly all the same text was there in 2007. (The content was crap, but it was good ol' human-written crap!) Maybe add a bullet in the "Editing advice" section along the lines of "Text that was present in an article before December 2022 is very unlikely to be AI-generated." Apocheir (talk) 00:57, 25 October 2024 (UTC)[reply]

This is probably a good idea. I'm sure they were around before then, but definitely not publicly. Symphony Regalia (talk) 01:42, 25 October 2024 (UTC)[reply]
Definitely a good idea, also agree with this. Just added a slightly edited version of it to "Editing advice", feel free to adjust it if you wish! Chaotic Enby (talk · contribs) 01:59, 25 October 2024 (UTC)[reply]
So far, I haven’t seen anything that I thought could be GPT-2 or older. But I did run into a few articles that seem to make many of the same mistakes as ChatGPT, except a decade earlier.
If old pages like that could be mistaken for AI because they make the mistakes we look for in AI text, that's still a problematic find; maybe we should recommend other cleanup tags for these cases. 3df (talk) 22:53, 25 October 2024 (UTC)[reply]
I think that's very likely an instance of "bad writing". Human brains have very often produced analogous surface-level results! Remsense ‥  23:05, 25 October 2024 (UTC)[reply]
Yes, I have to say, ChatGPT's output is a lot like how a lot of first- or second-year undergraduate students write when they're not really sure if they have any ideas. Arrange some words into a nice order and hope. Stick an "in conclusion" on the end that doesn't say much. A lot of early content on Wikipedia was generated by exactly this kind of person. (Those people grew out of it; LLMs won't.) -- asilvering (talk) 00:31, 26 October 2024 (UTC)[reply]
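The pre-December-2022 rule of thumb proposed above can be checked mechanically against an article's earliest revision. A sketch using the standard MediaWiki Action API; the cutoff constant and helper names are my own, and the exact cutoff (ChatGPT's public release on 30 November 2022) is the heuristic from the thread, not a guarantee:

```python
import json
from datetime import datetime, timezone
from urllib.parse import urlencode
from urllib.request import urlopen

# Heuristic cutoff from the discussion: text predating ChatGPT's public
# release is very unlikely to be LLM-generated.
CHATGPT_RELEASE = datetime(2022, 11, 30, tzinfo=timezone.utc)

def predates_chatgpt(iso_timestamp):
    """True if a MediaWiki ISO timestamp (e.g. '2007-05-12T10:00:00Z')
    falls before ChatGPT's public release."""
    ts = datetime.strptime(iso_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    return ts.replace(tzinfo=timezone.utc) < CHATGPT_RELEASE

def first_revision_timestamp(title):
    """Fetch the timestamp of an article's earliest revision via the API."""
    params = urlencode({
        "action": "query", "prop": "revisions", "titles": title,
        "rvlimit": 1, "rvdir": "newer", "rvprop": "timestamp",
        "format": "json", "formatversion": 2,
    })
    with urlopen(f"https://en.wikipedia.org/w/api.php?{params}") as resp:
        data = json.load(resp)
    return data["query"]["pages"][0]["revisions"][0]["timestamp"]
```

For the Hockey Rules Board example, the earliest revision of the relevant text dates to 2007, so this check would have cleared it immediately. Note this only dates the page's first revision; a pre-2022 article can still have had AI text added later, so diff-level dating is needed for tagged passages.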

AI account


Special:Contributions/Polynesia2024. Their contribution pattern is suspicious: no matching edit summaries, and content dumps of thousands of bytes minutes apart over many articles. Some of their inserted content tests as high as 99% AI, such as the content they inserted into Ford. What is the current policy on AI-generated content without disclosure? Perhaps it could be treated as account sharing (because the person who has the account isn't the one who wrote it) or as adding content you did not create. Graywalls (talk) 23:53, 25 October 2024 (UTC)[reply]

There isn't technically any policy on not disclosing AI content yet, even in obvious cases like this one. However, the user who publishes the content is still responsible for it, whether it is manually written or AI-generated, so this would be treated the same as rapid-fire disruptive editing, especially given their unresponsiveness. Chaotic Enby (talk · contribs) 00:25, 26 October 2024 (UTC)[reply]
Also being discussed at Wikipedia_talk:WikiProject_Spam#Possible_academic_boosterism_ref_spamming. Flounder fillet (talk) 00:57, 26 October 2024 (UTC)[reply]

Ski Aggu is potentially stuffed with fake sources that do not work, or sources that may not directly support the content. A CSD request was denied. I'm not going to spend the time to manually check everything, but I'm putting it out there for other volunteers to look at. Unfortunately, AI spam bots can apparently churn out tainted articles and publish them, but there's more procedural barrier to their removal than to their creation. Graywalls (talk) 16:19, 26 October 2024 (UTC)[reply]

I'll check the first ref block, and if it is fake, I'll AfD it. scope_creepTalk 16:40, 26 October 2024 (UTC)[reply]
The whole first block is two passing mentions, a couple of YouTube videos and many Discogs-style album listing sites. There is nothing for a BLP. Several of them don't mention him. They are fake. scope_creepTalk 16:49, 26 October 2024 (UTC)[reply]

Editor with 1000+ edit count blocked for AI misuse


User:Jeaucques Quœure. See [1]. I do wonder if a WP:CCI-like process for poor AI contributions could be made. Ca talk to me! 13:02, 26 October 2024 (UTC)[reply]

Wow, I think that would be a quagmire if we were specifically looking for LLM text, as detection would be slow and ultimately questionable in many instances. We could go through and verify that the info added in those edits is verifiable, but I wouldn’t go beyond that, nor do I think there is a need to go beyond that. — rsjaffe 🗣️ 14:28, 26 October 2024 (UTC)[reply]
I checked the last 50 edits, and the problematic edits appear to have been taken care of. Ca talk to me! 14:55, 26 October 2024 (UTC)[reply]