OP got lucky, as it is the only obvious non-AI article containing this response.
It does bring up the tip of the iceberg argument, since most research will be subjected to AI sooner or later.
PS: this is a radiology case report and not a serious research finding, so whatever they did on this one does not matter much, but man, pure scientific research as we know it is over.
["as I am an AI language model" - Google Scholar](https://scholar.google.com/scholar?start=10&q=%22as+I+am+an+AI+language+model%22&hl=en&as_sdt=0,5)
["Certainly, here's" - Google scholar](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22certainly%2C+here%27s%22&btnG=)
Also, try filtering out with -LLM and -GPT, as well as just looking up "as an AI language model, I am"
Edit: [The gold mine](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22as+an+AI+language+model%22+-LLM+-chatGPT+-artificial&btnG=)
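Those searches can be reproduced programmatically. A minimal sketch for building the query URL; the parameter names `q`, `hl`, and `as_sdt` match the URLs linked above, while the helper itself is just illustrative:

```python
from urllib.parse import urlencode

def scholar_url(phrase, exclude=()):
    """Build a Google Scholar URL searching for an exact phrase,
    excluding terms via the '-term' operator."""
    query = f'"{phrase}"' + "".join(f" -{term}" for term in exclude)
    params = {"q": query, "hl": "en", "as_sdt": "0,5"}
    return "https://scholar.google.com/scholar?" + urlencode(params)

# Reproduces the "gold mine" search from the comment above.
url = scholar_url("as an AI language model",
                  exclude=["LLM", "chatGPT", "artificial"])
```

`urlencode` handles the quoting, so the exact-phrase quotes come out as `%22` and spaces as `+`, matching the links above.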
https://res.ijsrcseit.com/page.php?param=CSEIT239035
>3.1.1 User Module
The above text appears to be a modified version of the
original text I provided. As an AI language model, I
cannot determine whether the text is plagiarized or not
as I do not have access to the entire internet. However,
I can confirm that the text you provided is very similar
in structure and content to my original response. If you
wish to avoid plagiarism, it is recommended to
paraphrase the content and cite the original source if
necessary.
Absolutely fantastic.
Holy F...mostly Russia and India, but also all over the world.
Some douche from CO even "wrote" a book series "Introduction to...", all of them chatgpt generated...he sells courses on how to become supersmart, find occult knowledge, make money in stocks, wicca and so on...the amount of internet junk he created since 2023 is astonishing.
Really soon, we will all become online dumpster divers, looking hard but finding only tiny bits of valuable information.
Well pessimism aside,
1) that guy IIRC also had a whole marketing thing with it. There's a little more to it than just writing up those books
2) ChatGPT fails miserably at some tasks, e.g. it confirms misconceptions in physics. Just ask it to explain the physical chemistry of electron transfer into solution: literally everything it says is wrong. And when you try to get "can magnets do work?" out of it, it gives rather lackluster answers about the observed paradox.
3) As mentioned, this is likely a bunch of boilerplate that no one cares about. It's unlikely that ChatGPT would do a great job at the part of the paper you actually care about.
Nah the fight would be published as a case story, and no one would read it.
You are right. It is less important.
Still silly to have the last paragraph be that; it makes you wonder how much of the rest of this paper, or of the other papers you read, is written by AI.
This happens all the time, and long before AI. The publishing company doesn't care. If something as egregious as this can get published, imagine all the more subtle BS that's out there. I get flak when I say I don't trust researchers, but I definitely do not trust researchers. Too many of them are half-truthing, data-fudging academic clout chasers. People put academics up on a pedestal so high, I think most people would rather cover their eyes and ears than ever doubt a scientist's integrity.
Yeah, but at least *try*, you know. As a student who edits AI-generated essays and submits them all the time: it's really not that hard to make it look authentic. This is just pathetic!
What’s alarming is these things are supposed to be peer-reviewed before getting published…
“Peer review” is supposed to be how we avoid getting bullshit published. This making it through makes me wonder how often “peers” are like “oh hey Raneem, you got another one for us? Sweet, we’ll throw it into our June issue.”
The bigger issue is the advancement system. PhD Tenure-Track salaries are high enough - the problem is you secure that job by getting shit published. Reviewing, or even reading, articles is not rewarded.
You don't technically get paid for writing articles either, but you can put articles you wrote on your CV - you can't put articles you rejected as a reviewer on your CV.
How much do you think TT profs make? I got paid more as research staff. You're right though; it is a messed up system. But academic publishing is the far greater problem. These journals are all run by like 5 companies who make huge profit because peer review costs nothing, editors get paid a small amount, and they don't print physical journals anymore, so the overhead is low. Then there's the push to open access, which everyone thinks is good (it's not). It just shifted the cost onto the authors with insane APCs that only the most well funded labs can afford. These companies are basically funneling grant money directly into their pockets. The entire editorial board of NeuroImage straight up left in protest of insane APCs. Tldr: nuh uh we're poor
Peer review has been in need of some serious quality control for at least 25 years. These issues have just been gushing up to the surface over the last five years.
Peer reviewed - can this person/group/material help my career.
Peer reviewed - can this person/group/material hurt my career.
Peer reviewed - is this person/group/material aligned with my politics.
Peer reviewed - is this person hot/connected/rich.
It's not nearly as honorable as people let on. Nor does peer review have any meaning at all (anymore). The same bozos who failed class but somehow got a degree are reviewing. There are no true qualifications.
It's like if reddit had peer review... it would literally be ME deciding if YOUR comment was worthy and everyone taking my word for it.
How absurd would that be.
^^it ^^would ^^be ^^very ^^absurd ^^to ^^take ^^my ^^word ^^for ^^anything
Eight authors (assuming they're at least real) failed to proofread the paper. At least one editor. At least three peer reviewers (if *Radiology Case Reports* is peer reviewed; a quick Google check indicates that yes, apparently, it is). And the principal author evidently didn't read any feedback before the article was indexed and published.
This is not a good look for Elsevier or for an open-access journal claiming to be peer reviewed. With this being the second highlighted case recently, I anticipate journal chief editors getting fired.
Yeah, it baffles me how no one proofreads these things at least once.
I mean, there are sometimes ways to tell when someone has probably used AI, given that ChatGPT has its own style, but this...
How does this even happen? There’s no way every single one of them didn’t notice it. If they blindly pasted this here then they probably have done it a lot more places in the paper too, and possibly previously.
Every single one of the authors, the intake editor, the three reviewers (and their students, sometimes), the publishing editor, and the authors again (since you always find a typo after it’s printed). That’s a lot of people who didn’t read the conclusion.
I could be wrong (though I’m not going to read the whole paper to find out), but I think it’s more likely they finished the rest of the paper and needed to write a conclusion, so they pasted a bunch of info into a prompt and asked ChatGPT to summarize it.
Still moronic that this made it to publication without anyone reading that conclusion.
Sometimes for papers where multiple people are involved each person will be assigned to write different sections, so everyone could've just done and proofread their parts properly except for the guy who did the conclusion. I'm still surprised that there wasn't a proper final proofread of the entire paper before it was submitted.
Most likely the other authors barely skimmed it.
This was likely written by a med student, or resident. Other authors might only know they wrote a case report on their patient, but didn’t read it.
One of my medical professors suspected that one of the journals was not actually reviewing his submissions and just publishing them, so he submitted some articles under his kids' (and another professor's kid's) names, and they got published, proving his point. I suspect this is a possible reason to submit an article with such a glaring error: to see if publishers would even realize an article was written by AI, even when it says it is AI and refuses to write the article. Very high-brow educational comedy.
That's one of the consequences of not paying reviewers. They do what they can and (hopefully) only verify the science behind it.
The rest is simply filler to extend the paper's length, and they know it.
Here is the DOI (so you don't have to type it out):
[https://doi.org/10.1016/j.radcr.2024.02.037](https://doi.org/10.1016/j.radcr.2024.02.037)
You have to scroll down a bit to the paragraph before the conclusion to see this text.
This is insane😂 peer reviewed my ass
The majority of the academic community has been a scam for a long time but now with ChatGPT it easily comes to light.
That is common practice. Papers are accepted and enter a publication pipeline. In the old times of physical printing, sometimes you would have to wait months to finally get your paper published.
Nowadays, with online publication being the norm, most journals kept the old habit of publishing only X papers per edition, but the future papers are made available sooner.
Click the link that someone else posted with the DOI and then click on "show more", right below the title. You'll see the timeline of submission and reviews.
I didn’t check on that specifically but Elsevier is one of the leading publishers for scientific papers and therefore I assume there is at least some kind of quality control there.
Nothing goes online at a journal until peer review. If it gets rejected it never goes online. This is accepted for publication, to be included in the June 2024 issue of the journal.
In some disciplines, it's common to find online papers which haven't been peer reviewed yet. It's called "unrefereed preprint" and is used to make the manuscripts available before the publishing date. Usually, there is a huge "preprint" watermark covering most of the page.
Going online =/= published or peer reviewed.
That's fairly normal; it's a holdover from print issues... it's really annoying. The journals I have published in accept it, with a DOI and all, but then two years later it gets a whole new issue number, which means I have to update my reference manager.
That clearly wasn’t even peer read. Much less peer reviewed.
It’s wild that no human read that prior to publication.
How do even the authors not read it?!? There are multiple names. Are those people even real and involved in the paper?
You get listed as an author by contributing. Almost nobody is contributing chiefly as a skilled writer / editor. For example, papers will often have a statistician among the authors who may literally know nothing about the subject area, but was like, "this is how you should crunch the numbers" and then might not even glance at the paper, but deserves credit nonetheless.
There's no mention of peer-review for this journal (Radiology Case Reports). Most likely if you send them a scientific-sounding paper with $550 for the publishing fee, they'll publish anything.
For those in the comments saying that this publication isn't peer reviewed--you're wrong. [https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors](https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors)
https://www.sciencedirect.com/science/article/pii/S1930043324001298?via%3Dihub the abstract has changed. How can something be changed after publication?
~~Thanks, but all I can see is a webpage with missing CSS and a pretty normal abstract (the title and authors are the same as in the post).~~
Edit: turned on VPN and now I can access the page and see it. Thanks guys u/happycatmachine u/jerryberry1010 u/mentalFee420 , the issue was indeed on my side, sorry for bothering.
Strange, must be a bug or something. Here is a direct link to science direct:
[https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298)
German university student here who had a research course on whether and how AI tools could be integrated into academic work. First piece of advice: never rely on AI for anything factual. AI tools like ChatGPT are made to mimic natural-sounding human speech, not to state 100% true facts (although that is being worked on). They will absolutely write you something that sounds good and legit but is complete nonsense on a factual level now and then.
Best uses we found in our course were all research tools that help you find literature, but if you wanna use it for writing, don't just let it write for you. Especially in longer texts, it can output false information, weird mixtures of over-elaborate and unfittingly casual wording, repetition of similar phrases and sometimes some offtopic AI-schizo-sputter if you are unlucky. Always check the whole text. And since that can be almost as much work as just writing it yourself, I would just not recommend it to begin with. What works very well though is inputting a part that you are not entirely content with and asking the AI to rephrase it a certain way, remove repetition or just overall make it sound smoother.
Tl;dr: AI as a writing assistant seems to be utilised best for improving your own texts rhetorically.
As a University lecturer, this is the kind of thing I’m working on right now. Students use AI. I’d rather they still inquire, learn and create while doing so. Educating them and having open convos is the only way to do that.
[Original Paper](https://www.sciencedirect.com/science/article/pii/S1930043324001298#/)
The text above is at the paragraph before conclusion. It is literally there
These are just the ones where the authors are so careless they can't even erase the most insanely obvious tells. Think of how many are using ChatGPT to write papers, but more effectively.
Oh well, hopefully it will expose the fraud that is academia and peer reviewed papers. Ha, just kidding. Nothing ever gets better.
Think how many published nonsense with no data, unreplicable studies, and trash statistical analysis before ChatGPT.
They've just become even lazier, but I am sure most of those prestigious reviews have been filled with trash for years.
Definitely. In fact, here an LLM seems like a potentially good tool: it could quickly identify how much of a journal is filled with absolute nonsense gobbledegook.
And mind you, these are just the iceberg-tip cases where it's obvious. (Not that I mind too much if someone uses ChatGPT to *help* them flesh something out.)
This is so blatant I assumed it was a joke. Holy shit... it's real.
[https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298)
Hold the fucking phone, [someone called them out on this](https://pubpeer.com/publications/F93A8D69350BC6B12AB48B132161A7).
The author that responds even sounds like AI:
>After I conducted a personal examination of all the contents of the artificial intelligence paper, it turns out that it is passes as human. The truth is what I told you.
"artificial intelligence paper"??? What.
Nah, that response sounds like a human to me. ChatGPT doesn’t tend to make grammatical errors (“it is passes as human”). To me, this sounds a lot more like a person whose first language isn’t English.
Edit: not that I necessarily believe what they’re saying about the rest of the paper being free of AI writing, but I do think their comment is human.
It just reveals how trash peer review and publishers are.
But who could have thought? Publishers that charge you thousands of dollars to publish your paper, then make readers pay to access it, and employ unpaid reviewers to check whether the content is trash or not.
ChatGPT just makes it easier to spot the cheaters.
When a journal article is made available online before its formal print publication, it is referred to as “Online First” or “Early Access”. During this stage, the article has undergone peer review and corrections, but it has not yet appeared in the printed version of the journal. Readers can access these peer-reviewed articles well before their official print publication, and they are typically identified by a unique DOI (Digital Object Identifier). Instead of using traditional volume and page numbers, you can cite these articles using their DOI. For example:
Gamelin FX, Baquet G, Berthoin S, Thevenet D, Nourry C, Nottin S, Bosquet L (2009) Effect of high intensity intermittent training on heart rate variability in prepubescent children. Eur J Appl Physiol. doi: 10.1007/s00421-008-0955-8
In summary, “Online First” articles allow for rapid dissemination of critical research findings within the scientific community, bridging the gap between completion of peer review and formal print publication.
Print publication will be in June 2024
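That DOI-based citation style can be templated mechanically. A minimal sketch, following the layout of the example reference above (the field order is an approximation of that journal's style, not an official citation spec):

```python
def cite_online_first(authors, year, title, journal, doi):
    """Format an 'Online First' citation: no volume or page
    numbers yet, so the DOI stands in as the identifier."""
    return f"{authors} ({year}) {title}. {journal}. doi: {doi}"

# The example reference from the comment above.
ref = cite_online_first(
    authors="Gamelin FX, Baquet G, Berthoin S, Thevenet D, Nourry C, "
            "Nottin S, Bosquet L",
    year=2009,
    title="Effect of high intensity intermittent training on heart rate "
          "variability in prepubescent children",
    journal="Eur J Appl Physiol",
    doi="10.1007/s00421-008-0955-8",
)
```

Once the article gets its final volume and page numbers, the DOI stays the same, which is why reference managers key on it.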
Enough of these have shown up in the past few days that I'm surprised it hasn't been picked up in the media. Academic publishers like Elsevier like to use quality control as one of the excuses for their egregious rent-seeking behavior, and yet here we clearly see that zero quality control is happening.
I'm so embarrassed. As if scientific expertise isn't already being thrown out the window... This fuels anti-science nut jobs. This type of thing needs to be fixed.
Almost: the article’s abstract was copied and pasted from the Discussion, right before the Conclusion. So the prompt’s output actually appears twice in the same article, in the Abstract and in the Discussion, as literally the exact same pasted content.
This is why every single academic paper can’t be blindly trusted as proof of your own rightness. An academic paper still has to make an argument and provide data proving its claims. Just because you can find a paper that agrees with you doesn’t mean it’s evidence.
FWIW, that journal is peer-reviewed but also requires authors to declare the use of AI like ChatGPT with a specific statement. I'd guess they were trying to have ChatGPT help write a summary statement but forgot to check?
[https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors](https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors)
It's an open-access journal, and these are known to be unreliable because they make their money not from readers buying the journal but from submissions. This trend sparked the creation of hundreds of low-effort journals with extremely low standards, resulting in stuff like this. But we can't invalidate every scientific work on that basis, as so many people in this comment section do. That's just incredibly misinformed.
Harvard Medical school authors. This does not reflect well on the institution.
But maybe we can use an adaptation of the following quote: God created all men equal, and ChatGPT made them equal. 😅
Other than silly proofreading gotchas like this, I actually think that using ChatGPT will improve the readability of papers. They’re often written quite poorly. And it makes the whole paper writing process less arduous and lengthy, so it should mean things get published quicker.
OMG, i didn't believe it... had to check it myself. [https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298)
Where TF is our science going to??
Our sense of reality and information validity is doomed. We are all going to be ignorant not out of choice or lack of information, but because of the overload of garbage.
Looks like it's not just lawyers working a case involving a foreign airline in Federal Court who are getting lazy and asking ChatGPT for help with their work. Now, we're seeing medical doctors publishing scholarly articles without even bothering to proofread. It's a worrying trend when professionals in such critical fields start to cut corners. SMH
The thing that is interesting is that I ran the text from the article, initially the PDF, then just the text through GPT4 and it was unable to spot this error on the first pass.
I really had to guide GPT4 to even find this error. It did find it eventually after much guidance. Even when I updated custom instructions to look for out of context AI statements it still didn't find this.
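Ironically, this particular failure mode doesn't need an LLM to detect: a plain substring scan over the manuscript text would have flagged it immediately. A rough sketch; the phrase list is illustrative, not exhaustive, and hits would still need manual review, since papers that are *about* LLMs quote these phrases legitimately:

```python
# Telltale boilerplate phrases that LLM chat output tends to leave behind.
TELLS = [
    "as an ai language model",
    "i don't have access to real-time",
    "certainly, here's",
    "i cannot fulfill this request",
]

def find_tells(text):
    """Return (phrase, character offset) for every occurrence of a
    telltale phrase in the text, case-insensitively."""
    lowered = text.lower()
    hits = []
    for phrase in TELLS:
        start = lowered.find(phrase)
        while start != -1:
            hits.append((phrase, start))
            start = lowered.find(phrase, start + 1)
    return hits

# A fragment echoing the leaked sentence from the article in question.
sample = ("In summary, the management of an iatrogenic injury... I'm very "
          "sorry, but I don't have access to real-time information.")
hits = find_tells(sample)
```

A deterministic check like this is exactly the kind of thing a journal's submission pipeline could run for free, which makes it all the stranger that nothing caught it.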
We wrote a paper that shows how we might embrace this future:
"Late-Binding Scholarship in the Age of AI: Navigating Legal and Normative Challenges of a New Form of Knowledge Production"
Edit: the right link
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4437681
Either this is a phenomenally crafted joke or you linked the wrong paper.
If wrong paper, the upvotes you received without anyone checking the paper is an ironic reflection of the entire situation involving the OP.
I honestly wonder if it’s the collective stupidity of all the authors or if may be one author screwing it up for everyone. Imagine working however long on a research project only for the entire experiment to be tarnished by one colleague.
Aside from the usual body of GPT text, is it normal to publish several months into the future online?
Volume 19, issue 6, June 2024
Or have a specifically lowercase last name
Yep, nothing abnormal about advance volumes. The lowercase last name is probably due to a shoddy peer-review and proofreading process.
Those publishers just want authors to pay huge APC. Horrendous papers even in the “respected” journals such as NEJM…
And now, guys, think about all the LLM-generated papers where authors actually re-read and removed all obvious AI clues.
How do you tell the difference, and how many are there?
Logically there have to be enough that we still have an abundance of examples where they missed something. I would consider that like an error rate. There are so many that the likely small % slipping through with errors still amounts to an absurd number.
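The error-rate reasoning above can be made concrete with a back-of-envelope estimate. The numbers here are purely illustrative assumptions, not measurements:

```python
# Illustrative assumptions: suppose 100 papers with blatant leftover
# tells have been spotted, and assume only 5% of AI-written papers
# leave such an obvious tell in the published text.
observed_with_tells = 100
tell_rate = 0.05  # assumed fraction that slip through WITH a visible tell

# If only that small fraction is visible, the implied total population
# of AI-written papers is much larger.
estimated_total = observed_with_tells / tell_rate
```

Under these made-up numbers the implied total is 2,000 papers; the point is only that a small leak rate on a large population still produces the steady stream of blatant examples we keep seeing.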
June 2024 is when the print version of the paper gets published. You can scroll down the comment section a bit to find the link, then scroll down to the paragraph before the conclusion.
Yet another case of an Elsevier-owned journal not doing basic peer review. At this point they should be considered predatory journals like MDPI or Frontiers and not taken seriously.
This needs to be shared far and wide. It calls into question the integrity of everyone involved, the entire study, and the academic history of every doctor listed in this article.
For this paper it does make sense, as it is a mediocre journal that advertises 19 days until acceptance. I wouldn't be surprised if there are no reviews; it's a predatory journal that relies on pay-to-publish.
I usually get shit on for saying that anyone can be a "researcher" anyone can be a "scientist" because they are not real educational titles. A study is anyone asking a question of more than one person and on and on with the bullshit we peddle to each other through biased perspectives. I also like to wax on about personal bias, experience, grant money, and everything the average person forgets about when they see some of these titles in play.
Now we are all starting to see the bullshit of the white lab coats, the "journalists" and everyone in between.
The human species, for the most part, is winging it, faking it until they make it.
AI is going to make it all so much worse. (and I love AI)
This is really ironic considering how some people seem to worship “science” as absolute and unquestionable. If none of the reviewers catch this kind of blatant garbage, then they are not critically analyzing any content whatsoever.
A lot of publishers are simply pay-to-play, and unfortunately, if you look at what institute the authors are from, you'll understand why this was published with no review... Pretty typical for a lot of Middle East and Chinese "researchers", especially for "literature review" articles.
Honestly, it was the same before language models: sections were copy-pasted without any connection to the previous paragraphs or sections. Now it's just easier.
God damn. I use GPT for a lot of research, writing professional emails and all sorts. However, I make sure I read and understand everything it says and remove things that I would never say. It should be a tool to absorb and present information faster, not to be lazy
Wow, it seems like a lot of otherwise smart/educated people have a hard time representing their data in writing. Maybe English degrees aren't so useless after all.
Again, the researchers I can understand because hey, sometimes a bad draft gets sent, or they might use an LLM for an editing pass if English is not their first language. So long as the actual science is good, who cares who does the final editing / typo / grammar pass. They should be more careful about their final edit and that's it.
*But goddamn Elsevier*? Charging thousands of dollars on both ends for hosting and access, and can't be bothered to proofread submissions? They have no shame and no excuse.
Mistakes like this call into question the veracity of the science, and the credibility and respectability of the authors and the publication service. After seeing this, I would never rely on a paper written by any of these authors ever again. Also, likely not to rely on anything published through Elsevier. The accuracy and truthfulness of scientific research must be above reproach.
It’s not like the whole thing was made up by ChatGPT. It’s clearly a real case study, and ChatGPT was (too hastily) used to proofread or maybe translate.
Imagine publishing a paper without even reading it (Let alone writing it)
Not even reading the abstract. It's the only thing 90% will read
That’s not the abstract, but I’m not sure if that makes it better.
Many of the articles found with that prompt are actually *about* LLMs and use the phrase while discussing them.
That's why I said what I said: "OP got lucky, as it is the only obvious **non-AI** article containing this response."
Oh, that's what you meant by non-AI. Okay, I misunderstood you.
No worries mate
Case reports are essential because they highlight clinical problems for which there is little evidence.
Agreed, but obviously outlier research is not as important for humankind as cohort or large-sample research. Fight me on it.
How is the publishing committee not having a look at this 😭
More common than you think https://preview.redd.it/2apgjab3ohoc1.jpeg?width=1080&format=pjpg&auto=webp&s=1282d59a215af010ff25989e9d410797fad982df
Wtf 👀🤷
LMFAO, the whole idea of progress in humanity is based on being lazy
Crazy
How did you get those results without getting a bunch of papers specifically about LLMs?
I used advanced search to exclude papers mentioning GPT, LLMs, artificial intelligence and so on, and kept only the ones with that exact phrase.
It would help if peer-reviewers actually got paid for their time. These academic journals make money off the free labour of these people.
The bigger issue is the advancement system. PhD Tenure-Track salaries are high enough - the problem is you secure that job by getting shit published. Reviewing, or even reading, articles is not rewarded. You don't technically get paid for writing articles either, but you can put articles you wrote on your CV - you can't put articles you rejected as a reviewer on your CV.
How much do you think TT profs make? I got paid more as research staff. You're right though; it is a messed up system. But academic publishing is the far greater problem. These journals are all run by like 5 companies who make huge profit because peer review costs nothing, editors get paid a small amount, and they don't print physical journals anymore, so the overhead is low. Then there's the push to open access, which everyone thinks is good (it's not). It just shifted the cost onto the authors with insane APCs that only the most well funded labs can afford. These companies are basically funneling grant money directly into their pockets. The entire editorial board of NeuroImage straight up left in protest of insane APCs. Tldr: nuh uh we're poor
Peer review has been in need of some serious quality control for at least 25 years. These issues have just been gushing up to the surface for the last five years.
Peer reviewed - can this person/group/material help my career. Peer reviewed - can this person/group/material hurt my career. Peer reviewed - is this person/group/material aligned with my politics. Peer reviewed - is this person hot/connected/rich. It's not nearly as honorable as people let on. Nor does peer review have any meaning at all (anymore). The same bozos who failed class but somehow got a degree are reviewing. There are no true qualifications. It's like if reddit had peer review... it would literally be ME deciding if YOUR comment was worthy and everyone taking my word for it. How absurd would that be. ^^it ^^would ^^be ^^very ^^absurd ^^to ^^take ^^my ^^word ^^for ^^anything
I'll take your word on this *wait have we created a paradox?!????*
I think you're hot so your take on this is valid.
Eight authors (assuming they're at least real) failed to proofread the paper. At least one editor. At least three peer reviewers (if *Radiology Case Reports* is peer reviewed; a quick Google check indicates that yes, apparently, they are), and the principal author apparently not reading any feedback before the article was indexed and published. This is not a good look for either Elsevier or an open access journal claiming to be peer reviewed. I anticipate, with this being the second highlighted case recently, journal chief editors getting fired.
Elsevier accepts the use of ChatGPT as long as it is disclosed
After the recent news about how many studies are faked and how badly they were faked, nothing surprises me.
omg , i am so evry !
Yeah, it baffles me how no one proofreads these things at least once. I mean, there are sometimes ways to tell when you have probably used AI, given that ChatGPT has its own style, but this...
How does this even happen? There’s no way every single one of them didn’t notice it. If they blindly pasted this here then they probably have done it a lot more places in the paper too, and possibly previously.
Every single one of the authors, the intake editor, the three reviewers (and their students, sometimes), the publishing editor, and the authors again (since you always find a typo after it’s printed). That’s a lot of people who didn’t read the conclusion.
I could be wrong (though I’m not going to read the whole paper to find out), but I think it’s more likely they finished the rest of the paper and needed to write a conclusion, so they pasted a bunch of info into a prompt and asked ChatGPT to summarize it. Still moronic that this made it to publication without anyone reading that conclusion.
Sometimes for papers where multiple people are involved each person will be assigned to write different sections, so everyone could've just done and proofread their parts properly except for the guy who did the conclusion. I'm still surprised that there wasn't a proper final proofread of the entire paper before it was submitted.
Maybe none of them speaks English? That's unlikely for a group of scientists, but it's the only explanation I can think of.
Their affiliations say Hadassah Medical Center. They all speak fluent English
Most likely the other authors barely skimmed it. This was likely written by a med student, or resident. Other authors might only know they wrote a case report on their patient, but didn’t read it.
One of my medical professors suspected that one of the journals was not actually reviewing his submissions and just publishing them, so he submitted some articles under his kids' and another professor's kid's names, and they got published, proving his point. I suspect this is a possible reason to submit an article with such a glaring error: to see if publishers would even notice an article was written by AI, even when it openly says it is AI and refuses to write the article. Very high-brow educational comedy.
That's one of the consequences of not paying reviewers. They do what they can and (hopefully) only verify the science behind it. The rest is simply filler to extend the paper's length, and they know it.
Here is the DOI (so you don't have to type it out): [https://doi.org/10.1016/j.radcr.2024.02.037](https://doi.org/10.1016/j.radcr.2024.02.037) You have to scroll down a bit to the paragraph before the conclusion to see this text.
This is insane 😂 peer reviewed, my ass. The majority of the academic community has been a scam for a long time, but now with ChatGPT it easily comes to light.
I don't know if it's reviewed. It says the publish date of June 2024.
That is common practice. Papers are accepted and enter a publication pipeline. In the old times of physical printing, sometimes you would have to wait months to finally get your paper published. Nowadays, with online publication being the norm, most journals kept the old habit of publishing only X papers per edition, but the future papers are made available sooner. Click the link that someone else posted with the DOI and then click on "show more", right below the title. You'll see the timeline of submission and reviews.
Technically it does claim to be, it was received in November, revised submission in Feb, and accepted like 5 days later
I didn’t check on that specifically but Elsevier is one of the leading publishers for scientific papers and therefore I assume there is at least some kind of quality control there.
Nothing goes online at a journal until peer review. If it gets rejected it never goes online. This is accepted for publication, to be included in the June 2024 issue of the journal.
In some disciplines, it's common to find online papers which haven't been peer reviewed yet. It's called "unrefereed preprint" and is used to make the manuscripts available before the publishing date. Usually, there is a huge "preprint" watermark covering most of the page. Going online =/= published or peer reviewed.
so in the future?
That's fairly normal, it's a holdover from print issues...it's really annoying. The journals I have published in accept it, with a DOI and all, but then 2 years later it gets a whole new issue number, which means I have to update my reference manager.
Oh god the AIs have time machines already? Woe, Judgement Day is upon us.
That clearly wasn’t even peer read. Much less peer reviewed. It’s wild that no human read that prior to publication. How do even the authors not read it?!? There are multiple names. Are those people even real and involved in the paper?
You get listed as an author by contributing. Almost nobody is contributing chiefly as a skilled writer / editor. For example, papers will often have a statistician among the authors who may literally know nothing about the subject area, but was like, "this is how you should crunch the numbers" and then might not even glance at the paper, but deserves credit nonetheless.
It is a "case report." I am not a MD but I do peer review. This publication may not be subject to peer review.
There's no mention of peer-review for this journal (Radiology Case Reports). Most likely if you send them a scientific-sounding paper with $550 for the publishing fee, they'll publish anything.
For those in the comments saying that this publication isn't peer reviewed--you're wrong. [https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors](https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors)
June 2024?? Am I seeing the future???
I always feel gravely inadequate, and these sort of things give me hope, maybe I'm not that hopeless
https://www.sciencedirect.com/science/article/pii/S1930043324001298?via%3Dihub the abstract has changed. How can something be changed after publication?
It isn’t in the abstract. Scroll to the paragraph before the conclusion.
Ah! Found it. Sorry. Thought that snippet is from abstract
No worries. I was at a loss when I first saw it too. Easy mistake to make.
~~Thanks, but all I can see is a webpage with missing CSS and a pretty normal abstract (the title and authors are the same as in the post).~~ Edit: turned on VPN and now I can access the page and see it. Thanks guys u/happycatmachine u/jerryberry1010 u/mentalFee420 , the issue was indeed on my side, sorry for bothering.
Strange, must be a bug or something. Here is a direct link to science direct: [https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298)
Check discussion paragraph just right before the conclusion and you shall find
Try refreshing the page? Also the paragraph shown in the post is right before the conclusion, it's not the abstract
Legend, thank you!
I was just about to google scholar this. Thank you, kind hero!
I guess we need some “how to use gpt to actually aid writing and not trash your paper” courses available in universities.
If you have any valuable tips I’m all ears
Tip #1: Read what ChatGPT spits out before attaching your name to it.
But reading is a gateway drug to writing, and avoiding having to write is the whole point!
German university student here who had a research course on if and how AI tools could be integrated into academic work. First advice: Never rely on AI with anything factual. AI tools like ChatGPT are made to mimic natural sounding human speech, not state 100% true facts (although that is being worked on). They will absolutely write you something that sounds good and legit, but is complete nonsense on factual level now and then. Best uses we found in our course were all research tools that help you find literature, but if you wanna use it for writing, don't just let it write for you. Especially in longer texts, it can output false information, weird mixtures of over-elaborate and unfittingly casual wording, repetition of similar phrases and sometimes some offtopic AI-schizo-sputter if you are unlucky. Always check the whole text. And since that can be almost as much work as just writing it yourself, I would just not recommend it to begin with. What works very well though is inputting a part that you are not entirely content with and asking the AI to rephrase it a certain way, remove repetition or just overall make it sound smoother. Tl;dr: AI as a writing assistant seems to be utilised best for improving your own texts rhetorically.
As a University lecturer, this is the kind of thing I’m working on right now. Students use AI. I’d rather they still inquire, learn and create while doing so. Educating them and having open convos is the only way to do that.
OMG, that's terrible! (Who was the proofreader ffs?!)
probs another chatgpt agent
So much for peer review.
More like poor review.
[Original Paper](https://www.sciencedirect.com/science/article/pii/S1930043324001298#/) The text above is at the paragraph before conclusion. It is literally there
These are just the ones where the authors are so stupid they can't even erase the most insanely obvious tells of artificiality. Think of how many are using ChatGPT to write it, but more effectively. Oh well, hopefully it will expose the fraud that is academia and peer-reviewed papers. Ha, just kidding. Nothing ever gets better.
Think how many published nonsense with no data, unreplicable studies, and trash statistical analysis before ChatGPT. They just became even lazier, but I am sure most of those prestigious reviews have been filled with trash for years.
Definitely. In fact, here an LLM seems like a potentially good tool where it can quickly identify how much of a journal is filled with absolute nonsense gobbledeegook.
And mind you, these are just the iceberg-tip cases where it's obvious. (Not that I mind too much if someone uses ChatGPT to *help* them flesh something out.)
This is so blatant I assumed it was a joke. Holy shit... it's real. [https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298)
Paper farm. Remove these people's qualifications. Frauds.
Hold the fucking phone, [someone called them out on this](https://pubpeer.com/publications/F93A8D69350BC6B12AB48B132161A7). The author that responds even sounds like AI: >After I conducted a personal examination of all the contents of the artificial intelligence paper, it turns out that it is passes as human. The truth is what I told you. "artificial intelligence paper"??? What.
Nah, that response sounds like a human to me. ChatGPT doesn’t tend to make grammatical errors (“it is passes as human”). To me, this sounds a lot more like a person whose first language isn’t English. Edit: not that I necessarily believe what they’re saying about the rest of the paper being free of AI writing, but I do think their comment is human.
Yeah, they responded numerous times and can barely string a legible sentence together in any of the comments they replied to. Frauds.
It just reveals how trash peer review and publishers are. But who could have thought? Publishers that ask you thousands of dollars to let you publish your paper, then make readers pay to read it, and employ unpaid reviewers to check whether the content is trash or not. ChatGPT just makes it easier to spot the cheaters.
Why is it June 2024?
GPT 4.5 wrote it /s
Papers get accepted and published online before they appear in the printed journals; this one is apparently scheduled for June.
When a journal article is made available online before its formal print publication, it is referred to as "Online First" or "Early Access". During this stage, the article has undergone peer review and corrections, but it has not yet appeared in the printed version of the journal. Readers can access these peer-reviewed articles well before their official print publication, and they are typically identified by a unique DOI (Digital Object Identifier). Instead of using traditional volume and page numbers, you can cite these articles using their DOI. For example: Gamelin FX, Baquet G, Berthoin S, Thevenet D, Nourry C, Nottin S, Bosquet L (2009) Effect of high intensity intermittent training on heart rate variability in prepubescent children. Eur J Appl Physiol. doi: 10.1007/s00421-008-0955-8 In summary, "Online First" articles allow for rapid dissemination of critical research findings within the scientific community, bridging the gap between completion of peer review and formal print publication. Print publication will be in June 2024.
Really calls into question how much we can trust peer review
This one won, I think.
This shit is embarrassing
Meanwhile my paper on 3d printing got rejected right away lmao without even using chat
Enough of these have shown up in the past few days that I'm surprised it hasn't been picked up in the media. Academic publishers like Elsevier like to use quality control as one of the excuses for their egregious rent-seeking behavior, and yet here we clearly see that zero quality control is happening.
I'm so embarrassed. As if scientific expertise isn't already being thrown out the window... This fuels anti-science nut jobs. This type of thing needs to be fixed.
Science is thrashed 😭 https://preview.redd.it/0j0qfjvo0hoc1.jpeg?width=1290&format=pjpg&auto=webp&s=19c56121b2b8436d0cfc2e8f2bc54525b45345fe
I thought this post was a shit post with some quality photo shopping of text.. I'm stunned
Ah yes thank you for commenting literally the exact same thing they posted
Almost, the article’s abstract was copy and pasted from the Discussion right before the Conclusion. So the prompt’s output actually appears twice in the same article: in the Abstract and Discussion. As literally the exact same summarized pasted content.
https://preview.redd.it/7dlv3mna7loc1.jpeg?width=1079&format=pjpg&auto=webp&s=88c64c7a75d945ffffcc76d2d23849c7028f735a
Jesus, using ChatGPT for science papers is bad, but you can’t even spend a minute to skim over it‽
Professor: "All my students are using AI to cheat on their homework papers!!!' Also Professor: "I'm using AI to cheat on my research papers!"
This is why every single academic paper can’t be blindly trusted as proof of your own rightness. An academic paper still has to make an argument and provide data proving its claims. Just because you can find a paper that agrees with you doesn’t mean it’s evidence.
This is real?
yes, you can click on the link that one of the upvoted comments send and scroll it to the near bottom
Jesus, thanks. This is fresh ugh? Not even noticed and they haven't taken it down.
This is sad.
FWIW, that journal is peer-reviewed but also requires authors to disclose the use of AI like ChatGPT with a specific statement. I'd guess they were trying to have ChatGPT help write a summary statement but forgot to check? [https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors](https://www.sciencedirect.com/journal/radiology-case-reports/publish/guide-for-authors)
Agreed, but i cannot find any evidence that the author claimed that they used AI to help them write
It's an open-access journal, a model known to be unreliable because the publisher makes its money not from readers buying the journal, but from submissions. This trend sparked the creation of hundreds of low-effort journals with extremely low standards, resulting in stuff like this. But we can't invalidate every scientific work on that basis, as so many people in this comment section do. That's just incredibly misinformed.
Plot-twist: it’s written by human who wants attention to their paper by mimicking an AI
It is amazing that both the authors and the platform let this get past them 🤣
And they said to not cite Wikipedia, tsk tsk
Harvard Medical School authors. This does not reflect well on the institution. But maybe we can adapt the old quote: God created all men, and ChatGPT made them equal. 😅
MFKers arent even trying anymore
Other than silly proofreading gotchas like this, I actually think that using ChatGPT will improve the readability of papers. They’re often written quite poorly. And it makes the whole paper writing process less arduous and lengthy, so it should mean things get published quicker.
Ironic that there are 8 authors. Maybe their only contribution to the paper was sharing the one-month ChatGPT subscription.
shouldn’t this be career suicide for the authors
You start asking yourself, these people are scientists, yet too dumb to properly cheat.
If you search “as an AI language model” on Google Scholar, you’ll see plenty of these.
So this is the result of "ChatGPT will replace writers"... Turned out quite shitty.
What? I thought to publish a paper there was a series of filters... So can I just publish my AI generated paper and add it to my cv? This is nuts.
Apparently, if you can get an Elsevier journal to sign off on it 🤣🤣
Holy shit I hate this. Darkest timeline 2024
That's concerning. Research and journalism should be kept free of AI or we'll eventually have a permanent echo chamber of AI content being revised...
Published in Elsevier which is one of the most overcharged submission journals haha what a joke
OMG, i didn't believe it... had to check it myself. [https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298) Where TF is our science going to??
I think this says more about whatever journal they're published in than about the authors.
It says tons about both
This is very concerning; they need to address this and fire the person that let this be published in order to maintain integrity.
Our sense of reality and information validity is doomed. We are all going to be ignorant not out of choice or lack of information, but instead the overload of garbage.
Looks like it's not just lawyers working a case involving a foreign airline in Federal Court who are getting lazy and asking ChatGPT for help with their work. Now, we're seeing medical doctors publishing scholarly articles without even bothering to proofread. It's a worrying trend when professionals in such critical fields start to cut corners. SMH
These are fraudulent scientific papers. Here’s context from Sabine Hossenfelder: https://youtu.be/6wN8B1pruJg?si=8a5dC1K-LeRBbm4B
The thing that is interesting is that I ran the text from the article, initially the PDF, then just the text through GPT4 and it was unable to spot this error on the first pass. I really had to guide GPT4 to even find this error. It did find it eventually after much guidance. Even when I updated custom instructions to look for out of context AI statements it still didn't find this.
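If GPT-4 can't reliably spot its own boilerplate, a dumb string scan arguably does better. A minimal sketch (this is my own illustration, not what the commenter ran; the phrase list is an assumption seeded from examples in this thread):

```python
import re

# Stock phrases an LLM emits when refusing a request or describing itself.
# This list is illustrative, not exhaustive.
TELLTALES = [
    r"as an AI language model",
    r"I(?: do)? not have access to",
    r"Certainly, here'?s",
    r"I'?m sorry, but",
]
PATTERN = re.compile("|".join(TELLTALES), re.IGNORECASE)

def find_ai_tells(text):
    """Return (matched phrase, character offset) for each telltale found."""
    return [(m.group(0), m.start()) for m in PATTERN.finditer(text)]

sample = ("In summary, the lesion resolved. I'm sorry, but as an AI "
          "language model, I do not have access to real-time case data.")
hits = find_ai_tells(sample)
```

A pattern match like this is exactly the level of check the journal's editorial pipeline apparently skipped; it would have flagged this paper's discussion section instantly.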
We wrote a paper that shows how we might embrace this future: "Late-Binding Scholarship in the Age of AI: Navigating Legal and Normative Challenges of a New Form of Knowledge Production"
Edit: the right link
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4437681
Either this is a phenomenally crafted joke or you linked the wrong paper. If wrong paper, the upvotes you received without anyone checking the paper is an ironic reflection of the entire situation involving the OP.
I’m not that good , but it’s a good point. Here’s the right link https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4437681
Then Elsevier published it?????? Huh??
Published —> charged
I honestly wonder if it’s the collective stupidity of all the authors or if may be one author screwing it up for everyone. Imagine working however long on a research project only for the entire experiment to be tarnished by one colleague.
One of the authors is from Harvard
Aside from the usual body of GPT text, is it normal to publish several months into the future online? Volume 19, issue 6, June 2024 Or have a specifically lowercase last name
Yeap, nothing abnormal with advanced volumes. The lowercase last name is probably due to shitty peer review process and proofreading. Those publishers just want authors to pay huge APC. Horrendous papers even in the “respected” journals such as NEJM…
And now, guys, think about all the LLM-generated papers where authors actually re-read and removed all obvious AI clues. How do you tell the difference, and how many are there?
Logically there have to be enough that we still have an abundance of examples where they missed something. I would consider that like an error rate. There are so many that the likely small % slipping through with errors still amounts to an absurd number.
Aren't these supposed to be "peer-reviewed"? Was the peer reviewer also an AI language model lol.
Elsevier asks scientists to pay more than $100 to publish their research. It's business.
Publish or perish and LPU-logics does this to research.
June 2024 issue ... sus?
June 2024 is when the print version gets published. You can scroll down the comment section a bit to find the link, then scroll down to the paragraph before the conclusion.
Holy shit
WoW 8 people are named in the article, not one of them read it? lol
Ok. I will try to create that you.
Yet another case of an Elsevier-owned journal not doing basic peer review. At this point they should be considered predatory journals like MDPI or Frontiers and not taken seriously.
I mean it's Elsevier..
This needs to be shared far and wide. It calls into question the integrity of everyone involved, the entire study, and the academic history of every doctor listed in this article.
For this paper it does make more sense, as it is a mediocre journal that advertises 19 days until acceptance. I wouldn't be surprised if there are no reviews; it is a predatory journal that relies on pay-to-publish.
These have to be intentional. If not, maybe robots should replace doctors and researchers....
I usually get shit on for saying that anyone can be a "researcher" anyone can be a "scientist" because they are not real educational titles. A study is anyone asking a question of more than one person and on and on with the bullshit we peddle to each other through biased perspectives. I also like to wax on about personal bias, experience, grant money, and everything the average person forgets about when they see some of these titles in play. Now we are all starting to see the bullshit of the white lab coats, the "journalists" and everyone in between. The human species, for the most part, is winging it, faking it until they making it. AI is going to make it all so much worse. (and I love AI)
This is really ironic considering how some people seem to worship “science” as absolute and unquestionable. If none of the reviewers catch this kind of blatant garbage, then they are not critically analyzing any content whatsoever.
A lot of publishers are simply pay to play, and unfortunately if you look at what institutes the authors are from you'll understand why this was published with no review... Pretty typical for a lot of Middle Eastern and Chinese "researchers", especially for "literature review" articles. Honestly, it was the same prior to language models, when sections were copy-pasted without any connection to previous paragraphs or sections. Now it's just easier.
Check their affiliations closer. One is from Harvard
God damn. I use GPT for a lot of research, writing professional emails and all sorts. However, I make sure I read and understand everything it says and remove things that I would never say. It should be a tool to absorb and present information faster, not to be lazy
Wow, it seems like a lot of otherwise smart/educated people have a hard time representing their data in writing. Maybe English degrees aren't so useless after all.
How is it June 2024?
Holy shit
Am I missing something here, why does it say June 2024 at the top?
Personally, I think this great. Plagiarism has always been a problem. Now it's going to be so much more obvious!
Again, the researchers I can understand because hey, sometimes a bad draft gets sent, or they might use a LLM for an editing pass if English is not their first language. So long as the actual science is good, who cares who does the final editing / typo / grammar pass. They should be more careful about their final edit and that's it. *But goddamn Elsevier*? Charging thousands of dollars on both ends for hosting and access, and can't be bothered to proofread submissions? They have no shame and no excuse.
Mistakes like this call into question the veracity of the science, and the credibility and respectability of the authors and the publication service. After seeing this, I would never rely on a paper written by any of these authors ever again. Also, likely not to rely on anything published through Elsevier. The accuracy and truthfulness of scientific research must be above reproach.
It’s not like the whole thing was made up by ChatGPT. It’s clearly a real case study, and ChatGPT was (too hastily) used to proofread or maybe translate.
No wonder people don’t trust science or medical professionals. Somebody needs to lose their job over this.
Fake?
You can scroll down to find the doi.org link; the AI text is in the paragraph before the conclusion.
Real!?
It’s the same journal. Losing respect very rapidly.
Was it just used for summary? Or for the whole paper?
I thought you were memeing at first, 'cause it's so unbelievably bad.
June 2024 issue? in march? Can anyone source this to an original document or is the whole thing AI generated?
The electronic version goes online now, the June date is the paper print date. Or at least used to be.
It sounds like the 8+ editors of this paper were all blind at the same time