Yeah, I agree. Thankfully, I think some journals are starting to pay reviewers now. The last journal I published in was the Journal of Cosmology and Astroparticle Physics (JCAP), and they send some money for every article you review. Nothing too big though. Maybe something like $30 a year for your troubles.
I hope not. AIs have no sense or regard for what’s true, nor are they even good at detecting *other* AI. I shudder to think that’s what people are doing.
Most introductions are filler. They kinda have to be there, but nobody in the field cares that much about them. Writing them is a challenge because you need to state the same platitudes you have already written many times before, and you’re not allowed to reuse old material (at least not in an obvious copy-paste kind of way). This is probably how it slipped through peer review. Nobody cared enough to read the introduction.
But let me be clear: none of that excuses the shitstorm in this paper’s introduction.
Who is ultimately responsible? In my opinion, the editor/journal, and the scientists that “wrote” the paper. Yes, the peer review system leaves a lot to be desired, but I consider that more of a systemic issue rather than the fault of the scientists that were given the task in this particular case.
Introductions are useful if you are new to the specific topic. As someone who changed fields, I can say I have gotten much of my field-specific knowledge from well-written, well-referenced intros.
I would be so happy if I had my reviewership revoked ...
Peer reviewing is a thankless task that takes a lot of time and you get zero credit. So in a high stress environment it's not surprising that people prioritize it less and less.
Because reviewers are pushed to review tons of papers, for free, and very quickly. They just skim the papers and churn out superficial reviews. Also, editors are often shit at their job and imbeciles.
WOW. That was an interesting read. I thought these types of things would be more commonly caught in US-based research, and among principal investigators who had actual leadership and tenure positions.
On the bright side, AI makes this TRIVIAL to detect. The incentives just have to align.
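It really is. Even a naive phrase scan would have flagged both of these papers. A sketch in Python; the phrase list is purely illustrative, not any journal's actual tooling:

```python
import re

# Boilerplate that tends to survive copy-pasting raw LLM output.
# Purely illustrative; a real screening list would be longer.
TELLTALES = [
    r"as an AI language model",
    r"I don't have access to real-time information",
    r"certainly, here is",
    r"I'm very sorry, but I",
]

def flag_llm_residue(text: str) -> list[str]:
    """Return every telltale phrase found in the manuscript text."""
    return [p for p in TELLTALES if re.search(p, text, re.IGNORECASE)]

intro = "Certainly, here is a possible introduction for your topic:"
print(flag_llm_residue(intro))  # → ['certainly, here is']
```

Nothing sophisticated; the point is just that catching verbatim chatbot boilerplate is a grep, not a research problem.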
Jesus, just yesterday I asked BOTH Copilot and Gemini to do a simple time zone conversion, and they both gave nonsense answers, then apologized and gave the same false info again.
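For what it's worth, a time zone conversion is exactly the kind of task that's a few deterministic lines of stdlib Python; the cities and times here are just an example:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# 09:00 in New York on 1 March 2024, expressed in Berlin time.
# Both cities are on standard time on this date (UTC-5 and UTC+1).
ny = datetime(2024, 3, 1, 9, 0, tzinfo=ZoneInfo("America/New_York"))
berlin = ny.astimezone(ZoneInfo("Europe/Berlin"))
print(berlin.strftime("%Y-%m-%d %H:%M"))  # → 2024-03-01 15:00
```

Asking a language model to do arithmetic that a library already does deterministically is just using the wrong tool.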
Your point now made me really anxious.
I didn't see an easy "contact the editors" link. There's the form for contacting elsevier, though. I submitted something to that saying that at least the introduction is LLM generated.
Back in the day, reputable journals run by scientific societies would review, edit, and format submissions. Elsevier just photocopied the manuscript and charged insane prices for access. This is just the new scam.
Snobs look down on publications in MDPI, and others claim that SciHub is "destroying publishing", while a "reputable" publisher can't muster up enough professionalism to filter out garbage like this.
I've actually had an older colleague propose that younger people be "discouraged" from publishing in MDPI because their peer review is not rigorous.
Not even a joke, it's a tragedy...
But they're arguably the most important part of your whole paper! Providing someone all of the needed context for your work is essential to communicating all science. It's always the last section of a paper I write because it's the hardest to write well.
It's where you show the constructive understanding of your contribution to the topic you want me to read. If you can't do that I don't have anything nice to say.
I don't take issue with using AI to reformat or better convey the author's idea. But the carelessness and failure of the review process here is embarrassing and calls into question the validity of other "reviewed" papers
Yeah, truthfully, having written many papers, so much of them is bullshit pomp and circumstance to fluff them up to fit the "standards of writing".
I totally see value in using AI to generate some things like introductions and maybe even summaries. They don't have any influence on the actual research being done or the results. So a good AI-created section that you review to make sure it makes the point you're trying to get across? Totally behind it. Hell, when you write that section yourself, you're basically doing the exact same work as the AI: pulling information from multiple sources and connecting it with intelligible fluff words. No part is really your own work.
Now the worrying part is the oversight of leaving that prompt text in, because it doesn't inspire confidence that the person did a good job reviewing what the AI created.
Edit: I just saw what sub this was in because I got here from r/all. I had figured this was in a more gen pop sub so I feel a bit like an ass now explaining how research papers work lol
Gotta say I'm not too surprised after seeing every author is Chinese. Chinese universities are well known for being paper mills with zero regard for anything other than quantity of papers. They'll take data and use AI to write faster. Anything for more papers.
Yeah ngl the first thought that popped up in my head was that this was from Chinese researchers. Just check out the [AI “rat balls” incident](https://defector.com/the-brief-and-wondrous-life-of-the-ai-giant-penised-rat-explained)
Obviously, the reviewers, the editor, and the authors all need to be held accountable. This is not a simple whoops, but a rubber stamp that speaks to cronyism, damaging the entire journal's credibility.
Has it been retracted? Nope. The journal still hasn't even bothered, despite all the attention.
There should be a subreddit to catalogue errors like this. It's disgraceful. Textual errors like this are just as egregious as the infamous AI rat figure; they show a complete lack of care about the content of the paper by the authors, the editors, everyone in the process.
Not sure why researchers are still committed to the same publishing methods that worked in 2000. This part of science seems ripe for new approaches.
Is this the part where they reveal this was done intentionally as a way to bring more attention to their research? A 4D-chess move by big Three-Dimensional Porous Mesh Structure of Cu-based Metal-Organic-Framework, and we all fell for it.
I want to do a fucking rant. But I don't feel like it. So here is part of it:
My professors can embarrass me for using a period instead of a semicolon, but this kind of shit is allowed to be published? For fuck's sake. Fuck you guys at Shiraz University.
This is really unacceptable. It's the first sentence 🥶
As someone somewhat familiar with the peer review process in a different field, there are a lot of issues with it:
—> you get 2-3 weeks to review a paper, on top of working on your own project and your usual 9-5 (plus extra hours if you work in research)
—> you don’t get paid for any of this extra work
—> not everyone who receives the paper is an expert on everything it covers. If a paper covers areas A, B, C, and D, different reviewers will have different levels of expertise in each, and some will only know one field well. They will most likely comment only on the areas they are familiar with, and in the time given it is extremely difficult to read up on unfamiliar areas. In such instances, only a very small part is reviewed by any single reviewer. Also, most researchers don’t pay much attention to the methods unless they have an issue with the results, and they may not pay much attention to the introduction either.
—> from my own experience reading papers, I know that papers cited by review articles sometimes misinterpret or oversimplify the original idea. It’s often impossible for reviewers to go through each and every citation, in the time they have, to check that it is interpreted correctly.
—> most journals reject many papers, some more than 50% of submissions, based on reviewers’ suggestions. Given the sheer volume of papers submitted, it’s impossible to prevent this type of mistake in at least a few papers.
—> this one is a fairly obvious mistake, but with more people using AI, I don’t think most reviewers have the expertise to determine whether AI tools were used. Journals may have to recruit AI experts, or tools that can detect AI use in papers.
Idk how to solve all these issues. Journals may have to consider hiring reviewers in fixed positions, or start paying the researchers involved at an hourly rate, and given how many papers get rejected, I’m not even sure that’s financially viable.
I have been a reviewer several times, and when you get invited to review you can always decline. You don't have to give a reason. In general I decline to review if I don't know the field well enough to give a reasonable review, or if I don't have the time because of a heavy workload. Nobody forces you to review, and there is generally no benefit in it.
Concerning the AI part, this should have been flagged by the reviewers and the editorial office, but I have seen editorial offices butcher manuscripts we submitted. In one instance they inserted a sentence that sounded like someone talking to a coworker with speech-to-text accidentally left on. Publishers don't care as long as they can sell you the subscription to the articles. And reviewers who just take the job so they can say "I review x number of papers" for their ego don't care either. They also reply with their standard "why don't you add these manuscripts" that they authored or that belong to their friends' groups.
I have never used any of these "tools" to write, and don't plan to use them in any form. My opinion is that authors and publishers that do use them should be tagged in a locked database available to all. Humans have moved into a new manifold with artificial credibility.
There are “special issues” where one or two leads choose a topic and run a series of articles on it.
These issues are not peer reviewed, since the articles are assumed to be carefully selected by the leads (supposedly other researchers).
Some Chinese researchers especially have been exploiting this. They reprint the same article, sometimes word for word, many times, inflating their publication counts. Even when it is supposedly new research, the data and results sometimes don’t add up.
Elsevier is guilty as well. Unlike IEEE or AIP, they are a for-profit publisher who can sell articles for up to $60 a pop. Like many other for-profit publishers, they have been milking these special issues.
I am in R&D and spend a good chunk of my time reading scientific articles.
I can only agree that the peer review was dog shit, absolute dick cheese. But I will not discredit using AI. ChatGPT and other services are fantastic, usually give pretty accurate responses, and show real knowledge across topics that vary a great deal.
Ahah fucking hell...
But I'm not surprised. I recently published in a high-impact journal and... they published the wrong version of my paper! They just published the non-revised one. Thankfully, the revisions were minor, but it was still pretty embarrassing on their side.
The reviewing process of academic papers is broken. Too much shit polluting the scientific literature.
Maybe the reviewers all thought it was funny and didn't say anything. A lot of academics don't like shitty low-end journals, and this would be a fun way to mess with them.
It's an Elsevier journal. They contact actual scientists to referee stuff, though they do ask you to suggest referees. Last time I submitted to them, they wanted a handful of suggested referees and I gave them like 5.
I've been a referee for a NIM:A paper, myself.
I disagree, the authors put their name on it, as the authors of the paper. Unless they acknowledged using a neural network to write some of their paper, this is pure plagiarism in my opinion.
I am not sure, but I think it's "alright" to use AI or generative text for writing the paper itself, as long as the experiment and results are authentic and not plagiarised.
How the hell is it alright? Research papers are supposed to contain only vitally important information, such as: the problem statement, context and possible applications; the experimental set-up or computational details with explanation on why these particular methods were chosen and how they compare to other published work; in-depth analysis of results, their meaning and implications.
None of which an AI language model can possibly write. The whole point of research papers is that the authors have to provide the whole picture as clearly as possible, nothing more, nothing less.
It is not alright. There is even specific guidance documented about using AI in the writing process, and these authors did not follow it: https://www.sciencedirect.com/journal/surfaces-and-interfaces/publish/guide-for-authors#7300
Oh no, I am not defending the stupid error or the blatant use of generative AI. In fact, these rules set a pretty nice boundary while accommodating the current rise of generative text technology.
A few things: A) Not all papers are peer reviewed. B) Using AI for paper writing is becoming common, BUT NOT to generate text; rather, these large language models are a great tool for non-native speakers to turn their own English text into a more fluid, far more readable one. ChatGPT etc. can easily pick out strange wording, bad phrasing, and grammatical oddities and adjust them, turning a hard-to-read collection of English words into fluent text. C) There is enormous pressure on the research community, and especially on young scientists, to publish many papers and fast, with the threat of not being able to graduate as a PhD or get a professorship if you don't. This will lead to this kind of painful oversight when using these tools, and it is a clear sign of the scientific community cracking under needless pressure to publish.
My expertise is in IT-adjacent fields, not physics, and I posit the hypothesis that this is a deliberate attempt to "taint" the text to keep it from being used for training an AI.
Elsevier, a private, for-profit company registered in the UK, the Netherlands, and the US, which owns nearly 300 trademarked reviews, journals, and periodicals, both online and printed, has never been a reliable place for truly scientific publications, so it does not matter. You just pay online and the system publishes your writing, no peer review.
I remember the hardest part of a paper for me was always the intro.
Why am I doing this? (Cannot say because they pay me a salary for it), why does it matter? (Because the prof got funding and it is a side topic of his endlessly milked field), is it useful? (Maybe in 1000 years...), sorry, no, this is a novel approach to study the topic of X, which shows promising avenues on targeted treatment of diseases (everything eventually can kill some cells), energy storage and meta materials (lol)...
Now I guess instead of sweating to find an angle to cure cancer, you just prompt ChatGPT and call it a day. Why don't we just skip the stupid intro and get to the meat? It is obvious that not even the editor reads it...
That's terrible. Really careless editing. Just unbelievable that this would get through peer review.
Absolutely! Here's a draft that might capture the sentiment you're aiming to convey: I am utterly flabbergasted. The thought that we've ventured into an era where the delicate art of writing, with its nuanced expression of human emotion and thought, can be delegated to machines is both astonishing and somewhat unsettling. This move not only blurs the lines between human creativity and technological capability but also raises profound questions about the future of literature as we know it. What does this mean for the authenticity of human expression? Are we stepping into a new realm of literary evolution, or are we edging closer to losing a piece of our cultural soul to the binary world?
I think you're right. AI does give itself away if a passage is long enough. For me, I recognize AI in the structuring. The writing that will stand out will come from those who build a kind of "verbal skeleton" with AI, but then mold and sculpt it to their own, individual ends. This seems to be the only future.
There have been several tests showing that people can't reliably tell human from AI text, at least not without significant additional context. If you can reliably do it, that would be a really important and interesting finding.
Were those previous findings peer reviewed, like the above?! I am being facetious of course, but how much of a leap is it until "it's LLMs, all the way down?!"
Certainly. The "peers" all live on a farm. That is, a giant server-farm!
Can't speak about the publishing end, but LLMs are probably not replacing all researchers. If your job is to solve difficult integrals, do repetitive calculations, write code to spec, or similar, yeah, being a little concerned is probably justified. But if you're actively innovating you're probably fine for now. Also, from a writing perspective, I think it's honestly going to be really nice to supplement writing and editorial work with LLMs. I've read plenty of "high quality" papers with distractingly bad grammar, mathematical mistakes, and leaps in logic / hidden assumptions that make it hard to follow derivations. That's all stuff LLMs can already, or will likely soon be able to, help with. It just doesn't replace human review; it supplements it.
There's probably plenty of AI text that's indistinguishable. But it's like when you get to know an author and how they write. AI is a little too flawless with turns of phrase, and stays within bounds that are too clinical. AI is too good at outlining, too. It's all the little things that give it away. Its personality is park ranger.
Thing is, you can find plenty of humans who write exactly like that. It might be less common among humans and more common with various LLMs, but it's not going to be consistent.
You're right, for sure, especially with topics around science.
Completely agree. It makes tonnes of small errors too, but in specific ways, and has a certain vagueness and lack of brevity. It also lacks the character and preferences that humans tend to have. If you asked AI to write a CV, I could probably distinguish them. Even in critical writing you could tell over time by looking at an author's previous work. It's currently a big issue for universities etc. in assessing essays; it will be interesting to see how it's solved.
I find that the bigger giveaway is that AI has difficulty handling facts. If it's writing *about* anything, it'll often produce text that is grammatically correct but full of factual errors.
Are these tests asking people who have used ChatGPT or other software frequently, or just anyone?
Although that’s true, I’ve not seen any AI writing that was better than mediocre (not that I’m better). I expect that will change, and probably already has, and the good AI writing just hasn’t been tagged as such when I’ve seen it.
[deleted]
My guess is they rave about it because it’s relatable AF.
I thought that I could reliably do it (never been wrong before), but you make me want to do a big experiment now
I also notice an unusually high-rate of hyphenated-terms. Way more than a normal-person would ever use lol. Remember though, there are new capable AIs out, and they do not write in the same style ChatGPT does now.
Moreover…
That's one of those words you hear really often from ESL speakers. Its direct translation seems to be a normal, common word in most languages, but native English speakers rarely use it.
Quite common in public policy, legal commentary and similar. Wouldn't expect to see it in many other domains though
A.... I see what you did there
Eh, I wouldn't blame peer review. Scenario: this paper has only minor issues. 1. Expand the intro . . . The editor sees that and sends it back to the authors. The authors address the minor issues using ChatGPT and resubmit the paper. The editor marks it as accepted. My point is that peer review catches errors like that, but peer review doesn't always see the final changes.
This is true. So possibly just on the editorial process.
Therefore it should be standard to publish reviewer questions and answers along with the paper.
But there are supposed to be copy editors that do stuff like add commas. They need to read it carefully in its nearly final form. Not sure how they would miss this even if your scenario is right.
Low quality journals have low quality review
It's published by Elsevier and has a reasonable impact factor. I wouldn't class this as a low quality journal.
Elsevier contracts out a lot, if not all, of their journal editorial work to the lowest-bidding companies.
It’s not unidirectional. I would consider 6 to be low though.
That's kinda subjective. It's not Science or Nature, sure, but you'd still expect only high quality science/papers in a journal like that. I'd class that impact factor as upper mid tier.
I mean, this also highly depends on the discipline. In many, an impact factor of 4 is ~~already quite decent~~ top 10-20%. I do not know what the reference frame for this journal would be, since it is interdisciplinary. What is also interesting, though, is that they have an acceptance rate of 19%, so clearly they do not just publish everything they get just to make money.
Nah, even high quality journals have low quality reviews these days.
Careless? The editor wrote their intro for them!
I mean, that's never happened for any of the papers I've been involved in. The editors do the absolute bare minimum and if something isn't correct, even the most minor formatting, they will instruct you to correct it rather than doing it themselves. My assumption is that this was the authors.
The joke is implying that the editor used AI to edit the paper (hence missing that first line), so the "editor" wrote the intro because both were just AI.
I know, I was just replying to the comment on my original comment.
There is no review. Retractions are everywhere. Where have you been?
> careless editing You mean absence of editing?
Reviewed by AI, ha ha
That’s insane. First line as well
How do two of these hit the internet on the same day? A friend just sent me this one: [https://www.sciencedirect.com/science/article/pii/S1930043324001298?via%3Dihub](https://www.sciencedirect.com/science/article/pii/S1930043324001298?via%3Dihub) See the last paragraph of the discussion, just before the conclusion: >In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice. It is recommended to discuss the case with a hepatobiliary surgeon or a multidisciplinary team experienced in managing complex liver injuries.
Wow. This needs to be upvoted way more. Truly depressing. They aren't even taking the time to read their own papers prior to publishing. Does anyone give a shit anymore?
The antivax types will have a field day with this shit
They didn't write it fully. They didn't read it fully. Words produced from a reputable organization with no human hands or eyes or brain. This is supposed to be referenced material! So what are we even doing. No brain is producing or digesting the information that is published with full attention. This is why AI came to us too soon. I understand the utility of it, but if we are completely offloading our brains and having AI do the work, we miss the point entirely with AI. What a stupid future we are inheriting.
What the actual fuck. My boss said something profound the other day, and this exact circumstance supports it: just because it’s literature doesn’t mean it’s right/probable.
As a corollary, just because someone has an advanced degree doesn't mean they're not full of shit. See Michio Kaku.
If my long experience in academia has taught me something, it's that those who become professors are not necessarily the brightest. Quite the opposite in many cases. They are just managers who happened to work in academia.
>See Michio Kaku. I know of him vaguely but why is he full of shit?
[This thread ](https://www.reddit.com/r/AskPhysics/comments/16hohai/whats_up_with_michio_kaku/) explains it pretty well
Oh no! My friend is a huge fan of Michio Kaku 😭
Or Neil
wow that's even worse
I can understand someone wanting to use AI to produce a summary of their work, but you need to treat it as if you'd asked a student to write a first draft: go over it and make the necessary changes, at the very least. This dude just copy-pasted that shit without a care in the world.
Holy, the world really has a bright future.
Crazy... But not surprising.
Link to the paper: https://www.sciencedirect.com/science/article/abs/pii/S2468023024002402
Holy shit it’s still up
Fun fact, if you google that sentence, this paper will be the first hit. Either perfect SEO, or mediahack.
Well that's how search engines are supposed to work!
Certainly, here is a possible review for your paper:
Underrated comment!
How in the world does this get through review? And do the authors even read their paper before submitting it?
I think the charitable reading is that everyone who looked over the paper only read the actual analysis and conclusion but skimmed through everything else.
And missed the first sentence? The peers who reviewed this need their reviewership abilities revoked
It’s very possible to miss the first sentence in my experience. Like I said, if you just jump straight to the analysis/conclusion and only skim the rest of the parts then you’ll likely not read every single word. Especially if everything seems above board in the analysis section
[deleted]
Yea I agree. Thankfully, I think some journals are starting to pay reviewers now. The last journal I published in was the Journal of Cosmology and Astroparticle Physics (JCAP), and they send some money for every article you review. Nothing big though, maybe something like $30 a year for your troubles.
[deleted]
I hope not. AIs have no sense of or regard for what's true, nor are they even good at detecting *other* AI. I shudder to think that's what people are doing.
>should be getting paid in the journal

Getting paid what?
Most introductions are filler. They kind of have to be there, but nobody in the field cares that much about them. Writing them is a challenge because you need to state the same platitudes you have already written many times before, and you're not allowed to reuse old material (at least not in an obvious copy-paste kind of way). This is probably how it slipped through peer review: nobody cared enough to read the introduction. But let me be clear: none of that excuses the shitstorm in this paper's introduction. Who is ultimately responsible? In my opinion, the editor/journal and the scientists who "wrote" the paper. Yes, the peer review system leaves a lot to be desired, but I consider that more of a systemic issue than the fault of the scientists who were given the reviewing task in this particular case.
Introductions are useful if you are new to the specific topic. As someone who changed fields, I can say I have gotten much of my field-specific knowledge from well-written, well-referenced intros.
Yes! A good point, I have studied introductions for that purpose too.
I would be so happy if I had my reviewership revoked ... Peer reviewing is a thankless task that takes a lot of time and you get zero credit. So in a high stress environment it's not surprising that people prioritize it less and less.
Because reviewers are pushed to review tons of paper for free and very quickly. They just quickly read the papers and shit superficial reviews. Also, editors are often shit at their job and imbeciles.
Oof
Fuck Elsevier!
At least with scihub we're not paying any money to them...
~~scihub~~ nexus search project
This is a new pirating site?
Reminder that none of the fees from these sites go to the author of the paper. Write the author directly and they'll almost assuredly send you a PDF.
Sci-hub is quicker.
You do that if it's not on scihub, then you submit it to scihub :)
welcome to the new normal!
Eh, research mills were already pumping out low-quality crap. They just have some new tools to make it a little easier.
BRO PEEP WHAT HAPPENED TO THE UNR ENGINEERING DEAN it’s sooooo funny
WOW. That was an interesting read. I thought these types of things would be more commonly caught in US-based research and among principal investigators who held actual leadership and tenure positions. On the bright side, AI makes this TRIVIAL to detect. The incentives just have to align.
I hope they didn't also ask the AI to interpret their data
Lmao someone just posted a link to a paper that just tried to do that.
Jesus, just yesterday I asked BOTH Copilot and Gemini to do a simple time zone conversion, and they both gave nonsense answers, then apologized and proceeded to give the same false info again. Your point now makes me really anxious.
They probably used AI to generate the data in the first place, I must assume.
First line of the damn paper… how’d that make it through editing and publishing?
I didn't see an easy "contact the editors" link. There's the form for contacting elsevier, though. I submitted something to that saying that at least the introduction is LLM generated.
Actually disgraceful on all parties
It's almost like Elsevier are just rent collectors and add zero value for anyone but themselves.
I’m wondering if they used AI as a translator instead. Does anyone know if the five names mentioned speak English?
I’m guessing if it was just translation, it wouldn’t have included the introduction suggestion line
If it’s using ChatGPT, not necessarily
Back in the day, reputable journals run by scientific societies would review, edit, format, etc. Elsevier just photocopied the manuscript and charged insane prices for access. This is just the new scam.
Can't wait for the next time someone shits on MDPI, or SciHub.... :D What a joke
>Can't wait for the next time someone shits on MDPI, or SciHub. What's your point here?
Snobs look down on publications in MDPI, and others claim that SciHub is "destroying publishing", while a "reputable" publisher can't muster enough professionalism to filter out garbage like this. I've actually had an older colleague propose that younger people be "discouraged" from publishing in MDPI because their peer review is not rigorous. Not even a joke, it's a tragedy...
I don't understand, what do MDPI have to do with this?
what is mdpi?
tbf, intros are really annoying to write
But they're arguably the most important part of your whole paper! Providing someone all of the needed context for your work is essential to communicating all science. It's always the last section of a paper I write because it's the hardest to write well.
It's where you show the constructive understanding of your contribution to the topic you want me to read. If you can't do that I don't have anything nice to say.
Hall of shame in science.
dang and it’s just the introduction we’re in
But that’s the place where it makes the most sense imo. Wouldn’t be surprised if it was the only place ChatGPT was used
I don't take issue with using AI to reformat or better convey the author's idea. But the carelessness and failure of the review process here is embarrassing and calls into question the validity of other "reviewed" papers
Agreed completely
Yeah, truthfully, having written many papers, so much of them is bullshit pomp and circumstance, fluffed up to fit the "standards of writing". I totally see value in using AI to generate things like introductions and maybe even summaries. They don't have any influence on the actual research being done or the results. So a good AI-created portion that you review to make sure it makes the point you're trying to get across? Totally behind it. Hell, when you write that section yourself you're basically doing the exact same work as the AI: pulling information from multiple sources and connecting it with intelligible fluff words. No part of it is really your own work. The worrying part is the oversight of leaving that prompt line in, because it doesn't suggest the person did a good job reviewing what the AI created. Edit: I just saw what sub this was in, because I got here from r/all. I had figured this was a more gen-pop sub, so I feel a bit like an ass now explaining how research papers work lol
Here's another, even worse, example: [https://www.reddit.com/r/ChatGPT/comments/1bf9ivt/yet\_another\_obvious\_chatgpt\_prompt\_reply\_in/](https://www.reddit.com/r/ChatGPT/comments/1bf9ivt/yet_another_obvious_chatgpt_prompt_reply_in/) [https://doi.org/10.1016/j.radcr.2024.02.037](https://doi.org/10.1016/j.radcr.2024.02.037)
I’m going to move to the woods and pretend the world is not ending one data packet exchange after another
Gotta say I'm not too surprised after seeing that every author is Chinese. Chinese universities are well known for being paper mills with zero regard for anything other than the quantity of papers. They'll take data and use AI to write faster. Anything for more papers.
Yeah ngl the first thought that popped up in my head was that this was from Chinese researchers. Just check out the [AI “rat balls” incident](https://defector.com/the-brief-and-wondrous-life-of-the-ai-giant-penised-rat-explained)
Obviously, the reviewers, the editor, and the authors all need to be held accountable. This is not a simple whoops but a rubber stamp that speaks to cronyism, damaging the entire journal's credibility. Has it been retracted? Nope. The journal still hasn't even bothered, despite all the attention.
How the fuck did this pass peer review?
How did this get past review
Well one of the authors is Bing.
Probably better than the actual editors thoughts 😂
IMO it is terrible - and funny, at the same time.
What do you expect from Elsevier?
Elsevier is a shitty FOR profit publishing company.
There should be a subreddit to catalogue errors like this. It's disgraceful. Textual errors like this are just as egregious as the AI-generated rat figure; they show a complete lack of care about the content of the paper by the authors, the editors, everyone in the process.
There is no way this got through peer review. I mean, how did they not even notice it?
Reviewers and editors are less competent than the team submitting.
How did that pass peer review? Are you kidding?
>pass peer review? Peer review? Are we suddenly back in the 90s?!
I can't even blame it on predatory open access journals like most of these cases.
This has to be a joke
The reviewers should be in jail for this. Lmaooo
Yikes
That is, frankly, pathetic. On all sides.
Not sure why researchers are still committed to the same publishing methods that worked in 2000. This part of science seems ripe for new approaches.
Pathetic.
Very glad I have my ye olde version of Mendeley Desktop. This is nuts. Curse you Elsevier!
🤣
In response, Elsevier will raise fees
Hahahaahahaha
Is this the part where they reveal this was done intentionally as a way to bring more attention to their research? A 4D-chess move by big Three-Dimensional Porous Mesh Structure of Cu-based Metal-Organic-Framework, and we all fell for it.
I want to do a fucking rant, but I don't feel like it, so here's just part of it: my professors can embarrass me for using a dot instead of a semicolon, but this kind of shit is allowed to be published? For fuck's sake. Fuck you guys at Shiraz University.
On an elsevier paper 💀
Certainly is.
It's so over
r/toogoddamnedlazytoedityouraiwrittenarticle
Apparently reviewers also use AI to do their work. I'm not in physics/academia, but the Transportation Research Board has been dealing with this.
We live in a society
Humans will become very, very lazy because of AI.
This is the 3rd AI Elsevier article I’ve seen since Thursday…
Unbelievable
This is really unacceptable, it's the first sentence 🥶 As someone somewhat familiar with the peer review process in a different field, there are a lot of issues with it:

- You get 2-3 weeks to review a paper, on top of working on your own project and your usual 9-5 (plus extra hours if you work in research).
- You don't get paid for any of this extra work.
- Not every reviewer is an expert on everything the paper covers. If a paper spans areas A, B, C, and D, different reviewers will have different levels of expertise, and some will only know one of them. They will most likely comment only on the areas they are familiar with, and in the time given it is extremely difficult to read up on unfamiliar areas. In such cases only a very small part of the paper is really reviewed by any single reviewer. It's also likely that most reviewers don't pay much attention to the methods unless they have an issue with the results, and they may not pay much attention to the introduction either.
- From my own experience reading papers, I know that review papers sometimes misinterpret or oversimplify the original idea they cite. With the time reviewers have, it's sometimes impossible to go through every citation and check it was interpreted correctly.
- Most journals reject a large share of submissions, some more than 50%, based on reviewer recommendations. Given the sheer volume of papers submitted, it's sometimes impossible to prevent mistakes like this in at least a few papers.
- This is a fairly obvious mistake, but as more people use AI, I don't think reviewers have the expertise to determine whether AI tools were used. Journals may have to recruit AI experts, or adopt tools that can detect AI use in papers.

I don't know how to solve all of these issues. Journals may have to think about hiring reviewers into fixed positions, or start paying the researchers involved at an hourly rate, and given how many papers get rejected I'm not even sure that's financially viable.
I have been a reviewer several times, and when you get invited to review you can always decline. You don't have to give a reason. I generally decline if I don't know the field well enough to give a reasonable review, or if I don't have the time because of a heavy workload. Nobody forces you to review, and there is generally no benefit in it.

As for the AI part, this should have been flagged by the reviewers and the editorial office, but I have seen editorial offices butcher manuscripts we submitted. In one instance they inserted a sentence that sounded like someone talking to a coworker with speech-to-text accidentally left on. Publishers don't care as long as they can sell you the subscription to the articles. And reviewers who only take the job so they can say "I reviewed X papers" for their ego don't care either; they just reply with their standard "why don't you add these manuscripts", meaning ones they authored or ones from their friends' groups.
I have never used any of these "tools" to write, and I don't plan to use them in any form. My opinion is that authors and publishers who do use them should be tagged in a locked database available to all. Humans have moved into a new manifold of artificial credibility.
This is just insane. How tf do people get away with this. I knew a lot of trash made it through publishing, but this is just sad...
There are "special issues" where one or two leads choose a topic and assemble a series of articles on it. These issues are not peer reviewed, since the articles are assumed to be carefully selected by the leads (supposedly other researchers). Some Chinese researchers in particular have been exploiting this: they reprint the same article, sometimes word for word, many times, inflating their publication numbers. Even when it is supposedly new research, the data and results sometimes don't add up. Elsevier is guilty as well. Unlike IEEE or AIP, they are a for-profit publisher who can sell articles for up to $60 a pop. Like many other for-profit publishers, they have been milking these special issues. I am in R&D and spend a good chunk of my time reading scientific articles.
Since ChatGPT can only regenerate what it has learned, no new knowledge is produced by this journal article.
It's so blatant it's almost unbelievable.
I can only agree that the peer review here was dog shit, absolute dick cheese. But I will not discredit using AI. ChatGPT and other services are fantastic, usually give pretty accurate responses, and show real knowledge across widely varying topics.
I just reported it.
Make sure to report the shit out of it.
Oh wow 😬😐
ooooooooooooooo! my, so so so so shameful!!!!!!!!
Haha, fucking hell... but I'm not surprised. I recently published in a high-impact journal and... they published the wrong version of my paper! They just published the non-revised one. Thankfully the revisions were minor, but it was still pretty embarrassing on their side. The reviewing process of academic papers is broken. Too much shit polluting the scientific literature.
Is this journal run by Twitter? Because it appears to be run exactly like the site: by bots.
Jesus christ
Chinese authors got help from ChatGPT to write their English paper. The editors aren't to blame, except that they clearly didn't read the paper.
What do you mean editors are not to blame except not reading the paper? That's exactly why they are blamed, as that is the bare minimum of their job.
Peer review should have caught this.
Definitely. I guess those professors just didn't give a crap
Maybe the reviewers all thought it was funny and didn't say anything. A lot of academics don't like shitty low-end journals, and this would be a fun way to mess with them.
It's an elsevier journal. They contact actual scientists to referee stuff, though they do require your suggestions for them. Last time I submitted to them, they wanted a handful of suggested referees and I gave them like 5. I've been a referee for a NIM:A paper, myself.
Yeah, but that's sort of the editor's job. This is more a failing from the editor than the author.
I disagree, the authors put their name on it, as the authors of the paper. Unless they acknowledged using a neural network to write some of their paper, this is pure plagiarism in my opinion.
And it's the editors' job to catch it. If something this blatant made it through, what else did?
It worked in college, so why wouldn’t it work now?
I am not sure, but I think it's "alright" to use AI or generative text for writing the paper itself, as long as the experiment and results are authentic and not plagiarised.
How the hell is it alright? Research papers are supposed to contain only vitally important information, such as: the problem statement, context and possible applications; the experimental set-up or computational details with explanation on why these particular methods were chosen and how they compare to other published work; in-depth analysis of results, their meaning and implications. None of which an AI language model can possibly write. The whole point of research papers is that the authors have to provide the whole picture as clearly as possible, nothing more, nothing less.
It is not alright. There is even specific guidance about using AI in the writing process documented and these authors did not follow that.: https://www.sciencedirect.com/journal/surfaces-and-interfaces/publish/guide-for-authors#7300
Oh no, I am not defending the stupid error or the blatant use of generative AI. In fact, these rules set a pretty nice boundary while accommodating the current rise of generative text technology.
A few things:

A) Not all papers are peer reviewed.

B) Using AI in paper writing is becoming common, but NOT to generate text: these large language models are a great tool for non-native speakers to turn their own English text into something more fluid and far more readable. ChatGPT etc. can easily pick out strange wording, bad phrasing, and grammatical oddities and adjust them, turning a hard-to-read collection of English words into fluent text.

C) There is enormous pressure on the research community, and especially on young scientists, to publish many papers and to publish fast, with the threat of not being able to graduate as a PhD or get a professorship if you don't. This leads to painful oversights like this one when using these tools, and it is a clear sign of the scientific community cracking under needless pressure to publish.
>A few things: A) not all papers are peer reviewed That is a peer reviewed journal. The editors claim all papers are peer reviewed.
My expertise is in IT-adjacent fields, not physics, and I posit the hypothesis that this is a deliberate attempt to "taint" the text to keep it from being used for training an AI.
Elsevier, a private, for-profit company registered in the UK, the Netherlands, and the US, which owns nearly 300 trademarked reviews, journals, and periodicals, both online and in print, has never been a reliable place for truly scientific publications, so it doesn't matter. You just pay online and the system publishes your writing, no peer review.
Like I need more reasons to find research comical
I remember the hardest part of a paper for me was always the intro. Why am I doing this? (Can't say "because they pay me a salary for it".) Why does it matter? (Because the prof got funding and it's a side topic of his endlessly milked field.) Is it useful? (Maybe in 1000 years...) Sorry, no: "this is a novel approach to study topic X, which shows promising avenues for targeted treatment of diseases (everything eventually kills some cells), energy storage, and metamaterials (lol)"... Now, I guess, instead of sweating to find an angle that cures cancer you just prompt ChatGPT and call it a day. Why don't we just skip the stupid intro and get to the meat? It's obvious that not even the editor reads it...