The publishing industry was caught off guard by AI and has few policies or rules in place, so things are already chaotic. Many in the industry think: great, people will now generate new papers faster and more easily, so higher profits for us. But things could change drastically, with unpredictable effects, sooner than you think. https://www.linkedin.com/posts/sabahetaramcilovic_ai-activity-7367983837684961281-1SVa
Personally, I am against ANY copy-paste from ChatGPT in anything that is called an "original research article". Some ethical dilemmas:
Q1. If an LLM generated some text, even some solutions, shouldn't these systems also be listed as authors?
Q2. If larger parts of the text, whole sentences, were generated by AI but this is not mentioned: is this scientific fraud? (PS: yes, it absolutely is!)
Q3. If the majority of the text has been written by ChatGPT, shouldn't the paper then become "ChatGPT et al."? -
Q4. Are we only 2-3 years away from a world where we will have to certify / prove that we actually came up with an idea, wrote a sentence, or made artwork with our own brain (and our brain only)? What if it becomes nearly impossible to prove the human source of an idea?
Q5. Should we all (research societies) hold urgent meetings to discuss these issues, agree upon them (democratically), and provide clear policies that all members need to stick to (or be expelled)? -
You can use LLMs (and I now use Gemini Pro, Grok, Mistral, and Claude almost every day!), but then summarize things in your own words / use your own style. I had the same situation with Wikipedia for decades: read the articles, learn, but then summarize and explain in my own words. Stay yourself, be transparent, and seek clarity, transparency, and authenticity in whatever research you do.
Anyway, I'm curious to hear what you think. -
@tomhengl
Important questions! I don't have the solution, but some comments. If you write your text, feel it is lacking language-wise, have an LLM re-word it, and then check that it still says what you want, then I'd consider it a writing tool, like a dictionary or a spell-checker. Nothing new here.
1/x -
@tomhengl
The thing is that the name "AI" is misleading. It pretends to be intelligent by remixing sentences from other authors across the internet.
AND AI conveniently hides the original authors.
YES, using AI is a public fraud. -
@tomhengl
Then what is AI doing?
Wikipedia is already there, and there are communities for every topic, with real human users. If this continues, the human brain will be reduced to a mere medium for using AI. There are scientific studies suggesting as much: the brain refuses to work optimally in the absence of AI aid.
AND the attention-span problem introduced by corporate social media will be exacerbated. Just learn like a human, with patience and curiosity; be human. AI solves nothing.
-
@some1
On the one hand, I can relate. On the other, this is the viewpoint of a privileged person in a world where English is the lingua franca of science, and equally smart and capable researchers with different mother tongues and fewer chances to learn English from a young age are structurally disadvantaged. -
In such a scenario, what is the need for "perfect" English?
My mother tongue isn't English. My accent is vernacular Kannada.
Here I was put in an English-medium school, by privilege.
But that English too is pretty vernacular.
I am learning a more global form of English exclusively through fosstodon, random high-quality articles, novels, etc.
I have seen many less-privileged students learn better English.
It isn't a big deal to add "English" as a subject in universities. -
@toxomat @tomhengl
...
Things might not be ideal, but AI is only going to exacerbate them: the ones who struggle with English will never improve. You yourself have pretty good English on fosstodon. That's sufficient for most scientific literature.
I have also noticed that many AI users are the ones who have difficulty with English (autocorrect similarly deteriorates the brain's spelling ability).
What are your opinions?
-
Q1: A computer program cannot be an author, but an entity running a computer program can.
Q2: As much as writing bullshit papers with non-AI tools is.
Q3: In this specific case, since ChatGPT is run by OpenAI, it must be "OpenAI et al.".