
Piero Bosio Personal Social Web Site

A social forum federated with the rest of the world. Instances don't matter, people do.

Today in grading undergraduate writing assignments:

  • Today in grading undergraduate writing assignments:

    "the journal name is real, the title is the real name of an article, the DOI is real, the authors are real… but nothing matches. For example, the DOI is not from the same article as the journal name, or title, or authors. Same with all other elements of the citation.

    I think this is likely a Claude hallucination (rather than ChatGPT); Claude is known for doing a better job at pulling real citations, or pulling real information to create a fake citation. "

    So we assigned students a straightforward writing assignment: pick one of five proposed policy changes and write 500 words in support of or in opposition to it. Two or three citations were required. No ChatGPT without sharing your original 500 words, your AI prompt, the resulting AI output, and your final revision. 90% were awesome, reasonable arguments for and against, and then 10% were essentially perfect.

    We went into the citations via PubMed and other sources and found the above. So, obviously, 10 points for that section, and we indicated that we'd carefully evaluate all future work from the students with hallucinations; any further indication of AI use outside the syllabus will mean a zero on the assignment and a referral to student affairs for cheating.

    So does anyone have any idea why these citations are f-ed up in this reproducibly awful way? Like, what is Claude doing wrong, exactly? It's like Claude is ALMOST identifying relevant work but can't quite produce actual citations, just things that look like citations. It's like it missed the step where you actually have to READ THE DAMN PAPER YOU WANT TO CITE. Which it could do, of course: it could ingest the whole paper, summarize it, simply pattern match against the student's writing, and insert a valid DOI link. Why is it making this specific error? Anyone?
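    The checking workflow described above (comparing each element of a citation against the record its DOI actually resolves to) can be partly automated. Below is a minimal sketch using the public Crossref REST API; the helper names and the fuzzy title-matching threshold are my own illustrative assumptions, not anything from the original post.

    ```python
    # Sketch: cross-check a cited title against the metadata registered for its DOI.
    # Crossref's REST endpoint (https://api.crossref.org/works/<doi>) is real;
    # the function names and the 0.8 similarity threshold are assumptions.
    import json
    import urllib.request
    from difflib import SequenceMatcher

    def fetch_crossref_metadata(doi: str) -> dict:
        """Fetch the registered metadata record for a DOI from Crossref."""
        url = f"https://api.crossref.org/works/{doi}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["message"]

    def titles_match(cited_title: str, registered_title: str,
                     threshold: float = 0.8) -> bool:
        """Fuzzy-compare a cited title against the title registered for the DOI."""
        ratio = SequenceMatcher(None, cited_title.lower(),
                                registered_title.lower()).ratio()
        return ratio >= threshold

    # Usage (requires network):
    #   meta = fetch_crossref_metadata(doi_from_student_paper)
    #   ok = titles_match(title_from_student_paper, meta["title"][0])
    # A mismatched citation of the kind described above would fail this check
    # even though the DOI, title, and authors are each individually real.
    ```

    The same comparison could be extended to authors and journal name, which is exactly where the hallucinated citations fall apart.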


  • oblomov@sociale.network shared this topic

The last eight messages received from the Federation
  • @gloriouscow @lritter i don't know of any hash tags for those but i'd be astonished if there weren't any


  • @aeva @lritter

    i would also like to see birds, and maybe foxes


  • @gloriouscow @cr1901 For myself, with regards to dealing with the cognitive dissonance, of watching technologists I personally know and admire adopt LLMs (some of which are on here, too, and who I am somewhat embarrassed to say have seen my unhinged anti-AI posts 😅):

    I think this has been much easier for me to deal with, because my personal observation for a long time has been that technologists have a very weak sense of ethics. Both in the sense of having good ethics that I agree with, and in the sense of having thought about the subject of ethics at all. Most technologists have not sat down and decided what their moral boundaries are, and what the relationship of their own morality is to the technology they use or develop. Even very skilled ones that I have learned a lot from. Most people are content to think that technologies are value-neutral, and are content to follow the trend of what everyone else is doing.

    I have observed brilliant technologists, long before LLMs, shrug at the ethics of many other things, so for me it is entirely believable that they shrug at the ethics of this, too. I don't think many of these people are inherently morally bad, but I do think that they just don't care. This is bad, because I do think that people just following the trend into widespread AI adoption is an ethically bad outcome. However, I also think that as AI backlash increases, if the pendulum swings back to anti-AI being the norm in software, they will follow as well.

    It is also the sad truth that for minority women in many computer fields, we must work with brilliant peers who are not necessarily bad people, but who by way of their privileged position in life will say and do ignorant things. We must see someone saying something hurtful, but not make a fuss about it, because it's not worth the time of having to personally educate that person, to deal with the backlash, or to be labelled as a "confrontational" person. And we must do this as well, for brilliant colleagues and mentors and people we admire, and that we learn a lot from. So in that sense, I am very well practiced at this kind of cognitive dissonance - I do it in order to preserve a career.

    I hope this is maybe a little helpful for you, though this is only my personal experience. And if you do take a break, I hope it is a restful and rejuvenating one!


  • @gloriouscow @lritter also I recommend #macrophotography (and the various non-English equivalents) though fair warning it has the occasional closeup on insects. my wife introduced me to that one last night :)



  • @lritter @aeva

    give me some pretty hashtags


  • @gloriouscow i think i mean to say please stay i enjoy your presence
