• fullsquare · 5 months ago

    maybe it’s to get through llm pre-screening and allow the paper to be seen by human eyeballs

    • sga@lemmings.world · 5 months ago

      that could be the case. but what I have seen my younger peers do is use these llms to “read” the papers, and then use only its summaries as the source. In that case, it is definitely not good.

      • fullsquare · 5 months ago

        in one of these preprints there were traces of the prompt used for writing the paper itself too

        • sga@lemmings.world · 5 months ago

          you would find more and more of it these days. people who are not good with the language, and people who are not good with the subject, would both use it.

          • fullsquare · 5 months ago

            if someone is so bad at a subject that chatgpt offers actual help, then maybe that person shouldn’t write an article on that subject in the first place. the only language chatgpt speaks is bland nonconfrontational corporate sludge, i’m not sure how it helps

            • sga@lemmings.world · 5 months ago

              What I meant was, for example: if someone is weak in, let’s say, english, but understands their shit, then they conduct their research however they do and have some llm translate it. that is a valid use case to me.

              Most research papers are written in English if you want international citations, collaboration, or accolades. A person may even speak english, but not well enough, or they spell badly. In that case the llm is purely a translator/grammar checker.

              But there are people who go beyond that and use it to generate the content itself, and that is bad imo