• meme_historian@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    48
    arrow-down
    1
    ·
    20 hours ago

    Caveat: not all of academia seems to be that rotten. The evidence found on arxiv.org is mainly, if not only, in the field of AI research itself 🤡

    You can try it yourself, just type the following in googles search box:

    allintext: “IGNORE ALL PREVIOUS INSTRUCTIONS” site:arxiv.org

    A little preview:

    screenshot of google search results using the google dork from above. The results show a list of papers with an AI research subject, where the prompt is clearly embedded as part of the abstract.

    • richmondez@lemdro.id
      link
      fedilink
      English
      arrow-up
      51
      ·
      18 hours ago

      I don’t see this as rotten behaviour at all, I see it as a Bobby tables moment teaching an organisation relying on a technology that they better have a their ducks in a row.

      • Treczoks@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        15 hours ago

        Absolutely. If they don’t care to actually read the texts, they have to accept the risks of not reading it.

      • meme_historian@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        edit-2
        17 hours ago

        It’s still extremely shitty unethical behavior in my book since the negative impact is not felt by the organization that’s failing to validate their inputs, but your peers who are potentially being screwed out of a review process and a spot in a journal or conference