I make art that’s totally mine because I did it through AI. https://imgur.com/a/Rhgi0OC

Nightshade software to protect your art

  • 8 Posts
  • 144 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

    As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

    At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

    “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.


  • Yeah, this reads like an anti-Wikipedia piece. They’re just using AI for things like translation, catching spelling errors, assessing content quality, etc.

    Wikipedia’s model of collective knowledge generation has demonstrated its ability to create verifiable and neutral encyclopedic knowledge. The Wikipedian community and WMF have long used AI to support the work of volunteers while centering the role of the human. Today we use AI to support editors to detect vandalism on all Wikipedia sites, translate content for readers, predict article quality, quantify the readability of articles, suggest edits to volunteers, and beyond. We have done so following Wikipedia’s values around community governance, transparency, support of human rights, open source, and others. That said, we have modestly applied AI to the editing experience when opportunities or technology presented itself. However, we have not undertaken a concerted effort to improve the editing experience of volunteers with AI, as we have chosen not to prioritize it over other opportunities.