Seems a bit strange to blame AI for this. Meta has always been garbage and has always used technology to its worst effect.
Plagiarism is not the same as copyright infringement. Why you think people probably plagiarize is doubly irrelevant then.
I never claimed it was, but as I said before, the exact legal definition is beside the point: copyright infringement differs from place to place depending on local laws, while plagiarism is usually the concept that guides the ethical position from which those laws are produced. Which is why, yes, it’s relevant.
Show me literally any example of the defendant’s use of “analysis” having any impact whatsoever in a copyright infringement case or a law that explicitly talks about it, or just stop repeating that it is in any way relevant to copyright.
This is an unreasonable request, and you know it to be. Again, we don’t share the same laws, and different jurisdictions provide different exceptions like fair use, fair dealing, or straight-up exclusion from copyright for certain uses. But it is wholly beside my argument. You can look at any piece of modern media that exists in the same space and see ideas the two share, while not sharing the same expression of those ideas: how some characters fulfill the same purpose, dress the same way, or have similar personalities. You are free to make a book with a plumber, a mustached man, someone wearing a red hat with the letter M on it, and someone who goes to save a princess from a castle, but as long as they’re not all the same person, they are most likely not considered to be the protected expression of Mario. Same ideas that make up Mario; one infringing, the other not.
Nobody goes to court over this because EVERYONE takes each other’s ideas: “Good artists copy, great artists steal”. It’s only when you step on the specific expression of an idea that it becomes realistically actionable, and at that point transformativeness is discussed almost every single time, because it is critical to determining whether the copyright was actually infringed or not.
Wrong. The “all together” and “without adding new patterns” are not legal requirements. You are constantly trying to push the definition of copyright infringement to be more extreme to make it easier for you to argue.
I’m sorry but, are you really being this dishonest? I mentioned EXPLICITLY in my last comment that I wasn’t giving a definition of copyright infringement, because it’s beside the point and not what I’m claiming. Yet here you are saying I am “trying to push” a definition. We are not lawyers or law scholars speaking to each other; I am having a discussion with you as another anonymous person on a message board.
Unfortunately, an AI has no concept of ideas, and it simply encodes patterns, whatever they might happen to be.
You are just arguing semantics and linguistics; it’s meaningless. We are not talking technical specifics, not even a specific model, nor a specific technique that specifies exactly how the information is encoded. It’s a rough concept of “ideas” / “data” / “patterns”: information. And AI definitely has that.
Again, you’re morphing the discussion to make an argument.
You mean, I’m making an argument. Because yes. I am. I don’t see why this negative framing is necessary nor why this is noteworthy enough to bring up, unless you really just want to make me look bad for no apparent reason.
Mario’s likeness has to be encoded into the model in some way. Otherwise, this would not have been the image generated for “draw an italian plumber from a video game”. There is absolutely nothing in the prompt to push GPT-4 to combine those elements. There are also no “new” patterns, as you put it. That’s exactly the point of the article. As they put it:
Yes, there is some idea/pattern of “Mario-ness” in the model; I said that. I wasn’t trying to say no material of Mario was used in training. It’s not like someone pasted direct images of Mario in there; rather, AI models make logical connections between concepts, even for things we cannot put a good name to, and will allow you to prompt for them. But that does not mean you should.
Clearly, these models did not just learn abstract facts about plumbers—for example, that they wear overalls and carry wrenches. They learned facts about a specific fictional Italian plumber who wears white gloves, blue overalls with yellow buttons, and a red hat with an “M” on the front.
These are not facts about the world that lie beyond the reach of copyright. Rather, the creative choices that define Mario are likely covered by copyrights held by Nintendo.
I sort of already explained this without mentioning this specific example, but I’ll make it extra clear.
In the article they prompted the AI for a “video game Italian plumber”. What person, if you asked them to think of an “Italian video game plumber”, would not think of Mario? Maybe Luigi? I’ll tell you why: because there are very damn few famous Italian video game plumbers. The prompt is already locked in on Mario, and even humans make the logical connection to Mario. The model might have had billions of images and texts to use, but any time a relation to an “Italian video game plumber” showed up, there’s Mario.
So this whole point the article makes about it not learning abstract facts about plumbers is completely moot, because they biased the outputs towards receiving what they wanted to receive. If you ask for just a plumber, for which it does have many, many results, it will make more generalizations and become less specific, because there are more than two examples of plumbers in other types of situations. Humans do this exact same thing in the same task, yet somehow the AI must be immune to this despite being an artificial version of the biological thing. And that is why analysis is protected: humans simply cannot stop doing it, and everyone is tainted by their knowledge of Mario, even though for whatever reason we might need to use one of the ideas Mario is built upon. And this is why AIs use this same defense. I can say this regardless of jurisdiction because, unless you live in some kind of dictatorship, this is generally true.
Sadly, this kind of deceptive framing of AI output is common, particularly among those who are biased against AI. Sometimes it’s unintentional, but frequently specific parameters are used that will just generate specific bad results, ignoring that this may not even represent 0.001% of what the model can generate in normal situations.
This is contradictory to how you present it as “taking ideas”.
It is not. You can use the idea of Mario, you cannot use the totality of Mario. For the AI to be able to use the idea of Mario, it will also ‘learn’ the totality of Mario in the process, as Mario is a collection of ideas that are extracted. But those ideas are stored separately so they can be individually prompted for. You can prompt it to make Mario, because like literally almost every person in society, they know what ideas make up Mario better than I can put to words here. If I hire a human artist to make me a “video game Italian plumber”, their first question to me would be “Oh, something like Mario?” and their second response will be “Oh I can’t do that, and you should not want to, because you don’t own Mario.”. Humans use AI, so they need to be the ones to give that second response.
Just like the fact that a kitchen knife can be used to stab someone doesn’t mean we produce kitchen knives for stabbing people, the fact that an AI can be used to infringe does not mean it is produced to infringe. Which is evidenced by the vast majority of other ways it can be used that don’t infringe, which is self-evident after just tinkering around with it for a little while.
You’re mixing up different things. I’m saying that the image contains infringing material, which is hopefully not something you have to be convinced about. The production of an obviously infringing image, without the infringing elements having been provided in the prompt, is used to show how this information is encoded inside the model in some form. Whether this copyright-protected material exists in some form inside the model is not an equivalent question to whether this is copyright infringement. You are right that the courts have not decided on the latter, but we have been talking about the former. I repeat your position which I was directly responding to before:
If it’s anything like the examples before, then the AI has definitely been prompted by the user to make infringing elements.
But anyways, to the question, you just don’t seem to grasp that collections of ideas can communicate copyright infringing material without being infringing on their own. It’s like arguing that if Paint or Photoshop knows about the color red that this is copyright infringing because it’s the same red that Mario uses. None of the ideas that make up Mario are infringing, and cannot be copyrighted. They are what the AI is designed to extract, not Mario as a totality.
You can definitely use AI to make an infringement machine by making it less likely to make leaps in ideas and only combine the ideas it’s been taught on, which we as humans can do as well in the form of plagiarism and forgery. But if you’re going to be unethical, why use an AI when you might as well just take the easy route directly with a screenshot or a photo? Those are two other technologies we didn’t ban for having the ability to capture copyrighted material, even though they copy the material far more blatantly.
This is where good AI usage deviates, because it instead tries to MAXIMIZE the number of leaps and connections the AI makes, minimizing the possibility of making something infringing. Even honest people trying to make new creative works sometimes have to change things because they might be too close to infringing.
That was your implied argument regardless of intent.
I decide what my argument is, thank you very much. Your interpretation of it is outside of my control, and while I might try to keep it from going astray, I cannot stop it from doing so; that’s on you.
Completely wrong, which invalidates the point you want to make. “Analysis” and “as is” have no place in the definition of copyright infringement. A derivative work can be very different from the original material, and how you created the derivative work, including whether you performed whatever you think “analysis” means, is generally irrelevant.
I wasn’t giving a definition of copyright infringement, since that depends on the jurisdiction, and since you and I aren’t in the same one most likely, that’s nothing I would argue for to begin with. In the most basic form of plagiarism, people do so to avoid doing the effort of transformation. More complex forms of plagiarism might involve some transformation, but still try to capture the expression of the original, instead of the ideas. Analysis is definitely relevant, since to create a work that does not infringe on copyright, you generally can take ideas from a copyrighted work, but not the expression of those ideas. If a new work is based on just those ideas (and preferably mixes it with new ideas), it generally doesn’t infringe on copyright. It’s why there are so many copycat products of everything you can think of, that aren’t copyright infringing.
No, it detects patterns. You already said it correctly above. And the problem is that some patterns can be copyrighted. That’s exactly the problem highlighted here and here. For copyright law, it doesn’t matter if, for example, that particular image of Mario is copied verbatim from the training data.
While, depending on your definition, Mario could be a sufficiently complex pattern, that’s not the definition I’m using. Mario isn’t a pattern; it’s an expression of multiple patterns. Patterns like “an Italian man”, “a big moustache”, “a red rounded hat with the letter ‘M’ in a white circle”, “overalls”. You can use any of those patterns in a new non-infringing work; Nintendo has no copyright on any of them. But bring them all together in one place again without adding new patterns, and you will have infringed on the expression of Mario. If you give many images of Mario to the AI it might be able to understand that those patterns together form some sort of “Mario-ness” pattern, but it can still separate them from each other, since you aren’t just showing it Mario, but also other images that have these same patterns in different expressions.
Mario’s likeness isn’t in the model, but its patterns are. And if an unethical user of the AI wants to prompt it for those specific patterns and act surprised when they get Mario, or something close enough to be substantially similar, that’s on them, and it will be infringing just like drawing and selling a copy of Mario without Nintendo’s approval is now.
The character likeness, which is encoded in the model because it is in fact a discernible pattern, is an infringement.
You have absolutely no legal basis to claim they are infringement, as these things simply have not been settled in court. You can be of the opinion that they are infringement, but your opinion isn’t the same as law. The articles you showed are also simply reporting and speculating on the lawsuits that are pending.
That’s a very short example, but it is a new arrangement of the existing information. It’s not a new valuable arrangement of information, but new nonetheless. And yes, rearrangement is transformation. It’s very low-entropy transformation, but transformation nonetheless. Collages and summaries are, in fact, a thing that humans make too.
Unless you mean “new” as in, something nobody’s ever written before, in which case not even you can create new information, since pretty much everything you will ever say or write down can be broken down into pieces that have been spoken or written before, which is not exactly a useful distinction.
There’s no transformation, it’s not capable of transformation, it’s just a very complicated text jumbler that’s supposed to jumble text so that the output is readable by humans.
Saying it doesn’t make it true, especially when you follow it up with a self-debunk: you say it transforms the text by jumbling it in specific ways that keep it readable to humans, which requires transformation. As you just demonstrated yourself, randomly swapping words does not make legible text…
You’re taking investment advice from a parrot that had the entirety of reddit investment meme subreddits beamed into its brain.
???
No, not what I said at all. If you’re trying to say I’m making this argument I’d urge you (ironically) to actually analyze what I said rather than putting words in my mouth ;) (Or just, you know, ask me to clarify)
Copyright infringement (or plagiarism) in its simplest form, as in just taking the material as is, is devoid of any analysis. The point is to avoid having to do that analysis and just get right to the end result that has value.
But that’s not what AI technology does. None of the material used to train it ends up in the model. It looks at the training data and extracts patterns. For text, that is the sentence structure, the likelihood of one word following another, the paragraph/line length, the relationships between words when used together, and more. It can do all of this without even ‘knowing’ what these things are, because they are simply patterns that show up in large amounts of data, and machine learning as a technology is made to detect and extract those patterns. That detection is synonymous with how humans do analysis. What it detects are empirical, factual observations about the material it is shown, which cannot be copyrighted.
The resulting data, when fed back to the AI, can be used to have it extrapolate on incomplete data, which it could not do without such analysis. You can see this quite easily by asking an AI to refer to you by a specific name, or to talk in a specific manner, such as a pirate. It ‘understands’ that certain words are placeholders for names, and that text can be ‘piratified’ by adding filler words or pre/suffixing other words. It could not do so without analysis, unless that exact text was already in the data to begin with, which is doubtful.
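To make “extracting patterns” concrete, here’s a toy sketch (nothing like a real LLM, just the simplest statistical version of the idea): counting which word tends to follow which. What gets stored are frequencies, i.e. factual observations about the text, not the text itself.

```python
from collections import defaultdict, Counter

def extract_bigrams(text):
    """Count which word tends to follow which -- a statistical
    observation about the text, not a copy of the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

model = extract_bigrams("the plumber fixed the pipe and the plumber left")

# The model holds counts, not the sentence itself:
print(model["the"].most_common(1))  # [('plumber', 2)]
```

Real models learn vastly richer patterns than word-follows-word counts, but the principle is the same: the training text is reduced to statistics about it.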
Yes, this is my exact issue with some framing of AI. Creative people love their influences to the point you can ask them and they will point to parts that they reference or nudged to an influence they partially credit to getting to that result. It’s also extremely normal that when you make something new, you brainstorm and analyze any kind of material (copyrighted or not) you can find that gives the same feelings you desire to create. As is ironically said to give comfort to starting creatives that it’s okay to be inspired by others: “Good artists copy, great artists steal.”
And often people very anti AI don’t see an issue with this, yet it is in essence the same as the AI does, which is to detach the work from the ideas it was built on, and then re-using those ideas. And just like anyone who has the ability to create has the ability to plagiarize or infringe, so does the AI. As human users of AI we must be the ones to ethically guide it away from that (Since it can’t do that itself), just like you would not copy-paste your influences into a new human made work.
For OpenAI, I really wouldn’t be surprised if that happened to be the case, considering they still call themselves “OpenAI” despite being the most censored and closed source AI models on the market.
But my comment was more aimed at AI models in general. If you are assuming they indeed used non-publicly posted or gathered material, and did so directly themselves, they would indeed have no defense for that. Unfortunately, if a third party provided them the data, and did so under false pretenses, it would likely let them off the hook legally, even if they had every ethical obligation to make sure it was publicly available. The third party that provided it to them would be the one infringing.
If that assumption turns out to be true (maybe through some kind of discovery in the trial), they should burn for it. Until then, even if it’s a justified assumption, it’s still an assumption, and most likely not true for most models, certainly not those trained recently.
They are not “analyzing” the data. They are feeding it into a regurgitating mechanism. There’s a big difference. Their defense is only “good” because AI is being misrepresented and misunderstood.
I really kind of hope you’re kidding here. Because this has got to be the most roundabout way of saying they’re analyzing the information. Just because you think it does so to regurgitate (which I have yet to see any good evidence for, at least for the larger models), does not change the definition of analyzing. And by doing so you are misrepresenting it and showing you might just have misunderstood it, which is ironic. And doing so does not help the cause of anyone who wishes to reduce the harm from AI, as you are literally giving ammo to people to point to and say you are being irrational about it.
You say it’s not capable of producing anything new, but then give an example of it creating something new. You just changed the goal from “new” to “valid” in the next sentence. Looking at AI for “valid” information is silly, but looking at it for “new” information is not. Humans do this kind of information mixing all the time. It’s why fan works are a thing, and why most creative people have influences they credit with being where they are today.
Nobody alive today isn’t tainted by the ideas they’ve consumed in copyrighted works, but we do not bat an eye if you use that in a transformative manner. And AI already does this transformation much better than humans do since it’s trained on that much more information, diluting the pool of sources, which effectively means less information from a single source is used.
Which is why the technology itself isn’t the issue, but those willing to use it in unethical ways. AI is an invaluable tool to those with limited means, unlike big corporations.
Not 1:1; overfitted images still have considerable differences from their originals. If you chose “reproduce” to make that point, that’s why OP clarified it wasn’t literally copying training data, as the actual data being in the model would be a different story. Because these models are (in simplified form) a bunch of really complex math that produces material, it’s a mathematical inevitability that they sometimes produce copyrighted material, even from calculations that weren’t shaped by overfitting. Just like infinite monkeys on infinite typewriters will eventually reproduce every piece of copyrighted text.
But then I would point you to the camera on your phone. If you take a copyrighted picture with that, you’re still infringing. But was the camera created with the intention to appropriate material captured by the lens? Which is why we don’t blame the camera for that, we blame the person that used it for that purpose. AI users have an ethical obligation not to steer the AI towards generating infringing material.
Although I’m a firm believer that most AI models should be public domain or open source by default, the premise of “illegally trained LLMs” is flawed, because there really is no assurance that the LLMs currently in use were illegally trained to begin with. These things are still being argued in court, but the AI companies have a pretty good defense in the fact that analyzing publicly viewable information is a deeply rooted freedom that provides a lot of positives to the world.
The idea of… well, ideas being copyrightable should have anyone in this discussion shaking in their boots. Especially since, when the laws on the books around these kinds of things become an active topic of change, they rarely shift in the direction of more freedom for the exact people we want to give it to. See: Copyright and Disney.
The underlying technology simply has more than enough good uses that banning it would just cause it to flourish elsewhere, in places that don’t ban it, which means as usual that everyone but the multinational companies loses out. The same would happen with stricter copyright, as only the big companies have the means to build their own models with their own data. The general public is set up for a lose-lose against these companies as it currently stands. Only by requiring the models to be made available to the public do we ensure that the playing field doesn’t tip further in their favor, to the point where AI technology only exists to benefit them.
If the model is built on the corpus of humanity, then humanity should benefit.
The time will come, you can slowly see the tide turning :) Most people aren’t inherently against it, just (understandably) cautious, and at worst misinformed or radicalized about hating it.
What really helps is to start by showing something you really put your heart into, where you can explain your choices and techniques, before you tell them it was done with (help from) AI. There was a recent study where people actually seemed to prefer works made with AI, until they were told a work was made by AI and not a person. This just shows there is an AI bias among people, and being able to show that whatever you’re making was in fact largely a product of your creativity, not the AI’s, really helps.
I tend to make recordings and snapshots of work in progress and then turn those into one of those animations that blends frame to frame to show the entire process. Just being able to show that you weren’t just sitting there doing nothing, and that it’s still your creation, really helps people understand.
People differentiate AI (the technology) from AI (the product being peddled by big corporations) without making that nuance clear (or they mean just LLMs, or they aren’t even aware the technology has grassroots adoption outside of those big corporations). It will take time, and the bubble bursting might very well be a good thing for the technology going forward. If something is only known for its capitalistic exploits, it’ll continue to be seen unfavorably even when it has proven its value to those who care to look at it with an open mind. I read it mostly as those people rejoicing over big corporations getting shafted for their greedy practices.
If you think that’s depressing, wait until you find out that it’s basically nothing in the grand scheme of things.
Most sources agree that we use about 4 trillion cubic meters of water every year worldwide (although this stat is most likely from 2015, so it will be bigger now). In 2022, using the stats here, Microsoft used 1.7 billion gallons of water per year, and Google 5.56 billion gallons per year. In cubic meters that’s only about 27.5 million combined, or roughly 0.0007% of worldwide water usage. Meanwhile agriculture uses on average 70% of a country’s fresh water.
Even if we just look at the US, since that’s where Google and Microsoft are based: the US uses 322 billion gallons of water every day, about 445 billion cubic meters per year, which puts the two companies at roughly 0.006% of US usage. So you could have about 160 more Googles and Microsofts before you even top a single percent.
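For anyone who wants to sanity-check the conversion themselves (assuming US gallons, 1 gal ≈ 0.0037854 m³, and the figures cited above):

```python
GAL_TO_M3 = 0.0037854   # 1 US gallon in cubic meters

microsoft_gal = 1.7e9   # gallons/year (2022 figure cited above)
google_gal = 5.56e9     # gallons/year (2022 figure cited above)
world_m3 = 4e12         # ~4 trillion m3/year worldwide

combined_m3 = (microsoft_gal + google_gal) * GAL_TO_M3
print(f"combined: {combined_m3 / 1e6:.1f} million m3")
print(f"share of world use: {combined_m3 / world_m3:.6%}")

us_m3 = 322e9 * 365 * GAL_TO_M3  # US: 322 billion gallons/day
print(f"share of US use: {combined_m3 / us_m3:.5%}")
```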
And as others have pointed out the water isn’t gone, there’s some cyclicality in how the water is used.
There is so much wrong with this…
AI is a range of technologies. So yes, you can make surveillance with it, just like you can with a computer program like a virus. But obviously not all computer programs are viruses nor exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.
You’re most likely talking about “Chat Control”, which is a controversial EU proposal to scan either on people’s devices or from provider’s ends for dangerous and illegal content like CSAM. This is obviously a dystopian way to achieve that as it sacrifices literally everyone’s privacy to do it, and there is plenty to be said about that without randomly dragging AI into that. You can do this scanning without AI as well, and it doesn’t change anything about how dystopian it would be.
You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control is passed to operate on the device level you are kind of boned without circumventing that software, which would potentially be outlawed or made very difficult. It’s clear on its own that Chat Control is a bad thing; you don’t need some kind of conspiracy theory about ‘the true purpose of AI’ to see that.
I never anthropomorphized the technology; unfortunately, due to how language works, it’s easy to misinterpret it as such. I was indeed trying to explain overfitting. You are forgetting that current AI technology (artificial neural networks) is based on biological neural networks. There is a range of quirks it exhibits that biological neural networks do as well. But it is not human, nor anything close. That does not mean there are no similarities that can be rightfully pointed out, though.
Overfitting isn’t just what you describe, though. It also occurs if the prompt guides the AI towards a very specific part of its training data, to the point where the calculations it performs are extremely certain about what words come next. Overfitting here isn’t caused by an abundance of data, but rather a lack of it. The training data isn’t being produced from within the model, but as a statistical inevitability of the mathematical version of your prompt. Which is why it’s tricking the AI: an AI doesn’t understand copyright - it just performs the calculations. But you do. And so using that as an example is like saying “Ha, stupid gun. I pulled the trigger and you shot this man in front of me, don’t you know murder is illegal, buddy?”
Nobody should be expecting a machine to use itself ethically. Ethics is a human thing.
People that use AI have an ethical obligation to avoid overfitting. People that produce AI also have an ethical obligation to reduce overfitting. But a prompt has a practically infinite number of combinations (within the token limit) to consider, so overfitting will happen in fringe situations. That’s not because the data is actually present in the model, but because the combination of the prompt with the model pushes the calculation towards a very specific prediction, which can heavily resemble or even be verbatim the original text. (Note: I do really dislike companies that try to hide the existence of overfitting from users though, and you can rightfully criticize them for claiming it doesn’t exist)
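That “the prompt pushes the calculation towards one prediction” point can be illustrated with a deliberately trivial next-word model (again, a toy, not a real LLM): a generic context has several plausible continuations, but a context that appeared in only one training document collapses onto that document verbatim.

```python
from collections import defaultdict, Counter

# Tiny made-up "training set"; the last line appears exactly once.
training = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "supercalifragilistic expiali docious",
]

follows = defaultdict(Counter)
for doc in training:
    w = doc.split()
    for a, b in zip(w, w[1:]):
        follows[a][b] += 1

def predict(word):
    """Most frequent next word given the training counts."""
    return follows[word].most_common(1)[0][0]

# "the" is ambiguous: four different words have followed it...
# ...but a context seen only once forces a verbatim continuation:
print(predict("supercalifragilistic"))  # -> "expiali"
```

A prompt that is this specific to one training example is the toy-model analogue of steering a large model into an overfitted fringe.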
This isn’t akin to anything human, people can’t repeat pages of text verbatim like this and no toddler can be tricked into repeating a random page from a random book as you say.
This is incorrect. A toddler can and will verbatim repeat nursery rhymes that it hears. It’s literally one of their defining features, to the dismay of parents and grandparents around the world. I can also whistle pretty much my entire music collection exactly as it was produced, because I’ve listened to each song hundreds if not thousands of times. And I’m quite certain you too have a situation like that. An AI’s mind does not decay or degrade (nor does it change for the better like a human’s), and the data encoded in it is far greater, so it will present more of these situations in its fringes.
but it isn’t crafting its own sentences, it’s using everyone else’s.
How do you think toddlers learn to make their first own sentences? It’s why parents spend so much time saying “Papa” or “Mama” to their toddler: exactly because they want them to copy them verbatim. Eventually the corpus of their knowledge grows big enough that they start to experiment and eventually develop their own style of talking. But it’s still heavily based on the information they take in. It’s why we have dialects and languages. Take a look at what happens when children don’t learn from others: https://en.wikipedia.org/wiki/Feral_child So yes, the AI is using its training data; nobody’s arguing it doesn’t. But it’s trivial to see how it’s crafting its own sentences from that data in the vast majority of situations. It’s also why you can ask it to talk like a pirate, and it will suddenly know how to mix the essence of talking like a pirate into its responses. Or how it can remember names and mix those into sentences.
Therefore it is factually wrong to state that it doesn’t keep the training data in a usable format
If your argument is that it can produce something that happens to align with its training data given the right prompt, well, yeah, that’s not incorrect. But it is heavily misguided, bordering on bad faith, to suggest that the tiny minority of cases where overfitting occurs is indicative of the rest. LLMs are prediction machines, so if you know how to guide one towards what you want it to predict, and that is in the training data, it will most likely predict it. Under normal circumstances, where the prompt you give it is neutral and unique, you will basically never encounter overfitting; you really have to try with most AI models.
But then again, you might be arguing this based on a specific AI model that is very prone to overfitting, while I am arguing this about the technology as a whole.
This isn’t originality, creativity or anything that it is marketed as. It is storing, encoding and copying information to reproduce in a slightly different format.
It is originality, as these AIs can easily produce material never seen before in the vast, vast majority of situations. That is also what we often refer to as creativity, because it has to be able to mix information and still retain legibility. Humans also constantly reuse phrases, ideas, visions, and ideals of other people. It is intellectually dishonest not to look at these similarities to human psychology, and to instead treat AI as having to be perfect all the time, never once saying the same thing as someone else. To convey certain information, there are only finitely many ways to do so within the English language.
This is an issue for the AI user, though. And I do agree that needs to be more conscious in people’s minds, but I think time will change that. Perhaps when the photo camera came out there were some shmucks that took pictures of people’s artworks and claimed them as their own, because the novelty of the technology allowed that for a bit, but eventually those people were properly differentiated from people properly using it.
Like if I download a textbook to read for a class instead of buying it - I could be proscecuted for stealing
Ehh, no, almost certainly not (but it does depend on your local laws). That honestly just sounds like some corporate boogeyman story to keep you from pirating their books. The person hosting the download, if they did not have the rights to publish it freely, could possibly be prosecuted though.
To illustrate, there’s this story of John Cena, who sold a special Ford after signing a contract with Ford that explicitly forbade him from doing that. However, the person who bought the car was never prosecuted or sued, because they received the car from Cena with no strings attached. They couldn’t be held responsible for Cena’s breach of contract, but Cena was held personally responsible by Ford.
For physical goods there is ‘theft by proxy’ though (receiving goods that you know are most likely stolen), but that quite certainly doesn’t apply to digital, copyable goods, since to even access any kind of information on the internet you have to download, and thus copy, it.
Stocks for what? AI? I can’t have stocks in a technology. I could get stocks in companies that use AI, but the only ones on the stock market are ones I’d rather die than give a single penny to, since they abuse the technology (and technology in general). But they are not the only ones using the technology. I’m not really a fan of stocks to begin with; profit-focused companies are a plague in my opinion.