A horse looks at a car, something something. The technology is here to stay and has its uses; the tech industry will get bored of its limitations and a new thing will come along for us to scream at. AI has practical applications, but I don’t think you should dismiss it entirely on principle. I think it’s about learning to use this technology practically and ethically in the long run.
If that was happening, I think we’d probably be fine with it. But it just appears almost everywhere, uninvited, as half-baked and soaked in mile-high promises.
I’m more frustrated by the haste with which it’s implemented. I’ve seen (secondhand) instances of the Google search AI spitting out results that are either flat-out wrong (e.g., presenting fan theories as fact in response to a question about Warhammer 40K) or actively harmful (e.g., recommending self-harm in response to a search for “how do I stop crying”).
You can and should dismiss LLMs on principle, though, because these are nothing. They’re fancy Markov generators, maybe one step up from the auto-correct on your keyboard.
They’re fun for researching problems and for furthering our efforts toward developing artificial intelligence, but the only thing techbros are selling is a new monkey to regurgitate the data it’s been fed. The monkey doesn’t know whether the data is actually useful, or even true, but to the techbros it “approximates a conversation” and is therefore good enough to replace jobs. AI might be cool eventually, but we are still light-years away from anything that can think.
People who criticize AI seem to fall into 3 camps:
Bandwagon jumpers who just see people they like criticizing AI and regurgitate what they hear.
People who reject it out of principle because it would break their world view to consider the possibility that human beings could just be machines with no free will.
People who reject it because they’ve seen capitalism use previous advances in automation to enrich the owning class and entrench its power.
Largely agree, but I think there are one or two more camps.
People who feel threatened with irrelevance as artists who rely on their art as their primary means of making money.
People who realize that CEOs and other higher-ups in companies actually do intend to replace human workers as soon as they can, even before it’s properly viable.
I’m pro-AI, but I largely see the AI backlash as inventing complex moral justifications to oppose it, when the core issue is its impact on the livelihood of artists under capitalism.
Obviously AI art is just as valid as human art, and there is nothing inherently special about human creations. We are really just biological machines, and our behavior and output are easily emulated.
I would just tend to group those as two sub-camps under the third, anti-capitalist camp that I mentioned, but I can see reason to put them on the same hierarchical level.
Most of the problems with AI are with it accelerating the already ongoing effects of capitalism.