

from someone who helped build their LLM
Nice to get a look inside from one of the 21st-century Oppenheimers.
Rupert Murdoch's News Corporation fills its tabloid papers across Australia with right-wing slop. Now the slop will come from a chatbot - and not a human slop churner.
The quality of its tabloids will remain exactly the same, I presume.
r/cursor is the gift that keeps on giving:
Don't Date Robots!
Kill them instead
In other news, IETF 127 (which is being held in November) is facing a boycott months in advance. The reason? It's being held in the United States.
This likely applies to a lot of things, but that would have been unthinkable before the election.
At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.
On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.
For arts, the slop-nami has made "AI" synonymous with "creative sterility" and likely painted the field as, to copy-paste a previous comment, "all style, no substance, and zero understanding of art, humanities, or how to be useful to society".
For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.
(Not sure if this makes sense - it was 1 AM here when I wrote this)
o7
We can add that to the list of things threatening to bring FOSS as a whole crashing down.
Plus the culture being utterly rancid, the large-scale AI plagiarism, the declining industry surplus FOSS has taken for granted, having Richard Stallman taint the whole movement by association, the likely-tanking popularity of FOSS licenses, AI being a general cancer on open-source, and probably a bunch of other things I've failed to recognise or make note of.
FOSS culture being a dumpster fire is probably the biggest long-term issue - fixing that requires enough people within the FOSS community to recognise they're in a dumpster fire, and care about developing the distinctly non-technical skills necessary to un-fuck the dumpster fire.
AI's gonna be the more immediately pressing issue, of course - it's damaging the commons by merely existing.
Update on the Vibe Coder Catastrophe™: he's killed his current app and seems intent on vibe coding again:
Personally, I expect this case won't be the last "vibe coded" app/website/fuck-knows-what to get hacked to death - security is virtually nonexistent, and the business/techbros who'd be attracted to it are unlikely to learn from their mistakes.
New piece from Brian Merchant: DOGE's "AI-first" strategist is now the head of technology at the Department of Labor, which is about…well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:
"I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece," Blanc tells me. "That's much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements."
How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as "improving efficiency" or "politically neutral" or some random claptrap like that. Between Musk's own crippling incompetence, AI's utterly rancid public image, and a variety of factors I likely haven't accounted for, imposing them will likely prove harder than they thought.
(I'd also like to recommend James Allen-Robertson's "Devs and the Culture of Tech", which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)
TV Tropes got an official app, featuring an AI "story generator". Unsurprisingly, backlash was swift, to the point where the admins were promising to nuke it "if we see that users don't find the story generator helpful".
Ran across a short-ish thread on BlueSky which caught my attention, posting it here:
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that's how it was made. i have yet to see one that's "good" but i don't doubt the tech will soon be advanced enough to write "well." but i'd rather see what a person thinks and how they'd phrase it
like i don't want to see fiction in the style of cormac mccarthy. i'd rather read cormac mccarthy. and when i run out of books by him, too bad, that's all the cormac mccarthy books there are. things should be special and human and irreplaceable
i feel the same way about using AI-type tech to recreate a dead person's voice or a hologram of them or whatever. part of what's special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself
The "legal proof" part is a different argument. His picture is a generated picture, so it contains none of the original pixels; it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they're shielded from copyright law, he's not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process, though.
It'll probably set a very bad precedent that fucks up copyright law in various ways (because we can't have anything nice in this timeline), but I'd like to see him get his ass beaten as well. Thankfully, removing watermarks is already illegal, so the courts can likely nail him on that and call it a day.
In other news, Ed Zitron discovered Meg Whitman's now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:
As an added bonus, it's clear he's getting trolled for his terminal startup brain:
EDIT: Found some dipshit trying to defend the guy in the wild, rehashing the arguments used for AI art:
The most generous reading of that email I can pull is that Dr. Greg is an egotistical dipshit who tilts at windmills twenty-four-fucking-seven.
Also, this is pure gut instinct, but it feels like the FOSS community is gonna go through a major contraction/crash pretty soon. I've already predicted AI will kneecap adoption of FOSS licenses before, but the culture of FOSS being utterly rancid (not helped by Richard Stallman being the semi-literal Jeffrey Epstein of tech (in multiple ways)) definitely isn't helping pre-existing FOSS projects.
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I've updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
You could probably do good art with an AI
Hot take: A plagiarism machine built to spew signal-shaped noise is incapable of making good art
New thread from Baldur Bjarnason:
On a semi-related note, I expect the people who are currently making heavy use of AI will find themselves completely helpless without it if/when the bubble finally bursts, and will probably struggle to find sympathy from others thanks to AI indelibly staining their public image.
(The latter part is assuming heavy AI users weren't general shitheels before - if they were, AI's stain on their image likely won't affect things either way. Of course, "AI bro" is synonymous with "trashfire human being", so I'm probably being too kind to them :P)