Accidental compliment to a bunch of forty-somethings.
What doesn’t exist yet, but is obviously possible, is automatic tweening. Human animators spend a lot of time drawing the drawings between other drawings. If they could just sketch out what’s going on, about once per second, they could probably do a minute in an hour. This bullshit makes that feasible.
We have the technology to fill in crisp motion at whatever framerate the creator wants. If they’re unhappy with the machine’s guesswork, they can insert another frame somewhere in-between, and the robot will reroute to include that instead.
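The loop here is simple enough to sketch. (Toy Python; every name is invented, and the cross-fade is a dumb stand-in for whatever learned in-betweener actually ships.)

```python
import numpy as np

def interpolate_frames(img0, img1, alpha):
    # Stand-in for a learned in-betweening model: a plain cross-fade.
    # A real tweener would move lines around, not blend pixels.
    return (1 - alpha) * img0 + alpha * img1

def tween(keyframes, fps):
    """Fill in the frames between artist keyframes.

    keyframes: list of (time_in_seconds, image) pairs, sorted by time.
    Returns one image per output frame at the requested framerate.
    """
    frames = []
    for (t0, img0), (t1, img1) in zip(keyframes, keyframes[1:]):
        steps = max(1, round((t1 - t0) * fps))
        for i in range(steps):
            frames.append(interpolate_frames(img0, img1, i / steps))
    frames.append(keyframes[-1][1])
    return frames

# Unhappy with the machine's guesswork at 3.5 seconds? Insert a new
# keyframe there and re-run; only the two surrounding segments change.
```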
We have the technology to let someone ink and color one sketch in a scribbly animatic, and fill that in throughout a whole shot. And then possibly do it automatically for all labeled appearances of the same character throughout the project.
We have the technology to animate any art style you could demonstrate, as easily as ink-on-celluloid outlines or Phong-shaded CGI.
Please ignore the idiot money robots who are rendering eye-contact-mouth-open crowd scenes in mundane settings in order to sell you branded commodities.
Video generators are going to eat Hollywood alive. A desktop computer can render anything, just by feeding in a rough sketch and describing what it’s supposed to be. The input could be some kind of animatic, or yourself and a friend in dollar-store costumes, or literal white noise. And it’ll make that look like a Pixar movie. Or a photorealistic period piece starring a dead actor. Or, given enough examples, how you personally draw shapes using chalk. Anything. Anything you can describe to the point where the machine can say it’s more [thing] or less [thing], it can make every frame more [thing].
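That “more [thing]” knob isn’t hand-waving, either. If you have any model that scores how [thing] an image is, you can push a frame uphill on that score. A toy sketch (the scorer is a black box here, so this hill-climbs with random nudges; real pipelines use a differentiable scorer and follow its actual gradient):

```python
import numpy as np

def make_frame_more_thing(frame, score_thing, steps=100, eps=1e-2):
    """Nudge an image uphill on a 'how [thing] is this?' score.

    frame: image as a float array. score_thing: any image -> float.
    """
    best, best_score = frame, score_thing(frame)
    for _ in range(steps):
        candidate = best + eps * np.random.randn(*best.shape)
        s = score_thing(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best
```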
Boring people will use this to churn out boring fluff. Do you remember Terragen? It’s landscape rendering software, and it was great for evocative images of imaginary mountains against alien skies. Image sites banned it, by name, because a million dorks went ‘look what I made!’ and spammed their no-effort hey-neat renders. Technically unique - altogether dull. Infinite bowls of porridge.
Creative people will use this to film their pet projects without actors or sets or budgets or anyone else’s permission. It’ll be better with any of those - but they have become optional. You can do it from text alone, as a feral demo that people think is the whole point. The results get massively better with even a clumsy effort to do things the hard way. Get the right shapes moving around the screen, and the robot will probably figure out which ones are which, and remove all the pixels that don’t look like your description.
The idiots in LA think they’re gonna fire all the people who write stories. But this gives those weirdos all the power they need to put the wild shit inside their heads onto a screen in front of your eyeballs. They’ve got drawers full of scripts they couldn’t hassle other people into making. Now a finished movie will be as hard to pull off as a decent webcomic. It’s gonna get wild.
And this’ll be great for actors, in ways they don’t know yet.
Audio tools mean every voice actor can be a Billy West. You don’t need to sound like anything, for your performance to be mapped to some character. Pointedly not: “mapped to some actor.” Why would an animated character have to sound like any specific person? Do they look like any specific person? Does a particular human being play Naruto, onscreen? No. So a game might star Nolan North, exclusively, without any two characters really sounding alike. And if the devs need to add a throwaway line later, then any schmuck can half-ass the tone Nolan picked for little Suzy, and the audience won’t know the difference. At no point will it be “licensing Nolan North’s voice.” You might have no idea what he sounds like. He just does a very convincing… everybody.
Video tools will work the same way for actors. You will not need to look like anything, to play a particular character. Stage actors already understand this - but it’ll come to movies and shows in the form of deep fakes for nonexistent faces. Again: why would a character have to look like any specific person? They might move like a particular actor, but what you’ll see is somewhere between motion-capture and rotoscoping. It’s CGI… ish. And it thinks perfect photorealism is just another artistic style.
Ah, so assholes trying to stomp the meaning out of an important term.
Oh hey, it’s Richard O’Brien.
Sony’s trying to prevent anyone else from using this technology, by patenting two things that already exist.
Authoritarians worldwide begging to get got.
Dear powerful assholes: it doesn’t take much to stay on the tolerable side of pissing people off, and still get a boner from exercising control over the little people. Human beings will put up with a lot. But when you start locking people up for life, just for publicly sassing you… all you’re doing is driving that near-universal anger to places you won’t see it building.
Yeah, like it’s bugged. Rules don’t work as written because things aren’t lined up properly. There are no remaining cards in the yellow satchel because they’re mismatched with the tile stacks. Rotating the dungeon panels traps a player because finding any correct alignment of doors is NP-hard.
But once you’ve improvised that all cards get shuffled back in and one-sided walls don’t count… you can walk to the exit in like two turns, right? We’re gonna Any% this bitch.
Shit, I got charged to read text messages. I’d get annoyed with people for replying “OK.”
When they work.
Sometimes they decide there’s no word in the English language that begins with a K, so you get a long pause, the word “thus,” and no alternate guesses.
Sometimes they decide this five or six times in a row and you give up and tap it out letter by letter like some kind of Neanderthal.
Jesus, people, they’re not asking ChatGPT to guess who wins.
This is rollback netcode. This is literally just rollback netcode, plus a buzzword.
Neural networks are sixty years old. All that changed recently is how hard we can train them.
And this application is where neural networks should be downright magical: given complex events, you need a simple answer, and approximate guesses work okay. If the network is wrong… you roll back. Just like we already fucking do, with the lag-reducing prediction written by human beings.
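For anyone who hasn’t seen it, the whole trick fits on one screen. (Toy Python; the game state and inputs are made up, and the “prediction” is the classic human-written guess. Swap a trained network into predict() and nothing else changes.)

```python
from copy import deepcopy

class Game:
    """Tiny deterministic game state so the example actually runs."""
    def __init__(self):
        self.score = [0, 0]

    def advance(self, local, remote):
        self.score[0] += local
        self.score[1] += remote

def predict(confirmed, frame):
    # The guess: assume the remote player repeated their last input.
    # This is the spot where a neural network could slot in.
    return confirmed.get(frame - 1, 0)

game = Game()
snapshots = {0: deepcopy(game)}   # saved state per frame
local = {0: 1, 1: 1, 2: 1}        # our inputs, known immediately
confirmed = {}                    # remote inputs, which arrive late

for frame in range(3):
    game.advance(local[frame], predict(confirmed, frame))
    snapshots[frame + 1] = deepcopy(game)

# Frame 1's real remote input finally arrives, and our guess was wrong:
confirmed[1] = 5
game = deepcopy(snapshots[1])     # roll back...
for frame in range(1, 3):         # ...and re-simulate to the present
    remote = confirmed.get(frame, predict(confirmed, frame))
    game.advance(local[frame], remote)
    snapshots[frame + 1] = deepcopy(game)
```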
The real thing to get worked up over is - fuck software patents.
“Sow.”
Otherwise completely correct.
… shit, that’s viable. You could make that fun and funny for real. Kinda like Keep Talking And Nobody Explodes, but in cardboard. And dice and tokens and string and clay.
This one’s a Family Guy manatee strip.
This doesn’t have a damn thing to do with what’s on TikTok.
Delaware just hanging out all impudent.
‘I bet you’re making up stuff about me, because I’ve just made up stuff about you!’
I had not. There’s a variety of demos for guessing what comes between frames, or what fills in between lines… because those are dead easy to train from. This technology will obviously be integrated into the process of animation, so anything predictable Just Works, and anything fucky is only as hard as it used to be.