It’s not always easy to distinguish between existentialism and a bad mood.
I’m not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I’m willing to bet it’s extremely dumb.
I’m almost certain I’ve seen EY catch shit on twitter (from actual ml researchers no less) for insinuating something very similar.
To have a dead simple UI where you, a person with no technical expertise, can ask in plain language for the data you want, presented the way you want, along with some basic analysis that you can tell it to make sound important. Then you tell it to turn all that into an email in the style of your previous emails, send it, and take a 50-minute coffee break. All this allegedly with no overhead besides paying a subscription and telling your IT people to point the thing at the thing.
I mean, it would be quite something if transformers could do all that, instead of raising global temperatures to synthesize convincing looking but highly suspect messaging at best while being prone to delirium at worst.
Google pivoting to selling shovels for the AI gold rush in the form of data tools should be pretty viable if they commit to it; I hadn’t thought of it that way.
It’s a sad fate that sometimes befalls engineers who are good at talking to audiences, and who work for a big enough company that can afford to have that be their primary role.
edit: I love that he’s chief evangelist though, like he has a bunch of little google cloud clerics running around doing chores for him.
debate pervert in a reply-guy world
Well done.
There’s a bit in the beginning where he talks about how actors handling and drinking from obviously weightless empty cups ruins suspension of disbelief, so I’m assuming it’s a callback.
I kinda want to replay subnautica now.
“Manifest is open minded about eugenics and securing the existence of our people and a future for high IQ children.”
Great quote from the article on why prediction markets and scientific racism currently appear to be at one degree of separation:
Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”
Before we accidentally make an AI capable of posing an existential risk to human beings, perhaps we should first find out how to build effective safety measures.
You make his position sound way more measured and responsible than it is.
His ‘effective safety measures’ are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.
Wasn’t 1994 right about when they stopped making movies in black and white?
This has got to be some sort of sucker filter: it’s not that he particularly means it, it’s that he is after the exact type of rube who is unfazed by naked contrarianism and the categorically preposterous so long as it’s said with a straight face.
Maybe there’s something to the whole pick up artistry but for nailing VCs thing.
Honestly, the evident plethora of poor programming practices is the least notable thing about all this; using roided autocomplete to cut corners was never going to be a well-calculated decision, it’s always the cherry on top of a shit-cake.
this isn’t really even related to GenAI at all
Besides the OCR there appears to be all sorts of image-to-text metadata recorded; the Nadella demo had the journalist supposedly doing a search and getting results with terms that neither had been typed at the time nor appeared in the stored screenshots.
Also, I thought they might be doing something image-to-text-to-image-again related (which, I read somewhere, was what Bing Copilot did when you asked it to edit an image) to save space, instead of storing eleventy billion multimonitor screenshots forever.
edit - in the demo the results included screens.
Nightmare blunt rotation in the Rewind AI front page recommendations:
Also it appears to be different from Recall in that it’s a third-party app and isn’t pushed as the default in every new OS installation.
That you can jailbreak Recall and run it on non-compliant hardware seems to be the least concerning thing in that article; recommended reading.
So LLM-based AI is apparently such a dead end as far as non-spam and non-party-trick use cases are concerned that they are straight up rolling out anti-features that nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and to somewhat justify the ocean of money they are diverting that way.
At least it’s only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, are going to be their own thing under a Windows PC branding.
edit: Yud must love that instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing non-compliants we seem to instead be trending towards having special deep learning facilitating hardware integrated in every new device, or whatever NPUs actually are, starting with iPhones and so-called Windows PCs.
edit edit: the branding appears to be “Copilot+ PCs” not windows pcs.
weight classes are for wokies
This used to be a Joe Rogan staple: no weight classes, no time limits and the ring should be the size of a basketball court.
It’s really just the umpteenth reiteration of the meathead mantra of how I’d do really well in [popular combat sport] if it weren’t for those pesky rules holding me back.
It hasn’t worked ‘well’ for computers since, like, the Pentium; what are you talking about?
The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of ‘we’re still at the low-hanging-fruit stage of R&D, and it’ll stabilize as the field matures’, instead of proudly proclaiming that surely it’ll approach infinity and break reality.
There’s nothing smart or insightful about seeing a line in a graph trending upwards and assuming it’s gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community’s blurb, which you should check out.
So yeah, he thought up the Singularity which is little more than a metaphysical excuse to ignore regulations and negative externalities because with tech rupture around the corner any catastrophic mess we make getting there won’t matter. See also: the whole current AI debacle.
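The S-curve point above can be sketched numerically. Here’s a toy model (entirely my own assumption, with made-up parameters, not anything from the actual metrics under discussion): a logistic curve is nearly indistinguishable from a pure exponential during the low-hanging-fruit stage, then flattens out at a ceiling instead of approaching infinity.

```python
import math

CAP = 1000.0      # hypothetical ceiling the metric saturates at
K = 0.5           # growth rate (arbitrary)
MIDPOINT = 20.0   # time of fastest growth (arbitrary)

def logistic(t):
    # S-curve: looks exponential early on, saturates at CAP later
    return CAP / (1.0 + math.exp(-K * (t - MIDPOINT)))

def exponential(t):
    # Naive extrapolation matched to the logistic's early behavior
    return CAP * math.exp(K * (t - MIDPOINT))

# Early on, the two curves are nearly indistinguishable:
# extrapolating "exponential forever" from this stage looks reasonable.
rel_diff_early = abs(logistic(0) - exponential(0)) / exponential(0)

# Much later, the logistic has flattened out at its ceiling,
# while the naive exponential has "broken reality".
late_logistic = logistic(100)      # hugs CAP
late_exponential = exponential(100)  # astronomically large
```

The punchline is just that the early data can’t distinguish the two models, so the upward line in the graph carries no information about whether it keeps going.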