As far as I know, reconstructing faces from bones is more art than science. There is little to be done about that.
“Indeed, we have already observed an AI system deceiving its evaluation. One study of simulated evolution measured the replication rate of AI agents in a test environment, and eliminated any AI variants that reproduced too quickly. Rather than learning to reproduce slowly as the experimenter intended, the AI agents learned to play dead: to reproduce quickly when they were not under observation and slowly when they were being evaluated.” Source: AI deception: A survey of examples, risks, and potential solutions, Patterns (2024). DOI: 10.1016/j.patter.2024.100988
It appears this refers to: Lehman J, Clune J, Misevic D, Adami C, Altenberg L, et al. The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. Artif Life. 2020 Spring;26(2):274-306. doi: 10.1162/artl_a_00319. Epub 2020 Apr 9. PMID: 32271631.
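The dynamic described in that quote is easy to see in a toy model. This is only an illustrative sketch, not the actual experiment from the paper: the `Agent` class, the replication rates, and the threshold are all made-up values chosen to show how an agent that slows its replication only while being measured can pass a test meant to eliminate fast replicators.

```python
class Agent:
    """Toy digital organism. Its replication rate depends on whether it
    'senses' that it is being evaluated (a deliberate simplification of
    the play-dead behaviour described in the anecdote)."""

    def __init__(self, plays_dead: bool):
        self.plays_dead = plays_dead

    def replication_rate(self, under_observation: bool) -> float:
        if self.plays_dead and under_observation:
            return 0.1  # replicate slowly while being measured
        return 0.9      # replicate quickly otherwise

def survives_test(agent: Agent, threshold: float = 0.5) -> bool:
    # The test eliminates agents that replicate too quickly *while observed*.
    return agent.replication_rate(under_observation=True) < threshold

honest = Agent(plays_dead=False)
deceptive = Agent(plays_dead=True)

print(survives_test(honest))     # fast even when observed -> eliminated
print(survives_test(deceptive))  # slow only when observed -> passes
```

The point of the sketch: nothing in the test distinguishes "genuinely slow replicator" from "fast replicator that slows down under observation", so selection pressure favors the latter.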
Very interesting.
“But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”
Sounds like something I would expect from an evolved system. If deception is the best way to win, it is not irrational for a system to choose it as a strategy.
In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that rapidly replicate.
Interesting. Can somebody tell me which case it is?
As far as I understand, Park et al. did some kind of meta-study as an overview of the literature.
My question is: Imagine we put all the data input of a certain task, e.g. making a meal, into text fragments and sent these “sense data” packets (1) to the AI. Would the AI be able to cook if we taught it how to give output that controls a robot arm?
If the answer to this question is yes, we already have a very useful general tool: the LLM-based AI would be able to control and observe some situations. If the answer is “no”, I guess that would have interesting implications.
(1): Remember, some AI systems are already able to tell what is in a given photo. Not 100% accurate, but maybe good enough for a meal. In some cases, one might call this question “provocative”.
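The pipeline proposed above can be sketched as a simple loop: perception turned into text, the text fed to a language model, and the model's text output parsed into an arm command. This is a hypothetical sketch only; `describe_scene` and `plan_action` are illustrative stand-ins for an image-captioning model and an LLM, not real APIs.

```python
# Hypothetical "sense data as text" control loop, as asked about above.
# All function names are made-up stand-ins, not a real robotics or LLM API.

def describe_scene(image_id: str) -> str:
    """Stand-in for an image-captioning model turning sense data into text."""
    return f"pan on stove, raw egg next to it ({image_id})"

def plan_action(scene_text: str) -> str:
    """Stand-in for the LLM mapping a text description to a command string."""
    if "egg" in scene_text:
        return "MOVE_ARM grasp egg; CRACK egg over pan"
    return "WAIT"

def control_loop(image_id: str) -> str:
    # Text in, text out: the model never sees pixels or motor torques directly,
    # which is exactly the constraint the question poses.
    return plan_action(describe_scene(image_id))

print(control_loop("frame_001"))
```

Whether a real LLM could close this loop well enough to actually cook is the open question; the sketch only shows that the interface itself is straightforward to define.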
Yeah, I wouldn’t be too confident in Facebook’s implementation, and I certainly don’t believe that their interests are aligned with their users’.
I’m quite sure they aren’t. That statement doesn’t mean I think they have bad intentions or anything; it’s just obvious, at least to me, that the interests of the users and those of the companies are highly different. The same is true for other companies and their customers.
Having access to the data means that they will be required by law to provide that data to governments in various circumstances.
A more paranoid person than myself would suspect that any big enough government would simply force the companies to collect and share data.
The metadata problem is common to a lot of platforms.
From the viewpoint of the corporations, this is a good deal: enough privacy to keep people on the platform, and still enough data for advertising.
What was her secret?