Why? That would only be the case if the original works already were the pinnacle of text quality and information density, which is quite a stretch.
You keep posting that, but it is wrong. Setting aside that disabling the installation of unsigned extensions is not censorship, you can install signed extensions from a file in every version of Firefox, not only the Developer Edition.
Stupid artificial outrage
Weird responses here so far. I’ll try to actually answer the question.
I’ve been using Copilot at work for nine months now, and it’s crazy how much it accelerates wiring code. I write Class C code in C++ and Rust, and it has become a staple tool, like auto-formatting. That being said, it can’t really do more abstract things like architecture decisions.
Just try it for a while and see if it fits your use case. I’m hoping local code models will catch up soon so I can get away from Microsoft, but until then, Copilot it is.
I’m not convinced by the claim that “a human can say ‘that’s a little outside my area of expertise’, but an LLM cannot.” I’m sure there are plenty of examples in the training data that contain qualified answers and expressions of uncertainty, so why would the model not be able to generate that kind of output? I don’t see why that specifically would require “understanding”. I would suspect that better human reinforcement would make such answers possible.
What a cold thing to say about humans.
What is an anti-tank machine gun?
Yes, and probably yes. But that doesn’t answer why they are there.
Or you know, like, deterrence.
We are Weezer and we are here to make money and sell out and stuff!
Oh you sweet summer child