How can I have a serious conversation with these annoying answers? Come on, you know what I am talking about. Even an AI chatbot would know what I mean.
Any AI chatbot, even a “general purpose” one, will read your code and return a description of what it does if you ask it.
And AI in particular would be great at catching “useless”, “weird”, or unexplainable code in a repository. Maybe not with current context limits. But that’s what I want to know: do these tools (or anything similar) exist yet?
Questions about AI seem to always bring out these naysayers. I can only assume they feel threatened? You see the same tedious fallacies again and again:
AI can’t “think” (using some arbitrary and unstated definition of the word “think” that just so happens to exclude AI by definition).
They’re stochastic parrots and can only reproduce things they’ve seen in their training set (despite copious evidence to the contrary).
They’re just “next word predictors”, so they are fundamentally incapable of doing X (where X is a thing they have already done).