• 2 Posts
  • 784 Comments
Joined 2 years ago
Cake day: July 9th, 2023



  • You cannot meaningfully delete your posts or comments. That’s not because of any issue with Lemmy, but because you posted them publicly. They will be archived and indexed by other services. It is always best to remember that all your activity here is public, and will be linked to your username. Given that, you may wish to minimise any personally identifying information you post, and use several accounts to split up your activities by topic.

    I don’t believe other instances will receive any sort of deletion request from lemm.ee; it’ll just go away. Any instances that have a copy of your posts or comments will keep them for as long as they wish. Even if you manually delete the comments and lemm.ee federates that action out, there is no guarantee that any other server will actually act on it, and you can be certain that archiving services will not.

    Ultimately it comes down to the fact that you posted, commented or voted in public, and that will remain public indefinitely. That’s as much down to how public data is captured by other entities as it is to the concept of federation (see the sketch below).
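    To make that concrete, here’s a rough, hypothetical sketch of what handling a federated deletion could look like on a receiving server. This is not Lemmy’s actual code; the handler, the store object and the policy flag are all invented for illustration. The point is simply that the originating instance can only broadcast a Delete activity; whether anyone acts on it is each receiver’s own decision.

```python
# Hypothetical receiving-instance handler -- illustrative only, not Lemmy's
# real implementation. `store` and its methods are invented for this sketch.
HONOUR_REMOTE_DELETES = True  # every instance picks its own policy

def handle_incoming_activity(activity: dict, store) -> None:
    """Process one federated ActivityPub activity."""
    if activity.get("type") != "Delete":
        return  # other activity types are handled elsewhere

    obj = activity.get("object")
    object_id = obj.get("id") if isinstance(obj, dict) else obj

    if HONOUR_REMOTE_DELETES:
        store.remove(object_id)                 # this server chooses to comply...
    else:
        store.mark_deleted_upstream(object_id)  # ...or just keeps its copy anyway
```

    An archiving service sits even further outside that loop: it never processes deletions at all, so whatever it has already captured stays captured.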






  • notabot@lemm.ee to 196@lemmy.blahaj.zone · existentialism rule · 18 days ago

    Not at all, but I’d say we don’t really remember ancient kings either. We might remember the effect they had on the world, or some particularly unusual characteristic that was recorded for posterity, but I’d say that once the last person who knew them dies, we can no longer remember ‘them’, so much as witness a sort of ‘shell’ of ideas about them.

    We don’t remember what they sounded like, or smelt like, how they smiled or what they said to their nearest and dearest. We don’t really know much about them as people compared to the king that became their shell. The things that made them unique people are gone when the last person who experienced them dies, so I’d say we really don’t remember them as people, even if we do remember the ‘king’ or ‘copper merchant’.






  • 01000001 00100000 01110011 01110100 01100101 01100001 01100100 01111001 00100000 01101000 01100001 01101110 01100100 00100000 01100001 01101110 01100100 00100000 01100001 00100000 01101101 01100001 01100111 01101110 01100101 01110100 01101001 01110011 01100101 01100100 00100000 01101110 01100101 01100101 01100100 01101100 01100101 00101110



  • To be fair, it would slightly reduce the chance of the user really messing it up. If they remove the screen too we might finally have a user-proof computer. No more “I’ve forgotten my password”, no more “I put the internet in the trash can and now I can’t find it”, and no more “I didn’t do anything (they absolutely did) and now it doesn’t work, this must be your fault. (As the local ‘techie’ it’s not my fault, but it probably is my problem). Fix it!! (sigh…)”




  • Could you let me know what sort of models you’re using? Everything I’ve tried has basically been so bad it was quicker and more reliable to do the job myself. Most of the models can barely write boilerplate code accurately and securely, let alone anything even moderately complex.

    I’ve tried to get them to analyse code too, and that’s hit and miss at best, even with small programs. I’d have no faith at all that they could handle anything larger; the answers they give would be confident and wrong, which is easy to spot with something small, but much harder to catch with a large, multi-process system spread over a network. It’s hard enough for humans, who have actual context, understanding and domain knowledge, to do it well, and I personally haven’t seen any evidence that an LLM (which is what I’m assuming you’re referring to) could do anywhere near as well. I don’t doubt that they’d flag some issues, but without a comprehensive human review of the system architecture, implementation and code, you can’t be sure what they’ve missed, and if you’re going to do that anyway, you’ve done the job yourself!

    Having said that, I’ve no doubt that things will improve; programming languages have well-defined syntaxes, so they should be some of the easiest types of text for an LLM to parse and build a context from. If that can be combined with enough domain knowledge, a description of the deployment environment and a model that’s actually trained and tuned for code analysis and security auditing, it might be possible to get similar results to humans.
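    None of this is how an LLM works internally, but as a toy illustration of just how mechanically parseable source code already is, here’s a short sketch using Python’s stock ast module to flag one classic “needs context” finding. The sample source and the check are made up for the example; the interesting judgement (is this actually a problem here?) is exactly the domain-knowledge part a reviewer still has to supply.

```python
# Toy static check: parsing the code is the easy, fully deterministic part.
import ast

SOURCE = """
import subprocess

def run(cmd):
    return subprocess.run(cmd, shell=True)
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Call):
        for kw in node.keywords:
            # shell=True isn't automatically wrong -- whether it matters depends
            # on where cmd comes from, which the parser can't know.
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                print(f"line {node.lineno}: call with shell=True; is the input trusted?")
```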


  • I’m unlikely to do a full code audit, unless something about it doesn’t pass the ‘sniff test’. I will often go over the main code flows, the issue tracker, mailing lists and comments, positive or negative, from users on other forums.

    I mean, if you’re not doing that, what are you doing, just installing it and using it??!? Where’s the fun in that? (I mean this at least semi-seriously: you learn a lot about the software you’re running if you put in some effort to learn about it.)


  • ‘AI’, as we currently know it, is terrible at this sort of task. It’s not capable of understanding the flow of the code in any meaningful way, and tends to raise entirely spurious issues (see the problems the curl author has had with being overwhelmed by them, for example). It also won’t spot actually malicious code that’s been included with any sort of care, nor would it find intentional behaviour that would be harmful or counterproductive in the particular scenario in which you want to use the program.