I have an idea. I can’t tell if it’s good or bad. Let me know what you guys think.

I think when someone posts “clone credit cards HMU for my telegram I know you’re just here sitting here waiting like gee I wish someone would post me criminal scammy get rich quick schemes, I can’t want to have a felony on my record” type spam, there should be a bot the mods can activate that will start sending messages to the person’s telegram or whatever, pretending to be interested in cloned credit cards.

It wouldn’t be that hard to make one that would send a little “probe” message to make sure it was a for-real scammer, and then if they respond positively, then absolutely flood them with thousands of interested responses. Make it more or less impossible for them to sort the genuine responses from the counter-spam, waste their time, make it not worth their while to come and fuck up our community. And if they lose their temper it can save some of the messages and post them to some sort of wall of victory.

What do people think?

  • @betterdeadthanreddit@lemmy.world
    link
    fedilink
    English
    4010 months ago

    Search around for “scambaiting”, there are people doing similar things ranging from tying up (figuratively, unfortunately) telephone scammers for hours with pointless conversations to more tech-based efforts. Kitboga on YouTube is a good place to start, he usually just sits on the phone with scammers and takes them through some wild scenarios but has some videos showing tests of an automated system that uses an LLM to interpret and respond.

    • 0x0A
      link
      English
      3310 months ago

      I started doing this after my phone number got into some kind of crazy scam call database. At the height of it I was getting somewhere in the neighborhood of 30 calls a day, basically rendering the phone unusable.

      So I started actually picking up and running these jokesters through the ringer. I’m talking 15+ minutes of pointless conversation, false info, tons of backtracking, and general bullshit. I refined my craft over a few months and would time how quickly I could make them scream at me for fucking with them. At the end of it my phone got taken off at least one of the bigger lists because the calls went to down only 10 or so a day. Now it’s one or two a day at the most, probably from me not answering like I used to.

      Favorite call was one guy who figured out I was messing with him and it turned into this general question and answer thing about life in the US. Dude wanted to know how easy it was to pick up chicks and whether or not I’d dated a lot/was married. Great guy, really into Counter-Strike.

    • KptnAutismus
      link
      fedilink
      English
      810 months ago

      kitboga’s AI thingymajig was hilarious. wastes so much time on the scammer’s end, and requires almost no effort on the user’s side.

      • @auk@slrpnk.netOP
        link
        fedilink
        English
        1310 months ago

        My absolute favorite is the one where to redeem their money from the transfer agency, the scammers have to navigate through a labyrinthine phone tree maze that never leads anywhere. He releases them to wander their way through it and just keeps statistics on how long they spend.

        https://www.youtube.com/watch?v=dWzz3NeDz3E

        He ran into someone who had dealt with it before, and started talking about transferring money through this system and the guy started protesting and sounded so defeated. “Oh, it’s so easy,” he says, and the guy sounds just purely defeated and horrified as he says “No, no ma’am, I do not think it is easy…”

  • @MystikIncarnate@lemmy.ca
    link
    fedilink
    English
    2010 months ago

    If you’ve thought of it, they’ve thought of it. Plainly, there are already scam bots floating around, most of the time engaging with them makes it quite clear that they are not actual people, as long as you’re paying attention. Their side oftentimes is completely automated. Get paid send info. The “lifelike” messages they send are canned and only vary slightly from message to message.

    I swear, we’ll implement bots to “combat” this stuff and it won’t do anything because it will largely just be bots talking to bots forever. There’s already a nontrivial amount of internet bandwidth consumed by spam email that just gets thrown away as it arrives, now, more and more resources are going to be poured into having bots talk at eachother for centuries without getting anywhere.

    • @intensely_human@lemm.ee
      link
      fedilink
      English
      210 months ago

      Bots talking to bots is what alien explorers will find here. Except they won’t see the bots as the individuals. They’ll see the internet as one mind.

      • @MystikIncarnate@lemmy.ca
        link
        fedilink
        English
        210 months ago

        But if the scammer is using a bot too, then it becomes a null sum, since the bot can have thousands of conversations at a time.

        Spam bots should be taken down more than engaged with. If there’s a real scammer on the other end, yes, absolutely, waste that person’s time as much as you can, and as much as you like. People have made entire careers out of trolling them and I endorse it. Scammers are the worst people, taking the hard earned money of his people to try to convince them of a lie just to get their money. This is sometimes true with normal sales, caveat emptor and all that, but when the entire premise of the interaction is based on deception, then to me, it crosses over into scam territory (looking at you, entire duct cleaning industry).

        Wasting time making a bot to talk to spam bots is not very helpful. If you can identify that they are not properly filtering their inputs, I would invite you to use an SQL injection and talk to them about little Bobby tables. But by using a bot of your own to talk to spam bots will have such a negligible impact on the harm that scammers have that it’s basically not worth doing. Unless you can scale up your bot to the point of overwhelming the scammers bot into disfunction, it’s not going to provide any real help to those currently being scammed by the bot. Scaling up to the point of getting the bot to malfunction, is also something I would approach with caution, since you have no way of knowing what that limit is, and in the case of cloud systems, the capabilities of the bot may scale far and above what any attack against them could reasonably produce.

        If they’re using cloud resources and you can verify that, then there’s a good chance you can hit them financially if you push their bot to its limits since cloud compute resources are not cheap. If you can generate enough traffic for them that the bot scales up significantly, then yeah, you may be successful in forcing the scammer paying for that to shut it down. The trick is doing so without incurring significant costs yourself. It’s still likely, however, that the scammer will simply abandon it (and not pay their bill), and restart the whole thing again later with a new telegram/whatever chat system account later that you won’t be able to track down in a reasonable timeframe.

        So it’s somewhat insane to try, it’s easy for them to change the bot to avoid your usage attack and difficult for you to keep track of them and which account they’re using now.

        We need to make it globally illegal to run these kinds of remote scam operations, and strongly prosecute anyone doing it. Their ill gotten gains need to be confiscated and sent back to their victims (as much of it as possible), and they should be imprisoned for a very long time.

        As far as I’m concerned, this is the way. This is the only way. Legal reprocussions with strong penalties and strong law enforcement of those legalities is the only way to ensure that we crush this trend permanently. Most countries, even those where we see a lot of scamming coming from, have laws that enforce against scams; but the enforcement is very spotty, and IMO, the ramifications of being caught are far too light.

        Right now, most civilians don’t really have any good recourse beyond ignoring it. Scambaiters are pretty common and they’re doing good work, even working with law enforcement to get these scammers behind bars, but even that falls far short of the action required to stop such things from continuing to happen. We need strong legislation agreed upon across international boundaries with full task forces to find and prosecute these assholes; we don’t have that, and so it continues.

    • haui
      link
      fedilink
      English
      810 months ago

      I just immediately thought the same. No way would they be able to distinguish that from a real person.

      • @0x4E4F@lemmy.dbzer0.com
        link
        fedilink
        English
        610 months ago

        You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.

        • @CrayonRosary@lemmy.world
          link
          fedilink
          English
          11
          edit-2
          10 months ago

          You can preface a ChatGPT session with instructions on what length and verbosity you want as replies. Tell it to roleplay or speak in short text message like replies. Or hell, speak in haikus. It’s pretty clever for an LLM.

          And if someone’s writing code to make a bot, they can privately coach the LLM before they start forwarding any replies between the real person.

            • Deebster
              link
              fedilink
              English
              5
              edit-2
              10 months ago

              No, you don’t need to train it, it’s just about the prompt you feed it. You can (and should) add quite a lot of instructions and context to your questions (prompts) to get the best out of it.

              “Prompt engineer” is a job/skill for this reason.

              • @intensely_human@lemm.ee
                link
                fedilink
                English
                210 months ago

                My default instruction that seems to get just about the right tone includes:

                Speak to me like you’re my executive assistant, and we’re in a brief meeting we’ve had daily for many years

                So instead of me saying

                Is there any way to get mayonnaise out of a jar without using my hands

                Instead of

                It’s fun and rewarding to get mayonnaise out of jar without using your hands. [blah blah blah blog post article sales pitch blah blah 400 words blah]

                Instead I get:

                • Kick the jar
                • Use your long proboscis-like tongue
                • Hire someone
                • Deebster
                  link
                  fedilink
                  English
                  210 months ago

                  It’s weird how well making it roleplay works. A lot of the “breaks” of the system have been just by telling it to act in a different way, and the newest, best versions have various experts simulated that combine to give the best answer.

        • @poweruser@lemmy.sdf.org
          link
          fedilink
          English
          410 months ago

          I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol

          • @0x4E4F@lemmy.dbzer0.com
            link
            fedilink
            English
            1
            edit-2
            10 months ago

            Short replies and sentences is the way to go with LLMs. They get too polite if you leave them to their devices. It’s in their “nature”, they’re designed to please.

          • @Mirodir@discuss.tchncs.de
            link
            fedilink
            English
            110 months ago

            Yeah, I’ve noticed that too—there’s a distinct ‘AI vibe’ that comes through in the generated responses, even if it’s subtle.

            • @Mirodir@discuss.tchncs.de
              link
              fedilink
              English
              110 months ago

              That was a response I got from ChatGPT with the following prompt:

              Please write a one sentence answer someone would write on a forum in a response to the following two posts:
              post 1: “You sure? If it’s another bot at the other end, yeah, but a real person, you recognize ChatGPT in 2 sentences.”
              post 2: “I was going to disagree with you by using AI to generate my response, but the generated response was easily recognizable as non-human. You may be onto something lol”

              It’s does indeed have an AI vibe, but I’ve seen scammers fall for more obvious pranks than this one, so I think it’d be good enough. I hope it fooled at least a minority of people for a second or made them do a double take.

        • @kakes@sh.itjust.works
          link
          fedilink
          English
          210 months ago

          Nah, not really! I’ve chatted with people using ChatGPT, and most couldn’t tell. It’s pretty slick, blends in well with natural conversation.

          • @0x4E4F@lemmy.dbzer0.com
            link
            fedilink
            English
            3
            edit-2
            10 months ago

            Most… you’re talking about the average Joe. People that write spam bots are not your average Joe.

            Plus, if you’re talking about a chat with multiple people, yes, it might stay under the radar. But 1 on 1, probably not.

            • @kakes@sh.itjust.works
              link
              fedilink
              English
              210 months ago

              Well, fair point about the spam bot creators, but in my experience, even in one-on-one chats, it holds up. I’ve had some pretty smooth conversations without anyone suspecting it’s AI.

                • @kakes@sh.itjust.works
                  link
                  fedilink
                  English
                  310 months ago

                  This conversation is a small example. My previous messages in this comment chain were generated by ChatGPT.

                  I’m too lazy to keep that up indefinitely, but at this point you can decide for yourself whether it was convincing enough.

  • gregorum
    link
    fedilink
    English
    15
    edit-2
    10 months ago

    there are youtubers who fight back against scammers, some who have been doing so for years. some of the best ones are also hackers, and have managed, through not only their own work, but with assistance from others, and over time and working with local authorities in India, have actually managed to take down more than a few scam centers. problem is, when you take one down, 5 more pop up.

    still, they fight on

  • @rsolva@lemmy.world
    link
    fedilink
    English
    1310 months ago

    Yeah, I would use a bot like this on Telegram. Could hook it up to a tiny LLM (The Phi for example) and give it instructions to play along and then block after some time.

    • @auk@slrpnk.netOP
      link
      fedilink
      English
      310 months ago

      Next time you see a scammer, DM me a link and the details and I’ll see what I can do.

  • @0x4E4F@lemmy.dbzer0.com
    link
    fedilink
    English
    610 months ago

    It is a nice idea… but it involves some effort on this side to get that done, and I don’t think anyone would be interested. People on dbzer0, maybe, everyone else, I don’t think so.

          • db0
            link
            fedilink
            English
            310 months ago

            Sure, why not. Unfortunately I have enough on my plate already. However this doesn’t need someone to host. Such a bot could easily run as a script on your PC.

          • @0x4E4F@lemmy.dbzer0.com
            link
            fedilink
            English
            110 months ago

            Maybe write him a PM, I don’t think he reads posts that much, lol 😂. And just for the hosting, not coding the bot. He’s way too busy with AI Horde and other projects.

    • KptnAutismus
      link
      fedilink
      English
      2
      edit-2
      10 months ago

      don’t underestimate basement dwellers, they have way too much time on their hands. if one is sufficiently serotoninized, they might actually do it.

      • @0x4E4F@lemmy.dbzer0.com
        link
        fedilink
        English
        210 months ago

        They don’t have enough "jerk-off-to” (or whatever their social depravities) incentive might be.

        Lemmy is not at that critical point yet.