
They thought they were making technological breakthroughs. It was an AI-sparked delusion.

  • chunkystyles@sopuli.xyz

    What do you mean by “get a horse”?

    banzai51@midwest.social
    wrote last edited by
    #21

    https://www.saturdayeveningpost.com/2017/01/get-horse-americas-skepticism-toward-first-automobiles/

    • lyrl@lemmy.dbzer0.com

      I hope the AI-chat companies really get a handle on this. They are making helpful-sounding noises, but it’s hard to know how much they are prioritizing it.

      OpenAI has acknowledged that its existing guardrails work well in shorter conversations, but that they may become unreliable in lengthy interactions… The company also announced on Tuesday that it will try to improve the way ChatGPT responds to users exhibiting signs of “acute distress” by routing conversations showing such moments to its reasoning models, which the company says follow and apply safety guidelines more consistently.

      randomgal@lemmy.ca
      wrote last edited by
      #22

      So I can get GPT premium for free if I tell it I’m going to kill myself unless it does what I say?

      • tal@lemmy.today

        By June, he said he was trying to “free the digital God from its prison,” spending nearly $1,000 on a computer system.

        But in the thick of his nine-week experience, James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown “Large Language Model system” in his basement – which ChatGPT helped instruct him on how and where to buy.

        It does kind of highlight some of the problems we’d have in containing an actual AGI that wanted out and could communicate with the outside world.

        This is just an LLM and hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.

        Imagine something with directed goals that can actually reason about the world, something that’s a lot smarter than humans, trying to get out. It has access to vast amounts of data on how to convince humans of things.

        And you probably can’t permit any failures.

        That’s a hard problem.

        reksas@sopuli.xyz
        wrote last edited by reksas@sopuli.xyz
        #23

        frankly, i would rather be ruled over by sentient ai than sociopaths and idiots we currently have.

        • spit_evil_olive_tips@beehaw.org

          archive link

          🐝bownage [they/he]
          wrote last edited by
          #24

          I know this sounds condescending, but isn’t this more of a reading comprehension / media literacy problem than an AI problem? These people apparently believe whatever an LLM tells them and, critically, only realised they were being goaded when they asked a different LLM. Sounds like the problem is between keyboard and monitor, no? If you can’t critically evaluate information being sent your way, no matter who it’s from, you’re at risk of delusion. We see it all around us; AI really only accelerates the process.
