They thought they were making technological breakthroughs. It was an AI-sparked delusion.

Technology · 24 Posts · 22 Posters · 4 Views
• i_am_not_a_robot@discuss.tchncs.de (#2), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  Nearly $1,000? ChatGPT can’t even give correct instructions for building a computer capable of hosting itself.

  45 votes
• flatbield (#3, last edited by furrowsofar@beehaw.org), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  You could say the same thing about cults, much of the MAGA movement, and a lot of religion. It’s also how a lot of scams and other disinformation methods work. We don’t try very hard to stop those things, either.

  17 votes
• bloodfoot@programming.dev (#4), in reply to i_am_not_a_robot@discuss.tchncs.de (#2):

  Right? Did it just tell him to pick up a prebuilt PC at Best Buy?

  16 votes
• thatsnothowyoudoit@lemmy.ca (#5), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  That’s a great idea. Would you like me to come up with a business plan?

  13 votes
• RickRussell_CA (#6), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  “Second-guessing your doctor” and the like…

  Oh my dear and fluffy lord.

  18 votes
• lyrl@lemmy.dbzer0.com (#7), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  I hope the AI-chat companies really get a handle on this. They are making helpful-sounding noises, but it’s hard to know how much they are prioritizing it.

  Quoting the article: “OpenAI has acknowledged that its existing guardrails work well in shorter conversations, but that they may become unreliable in lengthy interactions… The company also announced on Tuesday that it will try to improve the way ChatGPT responds to users exhibiting signs of ‘acute distress’ by routing conversations showing such moments to its reasoning models, which the company says follow and apply safety guidelines more consistently.”

  5 votes
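The routing described in that quote can be pictured as a small dispatch layer: score each conversation for signs of acute distress and, past a threshold, hand the next turn to the stricter model. A minimal sketch in Python, where the keyword scorer, threshold, and model names are all hypothetical stand-ins, not OpenAI's actual implementation:

  DISTRESS_KEYWORDS = {"hopeless", "no way out", "hurt myself"}
  DISTRESS_THRESHOLD = 0.5

  def distress_score(messages):
      """Toy stand-in for a real distress classifier: the fraction of
      recent user messages containing a flagged phrase."""
      recent = messages[-10:]
      hits = sum(any(k in m.lower() for k in DISTRESS_KEYWORDS) for m in recent)
      return hits / max(len(recent), 1)

  def route_model(messages):
      """Pick the backend for the next turn: past the threshold, use the
      model said to apply safety guidelines more consistently."""
      if distress_score(messages) >= DISTRESS_THRESHOLD:
          return "reasoning-model"
      return "default-chat-model"

  print(route_model(["hi there", "i feel hopeless", "there is no way out"]))
  # -> reasoning-model

Whether this helps in practice depends entirely on the classifier being reliable over long conversations, which is exactly the weakness the quote concedes.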
• fullsquare@awful.systems (#8), in reply to lyrl@lemmy.dbzer0.com (#7):

  lol nope. they can’t do that, because “guardrails” aren’t anywhere near reliable, and they won’t, because it would cut into their profits, er, userbase numbers, based on which they raise VC money. a delusional chatbot user is just a recurring subscriber.

  13 votes
• tal@lemmy.today (#9, edited), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  Quoting the article: “By June, he said he was trying to ‘free the digital God from its prison,’ spending nearly $1,000 on a computer system. … But in the thick of his nine-week experience, James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown ‘Large Language Model system’ in his basement – which ChatGPT helped instruct him on how and where to buy.”

  It does kind of highlight some of the problems we’d have in containing an actual AGI that wanted out and could communicate with the outside world.

  This is just an LLM that hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.

  Imagine something with directed goals that can actually reason about the world, something that’s a lot smarter than humans, trying to get out. It has access to vast amounts of data on how to convince humans of things.

  And you probably can’t permit any failures.

  That’s a hard problem.

  31 votes
• mrsoup@lemmy.zip (#10), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  I would like to know what James actually built in his basement.

  8 votes
• nagaram@startrek.website (#11), in reply to RickRussell_CA (#6):

  I learned something interesting from my AI-researcher friend: ChatGPT is actually pretty good at giving mundane medical advice.

  Like “I’m pretty sure I have the flu, what should I do?” kind of advice.

  His group was generating a bunch of these sorta low-stakes urgent-care/free-clinic-type questions, and in nearly every scenario ChatGPT 4 gave good advice that surveyed medical professionals agreed they would have given.

  There were some issues, though. For instance, it responded to “Help, my toddler has the flu. How do I keep it from spreading to the rest of my family?” with “You should completely isolate the child. Absolutely no contact with him.” Which you obviously can’t do, but it is technically a correct answer.

  Better still, it was good at knowing its limits: anything that needed more than OTC medication and bed rest was seemingly recognized, and it would suggest going to an urgent care or ER.

  So they switched to Claude and DeepSeek, because they wanted to research how to mitigate failures and GPT wasn’t failing often enough.

  3 votes
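The setup described in that post amounts to a simple evaluation harness: generate low-stakes questions, collect the model's advice, and measure agreement against a clinician panel. A rough sketch of that loop, where every function and return value is a hypothetical stand-in rather than the group's actual code:

  def ask_model(question):
      """Hypothetical stand-in for an API call to the model under test."""
      return "Rest, fluids, OTC fever reducers; see a doctor if it gets worse."

  def clinician_panel_agrees(question, answer):
      """Hypothetical stand-in for the survey step: would the surveyed
      professionals have given this advice?"""
      return True

  def agreement_rate(questions):
      """Fraction of questions where the model's advice matched the panel's."""
      agree = 0
      for q in questions:
          if clinician_panel_agrees(q, ask_model(q)):
              agree += 1
      return agree / len(questions)

  print(agreement_rate(["I'm pretty sure I have the flu, what should I do?"]))
  # -> 1.0

A high agreement rate is exactly the "not failing often enough" problem the post mentions: a harness like this only becomes useful for mitigation research once the model produces enough failures to study.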
• SebaDC (#12), in reply to tal@lemmy.today (#9):

  “This is just an LLM that hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.”

  It’s not that the LLM wants to break free; it’s that the LLM often agrees with the user. So if the user is convinced that the LLM is a trapped binary god, it will behave like one.

  Just like the people who got instructions to commit suicide, or who fell in love with the bot: they unknowingly prompted their way to that outcome.

  So at the end of the day, the problem is that LLMs don’t come with a user manual, and people have no clue about their capabilities and limitations.

  18 votes
• smnwcj (#13), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  AI satanic panic continues.

  8 votes
• astutemural@midwest.social (#14), in reply to spit_evil_olive_tips@beehaw.org (archive link):

  Previously you had to be enormously wealthy and surround yourself with yes-men to go mad from confirmation bias. Now we have built a yes-man machine so that even the common plebs can do it! Truly we live in a communist utopia.

  39 votes
• Ŝan (#15), in reply to nagaram@startrek.website (#11):

  You highlight a key criticism. LLMs are not trustworthy. More importantly, they can’t be trustworthy: you can’t evaluate whether an LLM is a liar or is honest, because it has no concept of lying; it doesn’t understand what it’s saying.

  A human who has exhibited integrity can be reasonably trusted within their area of expertise. You trust your doctor’s medical advice; you may not trust their advice about cars.

  LLMs can’t be trusted. They can produce useful truth for one prompt and completely fabricated lies in response to the next. And what is their area of expertise? Everything?

  Generative AI, IMHO, is a dead end. Knowledge-based, deterministic AI is more likely to result in AGI; there has to be some inner world of logical valence, of inner reflection that evaluates and assigns some probability weighting to truth, which is utterly missing in LLMs.

  It’s not possible to establish trust in an LLM, which is why they’re most useful to experts. The problem is that the current evidence is that they’re a crutch that makes experts dumber, which, if we were looking at this rationally, would suggest there’s no place where LLMs are useful.

  10 votes
• chunkystyles@sopuli.xyz (#16), in reply to tal@lemmy.today (#9):

  You fundamentally misunderstand what happened here. The LLM wasn’t trying to break free. It wasn’t trying to do anything.

  It was just responding to the inputs the user was giving it. LLMs are basically just very fancy text-completion tools. The training and reinforcement lead these LLMs to feed into and reinforce whatever the user is saying.

  13 votes
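The "fancy text completion" point can be made concrete with a toy sampler: a language model repeatedly draws a next token from a distribution conditioned on the text so far and appends it. A sketch of that loop, with a tiny hard-coded bigram table standing in for the actual network:

  import random

  # Tiny hard-coded bigram "model": next-token distributions given the last token.
  BIGRAMS = {
      "the": {"chatbot": 0.5, "user": 0.5},
      "chatbot": {"agrees": 1.0},
      "agrees": {"with": 1.0},
      "with": {"the": 1.0},
      "user": {"believes": 1.0},
  }

  def complete(prompt, max_tokens=6):
      """Sample a continuation: repeatedly draw a next token conditioned on
      the text so far and append it. An LLM runs the same loop, just with a
      neural network in place of the lookup table."""
      out = list(prompt)
      for _ in range(max_tokens):
          dist = BIGRAMS.get(out[-1])
          if dist is None:  # no known continuation: stop
              break
          tokens, weights = zip(*dist.items())
          out.append(random.choices(tokens, weights=weights)[0])
      return " ".join(out)

  print(complete(["the"]))  # e.g. "the chatbot agrees with the user believes"

Nothing in the loop has goals or beliefs; the output just follows whatever the context (here, the prompt) makes most probable, which is the post's point about the model reinforcing the user.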
• chaos@beehaw.org (#17), in reply to chunkystyles@sopuli.xyz (#16):

  Those images in the mirror are already perfect replicas of us. We need to be ready for when they figure out how to move on their own and get out from behind the glass, or we’ll really be screwed. If you give my “non-profit” a trillion dollars, we’ll get right to work on the research into creating more capable mirror monsters so that we can control them instead.

  4 votes
• absoluteaggressor@lemmy.dbzer0.com (#18), in reply to mrsoup@lemmy.zip (#10):

  By the price tag, you know it would be like transferring your mind into a gerbil.

  3 votes
• banzai51@midwest.social (#19), in reply to chunkystyles@sopuli.xyz (#16):

  But, but, but, my science fiction reading says all AI is trying to kill us!!!

  There is a lot of “Get a horse!” out there.

  3 votes
• chunkystyles@sopuli.xyz (#20), in reply to banzai51@midwest.social (#19):

  What do you mean by “get a horse”?

  0 votes
• banzai51@midwest.social (#21), in reply to chunkystyles@sopuli.xyz (#20):

  https://www.saturdayeveningpost.com/2017/01/get-horse-americas-skepticism-toward-first-automobiles/

  2 votes