An AI Social Coach Is Teaching Empathy to People with Autism
-
Idk, does AI have anything to offer on the empathy side, except sycophancy and repeating what I just said, and then repeating itself three times before slowly steering towards a reply?
-
Every autistic person I’ve met has had more empathy than every Republican I’ve met.
Glad you brought up US politics here.
-
The implication being that autistic people have none? Wow.
No, the implication is that they have trouble expressing it. Which is accurate.
-
half were assigned to use Noora for four weeks and half received no intervention.
If only they’d given a control group an off-the-shelf social game like L.A. Noire, or a D&D play group.
I really don’t think a random D&D table is the place to learn to express empathy, and I wish people would stop acting like local D&D groups are a good way to learn how to socialize in general. I’m not saying you can’t learn things at the table, but the games are not actual reflections of reality, and there’s a lot of go-along-to-get-along, or just run-of-the-mill toxic group dynamics. The hobby overall can be hard for other minorities to enter, and having a table with someone still learning social skills (especially how to express empathy) and someone from a marginalized group can lead to unfortunate outcomes that your standard DM/group doesn’t have the ability to address. It can leave one or both parties with negative experiences that reinforce the idea that they are unwelcome, and leave the rest of the table with negative experiences of playing with ND people or minorities.
Sometimes practicing first with people trained to do this is the best step, and second to that would be practicing empathy in a space where the main goal is bonding rather than the nebulous goal of having fun playing a game. I don’t know if AI is the answer, but trusting your local DM/table to be able to teach empathy is a big ask. It’s almost insulting, both to the people who teach this and to people with ASD. Teaching empathy can’t be as passive as it is for non-ASD people, and acting like it’s just something they’re expected to pick up while also dealing with all these other elements makes it seem like you don’t think it’s something they actually have to work to achieve. I’m not on the spectrum, but I have a lot of autistic friends, and I would not put just any of them in a D&D situation and expect them and the rest of the table to figure it out.
Also, comparing against an untreated control is generally the gold standard. They did what’s needed to show their approach has some kind of effect.
-
Research has shown that practicing social interactions with professionals in a clinical face-to-face intervention can improve outcomes for individuals, but these solutions are often costly or not widely available.
the common theme every single time I read about LLM chatbots being used for mental health: having human therapy is great, but it’s just too expensive for regular people. and that’s treated as an immutable fact about society that can’t be changed. (“it is easier to imagine an end to the world than an end to capitalism”)
human therapy is too costly? OK, make it cheaper, or free, for the patients. it’s not widely available? OK, pay the therapists more, and give them better working conditions.
but where will the money to do that come from?
Silicon Valley is spending billions of dollars building AI datacenters. so I dunno, where is that money coming from?
resource allocation is a choice that we as a society, and a species, make. we can make different choices. we don’t need to confine ourselves to “well human therapy is expensive, so only rich people can access it, and poor people have to settle for AI slop, but they should be grateful because without the AI slop they’d have nothing at all”.
-
I really don’t think a random D&D table is the place to learn to express empathy, and I wish people would stop acting like local D&D groups are a good way to learn how to socialize in general. I’m not saying you can’t learn things at the table, but the games are not actual reflections of reality, and there’s a lot of go-along-to-get-along, or just run-of-the-mill toxic group dynamics. The hobby overall can be hard for other minorities to enter, and having a table with someone still learning social skills (especially how to express empathy) and someone from a marginalized group can lead to unfortunate outcomes that your standard DM/group doesn’t have the ability to address. It can leave one or both parties with negative experiences that reinforce the idea that they are unwelcome, and leave the rest of the table with negative experiences of playing with ND people or minorities.
Sometimes practicing first with people trained to do this is the best step, and second to that would be practicing empathy in a space where the main goal is bonding rather than the nebulous goal of having fun playing a game. I don’t know if AI is the answer, but trusting your local DM/table to be able to teach empathy is a big ask. It’s almost insulting, both to the people who teach this and to people with ASD. Teaching empathy can’t be as passive as it is for non-ASD people, and acting like it’s just something they’re expected to pick up while also dealing with all these other elements makes it seem like you don’t think it’s something they actually have to work to achieve. I’m not on the spectrum, but I have a lot of autistic friends, and I would not put just any of them in a D&D situation and expect them and the rest of the table to figure it out.
Also, comparing against an untreated control is generally the gold standard. They did what’s needed to show their approach has some kind of effect.
I’ll be honest, I find the framing of the study offensive, and I’m not sure I have the words, but I’ll try.
It’s less about this study comparing itself to no intervention, and more about the social & political context of AI being pushed as a way to make caregiving more efficient while sacrificing quality.
-
Research has shown that practicing social interactions with professionals in a clinical face-to-face intervention can improve outcomes for individuals, but these solutions are often costly or not widely available.
the common theme every single time I read about LLM chatbots being used for mental health: having human therapy is great, but it’s just too expensive for regular people. and that’s treated as an immutable fact about society that can’t be changed. (“it is easier to imagine an end to the world than an end to capitalism”)
human therapy is too costly? OK, make it cheaper, or free, for the patients. it’s not widely available? OK, pay the therapists more, and give them better working conditions.
but where will the money to do that come from?
Silicon Valley is spending billions of dollars building AI datacenters. so I dunno, where is that money coming from?
resource allocation is a choice that we as a society, and a species, make. we can make different choices. we don’t need to confine ourselves to “well human therapy is expensive, so only rich people can access it, and poor people have to settle for AI slop, but they should be grateful because without the AI slop they’d have nothing at all”.
It’s not about capitalism:
- 1 human can talk to 1 human
- 1 chatbot can talk to 8 billion humans
Human therapy will be more expensive for as long as we value human time more than machine time.
-
You can read that from the article text, but a) the text doesn’t appear to actually suggest autistic people do have empathy, which is a problem since b) the title absolutely implies they don’t.
At best, this is a terrible headline. But if I’m being honest, I don’t have much respect for an article that seems all too eager to tout the ostensible benefits of an LLM, let alone one that is in all likelihood teaching people how to act more like an LLM. So I’m not inclined to take a charitable interpretation.
-
You can read that from the article text, but a) the text doesn’t appear to actually suggest autistic people do have empathy, which is a problem since b) the title absolutely implies they don’t.
At best, this is a terrible headline. But if I’m being honest, I don’t have much respect for an article that seems all too eager to tout the ostensible benefits of an LLM, let alone one that is in all likelihood teaching people how to act more like an LLM. So I’m not inclined to take a charitable interpretation.
the text doesn’t appear to actually suggest autistic people do have empathy
Is that something that you need explained?
-
the text doesn’t appear to actually suggest autistic people do have empathy
Is that something that you need explained?
Did you stop reading the rest of the post when you saw that? Because it really looks like you did.
-
Idk, does AI have anything to offer on the empathy side, except sycophancy and repeating what I just said, and then repeating itself three times before slowly steering towards a reply?
Try an RP chatbot.
They are far from perfect, but also far from the “helpful assistant” sycophants.
-
Did you stop reading the rest of the post when you saw that? Because it really looks like you did.
Sure didn’t.
-
You didn’t stop reading? Then it’s a bit weird that you’d think I don’t know autistic people have empathy, unless you decided to arbitrarily take the most bad-faith reading you could’ve taken. If that’s the case, I recommend taking a breather before posting so that you don’t do that.
-
We have empathy. wtf?
-
You didn’t stop reading? Then it’s a bit weird that you’d think I don’t know autistic people have empathy, unless you decided to arbitrarily take the most bad-faith reading you could’ve taken. If that’s the case, I recommend taking a breather before posting so that you don’t do that.
it’s a bit weird that you’d think I don’t know autistic people have empathy
It’s a bit weird that you think that I think that, and not that I was suggesting that no one else needs it explained to them either.
-
I’ll be honest, I find the framing of the study offensive, and I’m not sure I have the words, but I’ll try.
It’s less about this study comparing itself to no intervention, and more about the social & political context of AI being pushed as a way to make caregiving more efficient while sacrificing quality.
I don’t personally find the framing offensive, but I’m not on the spectrum so I can’t speak to it from that perspective. My comment was less about the article and more about not offloading that work onto unsuspecting and unprepared people.
That being said, I’m not as anti-AI as some other people might be when it comes to these kinds of tools. The study itself highlights the fact that not everyone has the resources to get the kind of high-quality care they need, and this might be an option.

I agree that sacrificing quality for efficiency is bad (you can see in my post history that I’ve made that argument about AI myself), but realistically, so many people who would otherwise have no alternatives could potentially benefit from this. AI will also only get better, and while hopefully you’ve never had a bad experience with a professional, I can say from personal experience that quality varies drastically between individuals in the healthcare industry. If this is something that could be offered by public libraries or school systems, so that anyone with the need can take advantage of it, I think that would be a positive, because we’re nowhere near universal physical healthcare, much less universal mental healthcare or actual social development training. I know people who cannot afford healthcare even though they have insurance, so if they were able to go to a specialized AI for an issue, I’d consider that a net positive even if it’s not a real doctor.

I know AI is not there yet, and there’s a lot of political and social baggage, but the reality is that people need help, they need it now, and they are not getting it. I don’t know how good this AI is, but if the alternative is telling people who are struggling and have no other options to tough it out, I’m willing to at least entertain the idea. For what it’s worth, if I could snap my fingers and give everyone all the help and support they need without AI, I would choose that option; I just don’t have it. I also don’t know that LLMs can really do this successfully at a large scale, so I’d need evidence of that before really supporting it. I just think it shouldn’t be written off completely if it’s showing promise.
-
it’s a bit weird that you’d think I don’t know autistic people have empathy
It’s a bit weird that you think that I think that, and not that I was suggesting that no one else needs it explained to them either.
I was suggesting that no one else needs it explained to them either.
You’d hope so! But alas, some idiots exist. And when a title like this appears, it becomes difficult to tell at first glance whether such an idiot wrote it, and more to the point, a title like that tends to create more idiots (and it’s also just kinda offensive). That’s why it’s important not to write headlines like this.
Sidenote: If you want people to not take things personally, avoid personal pronouns. “Is that something that you need explained?” → “Is that something that people need explained?” It makes a world of difference and I’m confident I’ve avoided several arguments that could’ve spawned from my own posts thanks to making that kind of change. Not foolproof, sure – we are on the internet – but it helps.
-
I was suggesting that no one else needs it explained to them either.
You’d hope so! But alas, some idiots exist. And when a title like this appears, it becomes difficult to tell at first glance whether such an idiot wrote it, and more to the point, a title like that tends to create more idiots (and it’s also just kinda offensive). That’s why it’s important not to write headlines like this.
Sidenote: If you want people to not take things personally, avoid personal pronouns. “Is that something that you need explained?” → “Is that something that people need explained?” It makes a world of difference and I’m confident I’ve avoided several arguments that could’ve spawned from my own posts thanks to making that kind of change. Not foolproof, sure – we are on the internet – but it helps.
avoid personal pronouns
You were the only one here suggesting this required an explanation.
Explaining this would be like explaining that women can dress themselves. It’s unnecessary and suggests the opposite.
-
avoid personal pronouns
You were the only one here suggesting this required an explanation.
Explaining this would be like explaining that women can dress themselves. It’s unnecessary and suggests the opposite.
You were the only one here suggesting this required an explanation.
Alright, I think you’re being deliberately antagonistic now. Bye!
-
it’s a bit weird that you’d think I don’t know autistic people have empathy
It’s a bit weird that you think that I think that, and not that I was suggesting that no one else needs it explained to them either.
Dude, you’re just being a contrarian dick.