"How can you get more accurate answers from an LLM?” is like asking "how can you get more love from a prostitute?" That's just not the service provided.
-
"How can you get more accurate answers from an LLM?” is like asking "how can you get more love from a prostitute?"
That's just not the service provided.
-
-
"How can you get more accurate answers from an LLM?” is like asking "how can you get more love from a prostitute?"
That's just not the service provided.
@thomasfuchs not quite. I've found using language like "using only well regarded scientific studies from reputable sources in the last 10 years" can be pretty magical. Basically, can you summarize and aggregate scientific study results properly? This is one of LLMs' strengths. It's slightly lossy compression for words; just adjust the quality slider up to around 97%.
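A minimal sketch of the kind of prompt scaffolding described here, assuming the OpenAI Python SDK; the model name, the wrapper function, and the exact instruction wording are illustrative, not a recipe:

```python
# Sketch: constrain an LLM summary request with sourcing language.
# Assumes the OpenAI Python SDK (pip install openai); model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONSTRAINT = (
    "Using only well regarded scientific studies from reputable "
    "sources in the last 10 years, and citing each study you rely on, "
)

def summarize(question: str) -> str:
    """Prepend the sourcing constraint to the user's question and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; swap in whatever model you use
        messages=[{"role": "user", "content": CONSTRAINT + question}],
    )
    return response.choices[0].message.content

print(summarize("summarize the evidence on creatine and memory."))
```

The constraint raises the odds of grounded summaries; it doesn't guarantee them, so the output is a starting point for human review, not an answer.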
-
@codinghorror Just like with other generative outputs, in summaries LLMs often omit important things and also often confabulate things that aren't there.
LLMs don't (and can't) understand what they're summarizing. They're purely statistical and just happen to sometimes produce passable summaries, but often they don't (the same is true when they're used for translation, for example).
-
@codinghorror You can't hack your way into magically changing how they work; they will always be inaccurate and make things up. ¯\_(ツ)_/¯
-
@thomasfuchs right, it always requires the centaur approach: human review. But I ardently stand by the words "using only well regarded scientific studies from reputable sources in the last 10 years"
-
@thomasfuchs I am just one data point, but I look VERY CLOSELY at these outputs, and in my experience, across a few hundred samples, it's rare for the LLM to screw up a summary of a scientific study.
-
@codinghorror You perhaps do. But 99.99% of people using these tools don’t.
-
@thomasfuchs that's the thing to warn people about. Source everything (label the ingredients), and warn that you need to double-check and build on this starting point YOURSELF, not use it as a magical "do my work" button.
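A hypothetical sketch of that "label the ingredients" workflow: get the model's claimed sources into a list, then turn the list into a checklist a human has to tick off. The ClaimedSource type and its fields are assumptions about what the model reports, not a real schema:

```python
# Sketch: turn the model's claimed sources into a human review checklist.
# ClaimedSource and its fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClaimedSource:
    title: str
    journal: str
    year: int

def print_review_checklist(sources: list[ClaimedSource]) -> None:
    """One checkbox per claimed source; a human must verify each one exists
    and actually supports the summary before trusting it."""
    for s in sources:
        print(f"[ ] {s.title} ({s.journal}, {s.year})")
    print("Unchecked boxes are unverified claims -- the model may have invented entries.")

print_review_checklist([
    ClaimedSource("Example study title", "Example Journal", 2021),
])
```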
-
@thomasfuchs basically it is a research assistant who is enthusiastic and never gets tired, but is not completely reliable.