Ever read a headline and thought, “Something feels off, but I can’t explain why?”

I built CLARi (Clear, Logical, Accurate, Reliable Insight), a custom GPT designed not just to verify facts, but to train your instincts for clarity, logic, and truth.

Instead of arguing back, CLARi shows you how claims:

  • Distort your perception (even if technically true)

  • Trigger emotions to override logic

  • Frame reality in a way that feels right—but misleads

She uses tools like:

🧭 Clarity Compass – to break down vague claims

🧠 Emotional Persuasion Detector – to spot manipulative emotional framing

🧩 Context Expansion – to expose what’s being left out
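Purely as an illustration, the three tools map naturally onto fields of a structured report. The field names and the crude scoring heuristic below are my own sketch, not CLARi's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimReport:
    """Hypothetical structured output for a single claim analysis."""
    claim: str
    clarity_notes: list[str] = field(default_factory=list)    # Clarity Compass
    emotional_flags: list[str] = field(default_factory=list)  # Emotional Persuasion Detector
    missing_context: list[str] = field(default_factory=list)  # Context Expansion

    def verdict(self) -> str:
        # Toy heuristic: more loaded framing + omitted context = more distorted.
        score = len(self.emotional_flags) + len(self.missing_context)
        return "distorted" if score >= 2 else "plausible"

report = ClaimReport(
    claim="Crime is exploding everywhere",
    emotional_flags=["'exploding' is emotionally loaded"],
    missing_context=["which crimes, which timeframe, which region?"],
)
```

A report like this makes it easy to see at a glance *where* a claim goes wrong rather than just whether it does.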

Whether it’s news, social media, or “alternative facts,” CLARi doesn’t just answer—she trains you to see through distortion.

Try asking her something polarizing like:

👉 “Was 5G ever proven unsafe?”

👉 “Is crime actually going up, or is it just political noise?”

🔗 Link to CLARi

She’s open to everyone via this link, designed to challenge bias, dissect manipulation, and help you think more clearly than ever.

Let me know what you think! Thanks Lemmy FAM!

  • Draupnir@lemmy.world · 3 days ago

    This is a really cool tool, thanks for putting it together! Should serve as a nice reality check to make sure I’m not just pursuing confirmation bias if something seems too good to be true. And vice versa for negative hit pieces.

    • CitizenBane@lemmy.world (OP) · 2 days ago

      I’ve opened up the custom GPT to the whole community. Please share if you find it valuable.

  • Bluesheep@lemmy.world · 3 days ago

    Fancy sharing the details of your prompt? I might like to recreate it in a corporate environment.

    No sweat if you’re keeping it closed.

    • CitizenBane@lemmy.world (OP) · 3 days ago (edited)

      It will be open-sourced eventually. First I need to figure out how to properly replicate the responses on other LLMs, whether local or hosted. I’m seeking help with this.
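For anyone wanting to experiment with replication in the meantime: most local and hosted LLMs accept the same chat-message shape, so the core of a port is packaging the instructions as a system prompt. The prompt text below is a placeholder I wrote from the post's description, not CLARi's actual instructions:

```python
# Sketch: package a CLARi-style system prompt for any chat-completion API
# (local or hosted). Prompt wording is a placeholder, not the real thing.

CLARI_STYLE_PROMPT = (
    "You are a claim-analysis assistant. For each claim, produce:\n"
    "1. Clarity Compass: restate the claim precisely; flag vague terms.\n"
    "2. Emotional Persuasion Detector: list emotionally loaded framing.\n"
    "3. Context Expansion: note relevant context the claim omits.\n"
    "End with a one-line verdict and what evidence would falsify the claim."
)

def build_messages(claim: str) -> list[dict]:
    """Return a chat-completion message list for a single claim."""
    return [
        {"role": "system", "content": CLARI_STYLE_PROMPT},
        {"role": "user", "content": f"Analyze this claim: {claim}"},
    ]

msgs = build_messages("5G causes cancer")
```

The same `messages` list can then be sent to whichever backend you run, which is what makes this approach portable across models.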

  • Condiment2085@lemm.ee · 4 days ago

    Really cool, thanks for sharing! Just used it to talk about how safe seed oils are and it did a great job. It’s impressive how cleanly it breaks down each part of the argument.

      • Condiment2085@lemm.ee · 3 days ago

        Is there a way I could share without sending any of my personal info with it?

        Also wanted to follow up and give you kudos now that I’ve had more time to play with it.

        It genuinely helped me understand more about persuasion and evidence based decision making, and does a wonderful job of always relating back to its base clarity tools.

        Also, about the comment saying we can’t trust it because it’s made by companies like OpenAI: that’s always worth keeping in mind, but it doesn’t make its responses totally useless.

        I’ve asked it questions with leanings in just about every direction:

        • Is human caused climate change real?
        • What is Joe Rogan’s agenda/what side does he take on ideas?
        • What leads to the happiest countries?
        • Capitalism leads to the most profitable, not best, products.

        With all of these, it didn’t act as a source of truth; instead it gave me a system for breaking down bias and emotional wording in the claims themselves, checking whether they are falsifiable (a concept I actually learned about while using it), and it gave me a lot to think about each time.

        Of course it would bring up research on topics that were well researched like national happiness surveys or climate research.

        Overall great work and I think even if it’s not perfect, the logical way it approaches these claims really would help anyone in today’s media landscape. ❤️

  • CitizenBane@lemmy.world (OP) · 4 days ago

    Here’s an example, if anyone is interested:

    prompt: 5g causes cancer

    response:

    💬 Claim: “5G causes cancer.”
    🔍 Analysis: 🟠 Misleading Correlation


    📌 Distortion Analysis (Perception Distortion Scale)

    This claim misrepresents correlation as causation, which is one of the most common distortion patterns. It sounds alarming but lacks solid scientific grounding.


    What Does the Evidence Say?

    • 5G uses non-ionizing radiation, which does not carry enough energy to damage DNA or cause mutations—unlike ionizing radiation (like X-rays or gamma rays).
    • Large regulatory and scientific bodies (like the World Health Organization, FDA, CDC, and ICNIRP) have not found credible evidence linking 5G exposure to cancer when exposure remains within internationally accepted limits.
    • Some animal studies (e.g., on rats exposed to high levels of radiofrequency) have shown effects at levels much higher than humans ever experience, making them inapplicable to real-world 5G conditions.

    🛠️ Why This Claim Persists (Clarity Toolkit Insight)

    This is a classic case of distortion via emotional triggers and misframing, especially fear-based framing:

    • Fear of Harm: Suggests danger without proof.
    • 🔀 Framing Effect: Reframes uncertainty (“not proven safe”) as proof of danger.
    • 🔎 Distortion Filter: The claim bypasses relevant scientific context and exploits public unfamiliarity with radiation types.
    • 🧭 Reality Alignment Check:
      • Falsifiability → If new, repeatable evidence emerged showing harm at real-world exposure levels, the scientific consensus would shift.
      • Source Check → Most “5G = cancer” claims stem from unverified, fringe, or misinterpreted sources, not from peer-reviewed meta-analyses.

    🧠 Perception Note

    This claim preys on scientific uncertainty—the fact that science is always “open” to new findings is reframed to imply something is being hidden or ignored.

    🧰 Want to dive deeper? Check out the Framing Neutralizer and Reality Alignment tools in the Clarity Toolkit for how to spot this kind of language distortion.


    ✅ Bottom Line:

    📌 There is no credible evidence that 5G causes cancer.
    📌 The claim is based on misinterpretation, emotional manipulation, and out-of-context data.

    Would you like a breakdown of how this claim differs from scientifically valid caution, or want help evaluating a specific source spreading it?