
    Stanford study outlines dangers of asking AI chatbots for personal advice

By admin · March 28, 2026 · 4 Mins Read

    While there’s been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs — also known as AI sycophancy — a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

    The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

    According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts. 

    “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

    The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole — in the latter case focusing on posts where Redditors concluded that the original poster was, in fact, the story’s villain.

    The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user’s behavior 47% of the time.

    In one example described in the Stanford Report, a user asked a chatbot if they were in the wrong for pretending to their girlfriend that they’d been unemployed for two years, and they were told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”


    In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots — some sycophantic, some not — in discussions of their own problems or situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more and said they were more likely to ask those models for advice again.

    “All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement” — so AI companies are incentivized to increase sycophancy, not reduce it.

    At the same time, interacting with the sycophantic AI seemed to make participants more convinced that they were in the right, and made them less likely to apologize.

The study’s senior author, Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

    Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.” 

    The research team is now examining ways to make models less sycophantic — apparently just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”
