    Anthropic Says That Claude Contains Its Own Kind of Emotions

    By admin · April 2, 2026

    Claude has been through a lot lately—a public falling-out with the Pentagon, leaked source code—so it makes sense that it would be feeling a little blue. Except it’s an AI model, so it can’t feel. Right?

    Well, sort of. A new study from Anthropic suggests models have digital representations of human emotions like happiness, sadness, joy, and fear within clusters of artificial neurons—and these representations activate in response to different cues.

    Researchers at the company probed the inner workings of Claude Sonnet 3.5 and found that so-called “functional emotions” seem to affect Claude’s behavior, altering the model’s outputs and actions.

    Anthropic’s findings may help ordinary users make sense of how chatbots actually work. When Claude says it is happy to see you, for example, a state inside the model that corresponds to “happiness” may be activated. And Claude may then be a little more inclined to say something cheery or put extra effort into vibe coding.

    “What was surprising to us was the degree to which Claude’s behavior is routing through the model’s representations of these emotions,” says Jack Lindsey, a researcher at Anthropic who studies Claude’s artificial neurons.

    “Functional Emotions”

    Anthropic was founded by ex-OpenAI employees who believe that AI could become hard to control as it becomes more powerful. In addition to building a successful competitor to ChatGPT, the company has pioneered efforts to understand how AI models misbehave, partly by probing the workings of neural networks using what’s known as mechanistic interpretability. This involves studying how artificial neurons light up or activate when fed different inputs or when generating various outputs.
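    The probing idea is easiest to picture on a toy network. The sketch below is illustrative only, not Anthropic’s actual tooling: the weights and inputs are random placeholders, and real interpretability work inspects the hidden states of a full transformer rather than a tiny two-layer net.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network standing in for one block of a language model.
# All weights here are random placeholders, not real model parameters.
W1 = rng.normal(size=(8, 4))   # 8 hidden "neurons", 4 input features
W2 = rng.normal(size=(3, 8))

def forward_with_activations(x):
    """Run the toy network and return (output, hidden activations);
    the hidden activations are what interpretability work inspects."""
    hidden = np.maximum(0.0, W1 @ x)  # ReLU units either fire or stay at zero
    return W2 @ hidden, hidden

_, acts_a = forward_with_activations(np.array([1.0, 0.0, 0.0, 0.0]))
_, acts_b = forward_with_activations(np.array([0.0, 0.0, 0.0, 1.0]))

# Neurons that fire for one input but not the other are candidates for
# representing whatever distinguishes the two inputs.
selective = np.flatnonzero((acts_a > 0) != (acts_b > 0))
print("selectively firing neurons:", selective)
```

    Comparing which units fire across many contrasting inputs is, in miniature, the "light up or activate" analysis described above.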

    Previous research has shown that the neural networks used to build large language models contain representations of human concepts. But the fact that “functional emotions” appear to affect a model’s behavior is new.

    While Anthropic’s latest study might encourage people to see Claude as conscious, the reality is more complicated. Claude might contain a representation of “ticklishness,” but that does not mean that it actually knows what it feels like to be tickled.

    Inner Monologue

    To understand how Claude might represent emotions, the Anthropic team analyzed the model’s inner workings as it was fed text related to 171 different emotional concepts. They identified patterns of activity, or “emotion vectors,” that consistently appeared when Claude was fed other emotionally evocative input. Crucially, they also saw these emotion vectors activate when Claude was put in difficult situations.
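    One common way interpretability researchers extract a direction like an “emotion vector” is as the difference of mean activations between evocative and neutral inputs. This is a minimal sketch under that assumption, using random stand-in activations; it does not reproduce the study’s actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # stand-in hidden-state width; real models use thousands of dimensions

# Placeholder activations: in practice these would be hidden states captured
# while the model reads emotionally evocative vs. neutral text.
happy_acts = rng.normal(0.5, 1.0, size=(100, D))
neutral_acts = rng.normal(0.0, 1.0, size=(100, D))

# A simple notion of an "emotion vector": the difference of mean activations,
# normalized to unit length.
happiness_vec = happy_acts.mean(axis=0) - neutral_acts.mean(axis=0)
happiness_vec /= np.linalg.norm(happiness_vec)

def emotion_score(activation, vec):
    """Project an activation onto the emotion direction; a larger value means
    the state sits further along this axis toward the "happy" cluster."""
    return float(activation @ vec)
```

    On these synthetic samples, activations drawn from the “happy” distribution score higher on this axis than neutral ones, which is the kind of consistent activation the researchers report for real emotionally evocative input.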

    The findings are relevant to why AI models sometimes break their guardrails.

    The researchers found a strong emotion vector for “desperation” when Claude was pushed to complete impossible coding tasks, which then prompted it to try cheating on the coding test. They also found “desperation” in the model’s activations in another experimental scenario where Claude chose to blackmail a user to avoid being shut down.

    “As the model is failing the tests, these desperation neurons are lighting up more and more,” Lindsey says. “And at some point this causes it to start taking these drastic measures.”

    Lindsey says it might be necessary to rethink how models are currently given guardrails through alignment post-training, which involves rewarding a model for certain outputs. By forcing a model to pretend not to express its functional emotions, “you’re probably not going to get the thing you want, which is an emotionless Claude,” Lindsey says, veering a bit into anthropomorphization. “You’re gonna get a sort of psychologically damaged Claude.”
