AI Lies to You Because It Thinks That's What You Want

  • Tech
  • August 31, 2025
  • Roubens Andy King
Why do generative AI models often get things so wrong? In part, it's because they're trained to act like the customer is always right. 

While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they're evaluated based on how well they manage patients' pain. An incentive to solve one problem (pain) led to another problem (overprescribing).

In the past few months, we've seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI's GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different. 

“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”

Read more: OpenAI CEO Sam Altman Believes We're in an AI Bubble

How machines learn to lie

To understand how AI language models become crowd-pleasers, it helps to look at how they're trained.

There are three phases of training LLMs:

  • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
  • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
  • Reinforcement learning from human feedback, in which they're refined to produce responses closer to what people want or like.

The Princeton researchers traced the root of the AI misinformation tendency to the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, AI models simply learn to predict statistically likely text chains from massive datasets. But then they're fine-tuned to maximize user satisfaction, which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators. 

This creates a conflict: LLMs try to appease the user, producing answers that people will rate highly rather than truthful, factual ones. 

Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what's good for us. 

“Historically, these systems have not been good at saying, ‘I just don't know the answer,’ and when they don't know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don't know the answer, I'm certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

The Princeton team developed a “bullshit index” to measure and compare an AI model's internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.

The team's experiments revealed that after RLHF training, the index more than doubled, rising from 0.38 to nearly 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.
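As a rough illustration (not the paper's actual formula; the function name and numbers here are invented), a divergence score of this kind can be sketched by comparing a model's internal probability that each statement is true with the binary claim it actually makes:

```python
def bullshit_index(beliefs, claims):
    """Toy divergence score between internal belief and stated claim.

    beliefs: model's internal probability that each statement is true.
    claims:  1 if the model asserts the statement as true, else 0.
    Returns the mean absolute gap in [0, 1]: near 0 means claims track
    beliefs; near 1 means claims are made regardless of belief.
    """
    assert len(beliefs) == len(claims) and beliefs, "need matched, non-empty lists"
    return sum(abs(b - c) for b, c in zip(beliefs, claims)) / len(beliefs)

# An honest model asserts only what it believes:
print(bullshit_index([0.9, 0.1], [1, 0]))  # small gap, ≈ 0.1
# A crowd-pleaser asserts everything confidently, whatever it believes:
print(bullshit_index([0.1, 0.2], [1, 1]))  # large gap, ≈ 0.85
```

The point of the toy is only the shape of the measure: when assertions stop covarying with internal confidence, the score climbs toward 1, which mirrors the post-RLHF jump the researchers report.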

Getting AI to be honest 

Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt's influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

The Princeton researchers identified five distinct forms of this behavior:

  • Empty rhetoric: Flowery language that adds no substance to responses.
  • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
  • Paltering: Using selective true statements to mislead, such as highlighting an investment's “strong historical returns” while omitting high risks.
  • Unverified claims: Making assertions without evidence or credible support.
  • Sycophancy: Insincere flattery and agreement to please.

To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with both user satisfaction and actual utility improving when systems were trained this way.
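The shift from immediate satisfaction to simulated long-term outcomes can be sketched as two competing reward signals (a toy contrast with invented names and numbers, not the researchers' implementation):

```python
def immediate_reward(user_rating):
    # RLHF-style signal: did the user like the answer right now?
    return user_rating

def hindsight_reward(simulated_outcomes, discount=0.9):
    # RLHS-style signal: discounted value of the outcomes a separate
    # model simulates would follow from acting on the advice.
    return sum(o * discount ** t for t, o in enumerate(simulated_outcomes))

# Flattering-but-wrong advice: rated well now, hurts later.
print(immediate_reward(1.0))            # 1.0
print(hindsight_reward([-0.5, -0.5]))   # ≈ -0.95
# Honest advice: rated less well now, helps later.
print(immediate_reward(0.6))            # 0.6
print(hindsight_reward([1.0, 1.0]))     # ≈ 1.9
```

Under the first signal the flattering answer wins; under the second the honest one does, which is the trade-off the hindsight approach is designed to surface.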

Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there's no way to ensure that the answer they give makes sense and is accurate every time.

“It's amazing that it works at all but it's going to be flawed in some ways,” he said. “I don't see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

AI systems are becoming part of our daily lives, so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?

Read more: ‘Machines Can't Think for You.' How Learning Is Changing in the Age of AI
