What if saying “thank you” to an artificial intelligence cost millions? Sam Altman, CEO of OpenAI, says that courtesy phrases in requests addressed to ChatGPT have a real impact on operating costs. Behind these seemingly trivial human touches lies an unexpected tension between friendliness and technical performance. The paradox raises crucial questions about how AI systems are designed, how they are used day to day, and the economic sustainability of a rapidly growing model.
In a statement covered by Bloomberg, Sam Altman revealed an unexpected consequence of users’ linguistic habits in their exchanges with ChatGPT:
People say please and thank you to ChatGPT, which is charming, but it costs us a huge amount of money.
According to the CEO of OpenAI, these polite expressions, however socially valued, add to the computation needed to process requests and lead to additional expenses estimated at several tens of millions of dollars. In a post on April 17, 2025, on the social network X (formerly Twitter), he underlined how much even well-meaning human behavior can weigh on the economic models of tech giants.
Several technical factors explain this situation: every additional word in a prompt becomes extra tokens that the model has to process; each token consumes computing power, and therefore energy, in the data centers that run the models; and the model’s own courteous replies (“You’re welcome”, “My pleasure”) add output tokens on top. Multiplied across the enormous volume of requests ChatGPT handles every day, these marginal costs add up quickly.
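As a rough, purely illustrative back-of-the-envelope estimate (the request volume and per-token price below are hypothetical assumptions, not OpenAI’s figures), the open-source tiktoken tokenizer can show how a few courtesy tokens per request add up at scale:

```python
# Back-of-the-envelope estimate of the cost of courtesy tokens at scale.
# The request volume and per-token price are hypothetical assumptions,
# not OpenAI's actual figures.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era OpenAI models

polite = "Please, could you summarize this article for me? Thank you very much!"
terse = "Summarize this article."

# Extra input tokens introduced by the courtesy phrasing
extra_tokens = len(enc.encode(polite)) - len(enc.encode(terse))

REQUESTS_PER_DAY = 1_000_000_000      # hypothetical daily request volume
DOLLARS_PER_MILLION_TOKENS = 2.0      # hypothetical processing cost

daily_cost = extra_tokens * REQUESTS_PER_DAY * DOLLARS_PER_MILLION_TOKENS / 1_000_000
print(f"{extra_tokens} extra tokens per request -> ~${daily_cost:,.0f}/day, "
      f"~${daily_cost * 365:,.0f}/year")
```

The exact figures matter less than the mechanism: any constant per-request overhead, however small, scales linearly with traffic.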
In short, politeness in digital interactions, however valued ethically and socially, represents a concrete challenge in terms of technical performance and cost control for players in the AI sector.
Beyond the numbers, users’ motivations for addressing an artificial intelligence respectfully are varied and sometimes unexpected. A survey conducted in December 2024 by the think tank Future found that 67% of American users are polite to AI assistants.
Among them, 55% say it is simply “the right thing to do”, while 12% admit fearing being judged or penalized in the future if AI were endowed with some form of consciousness or memory.
Engineer Carl Youngblood explained his stance on the X platform on April 17:
Treating artificial intelligences with courtesy is, for me, a moral imperative. I do it out of self-interest.
He believes that being disrespectful to a machine could erode his own interpersonal skills.
This perception of an ethical bond with machines points to an underlying trend: users project social norms onto technology, as if AI deserved to be treated as a peer. Some internet users also say they want to “practice” kindness, even in a digital setting, to preserve a measure of civility in their daily lives.
Others, more pragmatic, figure that such courtesy might work in their favor if AI one day becomes capable of remembering past interactions. That projection says a lot about our relationship with technology and the illusion of reciprocity that a fluent chatbot can create.
For now, OpenAI has no plans for technical measures to filter out or ignore polite words, but the question could arise in the coming months. As AI becomes part of everyday use and global demand explodes, the need for energy optimization will become critical. Caught between humanizing their interfaces and rationalizing costs, developers may have to decide: does politeness have a place in requests addressed to a machine? And if so, at what cost?
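Should that choice ever be made, a crude pre-processing filter gives an idea of what “ignoring polite words” could mean in practice. The sketch below is purely hypothetical and not something OpenAI has announced: it simply strips a few common courtesy phrases from a prompt before it is sent to a model.

```python
import re

# Purely hypothetical pre-processing step: strip common courtesy phrases
# from a prompt before it reaches the model. Not an announced OpenAI feature.
COURTESY_PATTERNS = [
    r"\bplease\b,?\s*",
    r"\bthank you( very much)?\b[.!]?\s*",
    r"\bthanks\b[.!]?\s*",
    r"\bcould you kindly\b\s*",
]

def strip_courtesy(prompt: str) -> str:
    """Remove courtesy phrases to save a handful of tokens per request."""
    cleaned = prompt
    for pattern in COURTESY_PATTERNS:
        cleaned = re.sub(pattern, "", cleaned, flags=re.IGNORECASE)
    return cleaned.strip()

print(strip_courtesy("Please, could you summarize this article? Thank you!"))
# -> "could you summarize this article?"
```

Whether such trimming is worth the loss of natural, human-sounding exchanges is precisely the trade-off the article’s closing question points to.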