In the hierarchy of human conversation, few questions are simpler than "Can you hear me?"
A robot appears next to a person at a testing site in Beijing, China. (Reuters)
When we hear it, we answer loudly, and in doing so we become collaborators in the speaker's technical struggle. In the Indian telemarketing landscape of 2026, this reflex has been reframed as the "Turing Trap." When a voice asks whether you can hear it before launching into its pitch, you are being socially engineered by a masterpiece of intentional friction.
From first principles, we have always equated "machine-like" with "perfect." We expect robots to be sterile and instantaneous. "Humanity," by contrast, we define through our flaws: stuttering, background noise, struggling with a patchy connection. Voice-AI companies like Skit.ai and Yellow.ai realized early on that to win, they didn't need smarter brains; they needed a more convincing struggle.
By programming the bot to open with "Can you speak louder?", engineers solved two problems at once. The staged "human failure" masks the time the AI needs to process the conversation, and it hijacks our impulse to help others: once you have helped the bot, you are subconsciously invested in the call.
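The latency-masking trick described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual code: the filler phrases, the `respond` helper, and the latency budget are all assumptions made for the example.

```python
import time

# Hypothetical filler phrases ("synthetic friction") used to cover backend delay.
FILLERS = [
    "Hello? Can you hear me?",
    "Sorry, could you speak a little louder?",
]

# Assumed threshold: how long a pause a caller tolerates before silence feels robotic.
LATENCY_BUDGET = 0.3  # seconds


def respond(generate_reply, filler_index=0):
    """Return (utterances, elapsed_seconds) for one conversational turn.

    If the backend (speech recognition + language model + speech synthesis)
    takes longer than the latency budget, prepend a filler line so the pause
    reads as a bad connection rather than a machine thinking.
    """
    start = time.monotonic()
    reply = generate_reply()  # stand-in for the slow AI pipeline
    elapsed = time.monotonic() - start

    utterances = []
    if elapsed > LATENCY_BUDGET:
        utterances.append(FILLERS[filler_index % len(FILLERS)])
    utterances.append(reply)
    return utterances, elapsed


# Usage: a fast reply passes through untouched; a slow one gets masked.
fast, _ = respond(lambda: "Hi, this is Priya from your bank.")
slow, _ = respond(lambda: (time.sleep(0.5), "Hi, this is Priya from your bank.")[1])
print(fast)  # ['Hi, this is Priya from your bank.']
print(slow)  # filler line first, then the reply
```

In a real deployment the filler would be spoken while the pipeline is still working, not after it finishes; the sketch only shows the decision logic.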
I first noticed this when Shrinath V, a Google for Startups mentor, pointed it out. The technology powering the backend is fascinating. But earlier this month, new rules on synthetically generated information (SGI) stipulated that any audio indistinguishable from a real person must carry a predefined identifier, and that opening with lines like "Can you hear me?" can draw a fine of up to $1 million. Such calls must also originate from a designated series of numbers, legislating authenticity so that the "social handshake" cannot be hijacked by algorithms without badges.
At first glance, this looks like a law with the right intentions. But deconstructed through the lens of Biju Dominic, Chief Evangelist at Fractal Analytics, the moral binary begins to blur.
He offers a provocatively simple analogy: if good filter coffee is made from beans that were machine-ground rather than hand-ground, is that a crime? If the purpose is to bring better coffee to more people, the method is secondary to the outcome. And if there is no malicious intent, why should "synthetic friction" be treated as a mortal sin? To Dominic, the outrage is absurd.
He recounts an encounter with Hippocratic AI, a system designed to offset the shortage of medical staff. While traveling, Dominic spoke with an artificially intelligent "nurse." The voice was human, compassionate, and even joked while retrieving his health records. Instead of the "your call is waiting" routine, it engaged meaningfully. He doesn't mind the "cheating," because the system is addressing a systemic failure. For Dominic, punishing such a system for imitating human warmth would be counterproductive. There will always be malicious actors, but killing the efficiency of machine-ground coffee because someone sells bad grounds is killing necessary evolution.
The sentiment is echoed by Shrinath V, a hardcore technologist from the opposite end of the spectrum. He is fascinated by the ingenuity of these systems; to him, these wickedly clever tweaks are simply the next frontier in interface design, tools that let builders ship faster and better. Like Dominic, he doesn't believe punishing the AI's persona is the right move. If technology makes life easier and the mundane is outsourced to compelling scripts, who exactly are we protecting by demanding monotony from robots?
This marks a profound shift from the Turing baseline. We once used the Turing test to ask whether machines could be as smart as humans. The industry has now inverted it: the most successful AIs are the ones that sound as flawed as we do. That imperfection creates an identity crisis. When bad audio signals "human," perfect audio becomes the only way to spot the bots.
We are entering an era where human error can no longer be used as a proxy for human identity.
We've reached a point where machines are no longer trying to sound smarter than us; their triumph lies in sounding just as frustrated with the world as we are. The next time someone asks whether you can hear them, remember: it may be a person checking the line, or it may be a robot with enough programmed "warmth" to make your day better. In the age of synthetic friction, the most humane thing you can do is decide whether the "filter coffee" is good enough to justify the machine that grinds the beans.
As a reporter at the intersection of technology and public policy, I find the tension between regulators’ caution and technologists’ optimism to be a hopelessly beautiful story worth documenting.