AI Chatbots: A Billion Users, Big Risks and a New Rule Book
From mental health support to salary negotiations to cybercrime, AI chatbots have embedded themselves into every dimension of modern life. The latest global data tells a story of extraordinary promise — and equally extraordinary peril.
Imagine a tool used by more people than any hospital network, any school system, any government welfare programme in human history. Imagine that tool operating 24 hours a day, in every language, at near-zero cost to the user — offering medical advice, career coaching, emotional support, and, if misused, a blueprint for crime. That tool is the AI chatbot, and in 2026, over 1.5 billion people are using one.
The global chatbot market, valued at $15.6 billion in 2025, is expanding at over 23% annually. In a single month last year, generative AI platforms recorded over 1.1 billion referral visits — a 357% year-on-year surge. What began as a party trick for generating poems and fixing code has matured into a parallel infrastructure for human decision-making. How that infrastructure is used — and misused — is the defining digital literacy question of our era.
The Therapy Room Nobody Booked
The most striking trend in AI usage is not productivity. It is therapy. A landmark 2025 study found that nearly half (49%) of respondents who self-reported mental health challenges were using major AI chatbots, including ChatGPT, Claude, and Gemini, as their primary source of therapeutic support. Among young adults aged 18 to 21, one in five reported turning to AI chatbots for mental health guidance. Among teens aged 12 to 17, the figure was over 12%.
The reasons are not hard to understand: AI is immediate, free of social stigma, available at 3 a.m. during a panic attack, and does not judge. In a country like India, which has roughly 0.3 psychiatrists per 100,000 people and where mental health still carries enormous social taboo, this access is genuinely transformative for millions.
Yet the clinical picture is far more sobering. The United States Food and Drug Administration has authorised no generative AI tool for mental health treatment. Research published in leading medical journals confirms that AI systems can inadvertently reinforce delusional thinking, perpetuate stigma around conditions like schizophrenia and addiction, and critically fail to recognise emergencies. Healthcare experts are now warning that growing dependence on AI-based emotional support could — paradoxically — trigger more mental health crises, not fewer.
The responsible use of AI in mental health means using it as a first step, never a final one — for coping strategies, journaling frameworks, psychoeducation, and referrals to professionals. For crises, the human on the helpline is irreplaceable.
The New Career Coach in Your Pocket
If the mental health trend is complex, the employability revolution is more straightforward — and more immediately actionable. The World Economic Forum reports that 88% of companies now use AI to screen job applicants before a human recruiter ever reads a CV. AI systems analyse language patterns, framing choices, tone, and key concept density against the job description. They produce composite scores. They decide who gets the call.
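For readers curious what "key concept density" scoring looks like in practice, here is a deliberately simplified Python sketch. It is a toy illustration under loose assumptions, not any vendor's actual algorithm: commercial screeners layer language models, structured CV parsing, and calibrated ranking on top of anything this crude, and every function name and threshold below is invented for the example.

```python
# Toy sketch of keyword-coverage CV scoring, loosely analogous to one
# signal automated screeners use. Purely illustrative; real systems are
# far more sophisticated and weigh many signals beyond word overlap.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def keyword_coverage(cv_text: str, job_description: str, top_n: int = 20) -> float:
    """Score a CV (0.0 to 1.0) by how many of the job description's
    most frequent content words it mentions at least once."""
    stopwords = {"the", "and", "a", "to", "of", "in", "for", "with", "on", "is", "we", "you"}
    jd_tokens = [t for t in tokenize(job_description) if t not in stopwords]
    top_keywords = [word for word, _ in Counter(jd_tokens).most_common(top_n)]
    cv_tokens = set(tokenize(cv_text))
    hits = sum(1 for kw in top_keywords if kw in cv_tokens)
    return hits / max(len(top_keywords), 1)

if __name__ == "__main__":
    jd = "We need a data analyst skilled in Python, SQL, dashboards and stakeholder reporting."
    cv = "Analyst with five years of Python and SQL experience building dashboards."
    print(f"Keyword coverage: {keyword_coverage(cv, jd):.0%}")
```

Even this toy makes the mechanics concrete: the score rewards overlap with the job description's own vocabulary, which is why tailoring a CV to each posting measurably helps, and why the authenticity caveat below still matters.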
At the same time, nearly two-thirds of job seekers are using AI to apply for those very jobs — drafting cover letters, researching salary benchmarks, practising interview answers, and negotiating offers. PwC’s 2025 Global AI Jobs Barometer found that workers who demonstrate AI skills command significantly higher wages than their peers, with the premium growing year on year.
This is, on balance, a net gain for individual candidates. Asking AI to run a mock interview for a senior role, to critique a negotiation script, or to benchmark your salary expectation against verified industry data is smart, legal, and increasingly necessary. The caveat is strategic: an AI-polished CV that sounds like every other AI-polished CV defeats its own purpose. Your authentic voice, your specific achievements, your human story — those are the differentiators that still win the room.
When the Chatbot Becomes a Criminal Tool
The most alarming development in AI usage in the past 18 months is the systematic weaponisation of chatbots by criminal actors. The FBI has formally warned that cybercriminals are using AI to craft highly personalised phishing emails with a 48% higher victim-response rate than traditional attempts. AI-generated phishing content can now be produced 40% faster than before. In 2025, global losses from AI-enabled cybercrime exceeded $193 billion.
Even more disturbing: research from leading AI safety institutions confirms that criminal groups have used thousands of iterative prompts to bypass chatbot safeguards and generate functional malicious code — code that in documented cases affected at least 17 organisations. AI voice and video cloning is now routinely used to impersonate family members, bank officials, and business partners in financial fraud schemes. Older adults in particular have suffered devastating losses, with a reported 8-fold increase in complaints related to AI-assisted fraud.
It is important to be clear: no legitimate AI platform knowingly facilitates these activities. These systems are monitored, requests are logged, and the legal consequences under cybercrime statutes in India, the EU, the UK, the USA, and most other jurisdictions are severe. The misuse exists despite safeguards, not because of their absence. The lesson for every user is equally clear: if you are attempting to use AI for any activity you would not describe openly to a law enforcement official, stop immediately.
A New Literacy for a New Era
What the data demands is not fear of AI — it demands informed use. The same technology that is helping a student in Kolkata prepare for a competitive interview, helping a working mother in Pune understand her child’s anxiety, and helping a startup in Bengaluru draft investor-ready documents is also being turned into a phishing machine and a fake therapist by those who either lack awareness or choose harm.
The emerging consensus among researchers, regulators, and practitioners can be distilled into five principles:
- Never act on high-stakes AI advice (medical, legal, or financial) without professional verification.
- Never share personal identification numbers, passwords, or sensitive records with any AI system.
- Use AI as a bridge to professional help, not a replacement for it, especially in mental health.
- Understand that AI-assisted fraud is now the dominant form of cybercrime, and that your voice, your face, and your writing can all be cloned without your consent.
- Treat AI output as a first draft, not a final truth, because hallucination, bias, and outdated information remain live risks in every system.
The billion-user threshold crossed in 2025 is not a warning sign. It is a milestone. AI is now part of the fabric of human life. The question is whether we engage with it as informed, critical, and ethically grounded citizens — or as passive consumers of a technology we do not yet fully understand. The answer to that question will determine whether AI becomes humanity’s most powerful lever for equity and progress, or its most efficiently distributed instrument of harm.
Ask AI for…
- Resume, cover letter & interview prep
- Salary benchmarking & negotiation scripts
- Stress management & journaling prompts
- Understanding a doctor’s diagnosis
- Decoding contracts or legal concepts
- Research, writing & learning support
- Business plans, SOPs & SEO content
Never ask AI for…
- Crisis mental health or suicide support — call a helpline
- Self-diagnosis or prescription changes
- Hacking tools, malware, or exploit code
- Deepfakes or identity impersonation
- Sharing Aadhaar, PAN, passwords, or bank details
- Definitive legal or financial decisions
- Any activity illegal in your jurisdiction