State of the Scamiverse

 

New research shows that scams have become so realistic that most consumers don't realize they've been targeted until after the damage is done.

Summary

Two in five Indians surveyed (41%) say they feel less confident spotting scams than they did a year ago, and the data shows why. Indians now receive an average of 13 scam messages every single day, scattered across text, email, social media, phone calls, and even QR codes. And the scams look and sound more realistic than ever.

In fact, scams have become so ubiquitous in everyday digital life that, according to our survey, the average person spends 102 hours a year trying to distinguish what’s real from what’s fake online. That’s nearly three full work weeks of time lost.

Seven in ten Indians (70%) surveyed say they had a social media account compromised in the last year, underscoring why so many feel the traditional signs used to spot scams, such as poor grammar or obvious impersonation, no longer work.

Scammers increasingly use professional language, polished branding, and believable scenarios: fake delivery notices, account verification requests, subscription renewals, tax messages, job offers, charity appeals, and bank alerts that closely resemble legitimate communications. They layer in deepfake videos and voice calls and hide malicious sites behind QR codes that appear on menus, parking meters, posters, and emails that otherwise look innocuous.

People are trying to adapt: 82% of Indians surveyed by McAfee say they’re now more cautious about opening messages from unknown senders, and 84% have made an effort to educate themselves. Compare that to 2024, when just 43% of respondents to our survey said they were more cautious about opening messages from unknown senders. Yet even with greater self-education and vigilance, the gap between what people feel prepared for and what scammers can trick them into continues to widen.

One thing our research clearly highlighted is that the 2026 State of the Scamiverse is evolving to outpace even the most vigilant users. Scams have grown more realistic, leading people to grow less confident in who and what to trust.

Research Highlights: Scam Signs Are Harder to Spot

Over the past year, artificial intelligence has become part of everyday internet culture. AI-generated content now appears everywhere, from politics and celebrity impersonations to surreal viral clips like bunnies jumping on trampolines, dogs hosting podcasts, and bears caught on backyard cameras. This low-effort “AI-generated slop,” named Merriam-Webster’s 2025 Word of the Year, fills many social feeds. Much of this content is harmless entertainment. Some of it is not.

The constant exposure to harmless AI-generated content can have a subtle effect, lowering people’s guard and making it harder to recognize when similar tools are being used with malicious intent. And that's dangerous as scammers are increasingly borrowing the same AI tools and techniques to make their schemes more convincing.

Phishing scams have upped their game: scammers can quickly and easily craft a malicious site that looks almost identical to a legitimate company’s, or carry on a conversation that feels real, luring recipients into a false sense of security.

As technology and AI tools continue to advance and become more accessible, scam content is becoming both more prolific and more realistic, making it harder to identify. The traditional hallmarks people have relied on to spot scams, such as strange links, odd grammar, and bizarre requests, are no longer enough.

The data shows how quickly this happened:

Facts from the consumer survey:

  • 13 scam messages/day → a time tax of 102 hours, or nearly three work weeks, per year.
  • More than 1 in 5 people say suspicious social messages now contain no link at all: nothing to hover over, no URL to question.
  • 66% of people reply to those linkless DMs, often triggering the scam’s next step.
  • 70% of Indians say their social account was compromised in the past year.
  • Nearly 9 in 10 Indians encountered a suspicious QR code, and 38% of those people landed on a dangerous page after scanning one.
  • Among people harmed by a scam, the typical scam played out in about 30 minutes or less.
  • People see an average of 4 deepfakes every day.
  • 87% of Indians say they’ve personally experienced or encountered an online scam.
  • Over half of all Indians (51%) say they’ve lost money to a scam.
  • Indians who lost money to a scam reported losing an average of ₹93,915.
  • 5 minutes → the time it took some scammers to cheat their target out of money or information.
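The time-tax figures above hold up to a quick back-of-the-envelope check. A minimal sketch, assuming a standard 40-hour work week (the survey does not specify one):

```python
# Back-of-the-envelope check of the survey's time-tax figures.
# The 102-hour figure comes from the survey; the 40-hour week is an assumption.
HOURS_LOST_PER_YEAR = 102
WORK_WEEK_HOURS = 40

work_weeks_lost = HOURS_LOST_PER_YEAR / WORK_WEEK_HOURS   # ~2.55 weeks, i.e. "nearly three"
minutes_per_day = HOURS_LOST_PER_YEAR * 60 / 365          # ~16.8 minutes of vetting per day

print(f"{work_weeks_lost:.2f} work weeks, {minutes_per_day:.1f} minutes per day")
```

Spread across a year, the headline figure works out to roughly a quarter hour of message-vetting every day.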

We haven’t just seen more scams. We’ve seen familiar scam tactics become far more convincing.

The structure of a scam message is now often indistinguishable from the structure of a normal message. And that’s dangerous.

Deepfakes 2.0: When Anyone’s Face or Voice Can Be Faked

Just a few years ago, deepfakes were easy to spot. They had telltale glitches: extra fingers, strange lighting, stiff expressions, or that unmistakable uncanny-valley shimmer. Today, they’re more lifelike than ever, and they increasingly show up in ordinary digital spaces, not just viral videos or political misinformation.

According to our survey findings, Indians see four deepfakes per day on average, often mixed seamlessly with real content. And as the technology improves, one of the clearest tests people used to trust (“does this look or sound real?”) no longer works.

That erosion of confidence is reflected in the data: more than one in three Indians surveyed by McAfee say they aren’t confident they can identify a deepfake scam, and a similar share say they don’t feel confident protecting themselves if a deepfake targets them.

Consumers report seeing deepfakes, not necessarily scams, but AI-generated videos, everywhere: most commonly on Instagram (65%) and Facebook (59%), but also across YouTube (48%), Telegram (44%), and WhatsApp (41%). Among consumers who encounter deepfakes, most believe a significant share are tied to scams: almost half say at least 50% of the deepfakes they see are deceptive, and another 31% believe half of those they see are scams. Even private messaging and community spaces aren’t immune; WhatsApp, Snapchat, Telegram, Reddit, Discord, and LinkedIn all surfaced in consumer reports. Deepfakes are no longer a niche phenomenon. They’re part of the social fabric.

For many, the threat feels personal: one in five Indians surveyed by McAfee say they have already experienced a voice-clone scam. These scams come in a variety of forms, from mimicking a celebrity to impersonating a loved one’s voice. They often generate a sense of urgency that pushes victims to hand over money or personal information quickly, before there’s time to verify the situation.

Deepfakes also introduce a different layer of risk. Not all deepfakes are scams, and many are created for entertainment or creative expression. However, as AI-generated video becomes more common in everyday online content, people grow more accustomed to seeing it and less confident in their ability to question it. More than one in four Indians (27-35%) in our survey say they are not confident they can spot deepfake scams, and that familiarity can lower skepticism, making scam-related impersonations and deceptive content easier to believe. Bad actors leverage this false sense of security along with a sense of urgency to anchor their scams:

  • A recruiter whose intro video looks exactly like a real HR rep
  • A bank agent who appears on a video call to discuss an account issue
  • A celebrity endorsing an investment that never existed
  • A distressed family member asking for urgent help
  • A government or service agent with an AI-generated voice and callback number

The result is a combination of psychological manipulation and a digital environment in which scams feel plausible before the victim has a chance to feel suspicious, leaving people to navigate confusion in real time.

Scam Economics: Losses, Time Costs, and Emotional Toll

The financial impact of scams is rising, but the currency of those losses is changing. Scams today span everything from high-rupee investment fraud to everyday impersonation messages that steal personal information and drain time, attention, and emotional bandwidth.

Together, they form a scam economy that is both more expensive and more exhausting for consumers to navigate.

Investment scams remain the most financially devastating category.

According to data compiled by the Indian Cyber Crime Coordination Centre (I4C), Indians lost Rs 19,813 crore to fraud and cheating cases in 2025, with 21.77 lakh complaints registered on the National Cyber Crime Reporting Portal. Around 77% of these losses stemmed from fraudulent investment schemes, a category that is increasingly sophisticated and harder to spot. These figures, current as of early January 2026, are preliminary, as final reporting continues through the portal.

Scams cost more than just money. They cost time.

According to McAfee’s survey, Indians now lose 102 hours per year simply trying to determine whether a message, alert, call, or notification is real. 

That’s nearly three full work weeks spent evaluating everyday digital interactions. Instead of a one-off event, scams have become an ongoing time tax embedded into modern life.

The emotional toll is also rising.

In the same survey, over half of all Indians (51%) reported that they had lost money to a scam, and 24% of those victims were targeted again within a year. Younger adults report the highest recurrence rates, underscoring that scams affect all age groups, not just seniors, as many may assume.

Beyond financial harm, scams introduce anxiety, hesitation, and second-guessing into everyday tasks, from opening messages to checking account alerts. Nearly two-thirds of Indians (63%) believe their personal information is more at risk today than a year ago, and more than one in three say they feel less confident spotting scams.

The result is an economic picture defined by more than monetary loss. It includes an erosion of time, trust, and confidence.

The Future of Scams: What Our Research Tells Us About 2026

Scams are moving past one-off messages and becoming systems that are longer, more coordinated, and designed to blend into the digital routines people complete every day. McAfee Labs research shows several patterns from 2025 that point directly to how scams are likely to evolve in 2026, especially as scammers mimic the workflows people trust most.

Everyday online storage impersonation scams: the next major frontier

One of the clearest signs of this shift was the rise in cloud storage and account-notice impersonation. Millions of consumers use cloud storage services such as Google Drive, iCloud, or Dropbox to store and share everything from important documents to cherished family photos, making them a target-rich environment for scammers to exploit.

In October and November of 2025, McAfee Labs observed a significant increase in scams mimicking cloud service providers. These messages appeared very similar to the real thing and were designed to instill a sense of urgency with a need for immediate action:

  • “Your account storage is full”
  • “Your password expired”
  • “A new device signed in”
  • “A file has been shared with you”

They succeed because they resemble the routine notifications people handle every day. Cloud services are so embedded in modern life (email, photos, authentication, documents) that people rarely pause to question whether an alert is legitimate.

In 2026, we expect these scams to become multi-step impersonations, not one-off notifications. Instead of prompting one action, scammers may try to replicate a normal cloud workflow: an account warning → a login request → a two-factor authorization (2FA) -style prompt → a document preview.

Each step feels ordinary on its own and, in fact, the complexity can make the process seem official, which is exactly why consumers may overlook subtle signs of fraud.

Other patterns that point to future risks

  • Job scams will grow more personal.
    • McAfee Labs identified increasingly tailored job scams in 2025, with fraudsters customizing outreach to specific roles, industries, and career stages. In 2026, scammers may use AI tools to customize postings, onboarding steps, and even contracts to mirror a victim’s real background or industry. As the job market grows more competitive, “hustle” job scams will become a real risk to job seekers.
  • Malicious ads are poised to climb.
    • Deepfake ads and synthetic celebrity endorsements are already widespread, and their quality is improving. These can be used to drive people toward fraudulent investment platforms, fake downloads, or credential-harvesting pages.
  • A long-con that begins as a simple conversation may become more common.
    • Scammers now run relationship-based scams that unfold over days or weeks, starting with simple messages like “hi” or “how are you?” instead of urgent warnings. Once a victim replies, iOS and Android treat that number differently from an unknown sender’s, moving it into a more trusted message state and making future scam messages more likely to reach the main inbox. This allows scammers to maintain context, build trust, and later introduce links, requests for codes, or financial asks that feel like part of an ongoing conversation rather than a cold scam.
  • Targeting will continue to sharpen.
    • AI tools make it easier to scrape public social content and build detailed profiles based on photos, posts, and information shared online. This enables more convincing impersonation, more relevant outreach, and scams that feel tailored to a person’s habits.
  • Crypto and financial scams are likely to intensify.
    • Periods of market volatility traditionally create openings for investment fraud. Fake crypto platforms, fraudulent trading apps, and misleading financial ads may increase as scammers exploit economic uncertainty.
  • VPN misuse will create new scam entry points.
    • VPNs remain an important privacy tool, and a trusted VPN is critical, particularly on untrusted networks. However, recent age-verification laws tied to adult content have driven spikes in VPN use as consumers attempt to bypass local restrictions. This increased demand creates opportunities for scammers to promote fake or malicious VPN apps, browser extensions, and look-alike download sites.

What these trends mean for consumers

Scams are becoming systemic, adaptive, and embedded in the tools people use every day. Instead of relying on obvious warning signs, consumers are increasingly asked to evaluate alerts, messages, and prompts that look and behave like the real thing.

The takeaway for 2026 is simple: scams will become harder to recognize as they increasingly resemble the trusted digital workflows people use without thinking twice.

How to Protect Yourself When Scams Get Harder to Spot

As scams become more realistic and blend into everyday digital life, protection shifts from looking only for traditionally clear red flags to also picking out the subtle giveaways that suggest content may not be legitimate. Protection in 2026 is less about spotting bad grammar, spelling mistakes, and poorly designed imitation websites; it requires a combination of personal skepticism and automated protection capable of catching even subtle idiosyncrasies across multiple platforms.

Because modern scams operate across email, text, social media, calls, and QR codes, consumers increasingly rely on cybersecurity tools that provide real-time scam detection, identity monitoring, account takeover protection, and AI-driven analysis. These layers go beyond what the human eye can catch, especially when the scam blends in.

That’s not to say the traditional indicators of a scam are obsolete. Many traditional scams are still out there, and those hallmarks make a good first pass; increasingly, though, they can’t be the only pass.

  1. Before you reply to any message
    • Confirm you know who it’s from, even if it looks familiar.
    • Don’t respond to unexpected “verification” or “urgent” requests.
    • Treat linkless messages as suspicious if they appear out of context.
    • Avoid engaging with unknown and untrusted senders.
  2. Before you click or scan anything
    • Preview QR codes with a trusted QR scanner.
    • Make sure a QR code isn’t a sticker covering a legitimate one.
    • Avoid scanning codes from flyers, parking lots, restaurant tables, or random screens.
    • Don’t click login or payment links in DM notifications.
  3. When setting up your accounts
    • Use separate passwords for every account and consider a password manager.
    • Where possible, set up 2FA on your accounts for an added layer of protection.
  4. Before you share personal information
    • Never give codes, passwords, or 2FA approvals to anyone.
    • Verify government notices through official websites, not through the message you received.
    • Call your bank or service provider using the number on the company’s website, not the one given in a message.
  5. Before you trust a face or voice
    • Be skeptical of “urgent” calls from family members asking for money.
    • Hang up and call back using a known number.
    • Don’t rely on appearance or audio alone; deepfakes can mimic both.
  6. Review your social media privacy settings
    • Turn on 2FA across key accounts.
    • Use strong, unique passwords.
    • Enable alerts for new logins, password changes, and account recovery attempts.
  7. Choose security software that can
    • Detect unsafe texts, emails, and DMs, even without visible links.
    • Scan QR codes for malicious redirects.
    • Flag deepfake audio/video in suspicious interactions.
    • Monitor for identity leaks, breached credentials, and account takeover attempts.
    • Provide real-time warnings across SMS, email, social platforms, and browsers.
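Some of the “before you click or scan” checks above can be partly automated. A minimal, illustrative sketch in Python; the heuristics, the `url_red_flags` helper, and the shortener list are hypothetical examples, not McAfee’s detection logic:

```python
from urllib.parse import urlparse

# Hypothetical first-pass heuristics for vetting a decoded QR code or DM link.
# Real scam detection is far more sophisticated; these checks catch only the
# most obvious red flags and are illustrative, not exhaustive.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}  # common link shorteners (assumption)

def url_red_flags(url: str) -> list[str]:
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode hostname (possible lookalike domain)")
    if host in SHORTENERS:
        flags.append("shortened link hides the destination")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain")
    if host.count(".") >= 4:
        flags.append("deeply nested subdomains (brand may be spoofed)")
    return flags

print(url_red_flags("http://bit.ly/prize"))                 # flags HTTP and the shortener
print(url_red_flags("https://accounts.example.com/login"))  # no obvious red flags
```

An empty result here means only that nothing obvious was found, which is exactly why layered, real-time protection still matters.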

Conclusion

Scams continued to change in 2025. They became more realistic, more routine, and harder to distinguish from the messages people already trust. Alerts, account notices, job leads, delivery updates, and even familiar faces and voices now have credible imitations.

The shift continues: scams have become part of the noise.

As scammers automate, personalize, and move across every platform, people are facing more threats with fewer reliable signs to guide them. Confidence is dropping, time spent verifying messages is rising, and instinct alone isn’t enough.

Staying safe in 2026 comes down to three essentials: awareness, skepticism, and protection that can detect risks in real time. Consumers shouldn’t have to navigate this alone, and with McAfee, they don’t have to.

Methodology

McAfee Labs Data Sources

Insights in this report draw from McAfee Labs’ ongoing monitoring and analysis of global scam activity across email, SMS, social media, cloud platforms, and emerging AI-driven vectors.

This combined dataset enables cross-validation of consumer-reported experiences with observed threat activity in the wild.

Consumer Survey Methodology

In addition to Labs research, McAfee commissioned a global consumer survey to assess attitudes, behaviors, and real-world experiences related to online scams.

  • Fieldwork: November 2025
  • Method: Online survey
  • Sample size: 7,592 adults (age 18+)
  • Countries surveyed: United States, Australia, India, United Kingdom, France, Germany, and Japan
  • Focus areas:
    • Frequency and types of scams encountered
    • Self-reported victimization and financial impact
    • Confidence in recognizing scams and deepfakes
    • Emerging behaviors (QR codes, linkless scams, impersonation attempts)

Findings reflect consumer-reported experiences combined with real-time threat intelligence from McAfee Labs, providing a comprehensive view of how scams evolved in 2025 and what to expect in 2026.