
AI Is Changing Cybersecurity Forever

Written by Rajender Pal Singh & Laura Havok
Published on March 11, 2026

Technology has always been a driving force behind the evolution of cybercrime, but artificial intelligence marks a tipping point. What once required time, coordination, and technical expertise can now be automated, customized, and scaled with alarming efficiency, and without any real technical knowledge. The barriers to entry are disappearing; anyone can be a hacker now.

AI-powered phishing and deepfakes aren’t exactly new, but these threats have grown to become the standard.

Let’s talk about that.

From Large-Scale Fraud to Targeted Attacks

Phishing in and of itself is not very efficient. One email will almost never cut it. Traditionally, a staggering number of phishing emails had to be sent out, usually over a long period of time; the focus was always on quantity rather than quality. That meant phishing emails were often easy to spot: overly generic, full of grammatical errors, suspicious links, and obviously bad formatting. They relied on a sense of urgency to make people click before they think (or look too closely). And, rather paradoxically, it works. To this day, phishing remains one of the most widely used and reliable methods of gaining initial access to a system.

But AI is changing the game.

Today, cybercriminals use large language models (LLMs), what we colloquially call “AI,” to compose messages that are far more convincing than ever before. And with these higher-quality, more targeted attacks, they don’t even need to send as many as they did in the past.

Instead of generic emails from Arabian princes or Micros0ft support, we have messages tailored to specific industries or positions. They look like they’re coming from your boss’s personal email account, or a vendor your company does business with. They’re written in a businesslike tone. They might reference things that are currently happening within the company, to make them look more credible. Any information about your company that might be floating around the internet can be collected and used by an LLM.

An email is sent to the finance department by someone who appears to be the CFO. They’re authorizing a large payment to a new vendor. Someone doesn’t dig any deeper and pushes it through. No prizes for guessing, that money isn’t going to a vendor.

An email arrives in a client’s inbox, reminding them of an overdue invoice and providing a link to make a payment. It looks like it’s from your company. They don’t think twice. They don’t call and talk to you about it. They make a payment.

You get the idea.

AI Deepfakes: Is Nothing Real Anymore?

AI-generated phishing emails are one thing, and certainly a huge concern in the world of cybersecurity. But something that’s becoming even more concerning is the rise of deepfakes among scammers.

Remember Will Smith eating spaghetti? Yeah, that was rough. For those who don’t, the AI video first popped up in 2023, and in it, Will Smith ate spaghetti. Sort of. It highlighted the early limitations of AI video, and many have adopted it as an unofficial test of the capabilities of generative AI. Since then, many attempts have been made to feed poor Will Smith his delicious spaghetti, each more convincing than the last.

In February 2026, Seedance 2.0 succeeded.1 Will Smith ate his spaghetti, and it was pretty much flawless. Anyone else feel like we might be cooked?

All this is important because if they can make Will Smith eat spaghetti, they can make anyone do pretty much anything.

Of course, trolls will use this technology to create fake videos of their favourite celebrities doing and saying wild things. Whether it was meant as a harmless joke is irrelevant; it can be incredibly damaging. How do you prove you didn’t say a racial slur in a viral video when it’s so convincing? How do you convince your wife that girl isn’t real when she sees the “evidence” right there on Facebook?

But this goes beyond internet trolls. AI-generated videos and voices can be used to trick employees and business owners alike. With new AI tools, any voice can be cloned and turned into a text-to-speech model. Imagine your spouse receiving a voicemail in your voice, or your CEO’s voice on the phone authorizing a payout. It sounds so real, and so urgent. Why would you doubt it?

That’s the problem. Your instinct is not to doubt it. These attacks play on human psychology, our willingness to trust the familiar. And might I just say, that suuuuucks.

But wait, it goes even further beyond that! In extreme cases, threat actors are using AI to generate fraudulent identities (complete with a fake ID) to get themselves hired at an organization to attack it from within.2 If they work remotely, it becomes much harder to verify identity.

In today’s AI-powered world, the Zero Trust mindset is more important than ever.

Let’s Take a Moment to Process This

This is a lot so far, so let’s break it down and summarize:

  1. Anything you post on social media, and any public information on your company’s website or in other business databases, can be used as ammunition by LLMs when crafting phishing emails or deepfakes.
  2. LLMs are capable of producing substantially more emails, higher quality emails, and more targeted emails than traditional scammers.
  3. New AI models can create hyper-convincing videos and voice clones, making it that much harder to recognize when things aren’t real.
  4. Urgency is used to put victims on tilt, forcing them into quick decisions they might later regret.
  5. Scam communications can be tested, tweaked, and optimized in real time by LLMs based on the responses of victims. Even failures educate scammers and make their next attempt that much better.
  6. Cybercriminals don’t need to be good at their job anymore. In the past, even the most basic script kiddie needed the skills to create malware and malicious scripts. Now, anyone can gain access to Malware as a Service (MaaS) and use it to launch attacks with the help of bog-standard AI tools.

Who’s Most at Risk?

Everyone is at risk, but organizations are the most susceptible. It mostly boils down to risk versus reward: a scammer might get a small payout from an individual, but attacking a business or organization offers a much greater reward for the risk taken.

As ever, small and medium businesses (SMBs) are at the greatest risk. We’ve talked about this before, and the sad truth is, that will probably never change. Small businesses just don’t have the resources or IT staff to fight cybercrime the way big corporations do, nor are they as educated about the dangers. Without anyone providing them business IT services, they’re dead in the water.

Businesses with a lot of remote or hybrid workers are also at greater risk. Yes, they’re at risk of bad actors using deepfakes to plant themselves within the company, but far more commonly, legitimate remote workers can become victims very easily. Without the same level of in-person verification, these remote and hybrid workers are more vulnerable to impersonation.

Awareness Is Half the Battle, but It’s Not Enough

Cybersecurity training and education is a hard must for everyone, but especially businesses. If you are a living, breathing human in 2026 and beyond, you need to be aware of cyber threats. It’s no longer optional. But awareness alone is only half the battle. If businesses don’t act on this knowledge, adjust their behaviours, and adopt a stronger cybersecurity posture, nothing will improve.

  • Require a multistep verification process for financial transactions. Always confirm with supervisors or vendors before sending any money anywhere; in person, if possible.
  • Put stricter rules in place for demands for payment or data. Taking a single call or email at face value is too risky.
  • Verify inbound calls to ensure legitimacy. For clients, create verbal support PINs so that no sensitive information is given out without authentication.
  • Question anything and everything that seems even remotely out of place; anyone can be spoofed, even the Big Boss. Managers will be less upset about double-checking than they will be about a security breach (and if not, perhaps an evaluation of company culture is needed).
  • Adopt more powerful cybersecurity tools, including AI tools that can monitor networks and systems 24/7, flag anything out of the ordinary, and alert IT teams. (Yes, AI tools will likely become the best defence against AI attacks. Shocker.)
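To make one of these checks concrete, here is a minimal Python sketch of a cheap automated signal: flagging emails whose display name claims a known executive while the sending address comes from an outside domain. The names and domains are hypothetical placeholders, and a single heuristic like this is a starting point, not a real filtering product.

```python
# Illustrative sketch: flag messages where the display name claims a known
# executive but the sending domain isn't the company's own.
# COMPANY_DOMAIN and EXECUTIVES are hypothetical placeholders.

COMPANY_DOMAIN = "example.com"
EXECUTIVES = {"jane doe", "john smith"}  # display names worth protecting

def looks_like_impersonation(display_name: str, sender_address: str) -> bool:
    """Return True if the message claims an executive identity from an
    outside domain -- a cheap warning signal, not proof of fraud."""
    name = display_name.strip().lower()
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return name in EXECUTIVES and domain != COMPANY_DOMAIN

# A message "from the CFO" sent via a free mail provider gets flagged:
print(looks_like_impersonation("Jane Doe", "jane.doe@freemail.example"))  # True
print(looks_like_impersonation("Jane Doe", "jane.doe@example.com"))       # False
```

A real mail gateway layers many signals like this (domain age, reply-to mismatches, authentication results); the point is simply that "out of place" can often be checked automatically.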

If you don't have your own internal IT team, partner with a Managed Services Provider (MSP) like us. Our job is to take as much of the tech burden off your shoulders as possible, and we pride ourselves on providing businesses across Alberta with top-notch cybersecurity and IT services.

Trust No 1: The Fox Mulder Way

Zero Trust is the best way to protect yourself and your business in today’s ever-evolving threat landscape. I really cannot stress that enough. That includes tools like conditional access, proper endpoint protection, and even partnering with a Managed Services Provider to patch the holes your systems can’t cover on their own.

We need to shift the mindset from “default allow” to “default deny.” Question everything, always think critically, and never accept things at face value.

  • Why would this person be sending you this email?
  • Why haven’t you been told about this new vendor before now, when you’re supposed to send them $10,000?
  • Why does this person, who doesn’t work in finance, need access to your accounting files?
  • Why would someone send you a bid request with a link to a PDF, rather than posting on a verified bidding platform?
  • Why would you receive a reminder of an overdue invoice if your records show you’re all paid up?

Think before you click.

Trust can no longer simply be earned once and assumed forever. Just because something looks like it’s from a trusted source doesn’t mean it is. Always authenticate. Always verify. Always ask. These aren’t new concepts; they’re just more important now than ever.

Cybercriminals think you’re an easy target. Let’s change that.

Sources and Further Reading

  1. AI Nailed The ‘Will Smith Eating Spaghetti’ Test—What Comes Next?
  2. AI and Deepfakes Supercharge Sophisticated Cyber-Attacks: Cloudflare - Infosecurity Magazine