Case In Point: Lessons for the Proactive Manager

Volume 18 Issue 02 | February 2026

Each month in Case in Point we attempt to bring you relevant information on new and emerging risks. One risk we’ve noticed increasing here at AU recently is phishing. Scammers are using artificial intelligence to enhance phishing attempts, and frankly, some of these can be very convincing and hard to detect. Recent attacks have particularly targeted leadership at our campus.

For more insight on this topic, I’m handing the reins to one of our front-line defenders against attacks here at AU. Jay James is a cybersecurity leader and educator serving as Cybersecurity Operations Manager at Auburn University, where he oversees the Security Operations Center and various AI-enabled security initiatives across the institution.

Social engineering has always relied on one basic principle: convincing someone to act. Artificial intelligence has made that easier.

When a message feels routine, phishing becomes far more effective. It remains one of the most common ways attackers gain access to systems because a well-timed email referencing a real project, event, or deadline rarely raises suspicion. With AI tools making that level of detail easy to generate, the likelihood of falling for a phishing email increases substantially.

In many cases, email is only the starting point in a broader effort.

A phone call may follow, using AI voice tools that replicate speech from short audio samples. The caller may sound like a supervisor requesting urgent action or approving a transaction. Short video clips, commonly known as “deepfakes,” can appear to show a familiar face delivering instructions. A text message can arrive minutes later with a link, reinforcing the earlier request.

In some cases, these contacts are coordinated. In an example scenario, an employee first receives an email about a payroll update. Shortly afterward, a call comes from someone who sounds like a department leader confirming it. A text message follows with a link to complete the change.

Each interaction makes the situation feel more legitimate.

These tactics rely on urgency, authority, and familiarity. When multiple channels are used together, the pressure increases and the time to think decreases.

Technical safeguards block many threats before they reach users, yet some messages still arrive in inboxes or on phones. That is where individual judgment matters.

Here are a few consistent habits that can significantly reduce risk:

  1. Pause when a request feels urgent or unusually specific: AI can generate detailed messages that reference real projects, deadlines, or colleagues to prompt quick action.
  2. Verify independently, even if the sender looks or sounds familiar: AI voice cloning and deepfake tools can imitate trusted individuals. Confirm through a known phone number or direct conversation.
  3. Access systems directly and decline unexpected prompts: AI-generated phishing pages closely mimic legitimate login portals. Type official web addresses yourself and reject unexpected multi-factor authentication requests.
  4. Do not trust surface indicators: Caller ID, display names, writing style, and even video clips can be fabricated using AI tools.
  5. Report suspicious activity immediately: AI enables attackers to target multiple individuals quickly. Early reporting helps identify patterns and limit impact.

Social engineering succeeds when action happens before verification. Taking a brief pause to confirm a request through a trusted channel can prevent account compromise, financial loss, and disruption to operations.

Thank you, Jay, for this great information and advice. While phishing is a significant risk, it is but one of many we continually face in higher education. Therefore, we invite you to review the events of the prior month with a view toward proactively managing risks. As always, we welcome your feedback.


Kevin Robinson
Vice President
Institutional Compliance & Security

Information Security & Technology Events


Fraud & Ethics Related Events


Compliance / Regulatory & Legal Events


Campus Life & Safety Events