Empathy Is the Superpower in Cybersecurity

Cybersecurity is often talked about in terms of rules, tools, and tech jargon. Yet because a culture of security is created and maintained by people, an unexpectedly human element plays a large part in building one: empathy. 

Yes, empathy. That very human skill we use every day on campus: when a professor helps a struggling student, when a staff member supports a colleague after a tough week, or when a student offers a friend a listening ear during finals. Empathy helps us connect, collaborate, and thrive. 

And it also helps us stay secure. 

Security Is Personal 

Last semester, a staff member woke up to an inbox flooded with thousands of messages. Their email client slowed to a crawl, legitimate emails were buried, and office workflows ground to a halt. They worried about the fallout, about missing real messages, and about what they might have done to cause it. In fact, they hadn’t done anything. 

Another time, a student was misled into believing that their Microsoft account would be turned off during midterms. As a result, their account was compromised and a phishing email was sent from it. It was an honest mistake; bad actors are very convincing and take advantage of people’s fears. Dealing with the aftermath was both stressful and embarrassing for the student. 

In both cases, no one was trying to be careless. They were busy, trying to keep up with their day, and were targeted simply for having a digital presence online, just like the rest of us. 

The difference in how we respond to these situations is what defines our campus culture. When we lead with empathy, we stop shaming people for making mistakes and start building a stronger, more supportive security mindset. 

Why Empathy Makes Us Safer 

Cybersecurity isn’t just about firewalls and password managers. It’s about how we treat each other when something goes wrong, and how willing we are to speak up and support one another before things go wrong. 

Empathy helps us: 

  • Encourage questions, even when someone feels unsure 
  • Foster a culture where people report mistakes quickly, without fear 
  • Teach others with patience instead of frustration 
  • Recognize that everyone, from students to administrators, can be a target of scams 

Security is a shared responsibility. Here’s how empathy helps us all take part: 

  • Think before you click
    If something seems off in an email, slow down. Ask a colleague, reach out to IT, or report it. Better to double-check than regret it later. 
  • Be supportive
    If someone falls for a phishing scam or makes a security mistake, resist the urge to blame. Instead, help them report it and see it as a learning moment for everyone. 
  • Report suspicious activity
    When you speak up, you might prevent the same thing from happening to someone else. You’re not overreacting, you’re protecting the community. 
  • Keep learning
    Take the time to go through university-provided training or tips. Even a few minutes can help you learn something new or bolster the security skills you already have. 

Empathy may not be a security tool you can download, but it might be the most powerful one we have. When we treat cybersecurity as a shared, human responsibility, everyone wins. 

Let’s protect each other not just with strong passwords, but with stronger support, understanding, and care. 

Contact the ITS Service Center if you need help. 

Visit securecuse.syr.edu for more information on security practices at Syracuse. For assistance, call the ITS Service Center at 315.443.2677 or email help@syr.edu.

Data Breaches and Higher Education: Why We Need to be Secure

In late June 2025, Columbia University announced that a cyberattack compromised the personal information of students and employees. Among the exposed data were Social Security numbers and other sensitive identifiers, putting both privacy and financial security at risk. 

This wasn’t just a theoretical risk: with stolen SSNs and personal info, victims may face identity theft, fraud, or other long‑term harm. The breach serves as a reminder that even big, well-resourced institutions are vulnerable. Let’s unpack what happened, what we do to secure our campus, and what you can do to protect yourself. 

What Went Wrong 

From public information: 

  • The attackers were able to access systems holding personally identifiable information (PII) for a large number of students and staff. 
  • It isn’t clear how the breach began (e.g., phishing, vulnerable software, an insider threat), but the fact that SSNs were exposed means the attackers got deep enough to reach very sensitive, regulated data. 

Universities often have complex IT environments: many different systems, lots of users, research data, third‑party services, legacy software, etc. All of this increases attack surface. So, when one part is weak, attackers can try to use it as an entry point. 

What Syracuse Does to Protect the University 

Here are key steps Syracuse takes to protect our community and data: 

  1. Protect the Crown Jewels
    We identify the most sensitive data (SSNs, health info, financial aid, research data) and ensure it is under strong protection. Encryption, access controls, logging, and least privilege are essential. 
  2. Layered Defenses & Redundancy
    We don’t rely on one line of defense. Firewalls, intrusion detection, MFA (Multi‑Factor Authentication), network segmentation, and regular audits all reduce risk and limit damage if a breach occurs. 
  3. Third‑Party & Vendor Risk Management
    Many breaches happen because of weak security in third‑party tools or services the university uses. ITS makes sure contracts, security assessments, and continuous oversight cover vendors’ cybersecurity practices. 
  4. Incident Response Plan & Fast Notification
    We maintain a clear plan for how to detect, respond, communicate, and recover, including how to inform affected individuals quickly and transparently. If a breach should occur, the University is responsible not only for fixing it but also for helping those impacted (faculty, staff, and students) navigate identity theft protection or credit monitoring if needed. 
  5. Regular Monitoring & Auditing
    ITS staff monitor logs, unusual behavior, and access patterns. Monitoring helps detect anomalies early, buying time to stop damage before it spreads. 
  6. Training & Awareness
    Many attacks start with phishing or human error. Students, faculty, and staff are trained to recognize and report suspicious emails, avoid reusing passwords, and safeguard personal information. 

What You Should Do 

Even if you’re not an IT staff member, you can protect yourself and the campus as a whole: 

  • Use strong, unique passwords and enable MFA wherever possible. 
  • Watch for emails that look off, especially those asking for personal info, account verification, or linking you to login pages. 
  • Regularly check your credit/financial statements if your SSN or similar info is exposed. 
  • Know your rights: what support the university offers in case of identity theft or data misuse (credit monitoring, notification, etc.). 

Final Thoughts 

The Columbia University breach is a wake‑up call: even prestigious, well‑funded institutions are not immune. Exposure of sensitive data can have long, cascading effects for both the university and the individuals involved. But the upside? Many of the defensive measures are well‑known, practical, and doable. 

Universities that implement better planning, protection, and communication reduce not just risk, but anxiety for everyone. For students, faculty, and staff, staying vigilant, practicing safe digital habits, and knowing what to do when things go wrong can make a real difference. 

 

Contact the ITS Service Center if you need help. 

Visit securecuse.syr.edu for more information on security practices at Syracuse. For assistance, call the ITS Service Center at 315.443.2677 or email help@syr.edu.

AI Insights for September 18, 2025

This message was originally shared with subscribers on September 18, 2025.

AI at Work Returns Oct. 9

Join Information Technology Services for AI at Work on Oct. 9 from 1–2:30 p.m. in the K.G. Tan Auditorium (and via Microsoft Teams). This timely discussion will explore the safe, ethical and effective use of generative AI in the workplace and classroom. Speakers will include Associate Professor Johannes Himmelreich of the Maxwell School of Citizenship and Public Affairs, along with representatives from ITS and Deloitte.

In This Issue

This edition’s articles look at how automation is reshaping the job market, how educators are racing to build AI literacy, and how trillion-dollar investments are fueling new tools and corporate rivalries. Policymakers are stepping in too, with fresh inquiries into chatbots, new transparency bills, and global pushes for technological dominance. Alongside the hype, researchers are uncovering stubborn flaws like hallucinations, and ethical questions are surfacing in unexpected places like therapy sessions.

News and Views

Access to The New York Times, The Wall Street Journal and The Washington Post is available to all students, faculty and staff with a valid Syracuse University NetID. Learn more.

Academia and Education

  • We’re Entering a New Phase of AI in Schools. How Are States Responding? (EducationWeek)
  • Thinking Big, Starting Small: Insights from a Summit on AI Adoption in Higher Education (Educause Review)
  • BNY and Carnegie Mellon University Join Forces To Advance AI Education and Research (Carnegie Mellon University)
  • Major Organizations Commit to Supporting AI Education (The White House)

Policy, Ethics and Governance

  • First Lady Melania Trump: Presidential AI Challenge (The White House)
  • We’re Getting the Argument About AI’s Environmental Impact All Wrong (Transformer)
  • Albania Appoints World’s First AI-Made Minister (Politico)
  • President Trump, Tech Leaders Unite to Power American AI Dominance (The White House)

Tools, Research and Capabilities

  • The Rise of the AI Influencer (Financial Times)
  • Why Language Models Hallucinate (OpenAI)
  • Why We Don’t Believe MIT NANDA’s Weird AI Study (Futuriom)
  • How People Use ChatGPT (OpenAI)
  • OpenAI is Building a ChatGPT for Teens (OpenAI)
  • Anthropic’s Claude is Getting Better at Building Itself, Amodei Says (Axios)
  • How Do AI Models Generate Videos? (MIT Technology Review)
  • New AI Research Roundup: Chatbots, Fairness, and More (Rest of World)
  • The Latest AI News we Announced in August (Google)

This Issue’s Tip: Ask AI to Compare Options, Not Just Generate Ideas

Most people ask AI for lists — “Give me 10 paper topics” or “Suggest some marketing strategies.” That’s a great start, but the real power comes when you push a step further: ask AI to weigh the trade-offs. For example: “Compare the pros and cons of these three project ideas if I only have two weeks to complete one.”

This shifts AI from being just a brainstorming partner to acting like a decision-support tool. You’ll get a clearer sense of strengths, weaknesses, and feasibility — helping you move from too many choices to a more confident decision.

This Issue’s Prompt: Everyday AI Upgrade

A prompt is how you ask generative AI tools to do something for you (e.g., creating, summarizing, editing or transforming). Treat it like a conversation, using clear language and enough context to get the result you have in mind.

To get more practice, use the generative AI tool of your choice (for example, Microsoft Copilot, OpenAI ChatGPT or Anthropic Claude) to execute the following prompt:

“Take one everyday task I do regularly (like planning meals, organizing meetings, or managing deadlines) and suggest three creative ways AI could make it easier, faster, or more enjoyable — including at least one idea I probably haven’t thought of before.”

Helpful Resources

Thank you for reading. Go Orange!

AI Insights for September 4, 2025

This message was originally shared with subscribers on September 4, 2025.

AI at Work Returns Oct. 9

Join Information Technology Services for AI at Work on Oct. 9 from 1–2:30 p.m. in the K.G. Tan Auditorium (and via Microsoft Teams). This timely discussion will explore the safe, ethical and effective use of generative AI in the workplace and classroom.

Speakers include Associate Professor Johannes Himmelreich of the Maxwell School of Citizenship and Public Affairs, along with representatives from ITS and Deloitte. Don’t miss this opportunity to learn how AI is shaping the future of higher education and professional practice.

In This Issue

AI is shaking things up everywhere — from the jobs young people are landing (or losing) to the way professors teach and students write. In this issue, we dive into billion-dollar bets, high-stakes lawsuits and the surprising truth that most companies have seen little payoff from their initial AI investments. You’ll also find stories on new laws, big energy costs and fresh guardrails for mental health. It’s a snapshot of AI’s promise, pitfalls and the big questions shaping our future.

News and Views

Access to The New York Times, The Wall Street Journal and The Washington Post is available to all students, faculty and staff with a valid Syracuse University NetID. Learn more.

Media, Publishing and Content Regulation

  • Amy Klobuchar: What I Didn’t Say About Sydney Sweeney (The New York Times)
  • Trusted News Sites May Benefit In an Internet Full of AI-Generated Fakes, a New Study Finds (NiemanLab)
  • Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors (Wired)

Policy, Ethics and Governance

  • AI Writing Disclosures Are a Joke, Here’s How to Improve Them (The Chronicle of Higher Education)
  • States Have Introduced 260 AI-Related Bills So Far This Year Despite Federal Opposition (PYMNTS)
  • We Must Build AI for People; Not to Be a Person (Mustafa Suleyman)
  • First Lady Melania Trump Will Head Effort to Teach Next Generation About AI (New York Post)
  • Anthropic and OpenAI Evaluate Safety of Each Other’s AI Models (PYMNTS)
  • Anthropic Warns of ‘Sophisticated’ Cybercrime Via Claude LLM (PYMNTS)
  • Call Me A Jerk: Persuading AI to Comply with Objectionable Requests (The University of Pennsylvania)

Science and Society

  • How AI is Impacting 700 Professions — and Might Impact Yours (The Washington Post)
  • There Is Now Clearer Evidence AI Is Wrecking Young Americans’ Job Prospects (The Wall Street Journal)
  • A Teen Was Suicidal. ChatGPT Was the Friend He Confided In (The New York Times)
  • Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence (Stanford University)
  • ChatGPT & Teen Mental Health (Axios)

Tech Industry and Market Moves

  • MIT Study Finds 95% of Organizations Studied Get Zero Return on Their AI Pilot Projects (Axios)
  • NFL and Microsoft Expand Partnership to Bring Copilot to the Sidelines and Beyond (Microsoft)
  • Perplexity Launches Subscription Tier for ‘Premium Content’ (PYMNTS)
  • How the NFL Is Leveling Up With AI for Game Day and Beyond (PYMNTS)
  • Microsoft Releases Two In-House Models (Microsoft)
  • Anthropic Raises $13B Series F at $183B Post-Money Valuation (Anthropic)
  • WIRED Roundup: Meta’s AI Brain Drain (Wired)

Tools, Research and Capabilities

  • Google Shares How Much Energy is Used for New Gemini AI Prompts (Axios)
  • Building More Helpful ChatGPT Experiences for Everyone (OpenAI)

This Issue’s Tip: Ask AI to Be Your “First Draft Partner” 

When you’re starting a project — whether it’s drafting an email, designing a syllabus, outlining a report or even planning a family trip — the hardest part is often staring at the blank page. Instead of waiting for inspiration, try asking an AI tool to give you a rough first draft. You don’t need to use it word-for-word, but having something to react to can save you time, spark new ideas and help you organize your thoughts.

Many people are discovering that AI is most powerful not when it gives the final answer, but when it jumpstarts your creativity.

This Issue’s Prompt: Multi-Layered Summarizer

A prompt is how you ask generative AI tools to do something for you (e.g., creating, summarizing, editing or transforming). Treat it like a conversation, using clear language and enough context to get the result you have in mind.

To get more practice, use the generative AI tool of your choice (for example, Microsoft Copilot, OpenAI ChatGPT or Anthropic Claude) to execute the following prompt:

“Take this article (paste text or link) and give me three versions of a summary: A headline-style, one-sentence summary for quick scanning, a short paragraph summary (4–5 sentences) that captures the main argument or findings and a bulleted list of the 5–7 most important details, facts, or takeaways.”

Helpful Resources

Thank you for reading. Go Orange!

Password Awareness and the World Series of Security

Step Up to the Plate: Password Awareness and the World Series of Security 

The Major League Baseball Playoffs and World Series are almost here. Whether you are cheering on the Mets, Yankees, or another team dear to your heart, you can make sure your passwords aren’t striking out. Cybercriminals play hardball every day, and the best defense is a strong password game plan. Here’s how to keep your digital team in the win column. 

Don’t Commit Errors 

  1. Don’t write them down.
    Writing passwords on sticky notes is the equivalent of dropping a routine ground ball. It may look harmless, but it’s an easy error that can cost you the game. Anyone walking by can pick up that password and steal home.
  2. Don’t store them in Word or Excel.
    Keeping a file called “Passwords.docx” on your desktop is like leaving your playbook on the bench for the other team to copy. If your computer is compromised, the opponent has all your signals—and the game’s as good as lost.
  3. Don’t share them with others.
    Handing over your password is like letting the other team’s pitcher throw for you. It’s your account, your swing—keep control of the bat.
  4. Don’t email or chat them.
    Sending passwords over email or Teams is like lobbing an underhand pitch to Aaron Judge. It’s just asking to get knocked out of the park. Messages can be intercepted, forwarded, or linger in archives long after you’ve forgotten them.

Play Like a Pro 

  1. Use a password manager.
    Think of it as your bullpen closer. Reliable, secure, and there when you need it. A password manager stores all your unique, complex passwords in one encrypted vault. You only need to remember one master password, and the manager handles the rest—no errors, no blown saves.
  2. Turn on multi-factor authentication (MFA).
    MFA is like having a solid defense behind you. Even if the other team gets a hit (your password), they still must make it past your shortstop (your phone code or token). It’s an extra layer of protection that keeps your lead safe.
  3. Swing for strong, unique passwords.
    Don’t bunt with Password123. Go for a grand slam: at least 12 characters, mixing letters, numbers, and symbols. A password manager can even generate random ones—think Slugger!92CurveBall instead of Baseball2025.
  4. Watch the scoreboard.
    When a service you use suffers a breach, it’s like a rain delay—you need to act fast. Change your password right away to keep the game in your favor. Many password managers will even alert you when this happens.
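To see what “grand slam” randomness looks like in practice, here’s a minimal Python sketch of the kind of generator a password manager uses under the hood. It’s a toy illustration (not ITS-endorsed tooling, and the symbol set is an arbitrary choice) built on the standard-library `secrets` module, which draws cryptographically strong randomness:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*-_"  # arbitrary example symbol set

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits, and symbols."""
    if length < 12:
        raise ValueError("Use at least 12 characters for a strong password")
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until the candidate contains all four character classes
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in SYMBOLS for c in password)):
            return password

print(generate_password())
```

The point of the sketch: machine-generated randomness beats any pattern a person would invent (like Baseball2025), and a password manager does both the generating and the remembering for you.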

Final Inning 

The World Series is about crowning a champion, but when it comes to cybersecurity awareness, the trophy is your safety. Writing passwords down, storing them in files, or sharing them casually is like leaving the bases loaded without bringing anyone home. Don’t let bad habits put you in the loss column. 

Step up to the plate: use a password manager, enable MFA, and keep your digital playbook safe. With strong password habits, you’ll be hitting home runs in the World Series of security. 

Contact the ITS Service Center if you need help. 

Visit securecuse.syr.edu for more information on security practices at Syracuse. For assistance, call the ITS Service Center at 315.443.2677 or email help@syr.edu.