
Start up
Passion. Potential. Pitches. Don't miss any of the 2025 New Venture Challenge excitement.
Tune in Friday, April 11, at 1 p.m. for great ideas and fierce competition. Then join the judges, mentors, spectators and teams to see who goes home with thousands of dollars in venture financing. The awards broadcast begins at 6:30 p.m., when one team will be named the overall best venture.
Central Michigan University’s College of Business Administration is the home of the Isabella Bank Institute for Entrepreneurship and the first Department of Entrepreneurship in the state of Michigan. We are a student-centric hub where experiential, curricular, and external entrepreneurial opportunities intersect.
Our mission is to maximize student success by fostering a campus-wide entrepreneurial mindset that promotes interdisciplinary collaboration and the creation of new ventures.
We aim to create innovative programming, boost cross-campus and ecosystem collaboration and provide a comprehensive mentoring program.
Our institute provides extracurricular opportunities and is open to all undergraduate and graduate CMU students.
Are you interested in becoming an entrepreneur?
Every journey is unique. Explore the opportunities that interest you.
Artificial intelligence is expected to touch every part of our lives. That includes a growing AI arms race in cybersecurity, one that puts your privacy and financial data at risk from increasingly sophisticated attacks.
Qi Liao is a professor in Central Michigan University’s Department of Computer Science. He shared his expertise on developments in cybersecurity related to artificial intelligence.
Traditionally, artificial intelligence (AI) and machine learning (ML) have been powerful tools for cybersecurity defense, aiding in anomaly detection, intrusion prevention, spam filtering and mitigating threats like distributed denial-of-service (DDoS) attacks and malware.
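To make the defensive side concrete, here is a minimal sketch of anomaly detection on network-traffic features using scikit-learn's IsolationForest. The feature names and numbers are invented for illustration and are not drawn from any specific monitoring system.

```python
# Illustrative only: a tiny anomaly-detection sketch in the spirit of the
# defensive uses described above. Feature columns (bytes_sent,
# connections_per_min, distinct_ports) are made-up stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: [bytes_sent, connections_per_min, distinct_ports]
normal = rng.normal(loc=[500, 20, 3], scale=[100, 5, 1], size=(1000, 3))

# A few synthetic outliers resembling scanning / exfiltration behavior
suspicious = np.array([[50_000, 400, 120],
                       [45_000, 350, 150]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns +1 for inliers and -1 for anomalies
print(model.predict(suspicious))   # these extreme points should score as -1
print(model.predict(normal[:5]))   # mostly +1
```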
However, in recent years, attackers have also begun leveraging AI and ML to launch sophisticated cyberattacks. For example, adversarial machine learning (AML) can be used to manipulate and poison training data, making defensive AI systems less effective. My research [1,2] has demonstrated that AML can generate spam emails capable of bypassing spam filters by tricking them into misclassifying harmful messages as benign.
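As a rough illustration of the evasion idea only (not the gradient-based techniques of [1,2]), the toy sketch below trains a simple bag-of-words spam classifier on synthetic messages and shows how padding a spam message with benign-looking words shifts its spam score.

```python
# Toy evasion sketch on synthetic data; not the method used in [1,2].
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

spam = ["win free prize money now", "claim your free money prize today",
        "free offer win cash now"]
ham = ["meeting agenda and project report", "lunch schedule for the team",
       "notes from the project meeting"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(spam + ham, ["spam"] * len(spam) + ["ham"] * len(ham))

original = "win free money now"
# Padding with words the filter associates with legitimate mail
evasive = original + " meeting agenda project report lunch schedule notes"

for msg in (original, evasive):
    proba = dict(zip(clf.classes_, clf.predict_proba([msg])[0]))
    print(f"{msg!r} -> P(spam) = {proba['spam']:.2f}")
```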
In another study [3], we developed an AI system that autonomously exploited system vulnerabilities to gain administrative access. This was achieved by fine-tuning large language models (LLMs) with Retrieval-Augmented Generation (RAG), similar to the AI technology behind ChatGPT.
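For readers unfamiliar with RAG, the sketch below shows only the general retrieval-augmented pattern: retrieve the documents most similar to a query and prepend them to a prompt. The corpus, query and placeholder generate() call are hypothetical; this is not the security-augmented system built in [3].

```python
# Minimal RAG-style retrieval sketch (TF-IDF stands in for learned embeddings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Advisory: outdated SSH configurations should disable password login.",
    "Hardening guide: apply vendor patches for known CVEs promptly.",
    "Policy: rotate credentials and enforce multi-factor authentication.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    vec = TfidfVectorizer().fit(corpus + [query])
    doc_vecs = vec.transform(corpus)
    query_vec = vec.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

query = "How should an administrator handle unpatched services?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
# In a full pipeline, `prompt` would be passed to a fine-tuned LLM, e.g.:
# answer = llm.generate(prompt)   # placeholder; no real API implied
```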
Beyond exploiting system weaknesses, AI is revolutionizing social engineering attacks. Attackers can now automate and personalize phishing schemes by analyzing social media data. AI-generated deepfakes, including realistic audio, video and images, have been weaponized for scams such as blackmail, impersonation and financial fraud. These tools enable attackers to execute crimes like online banking fraud, fake ransom demands and large-scale financial scams.
Data breaches remain the most significant privacy threat posed by AI-driven cyberattacks, and AI can enhance every stage of these breaches. Attackers use AI to generate and crack passwords, automate the exploitation of zero-day vulnerabilities, and deploy sophisticated phishing attacks that deliver malware such as ransomware and computer viruses.
Our research findings [4,5,6] suggest that "ransomware 2.0", which not only locks victims out of their data but also steals and sells it, will become the dominant form of attack. This evolution increases the damage inflicted, as attackers can both demand ransom and profit from selling stolen data.
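The toy calculation below is not the model developed in [4,5,6]; it is only a back-of-the-envelope expected-payoff comparison, with invented numbers, showing why adding a data-selling channel can raise an attacker's expected revenue.

```python
# Illustrative numbers only; not the game-theoretic models of [4,5,6].
def attacker_payoff(ransom, p_pay, resale_value=0.0, p_sale=0.0):
    """Expected revenue from ransom plus optional resale of stolen data."""
    return p_pay * ransom + p_sale * resale_value

classic = attacker_payoff(ransom=100_000, p_pay=0.3)
sell_too = attacker_payoff(ransom=100_000, p_pay=0.3,
                           resale_value=40_000, p_sale=0.8)

print(f"classic ransomware : {classic:,.0f}")
print(f"ransomware 2.0     : {sell_too:,.0f}")
```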
Even publicly available data can compromise user privacy when processed with AI. Photos, videos and audio recordings shared on social media can be manipulated to create deepfake content for identity fraud. AI algorithms can also cross-reference and correlate anonymized data from medical, financial and voter records, as well as smart home and mobile device activity, effectively re-identifying individuals and revealing their behavior patterns.
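A classic way to see this re-identification risk is record linkage on quasi-identifiers. The sketch below joins a fictitious "anonymized" medical table to a fictitious public list on ZIP code, birth date and sex; all records are synthetic.

```python
# Synthetic re-identification example: linking on shared quasi-identifiers.
import pandas as pd

medical = pd.DataFrame({
    "zip":        ["48859", "48859", "48601"],
    "birth_date": ["1990-04-12", "1985-07-30", "1990-04-12"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["asthma", "diabetes", "hypertension"],
})

voter_list = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "zip":        ["48859", "48601"],
    "birth_date": ["1990-04-12", "1990-04-12"],
    "sex":        ["F", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
linked = voter_list.merge(medical, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])
```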
AI-powered attacks pose a serious risk to financial security by enabling fraud, identity theft, and large-scale scams. For example:
AI can generate fake blackmail schemes, using fabricated images or videos to extort money.
Beyond impersonation, AI-driven attacks can compromise bank security questions by analyzing publicly available personal data, enabling identity theft and fraudulent account creation. AI can also automate ransomware attacks that steal financial information, such as credit card and bank details, as well as high-value cryptocurrency wallets.
One emerging scam, the “pig butchering” scheme, uses AI-powered chatbots to build long-term trust with victims in online relationships before convincing them to invest in fraudulent, high-return financial schemes.
AI-powered attacks are indeed increasingly difficult to identify and counter. A good example is the challenge educators face when students use generative AI to complete assignments. While AI detection tools exist, they are not foolproof, and some students have even used AI to evade detection by manipulating the tools’ detection scores.
Similarly, deepfake detection technologies exist, but they struggle to keep pace with AI advancements. The cybersecurity landscape is caught in a constant arms race, where attackers and defenders continuously improve their tactics, much like the ongoing battle between evolving viruses and vaccines.
AI also lowers the barrier to entry for cybercriminals. Previously, launching a cyberattack required expertise. Now, AI enables even non-experts to execute highly effective attacks. My current research explores what happens when both attackers and defenders use AI, studying adversarial mutual machine learning within a game-theoretic framework to understand this dynamic.
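As a hypothetical illustration of what a game-theoretic framing looks like, the sketch below sets up a two-by-two attacker-defender game with invented payoffs and checks for pure-strategy equilibria; it is not the adversarial mutual machine learning model under study.

```python
# Invented payoffs for illustration only.
import numpy as np

# Rows: defender strategy (0 = baseline defense, 1 = AI-augmented defense)
# Cols: attacker strategy (0 = manual attack,   1 = AI-augmented attack)
# Entry = defender's payoff; attacker's payoff is the negative (zero-sum).
defender_payoff = np.array([[ 2, -3],
                            [ 1,  0]])

# Pure-strategy best responses: attacker minimizes the defender's payoff,
# defender maximizes it.
attacker_best = defender_payoff.argmin(axis=1)   # best column per row
defender_best = defender_payoff.argmax(axis=0)   # best row per column

for d in range(2):
    a = attacker_best[d]
    if defender_best[a] == d:
        print(f"pure equilibrium: defender={d}, attacker={a}")
# With these numbers the equilibrium is (AI-augmented defense,
# AI-augmented attack): both sides adopt AI, i.e., the arms-race outcome.
```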
Adopting a zero-trust mindset is key. Always remain vigilant against AI-powered phishing and scams. Here are some essential steps:
About Qi Liao
Qi Liao is a professor in Central Michigan University’s Department of Computer Science. He received his bachelor’s degree in computer science from Hartwick College, and his master’s degree and doctorate from the University of Notre Dame.
His research interests include artificial intelligence and machine learning, computer and network security, economics and game theory of networks and cybersecurity, and visual analytics. More information may be found at https://people.se.cmich.edu/liao1q/.
References:
[1] Bhargav Kuchipudi, Ravi Teja Nannapaneni, and Qi Liao. Adversarial machine learning for spam filters. In Proceedings of the 15th International Conference on Availability, Reliability and Security (ARES), 15th ACM International Workshop on Frontiers in Availability, Reliability and Security (FARES), number 38, pages 1–6, Dublin, Ireland, August 25–28, 2020.
[2] Jonathan Gregory and Qi Liao. Adversarial spam generation using adaptive gradient-based word embedding perturbations. In IEEE International Conference on Artificial Intelligence, Blockchain, and Internet of Things (AIBThings), pages 1–5, Central Michigan University, USA, September 16–17, 2023.
[3] Jonathan Gregory and Qi Liao. Autonomous cyberattack with security-augmented generative artificial intelligence. In IEEE International Conference on Cyber Security and Resilience (CSR), pages 270–275, London, UK, September 2–4, 2024.
[4] Zhen Li and Qi Liao. Preventive portfolio against data-selling ransomware: a game theory of encryption and deception. Computers & Security, 116:1–11, Article 102644, May 2022.
[5] Zhen Li and Qi Liao. Game theory of data-selling ransomware. Journal of Cyber Security and Mobility, 10(1):65–96, March 2021. DOI: 10.13052/jcsm2245-1439.1013.
[6] Zhen Li and Qi Liao. Ransomware 2.0: To sell, or not to sell? A game-theoretical model of data-selling ransomware. In Proceedings of the 15th International Conference on Availability, Reliability and Security (ARES), 9th ACM International Workshop on Cyber Crime (IWCC), number 59, pages 1–9, Dublin, Ireland, August 25–28, 2020.