AI Security Expert Yizheng Chen Joins MC2
Published September 11, 2023
Malicious software, also known as malware, first appeared in the 1970s, mostly as a way to cause mischief. It has since evolved into a vehicle for far more nefarious activity, including profitable cybercrime, espionage and nation-state attacks.
Yizheng Chen, who joined the University of Maryland this semester as an assistant professor of computer science, says that there will always be new security threats in our increasingly interconnected world. Combating those threats, she says, will take new knowledge, dedication and the latest technological tools, including artificial intelligence (AI).
As a core faculty member in the Maryland Cybersecurity Center (MC2), Chen will focus her research on developing AI models that protect billions of users from threats such as malware, phishing and other cyber scams. More specifically, she is working to improve the robustness of AI models against emerging threats.
Chen also has an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), which she says provides the perfect environment for her research and teaching thanks to powerful computing resources and UMIACS faculty with expertise in both AI and security.
“Working in the intersection of these two research areas, I value the opportunities at UMIACS to collaborate with faculty and students from diverse backgrounds,” she says.
Chen first became interested in computer security after discovering Windows malware on her family’s home computer in middle school. She began researching ways to remove it and was eventually successful.
“That was just the start,” she says. “After I became a researcher, I was inspired by the advanced capabilities of AI, so I wanted to study how to use AI to solve security problems.”
Currently, Chen is interested in developing new techniques that let humans and AI work in tandem to solve security problems. This includes integrating feedback from security experts to effectively update AI models, exploring new capabilities of large language models that can assist security experts, and protecting users from new vulnerabilities introduced by the latest AI products.
For example, third-party plugins for AI platforms such as ChatGPT can be exploited to steal a user’s chat history, harvest personal information, or remotely execute code on their machine.
Chen earned her doctorate in computer science from the Georgia Institute of Technology and was previously a postdoctoral scholar at the University of California, Berkeley, and at Columbia University. Her work has received a Best Paper Runner-Up Award at the Association for Computing Machinery (ACM) Conference on Computer and Communications Security and a Google ASPIRE Award.
—Story by Melissa Brachfeld, UMIACS communications group