Summary
The article discusses Ning Zhang's research on jailbreak prompts for large language models such as ChatGPT, examining how these prompts bypass built-in safety measures and what this implies for computer science, particularly AI and machine learning security.
Source: source.washu.edu