Pinned
- Poisoned-Prompt-Tuning
  This is the implementation of our paper "PPT: Backdoor Attacks on Pre-trained Models via Poisoned Prompt Tuning", accepted by IJCAI 2022.
- Lordog/dive-into-llms
  "Dive into LLMs" (《动手学大模型》): a series of hands-on programming tutorials for large language models.
- Backdoor-NLP-Models-via-AI-Generated-Text
  This is the implementation of our paper "Backdoor NLP Models via AI-Generated Text", accepted by COLING 2024.
  Python · ★ 2
- UOR-Universal-Backdoor-Attacks-on-Pre-trained-Models
  This is the implementation of our paper "UOR: Universal Backdoor Attacks on Pre-trained Models", accepted by Findings of ACL 2024.
  Python · ★ 1