[Security] A Survey of Large Language Model Security

Non-Security Surveys Related to Large Language Models

LLM Evolution and Taxonomy

  • “A survey on evaluation of large language models,” arXiv preprint arXiv:2307.03109, 2023.
  • “A survey of large language models,” arXiv preprint arXiv:2303.18223, 2023.
  • “A survey on llm-generated text detection: Necessity, methods, and future directions,” arXiv preprint arXiv:2310.14724, 2023.
  • “A survey on large language models: Applications, challenges, limitations, and practical usage,” TechRxiv, 2023.
  • “Unveiling security, privacy, and ethical concerns of chatgpt,” 2023.
  • “Eight things to know about large language models,” arXiv preprint arXiv:2304.00612, 2023.

LLMs in Software Engineering

  • “Large language models for software engineering: Survey and open problems,” 2023.
  • “Large language models for software engineering: A systematic literature review,” arXiv preprint arXiv:2308.10620, 2023.

Medicine

  • “Large language models in medicine,” Nature medicine, vol. 29, no. 8, pp. 1930–1940, 2023.
  • “The future landscape of large language models in medicine,” Communications Medicine, vol. 3, no. 1, p. 141, 2023.

The Security Domain

LLMs in Cybersecurity

  • “A more insecure ecosystem? chatgpt’s influence on cybersecurity,” ChatGPT’s Influence on Cybersecurity (April 30, 2023), 2023.
  • “Chatgpt for cybersecurity: practical applications, challenges, and future directions,” Cluster Computing, vol. 26, no. 6, pp. 3421–3436, 2023.
  • “What effects do large language models have on cybersecurity,” 2023.
  • “Synergizing generative ai and cybersecurity: Roles of generative ai entities, companies, agencies, and government in enhancing cybersecurity,” 2023. LLMs help security analysts develop security solutions against cyber threats.

Highlighting Threats and Attacks Against LLMs

The main focus is on the domain of security applications, with in-depth studies of leveraging LLMs to launch cyberattacks.

  • “From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,” IEEE Access, 2023.
  • “A security risk taxonomy for large language models,” arXiv preprint arXiv:2311.11415, 2023.
  • “Survey of vulnerabilities in large language models revealed by adversarial attacks,” 2023.
  • “Are chatgpt and deepfake algorithms endangering the cybersecurity industry? a review,” International Journal of Engineering and Applied Sciences, vol. 10, no. 1, 2023.
  • “Beyond the safeguards: Exploring the security risks of chatgpt,” 2023.
  • “From chatgpt to hackgpt: Meeting the cybersecurity threat of generative ai,” MIT Sloan Management Review, 2023.
  • “Adversarial attacks and defenses in large language models: Old and new threats,” 2023.
  • “Do chatgpt and other ai chatbots pose a cybersecurity risk?: An exploratory study,” International Journal of Security and Privacy in Pervasive Computing (IJSPPC), vol. 15, no. 1, pp. 1–11, 2023.
  • “Unveiling the dark side of chatgpt: Exploring cyberattacks and enhancing user awareness,” 2023.

Vulnerabilities Exploited by Cybercriminals, Focusing on LLM-Related Risks

  • “Chatbots to chatgpt in a cybersecurity space: Evolution, vulnerabilities, attacks, challenges, and future recommendations,” 2023.
  • “Use of llms for illicit purposes: Threats, prevention measures, and vulnerabilities,” 2023.

LLM Privacy Issues

  • “Privacy-preserving prompt tuning for large language model services,” arXiv preprint arXiv:2305.06212, 2023. Analyzes LLM privacy issues, classifies them by adversary capability, and explores defense strategies.
  • “Privacy and data protection in chatgpt and other ai chatbots: Strategies for securing user information,” Available at SSRN 4454761, 2023. Explores how established privacy-enhancing technologies can be applied to protect LLM privacy.
  • “Identifying and mitigating privacy risks stemming from language models: A survey,” 2023. Discusses the privacy risks of LLMs.
  • “A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly,” 2023. Covers both privacy and security issues.

Reposted from: https://blog.csdn.net/qq_43543209/article/details/136244946
Copyright belongs to the original author, Xinyao Zheng. In case of infringement, please contact us for removal.
