leads the Security Architecture practice at Auxin Security, providing technical leadership for application security, DevSecOps, threat modeling, and LLM Application development. I believe in shift-left and security mesh methodology. My work has resulted in a 70% reduction in security noise data, a high level of incident detection, and the creation of incident response playbooks.
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized natural language understanding and generation and offer immense potential, but their security vulnerabilities can pose significant risks. This talk outlines how organizations can leverage SecOps best practices to secure LLMs. We emphasize the importance of a holistic approach, integrating security considerations throughout the LLM lifecycle, from training data hygiene to deployment and monitoring. The talk highlights the role of automation tools in fortifying the training pipeline. Techniques like data sanitization and adversarial training can be automated to mitigate bias, data poisoning, and prompt injection attacks. Continuous Integration/Continuous Deployment (CI/CD) pipelines can streamline secure deployments, enabling rapid integration of security patches and updates. Furthermore, this talk underscores the value of DevSecOps practices and how organizations can proactively identify and address security concerns throughout the LLM development process by fostering collaboration between developers, security professionals, and operations teams. This collaborative approach ensures that security becomes an inherent aspect of LLM development, fostering a more robust and trustworthy AI ecosystem.