As organizations rapidly integrate artificial intelligence and large language models (LLMs) into their workflows, cybersecurity teams face a new imperative: understanding how to secure these powerful tools. Effective logging and monitoring are paramount, not just for detecting threats, but also for mitigating the risks of errors and ensuring responsible AI implementation.
The push to “capture fierce advantages possible with AI” is well underway, though the realization of those advantages remains to be seen. Companies are actively experimenting with and deploying AI/LLM technologies, both internally and externally. While AI promises streamlined processes and increased efficiency, it also introduces vulnerabilities, as evidenced by recent incidents of AI-generated errors in legal and regulatory filings. Cybersecurity, a field where accuracy is critical, demands a proactive approach to these emerging risks. As one analyst noted, the current focus is often on whether organizations can integrate AI, rather than whether they should.
The evolution of the cyber threat landscape, coupled with increased security awareness, has made comprehensive logging a baseline expectation for any new technology an organization adopts.
Microsoft 365 Copilot: Auditing AI Interactions
Microsoft 365's unified audit log, a key resource for security investigations, forensic analysis, and compliance auditing, offers valuable insights into Copilot usage. Microsoft provides thorough documentation on Audit logs for Copilot and AI applications, which is essential for analyzing relevant records. Key attributes for security investigations include:
- AccessedResources: Details all resources Copilot accessed in response to a user’s request.
- Messages: Contains prompt and response details, including a “JailbreakDetected” flag to indicate potential attempts to bypass AI safety protocols.
- Contexts: Provides information about the origin of the prompt, such as the file, request, or service used.
- RecordType: Categorizes the type of Copilot or AI application interaction.
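As a sketch of how these attributes might be consumed during triage, the snippet below parses a simplified Copilot audit record and surfaces the jailbreak flag and accessed resources. The JSON shape and field values here are illustrative assumptions; real exports from the audit log are richer and vary by RecordType.

```python
import json

# Hypothetical, simplified Copilot audit record for illustration only.
raw = """
{
  "RecordType": "CopilotInteraction",
  "AccessedResources": [{"Name": "Q3-forecast.xlsx", "SiteUrl": "https://contoso.sharepoint.com"}],
  "Messages": [{"Role": "user", "JailbreakDetected": true}],
  "Contexts": [{"Type": "xlsx", "Id": "doc-123"}]
}
"""

def triage(record: dict) -> dict:
    """Pull out the fields a security analyst would look at first."""
    jailbreak = any(m.get("JailbreakDetected") for m in record.get("Messages", []))
    resources = [r.get("Name") for r in record.get("AccessedResources", [])]
    return {
        "record_type": record.get("RecordType"),
        "jailbreak_suspected": jailbreak,
        "resources_touched": resources,
    }

summary = triage(json.loads(raw))
print(summary)
```

A filter like `triage` can feed a SIEM rule that alerts whenever `jailbreak_suspected` is true for a record touching sensitive repositories.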
Microsoft’s documentation is regularly updated to reflect Copilot’s evolving capabilities, making it a vital resource for security teams. A deeper technical dive into tracking Copilot usage via M365 Security Audit Logs is available from Martina Grom.
Open WebUI: Configuring Logging for Offline LLMs
Open WebUI is a self-hosted platform designed to integrate with offline AI/LLM platforms like Ollama and LM Studio. The platform’s documentation, “Understanding Open WebUI,” details the available logging information and its location. Several application server/backend logging levels are available, offering targeted debugging and security operations capabilities. To capture detailed logs, including prompts, the global logging level must be adjusted to DEBUG or NOTSET by setting the GLOBAL_LOG_LEVEL environment variable. Teams should carefully test different logging levels to balance capturing valuable data with avoiding excessively verbose logs.
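Since the accepted level names (NOTSET, DEBUG, and so on) mirror Python's standard logging levels, a small pre-flight check can catch a typo before the server starts. This is an illustrative sketch: only the GLOBAL_LOG_LEVEL variable itself comes from the Open WebUI documentation; the wrapper function is an assumption.

```python
import logging
import os

# Standard Python logging level names, which GLOBAL_LOG_LEVEL mirrors.
VALID_LEVELS = {"NOTSET", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}

def set_global_log_level(level: str) -> int:
    """Validate and export GLOBAL_LOG_LEVEL before launching Open WebUI.

    Returns the numeric logging level so callers can reuse it.
    """
    level = level.upper()
    if level not in VALID_LEVELS:
        raise ValueError(f"Unknown log level: {level!r}")
    os.environ["GLOBAL_LOG_LEVEL"] = level
    return logging.getLevelName(level)  # name -> numeric value for known names

# DEBUG (or NOTSET) is required to capture prompt-level detail.
numeric = set_global_log_level("DEBUG")
print(os.environ["GLOBAL_LOG_LEVEL"], numeric)
```

In a containerized deployment the same variable would simply be passed via the container's environment configuration.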
However, adjusting logging levels to capture prompts requires caution. Logging user input can inadvertently create security and privacy concerns. Cybersecurity teams must operate under the assumption that these logs may contain sensitive, confidential, or proprietary information submitted by users, intentionally or unintentionally. Treating log generation, availability, and retention with the same rigor as other sensitive data is crucial to prevent creating a new attack vector.
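One way to reduce that exposure is to redact obvious secrets before prompt logs are stored or shipped. The patterns below are a minimal, illustrative sketch; a production deployment should rely on a vetted DLP or secret-scanning rule set rather than a handful of regexes.

```python
import re

# Illustrative redaction rules only; real deployments need far broader,
# maintained pattern sets (DLP tooling, secret scanners, etc.).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"), "[API_KEY]"),  # common key prefix
]

def redact(line: str) -> str:
    """Apply each redaction rule to a log line before retention/shipping."""
    for pattern, placeholder in REDACTIONS:
        line = pattern.sub(placeholder, line)
    return line

log_line = "prompt: email alice@example.com the key sk-abcdefghijklmnop1234"
print(redact(log_line))  # prompt: email [EMAIL] the key [API_KEY]
```

Redaction at ingest keeps the investigative value of the log (who prompted what, when) while limiting what an attacker gains if the log store itself is compromised.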
Securing the Future of AI: A Collaborative Approach
Third-party and local AI/LLM technologies can provide valuable data for cybersecurity teams, but realizing this potential requires collaboration. Working with development and operations teams to understand the underlying technology stack is essential to capitalize on relevant logging opportunities.
AI/LLM systems generate unique telemetry, from API calls and token usage to model interactions and data flows, that conventional security tools often miss. Early collaboration with engineering teams is vital to identify available data (inference logs, prompt/response metadata, authentication events, rate limiting triggers) and its location (cloud provider logs, application logs, model serving infrastructure). Establishing robust retention policies that balance security investigation needs with privacy requirements and storage costs is also critical, particularly given the potential for sensitive data to appear in prompts and outputs.
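A retention policy along these lines can start as a simple mapping from telemetry category to retention window, with shorter windows where prompts and outputs may contain sensitive content. The categories and durations below are illustrative assumptions, not recommendations; actual values should come from legal, privacy, and security stakeholders.

```python
from datetime import timedelta

# Illustrative retention windows per telemetry category (assumed values).
RETENTION = {
    "auth_events": timedelta(days=365),            # long tail for investigations
    "rate_limit_hits": timedelta(days=90),
    "inference_metadata": timedelta(days=180),     # token counts, latency, model id
    "prompt_response_bodies": timedelta(days=30),  # may hold sensitive content
}

def retention_for(category: str) -> timedelta:
    # Default to the shortest window for unknown categories: safer for privacy.
    return RETENTION.get(category, timedelta(days=30))

print(retention_for("auth_events").days, retention_for("unknown").days)
```

Defaulting unknown categories to the shortest window means new log sources err on the side of privacy until they are explicitly classified.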
Ultimately, proactive logging, access control, and retention practices are essential for aligning security operations with the evolving landscape of AI and LLMs.
