The Tenable Salesforce Breach: A Critical Lesson in SaaS Supply-Chain Security

The cybersecurity world was recently shaken by news that Tenable, a leader in the industry, was among a long list of organisations impacted by a data breach stemming from a third-party application's integration with Salesforce. This incident, which also affected companies like Zscaler and Cloudflare, serves as a powerful reminder of the risks associated with the software-as-a-service (SaaS) supply chain. It highlights how a single misstep in a connected application can lead to a widespread compromise of sensitive data.
What Happened?
The attack, attributed to the threat actor UNC6395, exploited a vulnerability within the Salesloft Drift application. This AI/chat bot, used for customer communication, was integrated with Salesforce instances. By compromising OAuth and refresh tokens tied to this application, attackers were able to gain unauthorised access to data stored in the Salesforce instances of various organisations. The exfiltrated data included customer support case information, and business contact details, which for Tenable included names, business email addresses, phone numbers, locations, and regions. In some cases, even sensitive credentials like AWS access keys and API tokens that customers had shared in support tickets were exfiltrated.
The Core Lesson: SaaS Supply-Chain Risk
This incident is a prime example of a SaaS supply-chain attack. While Salesforce itself was not compromised, the attackers exploited the trust placed in a third-party application. Modern businesses rely heavily on a complex ecosystem of interconnected SaaS tools, each of which can become a potential point of entry for an attacker. The breach demonstrates that even if your core systems are secure, a vulnerability in a seemingly minor third-party integration can grant attackers extensive access to your most critical data.
The weakest link in your security chain might not be your own infrastructure, but a third-party application you’ve integrated with it.
Key Takeaways and Mitigations
This breach provides actionable insights for any business using SaaS platforms:
1. Scrutinise Third-Party Integrations
Don't blindly trust an application's security. Conduct a thorough risk assessment of any third-party tool before integrating it with a critical platform like Salesforce. Understand what permissions it requires and why.
2. Enforce the Principle of Least Privilege
Limit the scope of permissions granted to third-party applications. Only give them access to the data they absolutely need to function. The fewer privileges an application has, the less damage a compromise can cause.
3. Secure Customer Data
Warn customers and employees against sharing sensitive information like API keys, passwords, or personal data in support tickets. Use secure channels for this type of communication. Organisations impacted by this breach are now urging customers to rotate any credentials shared in this manner.
4. Monitor OAuth Tokens and API Activity
Actively monitor for unusual activity related to OAuth tokens and API calls. Look for spikes in data exfiltration, access from new IP addresses, or API calls outside of normal business hours. Proactively revoke tokens that are no longer needed.
Our AI Security Testing Focus Areas
We assess your AI systems against emerging threats, focusing on the OWASP Top 10 for Large Language Model Applications.
Prompt Injection & Manipulation
We test for vulnerabilities where attackers can hijack the LLM's output by crafting malicious inputs, causing it to ignore its original instructions or perform unintended actions.
Insecure Output Handling
We assess whether the application properly sanitises and handles model outputs, preventing downstream vulnerabilities like XSS or remote code execution.
Sensitive Information Disclosure
Our team attempts to trick the model into revealing sensitive data from its training set, such as personal information, intellectual property, or proprietary algorithms.
Insecure Plugin Design
For LLMs that use external plugins, we test for insecure handling of inputs and insufficient access controls that could lead to widespread system compromise.
Model Denial of Service
We test the model's resilience against resource-intensive queries that could lead to a denial of service, impacting availability and incurring high operational costs.
Training Data Poisoning
We evaluate the controls in place to protect your model's training data from being maliciously manipulated, which could introduce vulnerabilities or biased outputs.
To learn more about how Hackyde's AI/LLM service can protect your organisation, please read our detailed offering here: LLM & AI Application Testing Details.