Nightfall AI, the leader in cloud data leak prevention (DLP), announces the launch of the industry’s first complete data security platform designed specifically for Generative AI (GenAI).
Nightfall for GenAI consists of three products: Nightfall for ChatGPT, Nightfall for LLMs, and Nightfall for SaaS. Nightfall for ChatGPT integrates directly with end users’ browsers to redact sensitive data sent in prompts; it has already been adopted by companies at the cutting edge of workflow innovation, such as Snyk. Nightfall for LLMs is a set of developer APIs that detect and redact sensitive data before developers use it to train large language models; many industry leaders have already integrated these APIs into their workflows. Finally, Nightfall for SaaS integrates with SaaS apps such as Notion and Teams to protect sensitive data like PII and PHI before it’s sent to third-party processors; it has been implemented by innovative software companies like MovableInk, Aaron’s, and Klaviyo. All three products are available today, and users can explore Nightfall for ChatGPT with a free 14-day trial.
Eric Cohen, Nightfall customer and VP of Security at Genesys, explains: “Generative AI offers significant productivity gains for organizations across teams… but until Nightfall AI, there was a lack of security products that allowed us to use these types of tools safely.”
Among Nightfall’s customers, one in five employees exposes sensitive data at least once a month. This evidence of pervasive data sprawl is why businesses like Amazon, Apple, Verizon, JPMorgan Chase, Goldman Sachs, and Citigroup have cautioned employees against, or outright banned, the use of GenAI tools like ChatGPT (The Washington Post, Forbes, Fortune).
CISOs, security leaders, and IT leaders have three main concerns about using GenAI in the workplace. First, they’re wary that employees might include sensitive data (such as software credentials or customer PII) in chatbot prompts. Second, they’re worried that employees might inadvertently expose confidential company data through SaaS apps that rely on third-party AI sub-processors such as OpenAI and Anthropic. Third, they’re concerned that engineers and data scientists might use confidential data to build and train their own LLMs. This last concern is underscored by a recent incident in which users tricked ChatGPT into generating working activation keys for Windows.
Isaac Madan, CEO and co-founder of Nightfall, explains: “GenAI has the potential to offer substantial productivity benefits for employers and employees, but the lack of a complete DLP solution is impeding the safe adoption of AI. As a result, many organizations have either completely blocked these tools or have resorted to using multiple security products as a patchwork solution to mitigate the risk.” This struggle ultimately drove the creation of Nightfall’s latest innovation: Nightfall for GenAI.
Frederic Kerrest, co-founder and Executive Vice Chairman of Okta, commends Nightfall and compares its latest initiative to Okta’s early days: “Using Nightfall, I’ve seen a lot of similarities with our early vision at Okta, where we centralized the security of user access and management for all cloud apps. Nightfall is now doing the same for data security across Generative AI and the cloud.”