OpenAI has reportedly overhauled its security operations to protect against corporate espionage, according to a report from the Financial Times. The clampdown accelerated after Chinese startup DeepSeek released a competing model in January, which OpenAI alleges was built by improperly copying its models through “distillation” techniques.
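For context, distillation generally means training a smaller “student” model to imitate the output distribution of a larger, already-trained “teacher” model rather than learning from raw data alone. The sketch below is a minimal, generic illustration of the technique in PyTorch; the models, inputs, and hyperparameters are placeholders, and it does not represent OpenAI’s or DeepSeek’s actual systems.

```python
# Minimal knowledge-distillation sketch (illustrative only; not any
# company's real pipeline). A small "student" model is trained to match
# the softened output distribution of a larger, frozen "teacher" model.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: real pipelines would use large language models.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 10))  # deliberately smaller

teacher.eval()  # the teacher stays fixed; only the student is trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's probability distribution

for step in range(100):
    x = torch.randn(64, 32)  # placeholder inputs; in practice, prompts
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions,
    # scaled by temperature^2 as in standard distillation objectives.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```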
The tightened framework includes “information tenting” policies that restrict staff access to sensitive algorithms and new products. During development of OpenAI’s o1 model, for example, only team members who had been read into the project were permitted to discuss it in shared office spaces, according to the Financial Times report.
Other changes include isolating proprietary technology on offline computer systems and adding biometric access controls, such as fingerprint scans, for certain office areas. A “deny-by-default” internet policy now requires explicit approval for any external network connection. The report adds that OpenAI has stepped up physical security at its data centers and expanded its cybersecurity staff.
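To illustrate what “deny-by-default” means in practice, the toy sketch below refuses every outbound connection unless its destination appears on an explicit allowlist. The endpoint names are hypothetical, and this is not OpenAI’s actual policy tooling.

```python
# Illustrative "deny-by-default" egress check (a hypothetical sketch,
# not OpenAI's real policy engine): outbound connections are refused
# unless the (host, port) pair has been explicitly approved.
ALLOWED_DESTINATIONS = {  # hypothetical approved endpoints
    ("updates.internal.example.com", 443),
    ("pypi-mirror.internal.example.com", 443),
}

def egress_permitted(host: str, port: int) -> bool:
    """Return True only for explicitly allowlisted destinations."""
    return (host, port) in ALLOWED_DESTINATIONS

# Anything not on the list is denied by default.
assert egress_permitted("updates.internal.example.com", 443)
assert not egress_permitted("example.org", 443)
```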
The changes are understood to reflect broader concerns about foreign adversaries attempting to steal OpenAI’s intellectual property. But given the ongoing poaching wars among American AI companies and the regularity with which CEO Sam Altman’s comments leak to the press, OpenAI may be trying to shore up internal security as well. OpenAI has been contacted for comment.