Apple Tells Employees Not to Use ChatGPT Due to Data Leak Concerns
The tech giant also advised workers not to use Microsoft's GitHub Copilot, which uses AI to generate code.
Apple has prohibited employees from using ChatGPT and other artificial intelligence tools over fears of leaking confidential information, The Wall Street Journal reported.
According to an internal document viewed by the outlet, as well as individuals familiar with the matter, Apple has restricted the use of the prompt-driven chatbot along with Microsoft's GitHub Copilot, which uses AI to generate software code.
The company fears that the AI programs could expose confidential Apple data, per the outlet.
OpenAI, the creator of ChatGPT, stores chat history from interactions between the chatbot and users to train the system and improve its accuracy over time. Those conversations may also be reviewed by OpenAI moderators for possible violations of the company's terms of service.
Related: Walmart Leaked Memo Warns Against Employees Sharing Corporate Information With ChatGPT
While OpenAI introduced an option last month that lets users turn off chat history, the feature still allows OpenAI to monitor conversations for "abuse," retaining them for up to 30 days before deleting them permanently.
A spokesperson for Apple told the WSJ that employees who want to use ChatGPT should use its own internal AI tool instead.
Apple is not the first big company to ban the use of ChatGPT. Earlier this year, JPMorgan Chase, Goldman Sachs, and Verizon all barred employees from using the AI-powered chatbot over similar fears of data leakage.
Earlier this week, OpenAI CEO Sam Altman spoke before Congress about the pressing need for government regulation of AI development, calling it "crucial."