According to a report from The Wall Street Journal, Apple has taken measures to restrict the use of external artificial intelligence tools, including ChatGPT, among its employees. This move comes as Apple focuses on developing its own similar technology.
The company is reportedly concerned that these AI tools could lead to the leakage of confidential data. Alongside ChatGPT, Apple has also prohibited the use of GitHub Copilot, an AI-powered code-completion tool.
The concerns surrounding these AI tools are not unique to Apple.
Many businesses in industries such as banking, finance, and healthcare have refrained from adopting ChatGPT due to worries about inadvertently disclosing sensitive proprietary information to the chatbot.
Samsung, for example, banned its employees from using generative AI tools, including ChatGPT, after discovering instances in which sensitive source code had been uploaded to the platform. The fear was that data transmitted to AI services like Bing and Google Bard could potentially be exposed to other users. In a similar vein, JPMorgan Chase and Verizon have also restricted the use of such AI tools.
Interestingly, OpenAI has already provided a private ChatGPT service to Morgan Stanley, enabling the bank's employees to ask questions about and analyze thousands of its market research documents. Meanwhile, Microsoft is working on a version of ChatGPT designed specifically for business customers, aiming to address these privacy concerns.
Apple’s decision to limit employee access to external AI tools aligns with the company’s own efforts to develop large language models and AI technologies. This initiative is led by John Giannandrea, Apple’s senior vice president of Machine Learning and AI Strategy, who joined the company after previously working at Google. However, the specifics of Apple’s AI endeavors have not been disclosed in the Wall Street Journal’s report.