Samsung is restricting the use of so-called generative artificial intelligence tools such as ChatGPT for employees after the company discovered such services were being misused.
The South Korean technology giant confirmed that it is temporarily restricting the use of generative AI on the company’s computers.
Employees of one of Samsung’s biggest divisions were informed of the move in a memo at the end of April after there had been cases of misuse of the technology.
Bloomberg reported on Tuesday that some staff had uploaded sensitive code to ChatGPT.
The privacy risks involved in using ChatGPT vary based on how a user accesses the service.
If a company uses ChatGPT’s API, conversations with the chatbot are not visible to OpenAI’s support team and are not used to train the company’s models. However, this is not true of text entered into the general web interface with its default settings.
The company says it reviews conversations users have with ChatGPT to improve its systems and to ensure that they comply with its policies and safety requirements. It advises users not to “share any sensitive information in your conversations” and notes that any conversations may also be used to train future versions of ChatGPT.