ShiftDelete.Net Global

Google tells employees to be careful when using AI tools like ChatGPT


The popularization of generative artificial intelligence has had a significant impact on the business world. Apple had previously emphasized the need for caution when employees use tools like ChatGPT, since such tools could jeopardize company data. Now Google’s parent company, Alphabet, has made a similar move regarding ChatGPT.

Google releases a statement regarding generative AI

Alphabet, the parent company of Google, is taking a cautious approach to the use of generative artificial intelligence, including its own tool, Bard. In line with its long-standing policy of protecting confidential information, the company has advised employees not to enter sensitive data into chatbots.

Bard and ChatGPT are both chatbots built on generative artificial intelligence. Their developers can review users’ prompts, and that data may also be used to train the underlying models, so any sensitive information entered could be exposed. This poses a risk of information leakage for companies like Apple and Alphabet.


In addition to warning employees about confidential data, Alphabet stated that code generated by chatbots should not be used directly. Google noted that code produced by tools like Bard is meant only to assist programmers, not to be deployed as-is.

Alphabet’s cautious approach aligns with its security standards. Many companies, such as Samsung, Amazon, and Deutsche Bank, have expressed similar concerns regarding AI-based chatbots.

Addressing privacy concerns, Google held detailed discussions with the Irish Data Protection Commission and announced that Bard would not be launched in EU countries until those concerns were resolved.
