The surging progress of Artificial Intelligence (AI) has also given rise to risks, ranging from plagiarism and job losses to data leaks, data breaches, hallucinations and privacy violations.

When you talk to chatbots like Alexa, or the immensely popular ChatGPT from Microsoft-backed OpenAI, who is listening?

Several companies have clamped down on ChatGPT, the viral AI chatbot that first kicked off Big Tech’s AI arms race, citing compliance concerns over employees’ use of third-party software.

OpenAI said last month that it had to take ChatGPT offline on 20 March, after a bug exposed some users’ transaction details and allowed others to see the titles of other users’ chat history.

The same bug, now fixed, also made it possible “for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

Who has access to my private data on AI chatbots?

Following ChatGPT, Google and Microsoft have rolled out AI tools that work in much the same way, powered by large language models trained on vast troves of online data.

A recent leak of Samsung employee data through an AI chatbot, and Italy’s temporary ban on ChatGPT over data privacy concerns, leave one wondering: where is my data stored once I have opened up to an AI?

Steve Mills, the chief AI ethics officer at Boston Consulting Group, told CNN that the biggest privacy concern that most companies have around these tools is the “inadvertent disclosure of sensitive information.”

If the data people input is being used to further train these AI tools, as many of the companies behind them have stated, then you have “lost control of that data, and somebody else has it,” Mills added.

“You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,’” Mills told CNN. “But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”

ChatGPT and Bard’s privacy policies

OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects all kinds of personal information from the people who use its services.

It says it may use this information to improve or analyze its services, to conduct research, to communicate with users, and to develop new programs and services, among other things.

The privacy policy states it may provide personal information to third parties without further notice to the user, unless required by law.

OpenAI also published a blog post on Wednesday outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people — we use data to make our models more helpful for people,” the blog post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.”
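Google has not published how these automated tools work, but in broad strokes, automated PII removal amounts to detecting and redacting identifying details before text is stored or reused. The Python sketch below is a hypothetical, simplified illustration (the patterns and the scrub_pii function are invented for this example, not Google’s actual tooling); production systems typically rely on far more sophisticated detection, such as named-entity recognition.

```python
import re

# Hypothetical illustration only: real PII-removal pipelines use much more
# sophisticated detection than these simple regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # Card numbers are checked before phone numbers so a 16-digit card
    # is not mislabeled as a phone number.
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_pii(text: str) -> str:
    """Replace each match of a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub_pii("Reach me at jane.doe@example.com or +1 415 555 0100."))
# Output: Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```

The same idea applies on the user's side: scrubbing text like this before pasting it into a chatbot prompt is one way to reduce the inadvertent disclosure Mills warns about.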

Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account.” Bard users can also review their prompts or delete Bard conversations from their Google Account’s My Activity page. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” Google said.
