As artificial intelligence becomes ever more prevalent, the debate has shifted to what happens to all the user data, which is likely to be used for training algorithms at some point. A case in point is Google updating its privacy policy to allow the tech giant to collect data from its users for what it calls 'training' of its AI models. These models form the foundation for products such as the Bard AI chatbot, Google Translate and Cloud AI capabilities.


“We may collect information that’s publicly available online or from other public sources to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities. Or, if your business’ information appears on a website, we may index and display it on Google services,” reads Google’s updated privacy policy, which now applies to all users.

Microsoft and other players leading the AI transition, including OpenAI, Amazon and Meta, are yet to announce similar changes to how they use user data. Still, this leads to the inevitable questions of how much data needs to be collected, what safeguards (if any) are in place, whether identifiers are removed, and how this data will be handled after collection and during AI training.

Indian technology services company Mastek, which has helped build critical public IT infrastructure in the UK, including the system that drives the collection of London's congestion charge on vehicles, believes proper handling of data is especially important in the present environment. Even more so, with AI in the mix.

“We’ve been doing this work for more than a decade now. In that time, privacy particularly became very pronounced and cybersecurity as an element is a key,” Abhishek Singh, president for UKI and Europe at Mastek, told HT. He talked about how that has come into greater focus ever since the GDPR, or General Data Protection Regulation, came into force in Europe a few years ago.

He also spoke about "compartmentalisation" of data for two important reasons: first, data should not be accessible to everyone within the organisation, and second, it makes it easier to contain a scenario such as a data breach.

“In our office in Reading, certain parts of the office are not accessible to me despite being head of business in the geography,” Singh says. “I just can’t walk in there since I am not credentialed to access that site,” he adds.

Himanshu Jaiswal, chief executive officer of another Indian tech company, Virtual Height IT Services, believes enterprises should use blockchain technology for more secure data handling and storage.

“Blockchain is a technology where you can keep even healthcare reports, as no one can forge it or hack it. Nobody owns a blockchain network, and it’s a decentralised technology,” says Jaiswal. He believes more companies are moving towards adopting such technology as a safeguard against data mishandling and breaches.

“When we talk about digital Rupee, it will help as one cannot fake the currency because it is on blockchain, where everything is unique. Every ledger on blockchain stays as a record forever. Nobody can change a ledger,” says Jaiswal, likening it to a traditional banking system where there is a ledger for every transaction.
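The tamper-evidence Jaiswal describes comes from linking each ledger entry to the one before it with a cryptographic hash, so changing any past record invalidates everything after it. A minimal illustrative sketch of that idea in Python (a toy chained ledger, not any production blockchain):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous entry's hash,
    # so altering any earlier record changes every later hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "hash": block_hash(record, prev)})

def verify(ledger: list) -> bool:
    # Recompute each hash in order; any tampered entry breaks the chain.
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != block_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"from": "A", "to": "B", "amount": 100})
append(ledger, {"from": "B", "to": "C", "amount": 40})
print(verify(ledger))                    # the untouched chain verifies
ledger[0]["record"]["amount"] = 999      # tamper with an old record
print(verify(ledger))                    # verification now fails
```

In a real blockchain this chain is replicated across many independent nodes, which is what makes quietly rewriting a ledger entry impractical rather than merely detectable.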


Real-time data handling is also helping companies such as Phantom Digital Effects. The firm, which does animation and visual effects for movies and TV shows, has invested in developing in-house, AI-driven software and tools to manage projects. User data privacy is paramount, even more so because of the big-ticket projects it regularly works on.

“We have a strong technology team and a lot of great architects who design project management software. End to end, we give access to our clients, so they can see live data and know what is happening with their project and the status of each and every scene,” says Bejoy Arputharaj, founder and CEO.

Big tech is building AI-powered data management solutions for enterprises. Earlier this summer, Microsoft announced the Microsoft Fabric platform for organisations, which brings the different elements of data management and analytics into one window, with a specific focus on privacy.

“Copilot in Microsoft Fabric builds on our existing commitments to data security and privacy in the enterprise. Copilot inherits an organisation’s security, compliance, and privacy policies,” Arun Ulagaratchagan, corporate vice-president for Azure Data at Microsoft, said in a statement.

Before Google announced the change in its policies, Microsoft had stated that it does not intend to use any organisation’s tenant data (that is, data collected from users) to train the base language models that power Copilot.
