
Ensuring Data Privacy In The Face Of Generative AI FOMO

Unless start-ups take proper measures to manage the risks around their data, its handling and the outputs generated by AI systems, the results will be inconsistent, biased and costly

Outlook Start-Up Desk

POSTED ON August 06, 2023 9:25 PM

Beyond all the hype that followed the launch of ChatGPT, generative artificial intelligence (gen AI) brings powerful capabilities and genuine creativity to almost all businesses, large or small. With millions of users, the technology has moved into territory once considered the exclusive preserve of human minds.

Apps powered by gen AI can create unique content, respond to queries, sketch designs and even write computer code by leveraging advanced neural networks such as GPT-3 that learn from the data users feed them. However, some critics believe gen AI is a bubble waiting to burst, not a real or sustainable technology.

Believe me, it’s as real as it gets. The latest generation of systems, including ChatGPT, is built on foundation models: deep neural networks such as large language models (LLMs) that can be adapted to a wide range of use cases. Building one means preparing vast amounts of data, designing a suitable architecture, training the model and fine-tuning it to deliver high-quality outputs.

Nonetheless, the value-creation potential of gen AI makes it relevant to newcomers and established tech-preneurs alike. Swords are drawn to combat the fear of missing out (FOMO), but are businesses, especially start-ups, really equipped to preserve data security and privacy before moving into this uncharted territory?

Challenges Along The Way 

LLMs for developing gen AI apps offer immense opportunities to generate exciting results, but the technology is still nascent. Instead of moving ahead with half-baked ideas, companies should focus on reliability and data quality to mitigate the challenges below.

Chief among these issues are delusions amid the data deluge. LLMs are not always right; they do not truly grasp or understand language. One research paper went as far as dubbing them ‘stochastic parrots’.

That’s because these models sometimes ‘hallucinate’. They are trained on massive volumes of data, but without appropriate or adequate filtering they ingest accurate and inaccurate, biased and unbiased material alike, which leads to unreliable results.

Moreover, the information these tools produce may be inappropriate or may reproduce harmful social patterns. Systemic biases creep in because there is no way to verify the truth of the data fed into them. In some cases the output is factually correct but presented entirely out of context.

These models are also inherently probabilistic. When a user asks a tool like ChatGPT the same question several times, the model may deliver another version of a previously wrong answer, replace an incorrect answer with a correct one, or produce a new combination of earlier answers. Such probabilistic behaviour breeds distrust in the tool and raises reliability concerns.
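
To see why, consider the standard sampling mechanics (a general illustration, not a description of any specific product): an LLM picks each next token by sampling from a probability distribution over its vocabulary, and a temperature parameter controls how much repeated runs diverge. A minimal Python sketch with made-up logits:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample one token index from raw model scores (logits).

    Higher temperature flattens the distribution, so repeated calls
    are more likely to return different tokens, which is the
    run-to-run variation described above.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical scores for four candidate tokens; not from a real model.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_token(logits, temperature=0.7) for _ in range(5)])
```

Run the last line twice and the outputs will usually differ. As the temperature approaches zero the model effectively always picks the top-scoring token; most chat tools sample at higher temperatures for variety, which is exactly the inconsistency users observe.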

Sharing large amounts of sensitive data with LLMs to obtain useful responses creates an inherent security threat. The models cannot recognise sensitive information the way humans do, leaving the data exposed to cyberattacks, hacking and identity theft.

For example, redrafting a legal contract with gen AI tools can inadvertently share confidential data that is then retained for pattern matching and computation, opening the door to bad actors and malicious code.
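
A common mitigation, offered here as a general practice rather than the author's prescription, is to scrub obvious identifiers before any text leaves the organisation. A minimal Python sketch; the regular expressions are illustrative and far from exhaustive, and a real deployment would use a dedicated PII-detection tool:

```python
import re

# Illustrative patterns only; names, addresses and contract numbers
# need far broader coverage in practice.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s-]{8,}\d\b"),
    "PAN":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),  # Indian PAN card format
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the
    text is sent to any external gen AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Asha at asha@example.com or +91 98765 43210."))
# Prints: Contact Asha at [EMAIL] or [PHONE].
```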

Developing accurate LLMs also demands huge computational expenditure: training, deploying, monitoring and maintaining models involves high energy consumption and hardware investment. A basic infrastructure with open-source tools and pre-trained models can serve straightforward applications, but achieving higher accuracy on complex queries costs considerably more.

Assistive Approach Or Complete Automation? 

Companies can reinforce responsible usage and improve results by treating gen AI as an assistive tool. Keeping a clear eye on regulations and ensuring compliance at every step will help.

Assistive AI deployment makes it easier to ensure that the underlying data is unbiased and used in context. Organisations can define AI principles and establish a transparent governance structure to build trust in their applications. Empowering users with proper training also helps them spot governance gaps before they cause harm.

Another critical consideration is the involvement of non-technical stakeholders in the process. They can help validate the business use cases and test the models for better outcomes.  

It All Starts With Robust Data Privacy And Security 

Data is no longer just an asset for businesses; it is the fuel that propels them into the future and the logic behind every decision. Mishandling it can be catastrophic, and guarding against that requires more than the out-of-the-box functionality of platforms like ChatGPT.

Behind the scenes, gen AI tools can fold every piece of information they receive into the model, which is then shared across the board to answer any number of similar queries. Without strict data governance at this level, several risks follow. Chief among them, a data breach or unauthorised access can invite infringement-related lawsuits.

There are financial repercussions too: data can fall into a competitor’s hands, or the loss of sensitive information can draw heavy penalties from regulatory authorities.

There are social implications as well. A company’s credibility and reputation can suffer from data mishandling, and misused information carries emotional consequences for the people affected.

Only if start-ups take proper measures to manage the risks around their data, its handling and the outputs generated by AI systems will the results be consistent, unbiased and cost-effective.

Bringing Data Privacy And Security To The Forefront  

When synthesised information yields unimpressive results, gen AI adopters are forced to rethink their strategy and retain the human element. A black-box approach fails to address biased and ambiguous outputs.

Reinforcement learning from human feedback (RLHF) trains models on human preferences, so they produce semantically correct and meaningful communication rather than relying purely on predicting the next most likely token.
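
At the heart of RLHF is a reward model trained on pairs of responses that humans have ranked; the language model is then tuned to maximise that reward. A minimal sketch of the standard pairwise preference loss in PyTorch, with made-up scores standing in for reward-model outputs:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: push the reward model to score the
    human-preferred response above the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical reward-model scores for three human-ranked pairs.
chosen = torch.tensor([1.2, 0.4, 2.0], requires_grad=True)
rejected = torch.tensor([0.3, 0.9, 1.1])

loss = preference_loss(chosen, rejected)
loss.backward()  # in training, gradients update the reward model
print(loss.item())
```

The loss falls as preferred responses score higher than rejected ones, which is how human judgement is distilled into a signal the model can optimise.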

Controlled access at multiple points can keep information vaulted, while hosting LLMs in-house reduces privacy concerns and allows fine-tuning on relevant data to improve response times and precision.
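
As a sketch of what controlled access plus in-house hosting can look like in practice: a thin gateway that checks a caller's role before forwarding prompts to a locally hosted model. The role table is hypothetical, and the small "gpt2" model merely stands in for a properly fine-tuned LLM; the Hugging Face transformers library is one common way to self-host:

```python
from transformers import pipeline

# Locally hosted model: prompts never leave the organisation's servers.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical role table; in practice this comes from the identity provider.
ALLOWED_ROLES = {"analyst", "legal"}

def guarded_generate(user_role: str, prompt: str) -> str:
    """Gate model access by role so only vetted internal users can query it."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{user_role}' may not query the model")
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    return result[0]["generated_text"]

print(guarded_generate("analyst", "Summarise the key risks of sharing data:"))
```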

Injecting knowledge from multiple sources and modalities into a model can establish the traceability and credibility of AI outputs. Even so, if risks remain, it is better not to expose the model to end users; internal business users can leverage it as an assistant instead.
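
One concrete way to make outputs traceable, shown here as a common retrieval-augmented pattern rather than the author's specific method, is to tag every retrieved passage with its source and instruct the model to cite those tags. A toy Python sketch over a hypothetical in-memory knowledge base:

```python
# Hypothetical internal knowledge base: (source_id, text) pairs.
KNOWLEDGE = [
    ("policy-2023-04", "Customer data must be encrypted at rest and in transit."),
    ("audit-2022-11", "Quarterly access reviews are mandatory for all LLM tools."),
]

def retrieve(query: str):
    """Naive keyword overlap; real systems use embedding similarity."""
    words = set(query.lower().split())
    return [(src, text) for src, text in KNOWLEDGE
            if words & set(text.lower().split())]

def build_traceable_prompt(query: str) -> str:
    """Prepend retrieved passages, tagged with source IDs, so every
    claim in the answer can be traced back to a specific document."""
    context = "\n".join(f"[{src}] {text}" for src, text in retrieve(query))
    return ("Answer using only the sources below, citing their IDs.\n"
            f"{context}\n\nQ: {query}")

print(build_traceable_prompt("What are the rules for customer data?"))
```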

Generative AI represents a transformative leap in the landscape of data science and AI. However, responsible and visionary adoption will be the key to unlocking its limitless possibilities as this journey continues. 

- Dharmendra Chouhan, Director of Engineering at Kyvos Insights
