In the ongoing debate over ChatGPT's impact on the economic and business landscape, opinions both positive and negative have surfaced. A recent development, however, adds a fresh angle, shedding light on data security in relation to OpenAI's ChatGPT.
Metomic Ltd., a startup specializing in data security solutions, has unveiled a browser plugin designed to prevent employees from uploading sensitive information to the AI-based chatbot. The technology gives organizations real-time visibility into the data being uploaded to the machine-learning platform.
Metomic's approach is automated: the plugin tracks employee uploads and scans them for potentially sensitive information. When risks are identified, it shows a preview of the critical data about to be uploaded. Its strength lies in recognizing and distinguishing risky data types such as personally identifiable information (PII), source code, usernames, passwords, IP addresses, and MAC addresses.
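To make the idea concrete, here is a minimal, hypothetical sketch of how such pattern-based scanning could work. It is not Metomic's actual detection logic, which is proprietary; the pattern set, function name, and thresholds below are assumptions invented for illustration.

```python
# Hypothetical sketch only: Metomic's real detection logic is proprietary.
# This scanner uses regular expressions to flag common sensitive-data
# patterns (emails, IPv4 addresses, MAC addresses, hard-coded credentials)
# in text before it is submitted to an external service like a chatbot.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ipv4_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "mac_address": re.compile(r"\b(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}\b"),
    # Naive heuristic for hard-coded secrets, e.g. password = "hunter2"
    "credential": re.compile(r"(?i)\b(?:password|passwd|api[_-]?key|secret)\s*[:=]\s*\S+"),
}

def scan_for_sensitive_data(text: str) -> dict[str, list[str]]:
    """Return a preview of risky matches found in the text, keyed by type."""
    findings = {}
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

if __name__ == "__main__":
    prompt = 'Debug this: server at 192.168.0.12, login admin@example.com, password = "hunter2"'
    for label, matches in scan_for_sensitive_data(prompt).items():
        print(f"{label}: {matches}")
```

A production tool would go well beyond regular expressions, adding context-aware classifiers and source-code detection, but the basic flow is the same: intercept the upload, scan it, and surface a preview of anything risky before it leaves the browser.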
To illustrate the stakes of threat prevention, consider a case from last year involving a South Korean business specializing in silicon wafer manufacturing. The company made headlines when employees pasted proprietary source code from one of its products into ChatGPT to help develop new code. This not only exposed sensitive data to an external party but also raised concerns that competitors could eventually benefit from it, since data uploaded to the platform may be used to refine its underlying models.