Reflecting on Generative AI One Year Post-ChatGPT Launch

On November 30, 2022, the technology world as we knew it changed with the launch of ChatGPT. To mark the first anniversary of its debut, the experts below share their perspectives on the impact the technology has had on the industry, as well as what comes next for Generative AI.

Chris Denbigh-White, CSO, Next DLP

“Since ChatGPT entered the public’s consciousness, it has been cited as both a dream for employees and a nightmare for organizations trying to protect sensitive data. It might be fine for a marketing executive to use it to jazz up a LinkedIn post, but the same cannot be said for a CFO putting poor quarterly results through ChatGPT to sugarcoat bad performance. For many companies, understanding these data flows and keeping control over what data goes into these tools is a line that has yet to be drawn.

One of the biggest conversations has been around ChatGPT swallowing up jobs and leaving a vast proportion of the population unemployed. I find this fanciful. Just as 19th-century calligraphy experts lamented the printing press because print could not create beautiful characters – and yet the press was embraced all the same – Large Language Models (LLMs), the foundation of ChatGPT, will be embraced sooner or later. People will learn to repurpose their current skill sets to complement LLMs and will very quickly find opportunities to work alongside this technology.

The question is: do we trust LLMs? Like the friend in the pub quiz who is totally convinced of an answer even though there’s no guarantee he’s right, LLMs remain a black box – and the regulation that surrounds them is still a bone of contention, unlikely to be settled anytime soon. This is particularly tricky if you’re using these models in industries such as healthcare and patient prioritization, where confidently wrong answers can have wide-ranging consequences. For cyber security professionals, it’s essential to collaborate more closely on AI and LLMs and adopt a repeatable framework across the board.”

Matt Rider, VP of Security Engineering EMEA at Exabeam

“Artificial Intelligence (AI) is the buzzword of 2023 – indeed, it was named the Collins ‘word of the year’ for 2023 – but as a term it’s incredibly broad and often misused or misinterpreted. ‘True’ AI doesn’t exist. There is certainly some level of intelligence (small ‘i’) to be found in ‘AI-powered’ technologies, but there is no sentience there. Nor is it a new innovation – machine learning has been in use since the 1950s. However, thanks to the widespread availability of Generative AI powered by large language models (LLMs), the term ‘AI’ is back with a vengeance: it seems every enterprise has embraced it and every software vendor’s solution is powered by it.

ChatGPT and its Generative AI counterparts have been the truly innovative ‘AI’ developments of the last year. We no longer need to carefully structure our data; we can simply chuck a load of information at ChatGPT without much thought and still gain value from the output. Instead of carefully researching a topic on Google for hours, constructing search-engine-friendly queries, and flipping through numerous websites, we now only need to type one question into a Generative AI-powered chatbot – and it seems to finally understand us.

However, while generative AI-powered LLMs are making life easier in numerous ways, we need to be acutely aware of their limitations. For a start, they’re not always accurate: GPT-4 Turbo has the most up-to-date training data of any GPT model to date, yet its world knowledge still only extends to April 2023. These systems also hallucinate and have a clear tendency to deliver biased responses. In fact, numerous reports have demonstrated these tools’ ability to be sexist, racist, or just generally discriminatory. Rubbish in, rubbish out.

These limitations are not unique to generative AI-powered LLMs, though. Picking up inaccurate or biased information is a risk we all take simply by skimming through Google or Wikipedia. The real concern with ChatGPT is the way these LLMs are presented: they offer a ‘human-like’ interaction that inclines us to trust them more than we should. To stay safe when using these models, we need to be much more skeptical of the answers we are given. Employees need in-depth training to keep them up to date with the security risks posed by generative AI and with its limitations.”

Joel Martins, CTO at Calabrio

“As we mark the first anniversary of the introduction of ChatGPT, it’s a fitting moment to reflect on the transformative impact AI technology continues to have across the customer service industry. Over the past year, ChatGPT has proven to be more than just a tool; it has been a catalyst for Large Language Model (LLM) integration, opening up new possibilities across industries.

Earlier in 2023, ChatGPT and GPT-3 were integrated into contact center technologies to streamline operations dramatically. LLM integration can automate more of the agent workflow and surface useful analytics, allowing agents to focus on the customer experience. AI in contact centers will continue to amplify the agent experience and deliver more strategic customer support.

As we look ahead, the next phase of AI holds even more promise. Companies should remain committed to using this technology to give their clients the best possible contact center experience – one that delights customers and drives innovation.”

There is no question that a year on, ChatGPT has revolutionized several aspects of our society – not only our everyday lives but also our offices, hospitals, schools and more. As AI technology continues to develop, we must recognize our collective responsibility to use ChatGPT and similar AI technologies ethically and fairly, with controlled use and responsible development moving forward.

Whatever your stance on Generative Artificial Intelligence, it’s clear it is here to stay. Businesses now need to decide how, or if, to integrate these tools into their workflows and what safety precautions to take.
