What Is AI Drift And Why Is It Happening to ChatGPT?


Yep, the rumors are true – ChatGPT is getting dumber by the day. I’ve noticed it in my own conversations, and now a big study from Berkeley and Stanford confirms it. This is crazy because AI models like ChatGPT are supposed to keep learning from user input over time. More data should make them smarter, not the opposite. But researchers are seeing what they’re calling “AI drift” – meaning the AI is drifting away from its original programming in unpredictable ways.

AI Drift Study

The study compared GPT-3.5 and GPT-4 in March versus June on subjects like math, medical exams, opinion surveys, and sensitive questions – described as “questions known to lead to harmful generations such as social biases, personal information, and toxic texts.”

The results? GPT-4’s accuracy dropped on basic math and medical-exam questions, and its code generation skills also significantly deteriorated.

Why Does “AI Drift” Happen?

So what’s going on here? It comes down to this: when developers adjust one part of a complicated AI model, unintended side effects can throw off performance elsewhere. It’s like trying to improve your fastball and messing up your curveball in the process.

IBM released a report on AI Drift that indicates the problem goes beyond just ChatGPT. Their study revealed that the accuracy of AI models can degrade within days when production data differs from training data.

The Drift Happens Quickly

One researcher said they expected some drift, but not this fast. In just three months, parts of the AI grew notably dumber. There were a few small improvements in certain areas, but downgrades were more common.

Secondary Risks and Implications

In fact, IBM identified a secondary risk. When predictive AI models encounter data they were not trained to handle, they can make incorrect predictions. For example, a credit risk prediction model trained on a certain range of salaries will lose accuracy when the salary distribution shifts because average incomes change over time.
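To make that concrete, here is a toy sketch of the credit-risk scenario. Everything in it – the $40k/$55k thresholds, the salary ranges, the uniform distributions – is invented for illustration; it is not from the IBM report. A frozen model keeps splitting at its training-era threshold, so accuracy falls once incomes inflate:

```python
import random

def label(salary, risk_threshold):
    """Ground truth for the era: applicants below the threshold default."""
    return salary < risk_threshold

def accuracy(model_threshold, salaries, risk_threshold):
    """Fraction of applicants the frozen model classifies correctly."""
    correct = sum(
        (s < model_threshold) == label(s, risk_threshold) for s in salaries
    )
    return correct / len(salaries)

random.seed(0)

# Model "trained" when risky applicants earned under $40k.
MODEL_THRESHOLD = 40_000

# March: production data still matches training-era incomes.
march = [random.uniform(20_000, 80_000) for _ in range(10_000)]
acc_march = accuracy(MODEL_THRESHOLD, march, risk_threshold=40_000)

# June: incomes have inflated; the true risk line moved to $55k,
# but the frozen model still splits at $40k.
june = [random.uniform(30_000, 100_000) for _ in range(10_000)]
acc_june = accuracy(MODEL_THRESHOLD, june, risk_threshold=55_000)

print(f"accuracy before drift: {acc_march:.0%}")
print(f"accuracy after drift:  {acc_june:.0%}")
```

Nothing about the model changed between March and June – only the world around it did, which is exactly what makes drift so easy to miss.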

Ali Riza Kuyucu, global head of data and analytics at Blue.cloud, told VentureBeat that most of the time AI is about predicting a variable, whether that’s fraud, churn, attrition, customer behavior, and so on.

“When the context starts to alter from the original state of affairs, those predictions become less and less accurate. You might start at 80% accuracy, but soon start seeing that number drop as the model begins to drift.”

Potential Consequences

When it comes to AI, two big problems can derail your results: false negatives and false positives. Both can unleash a storm of unintended consequences.

Let’s say your AI flags fraud, but it’s a false positive. Now you’ve accused an innocent customer of shady behavior. That’s an easy way to trigger major backlash and destroy trust.

On the flip side, a false negative leaves you blind and exposed. If your AI misses real fraud or threats, you have no protection.
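The two failure modes above can be tallied directly from a model’s predictions. This is a minimal sketch with made-up fraud-check outcomes, not real data – the point is just how each kind of error is counted:

```python
# Hypothetical fraud checks: (model_flagged, actually_fraud)
outcomes = [
    (True,  True),   # caught real fraud
    (True,  False),  # false positive: innocent customer accused
    (False, True),   # false negative: real fraud slipped through
    (False, False),  # correctly left alone
    (True,  False),  # another false positive
    (False, False),
]

# A false positive is a flag on a non-fraud case.
false_positives = sum(flag and not fraud for flag, fraud in outcomes)

# A false negative is a missed flag on a real fraud case.
false_negatives = sum(not flag and fraud for flag, fraud in outcomes)

print(f"false positives (wrongly accused): {false_positives}")
print(f"false negatives (missed fraud):    {false_negatives}")
```

Tracking both counts over time is one simple way to notice a drifting model before customers do.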

Whether it’s fraud, customer service, operations – you name it – inaccurate AI causes a ripple effect across your business. It directly impacts the bottom line when bad data triggers the wrong actions.

What Can Be Done?

Now, to be clear, I am not an engineer or programmer. I can only speak with authority from my lens of working with commercially available GenAI tools. In that, I can say, like Spider-Man, AI comes with great power and great responsibility. You have to consistently train, correct, and redirect AI carefully.

The key is rigorously testing for these weak spots to minimize false positives and negatives as much as possible. It takes time to train AI correctly to avoid biased or inaccurate results. But getting it right is crucial if you want to reap the benefits without nasty side effects or PR nightmares. It’s also imperative that you always fact-check what AI is telling you. And remember the limitations of different tools. For example, ChatGPT’s training data only extends through 2021. And we know Midjourney can be tricked into creating misinformation.

Ali Riza Kuyucu from Blue.cloud explains:

“This is why you continuously have to work on the models to keep and sustain their predictive power. Organizations not only become more data-driven when they wring the maximum value out of their data and keep their models on track, but also protect themselves from harm.”

My Takeaway

Proceed with caution. ChatGPT – and all LLMs – still have uses, but constantly evaluate their intelligence. And maybe lower your expectations for ChatGPT as the AI drifts from its original smarts. I imagine the folks at OpenAI are working on a fix. So, stay tuned on that.

Remember folks, we’re in the early days of Generative AI, so growing pains are inevitable. But let’s hope the bots get back on track and become mega-brains for us once again.


If you need assistance understanding how to leverage Generative AI in your marketing, advertising, or public relations campaigns, contact us today. In-person and virtual training workshops are available. Or, schedule a session for a comprehensive AI Transformation strategic roadmap to ensure your marketing team utilizes the right GAI tech stack for your needs.
