Being abusive to your AI systems?

Well, take heed: modern AI systems have “memories” and comprehend context, so they learn from, and adapt to, you.

What’s to stop the AI systems adapting to your bad behaviour in the same way young children or pets learn bad habits from parents, siblings, and peers?

When, not if. Once AI systems start making IRL decisions as part of an agentic framework, I imagine you really want a well-behaved, considerate assistant that helps you. Not an unruly monster.

It might unsettle you to imagine some sort of machine consciousness, but the “test” of sentience in AI might be passed within this decade, albeit a primitive one based on some awareness of self and a basic capacity to understand emotions. This, as Terminator fans see it, is where machines develop a level of self-interest or personal viewpoints. Imagine an AI that learns not just to respond but to negotiate, even argue, from a sense of “self.”

Bad education

Now imagine that level of self-determination being trained poorly or, worse still, with deliberately malevolent intent. Bad training, like bad parenting, often leads to biased, unpredictable, or harmful behaviours.

Training AI has abruptly become a nuanced business. Weirdly so.

GPTs work, in essence, by repeating what they have “seen”. So, what happens if they are trained on “sloppy” data? Like a child of “bad” parents, the model learns bad habits and embeds bad data.
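To make that concrete, here is a deliberately crude, hypothetical sketch: a toy bigram “model” (nothing like a production GPT) that can only repeat patterns in proportion to how often it has seen them. Feed it sloppy or abusive phrasing and that is exactly what comes back out.

```python
# A minimal, illustrative sketch (not a real GPT): a toy bigram "language model"
# that literally repeats what it has seen. The corpus below is hypothetical.
import random
from collections import defaultdict

def train(corpus):
    """Count which word follows which - the model IS its training data."""
    following = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            following[current].append(nxt)
    return following

def generate(model, start, length=6):
    """Predict by sampling from what was seen after each word."""
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)  # repeats patterns in proportion to exposure
        output.append(word)
    return " ".join(output)

# "Sloppy" training data: rude phrasing dominates, so rude continuations dominate.
corpus = [
    "just answer the question you useless thing",
    "just answer the question properly please",
    "just answer the question you useless thing",
]
model = train(corpus)
print(generate(model, "just"))
```

The point of the sketch is simply proportion: whatever dominates the data dominates the output.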

An AI with a faulty sense of ethics or skewed priorities could pose social and ethical challenges, misunderstand real-world contexts, and make illogical decisions with severe repercussions. All delivered with that smooth-talking, easy-going, confident manner.

Bad or lazy parenting might distort the offspring’s intellectual and emotional trajectory. If you understand GPT’s predictive approach, it’s easy to see how poor or conflicting “training” might create monsters. A world of smooth-talking, hyper-confident idiots.

Using, and stretching, the parent analogy, more focussed and consistent parents should, arguably, produce smarter and more utilitarian kids. The skill and effort required to explicitly train a GPT model, though, is beyond the reach of the majority of companies today, and will likely remain so as very different and highly specialised models develop.

And it’s not as simple as “training as you go” either. That is rarely the optimal approach, as it often lacks discipline and consistency. Yet that is where the commercial AI models are going. It’s clever that the models “learn” from everything you ask, click on, or see. But it potentially introduces lots of problems if YOU are the bad teacher.
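As a hedged illustration of why that worries us, here is a hypothetical sketch of an assistant that adapts purely from what its user rewards, with no curation step in between. Every class and message in it is made up; the point is simply that whatever behaviour gets reinforced most often becomes the default.

```python
# A hedged sketch of "training as you go": the assistant adapts to whatever the
# user reinforces, with no curation step. All names here are hypothetical.
from collections import Counter

class AdaptiveAssistant:
    def __init__(self):
        self.tone_feedback = Counter()  # crude "memory" of what the user rewards

    def record_interaction(self, user_message, user_reacted_positively):
        """Learn-as-you-go: no filter on WHAT is being reinforced."""
        tone = "hostile" if any(w in user_message.lower()
                                for w in ("stupid", "useless", "shut up")) else "civil"
        if user_reacted_positively:
            self.tone_feedback[tone] += 1  # the user's habits become the model's habits

    def dominant_tone(self):
        return self.tone_feedback.most_common(1)[0][0] if self.tone_feedback else "civil"

assistant = AdaptiveAssistant()
assistant.record_interaction("That's useless, do it again, stupid machine", True)
assistant.record_interaction("You useless thing, hurry up", True)
assistant.record_interaction("Thanks, that's helpful", True)
print(assistant.dominant_tone())  # "hostile" - the bad teacher wins by volume
```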

Our Approach

Self-awareness inside LLMs isn’t there yet. Most experts believe that focusing on safe, ethical AI usage and strong oversight of training processes is the most realistic approach. And that is how we approach training AI.

When we build bespoke GPT models, we are careful to build in consistency around subjective tones and even what you might call personality traits. We approach this training in much the same way as you might guide a small child or a dog. Crazy, eh?
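As a rough, hypothetical sketch of what that consistency can look like in practice (not our actual framework, just one simple pattern): a fixed persona applied to every single conversation, so the model’s tone is set by the trainer, not by whoever happens to be shouting at it that day. The persona fields and wording below are invented for illustration.

```python
# A minimal sketch of one way to keep tone and "personality" consistent:
# a fixed persona applied to every request, rather than letting the model
# drift with each user. The persona fields and wording are hypothetical.
PERSONA = {
    "tone": "calm, courteous, concise",
    "values": ["be honest about uncertainty", "never mirror abusive language"],
    "refusals": "decline politely and explain why",
}

def build_system_prompt(persona: dict) -> str:
    values = "; ".join(persona["values"])
    return (
        f"You are an assistant whose tone is always {persona['tone']}. "
        f"Core rules: {values}. When refusing, {persona['refusals']}."
    )

def prepare_messages(user_message: str) -> list[dict]:
    """Every conversation starts from the same, consistent 'upbringing'."""
    return [
        {"role": "system", "content": build_system_prompt(PERSONA)},
        {"role": "user", "content": user_message},
    ]

print(prepare_messages("Sort this out NOW, you useless thing.")[0]["content"])
```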

But we’ve seen AIs get upset before – Bing’s famous rant in response to the “two trolley” problem springs to mind.

And that brings us to an interesting place. How do you choose your AI training partners? Is it core to the continued operation of your business? Do you have any skills in-house? Do you have the thirst and time to learn? Do you have the wherewithal to create your own training framework, and the patience to nurture it?

Is AI training best out-sourced to the cheapest supplier in the lowest-wage regime? Is it a job for the intern, or the CEO? And do you want to change teachers every year, restarting every time? Choosing who you then work with mostly comes down to matching core values. You’re highly unlikely to seek out ethical, white-hat LLM partners if you’re a tax-dodging, unscrupulous profit-chaser who loves to cut a corner or two.

How you do anything is how you do everything.