From Hull to ethical AI: Sue Turner on debunking AI myths and building home-grown talent

Sue Turner strengthened her ties to Hull in 2020 when she completed an intensive conversion MSc in Artificial Intelligence & Data Science at the University of Hull’s DAIM Centre of Excellence, gaining hands-on experience with big-data labs and industry-backed research projects.

An OBE-decorated AI speaker and leader, Sue combines over two decades of strategic business experience with cutting-edge technical expertise. She’s the CEO of AI Governance, advises major UK organisations on responsible AI deployment, and sits on the boards of Cornish Mutual and the North Somerset Environment Company. In 2023 she was named one of the Top 100 Women in AI Ethics.

In this exclusive interview with The Champions Speakers Agency, Sue tackles the most common misconceptions about AI—why it won’t deliver overnight miracles, why top-notch data beats clever algorithms, and why leaders must grasp predictive-text-style models for tools like ChatGPT.


Q: Many businesses are investing in AI with high expectations—but what are some of the most common misconceptions you encounter about its implementation and impact?

Sue Turner - The Champions Speakers Agency / The Cyber Security Speakers Agency

Sue Turner: “Well, lots of businesses think that artificial intelligence is going to instantly transform their business. But this is a big misconception, because we're not actually going to get immediate, dramatic results after we implement artificial intelligence tools.

“In reality, we find we go through a big learning curve, and there's a gradual improvement process. It takes time to realise the full potential of AI, and that means companies have got to have a culture that accepts there will be some failure, and that things might be slow and need investment over time.

“I also find a lot of business leaders think that having a clever AI algorithm is going to somehow compensate for not having very good data. In my Master's degree, we looked at whether it was better to have smarter algorithms or better data, and what we found is that in reality, garbage in means garbage out. So if you haven't got good data, it's not going to work—no matter how clever the algorithm is. So you need high-quality data if you're going to get really useful outcomes from using AI.


“The other factor that I often come across is that too many leaders just don't understand the basics of how AI works. One board member I was talking to recently, a very senior person, he thought that when he was using ChatGPT it was surfacing real information for him. So I had to explain that no, actually ChatGPT really is just a very clever version of predictive text on your phone—and we all know that predictive text can actually be very dumb.

“So when we're using these tools, like ChatGPT, we've got to recognise it's a prediction that it's making. It's not actually surfacing truth. So we have to use it with some caution.”
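The "clever predictive text" analogy can be made concrete with a toy example. The sketch below (illustrative only, not how ChatGPT is actually built, which works on tokens at vastly greater scale) learns which word most often follows each word in some training sentences, then "predicts" a continuation. The point it demonstrates is the one Sue makes: the output is a statistical guess about what usually comes next, not a looked-up fact.

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows each word in the
# training sentences, then predict the most frequent follower.
training = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

follows = defaultdict(Counter)
for sentence in training:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# The model predicts "cat" after "the" simply because that pairing was
# most common in its training data -- a plausible guess, not a truth claim.
print(predict_next("the"))
```

A large language model does the same thing with billions of learned statistics instead of a handful of counts, which is why its answers sound fluent while still being predictions rather than retrieved facts.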

Q: As AI becomes more embedded in business operations, what are the key ethical considerations leaders must address when deploying these technologies?

Sue Turner: “Artificial intelligence really enables us to do many, many different things. The question we've always got to ask is: should we do them?


“So, we can help recruit people using artificial intelligence, but researchers have found that job candidates often think less of companies that use computer vision, for example, to do first-round job interviews. So it might be convenient, but is it going to damage our business's reputation?

“We can use technology like this, but is it going to discriminate against people with different abilities? Is that what we want? Does that tie in with our values as an organisation?

“One example of this is Uber. They've used artificial intelligence to automate the process of dismissing drivers. So a driver simply gets a message on their phone telling them that they're no longer an Uber driver—no way to appeal against that, no way to find out why the decision was made. Somewhere in Uber, an ethical choice has been made to do that. In the UK and in Europe, doing something like that would be against the rules of GDPR, because people have a right not to be subject to purely automated decision-making—but not everywhere.

“And we could use artificial intelligence together with behavioural psychology in marketing and advertising. That could really personalise the message that we give to consumers, but there's a line somewhere that gets crossed where we'd actually be manipulating and deceiving people. So where is that line? Where, as a business, do we want to put that line?


“And I often find management are simply not knowledgeable enough about these decisions and about the tools that they're using to think about ethics. You know, they don't get trained in thinking about ethics. They say, "Oh, it's not my job," and it really shows that everybody needs to get better educated in this field.

“Apple, back in 2019, launched a credit card, and it was quickly found to offer women significantly less credit than it did men. And what was happening when the company was pressed to explain it? Well, they just couldn't explain it, so they had to turn to their issuing bank to explain why there was this bias against women. And the issuing bank said, "Oh, but we don't actually look at gender. It's not in the data that we're feeding into the algorithm."

“That showed a real fundamental lack of understanding about good data governance, because there are many ways that you can have a proxy for somebody's gender hidden in the data without even realising it. So by not including gender, actually it made it more difficult to look for bias.

“So, we all need to be better informed and keep asking, just because we can, doesn't mean we should.


“I think that the typical difficulty is that business leaders just aren't trained on ethics as a whole, let alone ethics in AI. And it's a very grey area. We're used in business to sort of going, "This is the right answer—let's do it this way." With ethics, it's a grey area, so you have to constantly be flexing and thinking about different circumstances.

“So not easy to do, and most companies don't have anything set up—no mechanism where they can be looking at these sorts of issues and taking them into account.

“We did some research in 2022 that showed that 91% of organisations have got no controls on how they use AI. And that means that they don't have a way to make these sorts of decisions—to think things through, to consult with stakeholders in the business, with customers outside the business—and have some way of making these decisions and having a basis to say, "Well, we may not be perfect, but this is how we've come to the decision."

“And so that's a key thing for organisations: set up some sort of governance mechanism for your use of AI, so that you actually have a framework that everybody in the organisation can use to make these decisions.”


Q: With the pace of AI adoption accelerating, how can business leaders effectively build in-house capabilities rather than relying solely on external talent?

Sue Turner: “Well, the top piece of advice to organisations that are thinking about, "How do I get the employees I need to be using AI?" is: don't join the war for talent. There are just not enough data scientists, not enough AI specialists out there for you just to go and pinch people and poach them and put them into your business. And even if you did, you're going to pay a fortune for them, and they're going to be poached by somebody else next year.

“So what I always recommend to people is that you find people in your organisation already who like looking for patterns—you know, the people who take quite a logical approach to problem-solving—and pick them out as the people that you train for the future.

“So if somebody's very good at using Excel spreadsheets, you might start training them to use Power BI or another tool that gets them better at analysing data, and helps them fill in the missing data, for example, and bring data together from different sources.


“Microsoft and Google offer free training, so you can try out low-code and no-code tools and help people start to use artificial intelligence in quite a gentle, step-by-step way.

“And the other recommendation for organisations that are thinking about what they need to do for the future is: think about data. We know that data gets generated in silos in pretty much every organisation. And as a leader, you have this gut instinct that you want to read across those silos, and then you'd find real insights—but it's hard to do.

“The UK government found that 81% of businesses actively collect data, but only 26% are using data to generate insights. So work on getting your data accessible, so that your team with their new supercharged skills can start to use it.”

This exclusive interview with Sue Turner was conducted by Mark Matthews of The Motivational Speakers Agency.

Leeds news you can trust since 1890