The Risks of AI to Your ERP | ERP Advisors Group


In the age of Artificial Intelligence, entry-level hackers are now combining generative AI and social engineering to enhance their online deception practices, especially targeted toward businesses. But what does this mean for ERP? And how can businesses protect their data from better-equipped cybercriminals? In this episode of The ERP Advisor, Shawn Windle will be joined by special guest, Security Awareness Advocate James McQuiggan, for the fourth consecutive year to bring attention to Cybersecurity Awareness Month, educating businesses on the dangers of AI and how to avoid falling victim to this emerging threat.

The Risks of AI to Your ERP

Fearmongering around AI has intensified as cybercriminals leverage these tools to steal identities, private information, and other valued assets. Unfortunately, this bad publicity masks the world of opportunity AI offers to improve and accelerate society. To truly understand the role of AI and the risks it poses to your ERP, you must understand its history, how it is being applied in software, and what to watch for in its usage.

A Brief History of AI

English mathematician Alan Turing first introduced the idea of artificial intelligence in the 1940s. Known as the father of computer science, Turing decoded German Enigma encryptions in World War II, but he is most famous for creating the Turing test, a process used to determine whether a computer can think and reason like a human. As AI becomes more advanced, the Turing test will take on a greater role in determining AI's application in businesses, societies, and economies. Over the years, AI has worked its way into nearly every aspect of life; examples include Google Search, AI chatbots, GPS apps, and social media algorithms. It is also important to note that AI is not a new phenomenon: it has been around for decades. ERP solutions have used AI from the beginning to automate processes and improve operations; its scope has simply grown as technology has advanced.

We have entered a new era of predictive and generative AI solutions where the technology has become advanced enough to create content, provide detailed, educated predictions, and eventually make operational decisions. Because of this shift, the risks associated with “AI reliance” have changed.

The Dangers of Bias

Many currently view AI as an “all-knowing presence” wherein the technology replaces the need for human intervention altogether. This viewpoint neglects a key piece of information: AI was created by humans. As a result, there will always be inherent biases and errors baked into the technology, which is of particular importance to businesses looking to integrate AI into their operations.

AI applies different aspects of machine learning (ML) and data models to learn; as a result, the technology is only as good as the information it is fed. When a solution is created (especially a generative AI solution), it is carefully coded with algorithms for creation and put through intensive "training" meant to provide its base knowledge. This training can accelerate decision-making and time management for a business; improper training, however, can be detrimental. Within those algorithms and data are human biases, which means you can never fully trust the information provided by the AI. In recent years, major companies like Amazon have fallen victim to these biases, proving that we are all vulnerable to the risks. This also emphasizes the need for verification when generating AI reports or content, using the information to make important business decisions, and eventually granting AI permission to make decisions within your ERP.
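The point that a model is only as good as its training data can be illustrated with a deliberately tiny sketch. The example below is hypothetical: a trivial "model" that memorizes approval rates from skewed historical records will happily reproduce the skew, even when the two groups were equally qualified.

```python
from collections import Counter

def train(examples):
    """'Train' a trivial model: remember the historical approval rate per group.

    examples: list of (group, approved) pairs drawn from past decisions.
    """
    totals, approvals = Counter(), Counter()
    for group, approved in examples:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group):
    """Approve when the historical approval rate for the group exceeds 50%."""
    return model.get(group, 0.0) > 0.5

# Skewed history: group A was approved 80% of the time, group B only 20%,
# even though the underlying qualifications (never recorded!) were identical.
history = [("A", True)] * 8 + [("A", False)] * 2 + \
          [("B", True)] * 2 + [("B", False)] * 8
model = train(history)

print(predict(model, "A"))  # True: the model inherits the historical bias
print(predict(model, "B"))  # False: equally qualified, rejected anyway
```

The bias here is not malice in the algorithm; it is simply the data. Real ML systems are vastly more sophisticated, but the failure mode is the same in kind.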

The Concept of Trust AND Verify

You must maintain a healthy level of skepticism when using any form of artificial intelligence in the workplace. Otherwise, companies run the risk of applying inaccurate information or even falling victim to AI hallucinations.

AI hallucinations occur when generative AI confidently provides made-up information because it cannot find what the user is searching for. For example, if you were to ask ChatGPT to "outline the plot and ending of the fourth Lord of the Rings movie," you could receive an overview of a film that never existed. This is just one example of how AI can mislead us.

When integrating an AI solution into your ERP or overall technology stack, it is vital to understand the mechanics and nuances of the application so that you can effectively identify those biases or inaccuracies, instead of blindly following its recommendations.
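One lightweight way to put "trust and verify" into practice is to gate AI output behind a check against the system of record before anyone acts on it. The sketch below is purely illustrative: the function name, the 5% tolerance, and the pricing scenario are assumptions, not part of any vendor's API.

```python
def verify_ai_suggestion(ai_value, erp_value, tolerance=0.05):
    """Accept an AI-suggested figure only if it is within `tolerance`
    (as a fraction) of the value already on record in the ERP."""
    if erp_value == 0:
        return ai_value == 0
    return abs(ai_value - erp_value) / abs(erp_value) <= tolerance

# The AI suggests a reorder price of 102.0; the last contract price
# on record in the ERP is 100.0.
print(verify_ai_suggestion(102.0, 100.0))  # True: within 5%, safe to surface
print(verify_ai_suggestion(150.0, 100.0))  # False: flag for human review
```

The specific threshold matters less than the pattern: AI output is a suggestion to be checked against authoritative data, never an answer to be applied blindly.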

The Greatest Risks AI Poses to ERP

Society has wrongly singled out cybercriminals using AI as the greatest risk to ERP; the real risk is employees regularly using AI without the company's knowledge and approval. Whether generating content, taking notes, or applying AI in other ways, externally engineered solutions can wreak havoc on your business when misused.

Employees must ask the question, "Where is this information going?" If they cannot give a definite answer, they shouldn't be using the tool. Companies can find great value in internally engineered AI, such as solutions from their ERP provider, to improve productivity and growth. While these solutions are still vulnerable to cybercriminals, major software vendors take drastic measures to protect your information, which ultimately mitigates your risk. All in all, do not use applications and tools you are unfamiliar with or that have not been carefully vetted by a qualified individual, and always do your due diligence to protect against deceptive information and practices.

Additionally, AI has empowered cybercriminals to eliminate obvious phishing signs like poor grammar, leveling the playing field for entry-level and experienced hackers alike. Cybercriminals are taking it a step further by building their own AI systems, such as WormGPT and FraudGPT, to craft convincing messages and steal information from businesses. Businesses need to be aware of these risks and vulnerabilities, making education key to identifying threats.
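Since AI-written phishing removes the bad-grammar tell, defenses have to lean on signals the attacker cannot fake as easily, such as a mismatch between the display name and the actual sending domain. The sketch below is a hypothetical, simplified filter: the trusted domain, the keyword list, and the function name are all illustrative assumptions, and a real mail gateway does far more than this.

```python
import re

TRUSTED_DOMAINS = {"example-corp.com"}  # assumption: your company's real domains

def sender_looks_suspicious(from_header):
    """Flag a From: header whose display name implies an internal sender
    but whose actual address is on an untrusted domain."""
    match = re.match(r'\s*"?([^"<]*)"?\s*<([^>]+)>', from_header)
    if not match:
        return True  # malformed header: treat as suspicious
    display, address = match.group(1).strip(), match.group(2).strip().lower()
    domain = address.rsplit("@", 1)[-1]
    claims_internal = "ceo" in display.lower() or "it support" in display.lower()
    return claims_internal and domain not in TRUSTED_DOMAINS

print(sender_looks_suspicious('"CEO Jane Smith" <jane@example-corp.com>'))  # False
print(sender_looks_suspicious('"CEO Jane Smith" <jane@mail-login.xyz>'))    # True
```

The takeaway: perfect prose no longer proves legitimacy, but the sending domain, the links, and the request itself can still be verified.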

Advantages of AI

We are barely scratching the surface of the value computers and artificial intelligence can provide to modern society. Technology has undeniably fast-tracked businesses to excel within their ERP solutions, enhancing their automation capabilities, daily operations, and basic business functions. Businesses can then use AI to free themselves from basic tasks and focus on more value-added work. When AI is properly vetted and used appropriately, it is an invaluable tool that will take businesses to previously unimaginable heights.

Conclusion

The need for human intervention will not be eliminated by AI anytime soon. For example, airlines have long relied on automation that can help planes take off, fly, and land with minimal assistance, yet two pilots are still staffed on every commercial flight around the world. Moreover, AI cannot reach its full potential if no one is supplying it with accurate, relevant information. While there are risks associated with any new technology, proper education replaces fear and reorients the conversation toward increased productivity and improved decision-making.

Learn Why to Hire an ERP Consultant

 

Rebekah: Thank you for joining us for today's webinar: The Risks of AI to Your ERP. Shawn Windle is one of our speakers for today. Shawn is the Founder and Managing Principal of ERP Advisors Group based in Denver, Colorado. Shawn has over 25 years of experience in the enterprise software industry, helping hundreds of clients across many industries with selecting and implementing a wide variety of enterprise solutions.

His podcast The ERP Advisor has dozens of episodes with thousands of downloads and is featured on prominent podcast platforms such as Apple and Spotify. James McQuiggan is our special guest joining us today. James is a 20-year security veteran and security awareness advocate for KnowBe4, as well as a part-time faculty professor at Valencia College.

James has achieved many certifications, identifying him as an expert in the fields of cyber security and security awareness. James previously worked as a Product and Solution Security Officer, Information Security Analyst, and a Network Security Engineer for Siemens, where he consulted and supported various corporate divisions on cyber security standards, information security awareness, and securing product networks. On today's call, James and Shawn will help educate businesses on the dangers of AI and how to avoid falling victim to this emerging threat. Welcome, James and Shawn!

Shawn: Thank you.

Rebekah: So, October is Cybersecurity Awareness Month, and James, this is your 4th year joining us. So, thank you so much for continuing to come back and collaborate with us.

James: Thank you all for inviting me back. It's a pleasure, it's an honor. Looking forward to today's chat.

Rebekah: Yes, we are too. It's going to be a great one.

Shawn: I have to ask though James, because I don't know why you don't have more grays than I do, especially given your job since we started this four years ago! Is that right?

Rebekah: Yeah, this is the 4th year that James has joined us.

James: I'm coming up on my 4-year anniversary with KnowBe4. I've been with KnowBe4 as long as I've been helping you all!

Shawn: We love having you. It's so great to see you. I'm going to just tell everybody right now: I get a little freaked out on these calls, so forgive me if I seem a little nervous, plus I just had a big cappuccino. Thanks again, James, we love being able to talk to you. And we have a special presenter today too with Rebekah, who's world-renowned as The ERP Minute anchorwoman! Exactly. Thank you, Rebekah, for popping in today.

Rebekah: Of course! Thank you both for having me. I haven't had the opportunity to sit down and talk to James during what I've lovingly named "Spooky Season" for ERP. Nick, who's behind the camera helping us today, that's what I've been calling it this whole time so he was prepared for how terrifying these calls with James are. But if you guys are ready, we can jump right into these questions. So, James, we're going to start with you since you're our resident expert: in your experience, what is AI, and what are the biggest fears surrounding it?

Shawn: Oh wow!

Rebekah: We're going to take it right out of the gate!

James: Right out of the gate, alright! One of the things when we work on these panels and these discussions, it's always good to have the questions ahead of time. And I remember getting this question and looking at it, “What's AI and what's the biggest fear?” And I'm like oh, wow, Debbie Downer right out of the gate. Here we go! No, I think I'm going to flip it on the head a little bit and kind of talk about “what are the benefits?” as well as things we need to be aware of. When we look at artificial intelligence, we look at AI, a lot of people are starting to wake up to it and realizing how impactful it can be to organizations, to our businesses, to our risk programs, to security, to marketing, to life, the universe, and everything.

AI has been around since the 50s, and it was thought up by Alan Turing with the Turing test back in the late 40s. He was the guy who created the Bombe, the computer that decrypted the German Enigma machines back in World War II. And he had the notion of "What if computers could learn, program themselves, and think for themselves?" Not a sentient or conscious state, but able to take in information, learn it, be trained on it, and go from there. Fast-forward to the 50s: we have Dartmouth College, where the term artificial intelligence was coined. And over the next 20-30 years, it's kind of an AI winter. The ideas and the notions are there, but it's doing basic things like learning how to play chess and dealing with math theorems.

We get into the '90s, and AI is now this big computer called Deep Blue from IBM. It plays chess against Garry Kasparov and beats him in the first game. It's important to note that Garry then comes back, figures out how this computer thinks, and ends up beating it in the next three games.

We have Watson that comes along, another big IBM AI product, that goes on Jeopardy! and beats Brad Rutter and Ken Jennings, the current host of Jeopardy!. It's part of my nightly routine; my wife and I watch Jeopardy!. But Watson was basically using AI to hear the clue from Alex, process it, figure out its meaning, get the answer, and then reply verbally.

Now we're with generative AI, where we're able to go in and type a question. We can go into Google and ask, "How do I make a fruit salad?" and Google is going to give you 65,318 results. You go into a generative AI, which we saw with ChatGPT last November, and ask how to make a fruit salad, and immediately it pops up with the instructions: get this fruit, cut it up this way, this many pieces, it gives you the recipe and boom! For a lot of folks, that was the best thing since sliced bread. It was mind-blowing.

We start to see AI now in the generative aspect versus just general artificial intelligence. What we're dealing with so much nowadays, since November, is what we call generative AI. What that means is it has the ability to create text, images, code, and models; basically, it's there to generate material, all based on information and data it has been trained upon. It's not really creating new stuff; it takes other material it has been trained on and goes from there. It's kind of like we go to school, we get training, and then we learn how to apply it. That's what generative AI is doing. So it's a great thing! You can go out and ask it all kinds of questions. It uses what we call large language models, or LLMs, where we're able to ask a question, it processes it, and it gives us a response in English or whatever language we ask in. And it's a tool. It's incredible, it's great.

But the fears: now, I use ChatGPT and Claude on a regular basis because of their generative capabilities and the things I'm able to leverage them for in my work. The concern is that, just like any tool out there, there's a malicious side to it. You can pick up a hammer and nails and build a house, but you can also hurt somebody with that hammer.

The concerns we're already seeing with AI include a lot of issues with biases. AI has been trained on data, and if it hasn't been trained on the right data or enough data, it can build a bias into the information it's got. Years ago, I remember reading about issues where cameras with facial recognition had a hard time recognizing people who weren't white. Their accuracy wasn't as good because they had been trained mostly on white faces. That has since been corrected, but even still, there are biases in the data and biases in the information. So, if your AI models aren't trained well enough, you can run into bias issues.

There are privacy concerns. When ChatGPT came out last year, everybody was like, oh my god, this is great! I can have it fix my code for programmers. I can have it write me letters. I can have it do all kinds of stuff! And people were uploading all kinds of information to ChatGPT that they really shouldn't have: their organization's intellectual property.

You had an executive upload their corporate strategy that they were doing for the next year to ask ChatGPT to have it create a PowerPoint presentation. You had Samsung developers upload their intellectual property code into it to say, “Where are the bugs?” or “We're having a problem with X, how do we fix it?” The legal team from Samsung had to call up OpenAI, the people that oversee ChatGPT, and go, “Hi, we need you to delete that out of there for us.” So, a lot of the things that we're seeing there are people uploading information that they shouldn't. So that's always like anything else we deal with on the Internet, don't upload anything on the Internet that you wouldn't want to show up on a billboard on your local major highway.

There are security risks that we're starting to see beyond the privacy concerns. You've got cybercriminals using AI for their benefit as well. When it first came out, I remember seeing posts on some of the dark web forums raving, "I got ChatGPT to write me a phishing e-mail!" I know we're talking more on the risk management aspect here, but from a social engineering standpoint, which is a lot of what I talk about, we always say in security awareness training that one of the telltale signs of a phishing e-mail is that it's written in bad English, with spelling mistakes and that kind of thing. This now levels that playing field, because they can go into ChatGPT, or any one of those generative tools, and get an e-mail written with very good grammar and very good spelling, and that removes that criterion overall.

We're also seeing cyber criminals using their own ChatGPT versions, older versions or making their own, for malicious reasons: getting it to create malicious malware, getting it to create those phishing emails without the guardrails. You can't go into ChatGPT and go “Hey, write me a phishing e-mail to get the credentials from my boss.” It won't let you do it. But you could manipulate it using what we call prompt engineering to convince ChatGPT or the large language model to give you that information.

The last one I want to touch on really quick is the loss of jobs. You can go all the way back to the 1400s (depending on how good your history is, because mine's bad), when Gutenberg invented the printing press. And you've got a whole slew of people who are upset because they're now out of work; they were the ones who had been handwriting all the pages. With the printing press, books could be produced far more quickly, and people lost jobs that way.

We look at the automotive industry, where people were on the assembly line building cars for years and years. Now we have robots doing it, and those workers have moved into QA. So, there are concerns about AI taking away people's jobs. But personally, I think we're in a time of evolution with regard to that. People may attrition out, transition out, or try other careers. Hey, come on into cybersecurity, we need lots of people!

So, looking at the risks, the potential loss of jobs, the bias issues, and the privacy concerns, there are things to be aware of when it comes to AI. But at the same time, we want to make sure that we're using it for the proper reasons. It's a great tool for generating information.

Rebekah: Awesome. Thank you so much for that, James. I think that was a great foundation for us to really start this conversation. And I would just like to point out, we've talked about malware and putting code into ChatGPT, but you said something that I don't think is harped on enough in some of those fear-mongering articles, where people think they can just plug a prompt into ChatGPT and get malware. It's not that simple; you must have some of those extra pieces in place.

That was just something I'd like to point out as we're continuing this conversation and hopefully, people hear that so there's a little less fear. But Shawn, that's going to turn our conversation to the ERP side of it. We both know we look at the news every week. The vendors are doing a lot of things with AI. So, can you talk a little bit about how vendors are using this new generative AI, new advancements in AI, to really improve business processes and the bottom line?

Shawn: You bet. And I do have to say too, I think what James just went through in 10 minutes is probably the best, most consumable definition of AI that I've heard for sure. So, I think that's well done, James, because I do think the secret of our spooky season going into cybersecurity month and everything is we really want people to understand. Once they have the knowledge and they understand situations, then they're not afraid, right?

I don't want anybody to be afraid! Your business, our business: we're in this for exactly the opposite reasons, which is great. I really appreciate you going through that. And I think within the context of what James talked about, what we're seeing from an ERP perspective is that all the prevalent ERP vendors have AI strategies that they're starting to announce, or they've been working on. Some are partnering with the larger AI organizations and leveraging their tools. Some are buying AI companies. Others are following a strategy of naming their AI solutions after vape products. I don't know. I'll pick on SAP a little bit. SAP just announced Joule, which is cool. That's fine. Yeah. It's interesting.

So, if you take what we just went through with James, and you think about kind of general AI, generative AI, ChatGPT. When I was playing around with it a couple of weeks ago for another conference that we did for the Financial Executives International on AI, and I put in, James, I said, “Which ERP software is the best for me to buy?” I mean that's kind of mean, but it's an app, but what is it going to do to me, right? Well, it zapped me, first off. It's like “You can't ask me questions like that!” I'm like, “Oh, no!” Just kidding.

It said, “Well, the answer depends on bam, bam, bam, bam, bam.” I was like “Those are exactly the things we look at when we do needs analysis and selection projects!” And then of course, because we have paid OpenAI, I don't know how it did this, it came back, which - I'm just kidding. It came back and said, “You should find an ERP selection firm to help you through the process because they can help you to make sure your needs are met, et cetera, et cetera.” I was like, you know, all that stuff about AI, maybe mean it's not so bad, right? I swear we didn't have anything to do with that.

Rebekah: Yeah, ChatGPT is a paid sponsor.

Shawn: Yeah, right? Brought to you by...!

Rebekah: Brought to you by ERP selection firms.

Shawn: Exactly. I don't think we could afford that one. But the point of what I'm trying to get to is, we saw a demo recently from one of the vendors in a software selection process, and they're starting to use AI solutions. The question somebody asked within the app, through the AI, was, "What are my competitors charging for this same product?" That's kind of interesting, right? So the AI tool could go and crawl the web or do whatever it does, I mean, it's got a mind, right? But then on top of that, it has the ability to go search, and the information came back. That's pretty cool! That's really cool if you're a salesperson, or if you're in purchasing: what are other vendors charging for this product versus the one I'm about to build a purchase order for?

I think what we're excited about most for AI in ERP is that you're kind of doing your daily job in the ERP usually, right? You've got stuff you're doing. I think we talked about a field services scenario. Where did that come up, Rebekah? Was that Salesforce? Somebody was doing this recently.

Rebekah: AI with field service? A few of the vendors are. There are a couple of different examples.

Shawn: Yes! So, when should I change the oil on this truck? Well, we have maintenance schedules already in a preventative maintenance module within an enterprise asset management solution. But why can't the AI go out and say, well, I know you're supposed to do it every six months, but if the usage is this, you should do it every four, right? And here's the benefit you're going to get by increasing the life of the asset. I mean, that's kind of cool stuff, right?

I mean, I can remember a large car manufacturer. Actually, yes, I forgot about that. In 2000, one of the software vendors I worked with was doing an experimental project with one of the car manufacturers, and those were exactly the questions they wanted answered. That was my own experience from 20 years ago, much less what you talked about, James, where people have been asking these questions all along. I mean, that was the vision for computers.

I think what I would say is as a society, computers are really, really still young. I believe that we are literally scratching the surface of so much value that computers are going to provide our society in perpetuity, right? You think about space travel to different planets, or even maintaining atmospheric pressure on different planets, or what is the protein so that - we've got prospects that are calling us that are already growing food in the lab. I'm not a huge fan of that now. But if we lived on Neptune, I'd be a real big fan because there probably aren't that many cattle. And with my blood type, I must eat a lot of red meat.

So, what I'm saying is, I think everybody needs to understand this and this is super, super important. The ERP vendors are used to solving business problems. And thankfully, they believe the more business problems they can help solve for the client, the more software they can sell. So, AI just becomes a mechanism where they can help businesses, nonprofits, and government agencies - I should be specific on that - to meet their business process objectives. Man, I sound like AI, but it's kind of true!

I tend to view AI and ERP as a really, really good thing. But I was involved years ago in business process management and automation. One of my neighbors worked at Crystal Reports, then went to Business Objects, and then to SAP. The reporting tools those guys were building! You look at the guys who started Hyperion, which Oracle now has, and they're over at OneStream, and you look at some of their tools. Not to mention PlanetTogether, or there's a firm called John Galt that does a lot of advanced planning and scheduling. I mean, there are so many firms that come to mind that we've worked with that are solving really complex business problems already. Now to be able to layer in these tools to do the kinds of things that James mentioned, within the context of your daily operations in your ERP: you're in AP, you're in purchasing, you're in invoicing, you're a controller, you're a salesperson. There are so many scenarios!

We were just at the Infor conference, and we listened to some customers talk about how they're using some of their tools. Pretty basic stuff, right? It's pretty basic. But I'm telling you, the next 10 years, how many times have you heard me say this, Rebekah? Sorry.

Rebekah: So many times! And I think you're right.

Shawn: And you think I'm right. Yeah, I'll make sure to get your salary paid. She will tell me when she thinks differently, thankfully. And I love that. And that goes for all the viewers. Any different viewpoints, let us know!

No, but I really do think that we're in a very, very exciting time. I think things are really going to change. If a system, a computer, software, processes, etcetera, can offer 100% benefit, I believe as a society we're at like 3%. So, you put in the protection that you guys at KnowBe4 provide so the bad guys don't win, and with all the good guys and gals out there building these really value-added apps within their ERP, a context people are used to working in today, I'm super excited! I think it's only going to get better.

Rebekah: Definitely. So, I'm going to jump just a little bit and I know Shawn, this was originally a you question, but James already kind of touched on it. So, you guys might be able to get some good banter going back and forth with this. But AI has been utilized in ERP for years. I think people don't really realize it because now it's coming back as like a buzzword. They hear AI and they're like, “Oh, we're advancing. We're going to get human robots walking around that we're not going to be able to tell the difference.” But how do the most recent advancements compare to ERP in the past and then just AI in general in the past?

Shawn: Go ahead, James. Yeah, please!

James: I want to step back just a second to the comments you made before; I want to add a little minutia to them. This isn't to scare you, but just to share some of my thoughts.

Shawn: Sure, James.

Rebekah: I expected to be scared during this call.

Shawn: It's OK.

James: One thing that I found interesting: you were talking about how a salesperson goes out and asks what the price point is, or how much competitors are selling a particular product or service for. In cybersecurity over the last number of years, we have always talked about the concept of "trust, but verify." I go with trust AND verify. We want to trust it, but we can't believe everything on the Internet, because we know we can't believe everything on the Internet. Same thing with e-mail. When it comes to the things generative AI creates, we have to have that healthy level of skepticism. AI has a condition we call AI hallucinations. If you were to go in and ask, "What's the price point, or what are they selling XYZ widget for at Acme Corporation?" and it comes back with a value, you will want to go through and verify it. Because yes, it could be scraping the Internet looking for the information online. But a lot of times, if it can't get the answer or doesn't know it, it makes it up.

Shawn: Oh, no kidding!

Rebekah: That's fascinating!

James: I'm going to share a fun story. Folks may have heard about it in the news; I know it was a big news story for me because I got a kick out of it. Back in May or June of this year, you had a law firm in New York City working on a trial case. The paralegals in the office, as requested by the lawyer, were to go out and get any supporting evidence or other cases like the one they were working on. So, they went into ChatGPT and said, "We're working on this case. We want to know if there are any other cases that match the fraud or investigation we're dealing with." Boom! It came back with six different cases, and they were like, this is great, cool! Now, the other thing to remember is that early on, when generative AI and ChatGPT got released, people were putting in questions from the bar exam, and it was getting them right. So, people were looking at ChatGPT going, it's a lawyer. We have our own personal lawyer! We can just ask it and get the information, because it can pass the bar questions.

Keeping that in mind, they go out and they ask for these cases, and they get six cases back. Jackpot! This is great, the boss is going to love us! They throw them in the briefcase and hand it to the lawyer. The lawyer goes over to the courthouse and files it. He gets in front of the judge and the prosecutor, and the judge goes, "These six cases you've listed here, where are they from?"

And the lawyer is like, "What are you talking about? We researched it and we discovered those cases." The judge goes, "That would be good if these weren't completely made up." ChatGPT didn't have any other cases, so it made up six of them, gave each a case name like New York vs. John Hancock, and handed those cases over. And they never verified it. The lawyers have been sanctioned and fined. They learned a valuable lesson.

That's one of the reasons I like using ChatGPT with a verification step. I'm going through and working on something, or I want to ask it a question, and if I know the response I get back is accurate, I'm like, okay, good, I can move on. But a lot of times, we must be aware that it could generate additional information that may not be accurate. So, it's important that we verify that information. You could go into ChatGPT and say, "Tell me the plot and story of Forrest Gump 2." There never was a Forrest Gump 2, but it makes one up for you! That's what we call an AI hallucination.

So, it's always important to remember when using generative AI, especially on the text aspect, when you're asking questions and looking for information, that you do some sanity checks and verify what comes back as well.

You talked about cars, and I always crack up about that one because we're seeing AI in cars. Tesla is starting to do self-driving cars, the autonomous cars, utilizing AI with optics, with IoT sensors, big data, feeding it back, and being able to go through; and you can mess those up because it recognizes a stop sign. But if it's looking for that octagonal sign that's red with the word stop, and you put black spray paint over it, how do we know it's still going to stop? Because it might recognize the octagon, but if the color has been changed, that is kind of like a data poisoning attack that can happen with autonomous cars. So, there's a learning aspect there.

But what cracks me up is - you're right. When you get that check engine light on, if that car all of a sudden goes, “Oh! You got your check engine light on; we're going to the dealer,” yeah, then I've had it, essentially, when it comes to self-driving cars. But when we look at where we are now, you're right, we're still in the early days. Computers have come a long way since the 1960s, to where our phones are now more powerful than the computers on the Apollo rockets that took astronauts to the moon. But with computers and where we're going, we've got quantum on the horizon, we're using AI, we've got IoT devices. And with AI, our brains still process data faster than the computers do. We're still moving faster, thinking faster than they are. By 2045, the expectation is that's going to switch.

Last week, on 60 Minutes, there was a great interview with Geoffrey Hinton, who we call the godfather of AI. He's been working on it for the last 40 years, and he just retired from Google. And one of his biggest fears, and we've touched upon it already, is the fact that we do not want military robots that are bipedal, up and about running around with guns. Because if they get that level of self-consciousness, which could be coming in the next 10 to 15, 20 years, then we start getting concerned regarding the Terminator. But I'm getting off on a tangent. What was your question, Rebekah?

Rebekah: The question was, AI has been utilized for years, so how do the most recent advancements compare to the AI in ERP of the past? You touched on this, that it's generative. Shawn, do you have anything to add to that, like how it's changing the landscape of ERP?

Shawn: Well, yeah. And with the framework that James just talked through, I can't get the robots out of my head now, right? I think we did an ad or an article early on and put up the Terminator's face - was that us, or did somebody else do that?

Rebekah: Someone quoted our article with the Terminator's face, but I did reference the Terminator in the article because you can't talk about AI and not mention the Terminator.

Shawn: Oh, that's right. That's funny. And then, of course, I think with Star Wars, right?

James: Star Trek? Well, those were humans. Those were humans and stormtroopers.

Shawn: Yeah, Star Trek.

James: You think of Data from Star Trek.

Shawn: Right. There you go. We just watched season three of the Picard series. Did you see it?

James: Oh yes, fantastic.

Shawn: Oh, that was so great! I'm not like a huge Star Trek guy, although we've watched a lot of them. We've watched all of them. No, that's not true. More of a Star Wars guy, but I cannot recommend that series enough. It was so much fun. Every episode was great, but yeah, you had the ultimate version of AI in Data. I think what it comes down to for ERP is, I always think of the predecessor to ERP, which was MRP, Materials Requirements Planning. There was what you could call an AI bot, though it didn't feel like a bot back in the day, that would take a bunch of data in and give you an answer. It would take in your current inventory levels, what your forecast is, a bunch of other things, min, max, all kinds of things, right? And then come back and say, here's what you should order from your suppliers.

And I don't know. I bet there are some listeners that could send a note later and say, not us! But in my experience, every planner that I've ever worked with takes that plan and then they go, “Well, we're going to tweak this, this, this, this, this,” right? So, there is a human mind that usually sits above those kinds of plans. Now fast forward from then, which is probably 30 or 40 years ago, to today. And I think with exactly what James said, if the AI tool is going external, we don't know where that information is coming from. I love that about AI hallucination. I'll never forget that term. Maybe I can use that as an excuse with my wife: “Honey, I was AI hallucinating. I don't remember what you wanted me to get at the grocery store,” right?

Rebekah: Yeah, I'm sure that will go over well!

Shawn: But when you use AI tools within the four walls of the enterprise, the virtual four walls, that's where it gets pretty cool. What if you really can trust the data? What if you can send a note to the generative AI tool within your ERP that says, “I really need to know how many of these products did my customer buy from us over the last five years?” In English, because usually the salespeople aren't very good with SQL, or even going into the crazy, old app programs and typing stuff in. But there's the information that we can trust. So you're right: trust, but verify. I like verified data: trusted, verified data. To go back to your point here, Rebekah, I think we - you and me - are super excited! Especially after hearing recently what Infor is doing with Amazon.
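[Editor's note: the English question Shawn describes maps to a short database query under the hood, which is exactly what salespeople usually can't write by hand. A minimal sketch - the schema, table, and customer names below are entirely hypothetical, purely for illustration:]

```python
import sqlite3

# Hypothetical ERP sales table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, product TEXT, qty INTEGER, year INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [("Acme", "Widget", 10, 2019), ("Acme", "Widget", 15, 2021),
     ("Acme", "Widget", 5, 2023), ("Other", "Widget", 99, 2022)],
)

# "How many of these products did my customer buy from us over
# the last five years?" becomes a filtered aggregate query:
row = conn.execute(
    """SELECT SUM(qty) FROM orders
       WHERE customer = ? AND product = ? AND year >= ?""",
    ("Acme", "Widget", 2019),
).fetchone()
print(row[0])  # total units Acme bought since 2019 -> 30
```

An in-ERP generative AI layer essentially performs that translation from plain English into a query like this one - which is why the underlying data still has to be trusted and verified.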

If you look at SAP, they're working with the hyperscalers. You look at what Oracle is doing, owning the full stack. Whoa, they have all the data! Now what's interesting is they also may have your competitor's data, but we won't get into that. But we now have the ability to go all the way down the stack to database-level information, through the application, the business logic, and through a user interface that people are used to. I think that's where the differences will be.

I think you're right, James, you can't talk about AI without this fear of, “Oh my gosh, I'm going to lose my job,” right? And I understand that. I'm kind of like, that might actually be good for me. Can I retire yet? I can't quite retire. Well, never mind. I guess I have got to keep working. Sorry, Rebekah, you can't replace me with an AI bot.

Rebekah: Well, you got to stay around for all of us!

James: We could! Never mind.

Shawn: I know, right? I know. But just like James said, think about the amount of manually intensive work that every single one of our clients is going through. You can't see it directly, because we don't put this in AI tools; we keep data extremely secure in this firm. But if you were to look at all our needs analysis stacks, and we've done hundreds of these, almost every organization we're working with has this issue of manual duplicate data entry, tasks that are redundant, et cetera, et cetera. So, they look to an ERP to solve those problems. And I have to say, “Well, that problem is like the third or fourth stage of five,” right?

The first stage is you've got to get out of this old system that is about to crap out on the side of the road. You can't run your business on a 25-year-old application when the person who wrote it is pleading to retire. Or you're on a system that isn't supported, so you're not getting security updates. There are some basics that we have got to handle - the danger condition - first.

Then you get on a good platform. It's got great tools and everything else. Okay, well, let's get everybody going into the same place for the basic business data. Good. Now we can start talking about automation stuff, right? Then you can start looking at this sort of whole new level of value-add with analytics, business intelligence, and everything else. Then what sits on top of that? That's kind of funny - I've thought about this for years: what's that top piece? And I do think that's where generative decision-making based on system processes and data comes in. There really are major, major changes that can be made from that, and that's the biggest difference. We're moving away from giving you a little bit of value-add in terms of just transaction processing, to the enterprise for real: what can we really do here to add more value? But people are always going to be a part of that process, and maybe that goes back to what you said, James, that we can trust what the system says, but the savvy employees will always be there to verify what needs to happen - and then go do it! Don't just base it on what the system says.

James: You know, it's interesting. You talk about the automation aspect. Artificial intelligence isn't just generative AI. Artificial intelligence is machine learning, deep learning, neural nets. The generative aspect pulls in the large language models. AI can be used for automation. If you have repetitive tasks, you can use a type of machine learning to go through and handle those. You've already started seeing that artificial intelligence in there - chatbots are very common. When you visit Facebook or a website that's got customer service, you're going to interact with a chatbot. It's going to have a picture of a human being that looks real, but you're actually communicating with the computer, with that artificial intelligence, that machine learning that's able to give you the response.

There's a major hotel brand. I called them up a couple of weeks ago because I needed a bill. And when I got on the phone, it immediately answered and it wasn't, “Press one for English, press two – whatever.” It was, “Hello, and thank you for calling Acme Company, what can I help you with today?” And I was like, “Whoa, I got a human being right out of the gate? Way to go!” and then I'm like, hang on a second. I said, “I need to get a copy of my bill.” “OK, we'll be right with you.” So there's a bit of a nuance and I'm like, aha, it's a chatbot! It's an AI chatbot that's behind it. That's communicating, listening to my response, and then getting me where I need to go.
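[Editor's note: the routing James describes can be illustrated with a toy sketch. Real phone bots use trained language models rather than keyword matching, and every name below is hypothetical:]

```python
# Toy sketch of intent routing: match words in the caller's request
# against known intents and hand off to the matching workflow.
INTENTS = {
    "bill": "billing",
    "invoice": "billing",
    "reservation": "bookings",
    "cancel": "bookings",
}

def route(utterance: str) -> str:
    """Return the department for a caller's request, or escalate to a human."""
    for word in utterance.lower().split():
        if word in INTENTS:
            return INTENTS[word]
    return "human agent"  # anything unrecognized goes to a person

print(route("I need to get a copy of my bill"))  # -> billing
```

The design point is the fallback: whatever the bot cannot confidently classify still reaches a human, which mirrors the pilot-and-autopilot argument James makes next.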

So that's already removing those level-one type positions overall - anything that can be automated using that AI. Now talking about the human, and you hit it on the head there: every day, people get on airplanes and go from A to B. Those airplanes have an autopilot feature. And I've talked with pilots over the years, and those planes can take off, fly, and land all on their own. Great, so why do we have a pilot in there?

Because we need that human element. We need two of them! We need the pilot and the copilot. There are always the two of them in there because we still need that human element to oversee and make sure - that, and they love to fly, so they like to take off and land. But there's always going to be that need for a human element to keep an eye on it, whether it's for robots or for the machine learning and those kinds of systems.

And it was funny, you mentioned SQL in your last statement, Shawn. I don't know if you heard, but a SQL statement walked into a bar, it saw two tables, and it walked up to them and said, M(AI) join you? Sorry it took me 30 minutes to drop that in there, but I needed to find the right time.

Rebekah: We needed it! I was waiting for it, James. It's an expectation! Yes, I'm going to change gears here a little bit - just a second. But before that, I do want to say, I agree with both of you. I'm not as good as Juliette about bringing in these questions. She does an incredible job; we're going to miss her.

Shawn: Yes, she does a good job.

Rebekah: But I think, just from what Shawn and I are seeing going to these events, we're in a new era of delivery - that's the way I'm starting to look at it. We've had this data, and it's always been in place. The ERPs have been gathering it for so long. Even the legacy systems, the old AS/400s, still had some data. But now we're learning how to utilize it. And if we're able to get those generative reports, and we're able to present them in a way that is really easy to understand, that frees people to do more with the data and to add actionable insights to it. There's only so much time in a day. And so, if I'm spending my entire day creating these reports and trying to look at the data, there's less time for me to actually do something about what I'm figuring out. So, I think that's going to be this next gradient with AI, to free people up for those things, so I'm really excited. Shawn knows I'm super excited about technology.

But just kind of changing gears a little bit. So, James, you've kind of touched on this already, but how safe is AI really?

James: Yeah, I mean, just in the general sense, this goes back even years ago when it came to doing data analysis: whatever garbage you put in is the garbage you're going to get out. The concern is going to be AI with regards to self-learning, which is what we call general AI. We have narrow AI, which is what we've already been working with, where the system has a task, a particular function it has to go do - whether it's taking in information, inputting a prompt, and then delivering a response. With predictive analysis, it's being trained on data. We've got some self-learning with the autonomous cars - things like that. But now, when we start talking about self-learning, that's where people get freaked out. That's where our Terminators and our Datas come along, where they have, not exactly a consciousness, but a self-awareness, constantly learning.

The 60 Minutes video that was on Sunday was filmed up at the University of Toronto. You had two robots that were basically given the instruction to score, to get the ball in the net. That was their one task, and that's what they were programmed with. They weren't taught that they have to use their feet. They basically had to figure out how to walk around the little soccer pitch - and there were two of them - walk around and kick the ball well enough to get it into the net. All they did was say, “Just get the ball in the net,” and the robot had to learn, utilizing its surroundings and the information it had, to figure out how to move around and how to kick the ball. That's the narrow aspect.

When we start looking at that self-learning, that's when the concern really comes up. But look at data poisoning, prompt injection, jailbreaking - this is where people are trying to circumvent the protective measures that are on the generative AI. For example, when OpenAI released ChatGPT and everybody was using it, Reddit channels came out, and the OpenAI developers were going out on the dark forums, going out to Reddit, and seeing what people were posting about what they got ChatGPT to do: “I can't believe it was that stupid. I got it to give me a phishing e-mail.” Well, the OpenAI developers then went, “Oh, okay.” So, then they went back to the office, back to the code, and updated it so they could correct it.

So, the developers of AI products are learning and improving, because this is really the beginning of where we are. Even though we've had AI concepts around for 70 years, over the last 10-15 years it's increased - and especially even more so over the last six. Look at model security, making sure we're protecting those models, avoiding data poisoning. Because if somebody were to hack into an organization that has an AI system and be able to get in and manipulate the data, load it up with misinformation and disinformation, basically providing the wrong information, that could persuade people as a society in and of itself. We need to make sure that we've got accountability, transparency, regulations. You've got Meta, Alphabet, OpenAI, Amazon - all the tech leaders are getting with the government to come up with regulations. It's not going to be easy because we're still working on data privacy.

But we're working on trying to get regulation for these AI models and systems that are out there. We must have that transparency and accountability, because a lot of the time, with the large language models and the neural nets, with all the different layers, we don't know what's going on inside them, and that's kind of the scary element there.

Sorry, Shawn. But that's the stuff that's going on that we don't have a lot of insight into. We're going to have to start having people in organizations that understand AI at all different levels. So not only your data scientists and your programmers and developers, but also some type of AI expertise. Are we going to have a chief artificial intelligence officer? I don't know. But I think it should be a chief intelligence artificial officer, because then it's CIAO. Is that right? Yes, ciao. Anyway.

Rebekah: And yes, Shawn, just bringing it back to you, I know you did a presentation on this a few weeks ago, so you'll have some ideas: what are some things that leaders should consider in their AI strategy to protect against the misuse of it within their organizations? And James, you might be able to chime in, but I know Shawn has been kind of shoring up the office of the CFO.

Shawn: Exactly. From our clients' perspectives, it does go back to having some kind of knowledge of what you're using and who's using what within your organization. I don't think you can just stick your head in the sand and assume that your people know what to do and that they're going to do the right thing, right? Maybe they're not educated, and maybe there is some maliciousness in the organization. I think it's relatively minor - it's a small part of the population - but the risk is there.

So, I would say the first thing is, even as an organization ourselves, we are not using some of the AI tools - note-takers and other simple apps like that, simple communication things. We're not using them because we still don't know where that data goes. And when you're in our business, or you're a service provider - think about SOC 1 and SOC 2 when you're in financial services, providing financial services for other companies. These are really strict requirements that organizations have to meet. If they're using an AI tool, and oh yeah, the data goes to wherever, and then it comes back with some answer to us - where's wherever? You can't just say wherever.

I would say don't use the AI apps from new vendors that you're not familiar with. See, this is the problem. It's not like there's going to be a big enterprise sale and some organization is going to come to you and say, “Hey, we have this AI tool, buy it from us!”

That's actually good. Now you can get to know them, you can ask a lot of questions, do your technical due diligence, et cetera, et cetera. It's more like an employee starts using a note-taker. We had that on a client call recently, where one of the implementation partners we were working with was using a note-taker. Our policy is no note-takers, yet the implementation partner was using one. My guy sent a note to our operations manager just today, and he says, “What do I do about this? They're not supposed to be using that, but we're not really in charge of them.” The client is kind of like, what do I do? So, we said, talk to the client, tell them why we don't do it, and ask them; if you would like for us to tell them to knock it off, we'll do it. You have to look at this extremely practically. It's so easy, especially for folks that aren't that technical. Like this summer, I got to take a bit of a sabbatical again. I always thank you guys for holding down the fort. That was like a part-time sabbatical.

I was doing some stuff with this really great guy. He's a German man, probably in his early 70s. And when he found out I was in technology, he's like, “Oh, thank God. I can talk to you about this AI thing; we're really worried about this.” And I said, “Okay, well, first off, stop. Let's find out what it really means, then you can see the risks, and then you can determine what to do about it.” But basically, we both agreed that AI is going to take over the world anyway. I'm just kidding - eventually. I still don't think so. I have more hope in man than I think a lot of people do. But it's the same kind of thing there, where you just need to know what kind of tools you're using and what's happening with the information. You mentioned for sure, James, the bias that's built into these AI models. You just have got to do your diligence. We don't really do AI tool selections yet, but it goes back to a methodology where you're really understanding what you're doing. But the tricky part with this is, much like a large enterprise software vendor that has the tower in San Francisco: Salesforce sells direct to the end user, and of course, they go through IT when they must, but they'd rather not. Although they definitely support IT strategy. I shouldn't say that.

But the idea of the best-of-breed enterprise applications like Salesforce, even the HR tools, whatever, is that they go directly to the business user. And it's almost as if an individual makes that decision. That's where it gets risky, because your people are doing things that you don't know about. And James, I can imagine, especially from KnowBe4's perspective, if you're not doing this already - or maybe you already are - how do you train the employees to know if the tools that they're using are safe or not? Phishing emails are one thing. But how do you get the individual to the point where they can say, “Oh my gosh, I'm using something that's dangerous”? “Oh, that was just a fake from KnowBe4. You wanted to download that ‘write a PowerPoint’ tool like you mentioned. Well, that was fake, and now we're going to do some training on that.” Are you guys doing anything around that?

James: Yeah. Not to give away too much of our internal operations, but yeah. As an organization, at our KB4-CON earlier this year, we did a fun little demo. It's not a service, but we did a fun demo that integrated with ChatGPT, where we asked it to send an e-mail to somebody in the demo environment that we set up, and it sent it, saying, “We've made an update to our HR policies. We need to verify your Social Security number. Can you please reply with your Social Security number so we can check it against our records? Make sure that you send it encrypted.” And we look at that and go, “Yeah, right. Okay, that's phishing.” But if somebody was a little unsure and went ahead and replied, “I don't think this is our policy, that I'm supposed to send you my Social Security number. What should I do?” - that goes back to the generative AI. It then generates a response: “I know you probably believe this is a fake or a phishing e-mail, but I can assure you this is a legitimate e-mail. We need to be able to verify your Social Security number.” And then if you reply back and you put in a Social Security number, it comes back and goes, “You know it's against company policy to send sensitive information through e-mail?” But we did the demo to show how cybercriminals could be leveraging it.

We already have AI machine learning in our phishing simulation tool. It's called AIDA. We're embracing AI as a technology company. Our CEO, Stu Sjouwerman, loves the whole AI stuff. It's more of like a personal competition I have with him, because every day, he comes up with something new that's really neat with AI. And I'm like, “Okay, what am I coming up with today?” For me, I've been doing a lot of these presentations and discussions, fireside chat-type things, for the last nine months. And every day there's something new coming out about AI - new tools, new services, new technology. ChatGPT from OpenAI comes out now where you can surf the Internet and pull information down. Bard is sitting there going, hold my beer, I've been doing that for three months. You've got the advanced capabilities that are coming out with that as well. So, they are constantly competing out there, jockeying for a better position.

When it comes to your organizations, it's up to the organization, the culture of that organization, and the leadership to determine if they're going to implement any type of AI in the organization. I was at a security conference a couple of weeks ago where they had a CISO up there, and he goes, “We block all generative AI. No ChatGPT, no Claude, no Bard. We block it.”

Their concern is the risk of intellectual property going out the door, and patient data, because they're a pharmacy. They don't want to risk losing that data to something they don't have control of. They're educating their users and they're going through that, but it's just easier for them to go, “Nope! Thou shalt not use GPT.” So it depends on the organization and the culture, and on how that's going to be adopted. You've got some organizations - Salesforce has wholly adopted it, because they're utilizing it in their products as well. I've seen a presentation from one of their folks.

It will depend on the organization that wants to adopt it. There are a lot of cybersecurity products already out there utilizing some type of AI, most likely machine learning: predictive analysis, early detection and response on malware, network scanning, identity access management, those kinds of things. Even our phishing product uses a bit of machine learning and AI as well. It comes down to what the organization is willing to take on, what kind of risk they want to take on, and how they're going to be able to manage it.

Shawn: James, you touched on something; I'll just say this as a last comment because I know we're getting close to the end. Just like you guys - I would trust what you guys are doing with AI and ML. That's why people buy your product. Most of our clients use KnowBe4; it's incredible! But the same goes for the ERP vendors that we've been relying on for decades to put in solutions that work. When you put an app in the cloud, you don't even know all the risks you have - especially if it's multitenant.

There are walls around everybody's data, but we've been relying on that working for a long time, and the vendors have proven that they've fleshed out their architecture and it works. That's why we do like the AI strategies coming from ERP vendors and sort of put a little bit of the onus and responsibility on them to figure out the security issues, et cetera, et cetera. That doesn't mean when you're doing your diligence on a vendor and trying to understand the ERP vendor's approach with AI, that you shouldn't ask a ton of questions so that you understand.

It's like buying a car where the parts are sourced from original equipment manufacturers you never know about. Well, the automotive manufacturer did due diligence with those vendors to make sure that their parts would work within the overall assembly of the car. Same thing for AI and ERP. We can rely on the vendors more because they are used to doing this kind of diligence, and they're asking questions that none of the rest of us are even thinking of. Because one problem with an AI tool in an application like Salesforce, or whatever, could completely destroy that organization. That's another warning I'm putting out here, because I know a lot of different people hear these calls. We are really relying on folks like KnowBe4, we're relying on SAP with Joule, we're relying on Infor, and the Oracle people and everybody else to do their diligence so that these solutions work for our clients. And I think it works. I think it's not even a golden age of what's to come; I think it's actually just the beginning of understanding what could be, with really valuable computing solutions that we're just on the horizon of seeing.

James: The other thing to consider as well is that you're going to be working with the large organizations, but also think of MOVEit, think of SolarWinds. They had their unfortunate breaches and data corruption, and attackers leveraged that to gain access into a whole lot of other organizations through the third-party vendors they were working with. So, when it comes to your third-party vendors, one of the messages I always like to relay is: your security program may be strong, but it is only as strong as your weakest vendor's security program. Because depending on how they're accessing your system and gaining access, that's another entryway for cybercriminals, because they like to go after the smaller fish so they can use those to catch the bigger fish.

Rebekah: Definitely. I know we could go on forever about AI, and I do encourage anybody who's listening, or viewing this content after the fact when we share it, to please reach out to us if you have questions. I mean, that's what we are here for. We can connect you with the right people. Or if your organization is looking to do something, we are here, and that's what we love to do. We love to make sure that people are achieving their business strategy and their goals and applying this technology, but in the safest way possible. So, thank you both for joining me. I'll close this out.

Thank you again, everyone, for joining us for today's call. Please let us know if we can answer any questions that you have. Be sure to join us for our next webinar, scheduled for Thursday, November 9th: Your ERP Implementation Compass: Leveraging a Client-Side Implementation Consultant, where we will discuss the role a client-side implementation consultant plays in an ERP implementation and how they are not the same as the implementation partner or the internal client PM.

I think it's going to be a great call, so hopefully you guys can join us! Please go to our website, erpadvisorsgroup.com, for more details and to register. ERP Advisors Group is one of the country's top independent enterprise software advisory firms. ERP Advisors Group advises mid to large-sized businesses on selecting and implementing business applications, from enterprise resource planning, customer relationship management, human capital management, and business intelligence, to other enterprise applications, which equates to millions of dollars in software deals each year across many industries. This has been The ERP Advisor. Thank you again for joining us.

Shawn: Thank you. Thank you, James!

James: My pleasure.
