Thank You! Enjoy the Webinar

Your complimentary copy of “Artificial Intelligence: You Keep Saying That Word, I Do Not Think It Means What You Think It Means” is ready below as well.

Full Transcript:

Speaker 1 (01:03):

Hi everybody. I’m Bob Bowman, editor-in-chief of SupplyChainBrain. Welcome to the special presentation “Artificial Intelligence: You Keep Saying That Word, I Do Not Think It Means What You Think It Means,” presented by Gains. Quick reminder, there will be an audience question and answer session at the end of this presentation. Audience members are encouraged to submit their questions at any time during the presentation by clicking on the Q&A icon at the bottom of your screen. So clearly artificial intelligence is one of the most talked about topics in supply chain management today. Yet despite the excitement, there’s widespread confusion about what it actually is and how it differs from basic automation or rule-based systems. But true AI, especially agentic AI, is much more than just automation. So today we are going to challenge the prevailing myths about AI, break down the difference between automated workflows and agentic AI, and explore how to make informed decisions when adopting AI technologies.

(02:13):

You’ll discover why most so-called AI solutions aren’t truly intelligent and how to discern between buzzwords and genuine innovation. With that, I want to introduce our speakers for today. Amber Salley is the Vice President of Industry Solutions at Gains, where she brings more than 15 years of experience in supply chain management, with a background that includes 12 years as a senior research director at Gartner. Amber helps organizations enhance resilience, agility, and decision-making in today’s volatile environment. At Gains, she’s an advocate for antifragility in supply chains, encouraging organizations to embrace adaptability and grow stronger in the face of uncertainty. Bill Benton is the co-founder of Gains and a driving force behind its evolution into one of the industry’s most trusted supply chain optimization platforms. With over two decades of leadership at the helm, Bill helped shape not just Gains, but the way that modern businesses think about resiliency, decision-making, and long-term value creation. He’s the architect of Gains’ customer-first philosophy, a champion of measurable outcomes over empty promises, and a believer that technology should empower people, not replace them. So welcome everyone. With that, I’d like to turn it over to Amber for the presentation. Amber Salley, take it away.

(03:45):

Amber, you are muted. You need to unmute. There we

Speaker 2 (03:50):

Go. Thank you. Thank you, Bob, for the great introduction, and welcome everyone to the webinar. Bill and I are very excited to be here. So a little bit about what we’re going to share today. As Bob mentioned, this is all about AI and demystifying AI for you, and, from the line from the 1980s classic movie The Princess Bride: “You keep saying that word. I do not think it means what you think it means.” During our time together we’re going to tell you what AI means and what AI isn’t. Then Bill’s going to walk you through the evolution of AI within supply chain, where it can really drive value for you, and introduce the newest AI buzzword, agentic AI. We’ll then lead you to understand the Gains approach to AI, and then give you some takeaways that you can start to leverage in your environment to really start to integrate AI into how you work and how to make better decisions within your supply chain.

(04:54):

And then at the end of the webinar, we’ll do some panel questions and take some questions from the audience. So if we think about why we’re talking about this right now: AI is being talked about everywhere in the supply chain market, but in reality there’s not as much there there as the market is making it seem to be. Now, there’s a very sobering statistic that 70% of AI pilots fail, and these findings cross many different professional services firms’ surveys and industry group surveys. I mean, the good thing is it isn’t just supply chain where AI projects fail, it’s across the enterprise. But when we think about how supply chain leaders are feeling about AI, many of them, a very large percentage of them, look to AI as a very transformational technology that can provide tremendous value to their supply chains and supply chain outcomes. But a much, much smaller percentage have actually been able to operationalize AI and really integrate it into their way of working in their supply chain.

(06:12):

So we see a big gap between what supply chain leaders say they want to do with the technology and what they’re actually able to do. And we think it’s probably because companies don’t really understand what AI is and how to effectively use it. So if we’re thinking of what AI is, we’re going to start with the basics. AI isn’t magic, it’s math, it’s data, it’s compute, and that’s it. You feed it enough good data, you apply the right algorithms, you use processing power, and it starts to surface patterns and predictions. But just having data isn’t enough. True intelligence, whether it’s human intelligence or artificial intelligence, is about understanding and acting, not just doing analysis for analysis’ sake. So you’re not having AI just for the sake of having AI. It’s about making sense of the world and then doing something useful with the insight.

(07:15):

Now we need to separate AI from some of the terms that it’s often confused with. So think of algorithms and automation. Algorithms, you can think of as a set of instructions, a set of step-by-step approaches to get to an outcome. Then automation uses those algorithms to execute repeatable tasks. But AI takes it a step further: it learns from that data, it adapts to new inputs, and it makes decisions in real-world environments. So when you hear the market talk about claims of something being AI-powered, it’s worth asking, is it really AI or is it just a bunch of scripts and rules? Think of it this way: if it follows a fixed logic and doesn’t adapt, it’s probably not AI. And even within AI, not all techniques are equal. There’s machine learning, which is finding patterns and making predictions, and machine learning has been around for decades.
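The fixed-logic-versus-adaptive distinction Amber describes can be sketched in a few lines of Python. This is a hedged, illustrative toy, not anything from the Gains platform; every name, threshold, and number here is hypothetical.

```python
# Illustrative only: a fixed rule (automation) vs. a component whose
# behavior adapts from observed data. All names/values are hypothetical.

def fixed_rule_reorder(stock_level):
    """Automation: the same hard-coded logic every time -- not AI."""
    return stock_level < 100  # the threshold never changes

class AdaptiveReorder:
    """A toy 'learning' component: the threshold adapts to observed demand."""
    def __init__(self, threshold=100.0):
        self.threshold = threshold

    def observe(self, daily_demand, lead_time_days, safety_factor=1.5):
        # Re-estimate the reorder point from new data instead of fixed logic.
        self.threshold = daily_demand * lead_time_days * safety_factor

    def should_reorder(self, stock_level):
        return stock_level < self.threshold

agent = AdaptiveReorder()
agent.observe(daily_demand=20, lead_time_days=5)  # threshold becomes 150.0
print(fixed_rule_reorder(120), agent.should_reorder(120))  # False True
```

The fixed rule says 120 units is plenty; after seeing demand data, the adaptive component concludes the opposite. That gap is the litmus-test question: does the logic change when the data does?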

(08:21):

Then you have some of the newer AI techniques that have appeared: generative AI, which is creating new content from learned patterns. And we see generative AI applied a lot in creating text and creating graphics. And that’s because it’s using the patterns of language to create text, and it’s using the rules of physics to be able to understand shading and perspective. But now we’re starting to see agentic AI, and these are the systems that can reason, they can take action, and they can call on other tools within your technology environment to have those tools come together to solve a goal. So choosing the right approach, machine learning, generative AI, agentic AI, depends entirely on the problem you’re actually trying to solve. So here’s a good litmus test: if a human, given enough time and data, could do the task, then AI can probably help. But if it needs judgment, creativity, or leadership, that’s where we still need to have humans support the decision-making process. Now I’m going to turn it over to Bill to start talking through the evolution of AI in supply chain.

Speaker 3 (09:40):

Thanks a lot, Amber. So going back, Amber mentioned that machine learning and other methods, while not new, actually are not that well deployed. So I wanted to build on a theme that she mentioned, which is don’t overkill. If you’re new on the AI journey, try to find things that add value and can do so relatively quickly. Going back historically, there are things like rules-based AI; you might’ve heard the term robotic process automation, or RPA. These are things where there’s a prescribed path and you just want to be able to automate clicks, if you will, and/or A/B-type decisions where there’s a clearly described path. Using a litmus test here: if you can describe it on a flow diagram on one page and it’s in legible font, you probably don’t need agentic AI. You can probably use something more historical and rules-based. That sort of evolved into some predictive ML.

(10:51):

And there are very powerful things there, like predicting lead time. I think lead time in supply chain is one of the most overlooked areas of opportunity, for example. And a machine learning methodology can be very helpful if you engineer the right input or feature data and feedback. As you move historically into gen AI, what we’re looking at here is now the ability to articulate text. So how do I go and interrogate a system and say, show me a scenario where my demand is 30% higher, how does that affect my budget in purchasing? That’s very helpful as it relates to people interacting with systems in a more productive fashion. And then ultimately there’s agentic AI, where the path is not prescribed upfront. This is where you couldn’t describe every potential route in a one-page flow diagram. And that’s evolved now into the reasoning models as well.
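The lead-time example Bill gives can be sketched with a deliberately simple stand-in model: a grouped-average prediction keyed on engineered features (supplier and ship mode). This is not the Gains methodology or a real ML pipeline, just an illustration of feature-driven lead-time prediction; all data is made up.

```python
# Hedged sketch of feature-based lead-time prediction. The "model" is a
# grouped average, standing in for a real ML method. Data is invented.
from collections import defaultdict
from statistics import mean

history = [  # (supplier, ship_mode, observed_lead_time_days)
    ("S1", "ocean", 32), ("S1", "ocean", 35), ("S1", "air", 7),
    ("S2", "ocean", 28), ("S2", "air", 6), ("S2", "air", 8),
]

# "Feature engineering": the (supplier, mode) pair is the input feature.
groups = defaultdict(list)
for supplier, mode, days in history:
    groups[(supplier, mode)].append(days)

def predict_lead_time(supplier, mode):
    obs = groups.get((supplier, mode))
    if obs:
        return mean(obs)
    # Fall back to the overall average when no matching history exists.
    return mean(d for _, _, d in history)

print(predict_lead_time("S1", "ocean"))  # 33.5
print(predict_lead_time("S2", "air"))    # 7
```

A real deployment would add feedback, retraining on each observed receipt, which is the "feedback" ingredient Bill mentions.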

(11:54):

So not just getting to a feasible solution, but maybe having competition between agents, where people are negotiating, say, a ship date on a purchase order. You could have one agent, yours, talking to, say, a supplier’s agent to come to a reasonable ship date for you to expedite, for example. When you think about it as consumers, right, we relate to this with chatbots. A practical example: my Oura ring battery life had declined from seven days to less than one. Within two minutes I was on a chatbot on their site and they were sending me a replacement ring, right? That’s great, and that’s a really effective use. I might’ve argued, though, to Amber’s point, maybe they missed an upsell to the next generation of ring. So how do you not just automate, but augment, just to build on that theme. And then lastly, talking about agentic AI, this kind of goal-oriented decisioning.

(12:58):

And this gets into a theme we’d like to reinforce throughout this: make sure that along this journey there’s always a results-centric approach. A result could be, I want to optimize a decision like approving, for example, work orders or purchase orders, or executing on changes in a supply plan, or bringing exogenous data into, say, demand prediction. Do I want to look at these indexes or prices or inflationary numbers, et cetera? So agentic AI allows sort of less structured data, et cetera, and we’re going to talk about that next as we discuss how to discern which AI is quality and which isn’t. So not all AI is created equal. There’s a lot of AI washing. You might’ve heard of greenwashing, as it related to everybody trying to claim whatever they did was environmentally sustainable. We’re seeing the same thing here.

(14:02):

So if they’re saying AI is in our DNA and they’re vague, chances are that they don’t really have a lot of results-proven use cases. That’s where you’ll see marketing buzz versus real capability. Also use common sense: if they’re claiming that they do these things, well, you’d like to talk to two or three people where that’s happened, right? Don’t just believe the slideware. A lot of people have what we call PowerPoint parity; they can claim things on a slide, but can they actually have you talk to a practitioner who’s achieved real results? So that leads into some of the red flags: no transparency. I’m going to look at transparency from two perspectives. One is the one we just mentioned: are they transparent about their actual success stories? But also transparency within the model: can you explain why it’s making certain decisions? And this explainability can be very important as it relates to adoption and trust in these models.

(15:06):

So if you have an agent that’s actually spending the organization’s money without a human intervening in each of those steps, you want to understand why. Why is it making this decision? The next point here is bespoke machine learning, or ML, that’s not reusable. There are a lot of science experiments posing as scalable, deployable, and secure solutions, and some of the issues around these are that they lack one or more of those three key ingredients. So as we talk about here, this last point is you want to really ensure that what you’re developing or what you’re adopting is sustainable across teams and time. And I would say secure, if I were to add another S in here: sustainable and secure, across teams and time. So next we’re going to segue into a discussion about where things fit in supply chain. What are some good use cases? First is unstructured data.

(16:09):

An example here would be, how do I do, say, attribute matching of new products against old products if I want to do product launches? Or if you have two different entities talking to each other, how do I know whether these two item or part numbers are actually the same thing? If you could read text descriptions into an AI methodology, structure that text data, and create likelihoods of these two things sharing attributes or even being the same thing, that can be very helpful. Another element is where there’s lots of variability, and that can be variability of process, like Amber was referring to, right? If it’s a small process that’s easily automated, you don’t want to overkill with agentic AI. But if the process itself has many contingencies and many routes at each contingency, and there’s a lot of variability in the process, that’s a great area.
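The part-number matching Bill describes can be sketched with the standard library's string-similarity tools as a stand-in for a real text-embedding or ML approach. This is illustrative only; the descriptions and the function name are hypothetical, and production matching would use richer attribute extraction.

```python
# Hedged sketch: score the likelihood that two free-text part
# descriptions refer to the same item. SequenceMatcher is a stdlib
# stand-in for a real embedding/ML similarity model.
from difflib import SequenceMatcher

def match_likelihood(desc_a, desc_b):
    """Return a 0.0-1.0 similarity score for two item descriptions."""
    a, b = desc_a.lower().strip(), desc_b.lower().strip()
    return SequenceMatcher(None, a, b).ratio()

same_item = match_likelihood(
    "Bolt M8x40 stainless steel hex head",
    "Hex head bolt, M8 x 40, stainless",
)
different_item = match_likelihood(
    "Bolt M8x40 stainless steel hex head",
    "USB-C cable 2m black",
)
print(round(same_item, 2), round(different_item, 2))
```

The point is the shape of the output: a likelihood you can threshold or hand to a human reviewer, rather than a brittle exact-match on part numbers.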

(17:08):

But then stepping back and saying, let’s look at variability as the error around a given prediction. My background’s in applied statistics, so when we ask where’s a good area: well, if your, say, demand planning is very tight, but you have plus or minus 40% error around your lead time predictions, look for those areas where there’s lots of variability. That’s a key way of finding high-impact, probably quick wins. Secondly, looking at planning augmentation and exception handling. This would be areas where, say, people are having to manually override system suggestions, but they’re doing it in a way where they can describe what data they’re thinking about, what emails they’re reading. Speaking of unstructured data, email is a great source; it’s very hard to feed that into Excel, a lot easier to feed it into an AI model. And then lastly, decision automation.
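Bill's variability screen, ranking areas by how wide the error around existing predictions is, can be sketched directly. The error figures below are invented for illustration and echo his tight-demand-planning versus plus-or-minus-40%-lead-time example.

```python
# Hedged sketch: rank supply chain areas by the spread of their
# prediction errors to find high-impact targets. Numbers are made up.
from statistics import pstdev

errors_pct = {  # percentage errors of existing predictions, per area
    "demand_forecast": [-3, 2, 4, -1, 3],
    "lead_time":       [-40, 25, 38, -30, 41],
}

def spread(errs):
    """Population standard deviation of the errors: wider = more opportunity."""
    return pstdev(errs)

ranked = sorted(errors_pct, key=lambda area: spread(errors_pct[area]),
                reverse=True)
print(ranked[0])  # lead_time: far wider error, so target it first
```

Demand forecasting here is already tight, so even a perfect model buys little; the lead-time predictions, with errors swinging ±40%, are where a quick win is most likely.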

(18:08):

So how can we focus on, say, the top 5% of decisions, as opposed to what’s typically happening now, which would be the top 50 or 80% of decisions? That’s another great area. Bad fits are where you’re chasing shiny objects, right? Trying to use agentic AI for something that’s a five-step process that could be robotic process automation. Other bad fits are where there’s no data governance. We’ve had two decision scientists curate a bunch of data, get rid of outliers, and there’s no process in place for the next round of training of the model; that’s not likely to be sustainable across teams and time. No clear success metrics: we just want to make something better. That’s not quite enough. What are the results, whether in working capital, reduced expediting, increased product availability, or reduced human effort to get to an end result or make a decision? Measure those things in dollars, fulfillment, et cetera.

(19:14):

And then lastly, AI is not equal to optimization. So if you’re trying to figure out how to optimally build a container, or hit a free-freight allowance by hitting a certain spend with a supplier, this is not generally an AI problem, this is an optimization problem, and understanding which is which can also be very important. Next, we’re going to talk about understanding the difference between an agent, a tool, and a task. So when we talk about agentic AI, just focus on that last bullet here. There’s forethought, which is how do we predict where we’re likely going. There’s reflectiveness, which generally is a concept like, where there’s holdout data, how well did we do, and is the model adapting for continuous improvement? And then there’s an execution element: can it take action? So analytics are great and visibility is great, but it’s insufficient, right?

(20:15):

Visibility does not equal action. It might help support it a bit, but it also creates a lot of noise as well. So agentic AI also should have that ability to act. We’ve talked about some of the early use cases. And then the other thing is, it’s not quite ready for really complex elements where there’s a lot of human judgment involved, like network design, dealing with things like what happens if tariffs increase: how do I decide which other countries I need to source from? There’s insufficient structured data for this. So while agentic AI can help, there’s still going to be a lot of interaction. So let’s talk a little bit, on the right side here, about agents, tasks, and tools. An agent, what is that? Generally that’s the decision-making entity, right? So am I approving this line in a PO, for example, yes or no?

(21:19):

And it can make that decision. Then a task is a defined process or workflow. That could be many things: getting rid of anomalies in demand data, or again, it could be approving a purchase order that fills a container precisely and optimally. And a tool is a technology or resource. So that tool could be a machine learning method that does a prediction around demand or supply or lead time, for example. Or it can be a resource, like a SQL query tool that can go into multiple data sets across the enterprise and pull the information needed, say a landed cost value or shipment cost within a lane, for example. So agents apply tools to perform tasks. I might actually have reversed the order of the task and the tool from a sort of conceptual flow, right? So what’s an analogy here? Think of a chef preparing a meal.

(22:19):

The chef is the agent, the task is the recipe set, and the tools are your kitchen appliances: mixers, frying pans, et cetera. So this is, I think, a very interesting summary, and this is from our friends at Tribe AI; I want to attribute how they’ve distinctly laid this out. Now, if you don’t need all of this, going back again to how to be discerning, this may not be a problem that rises to agentic AI. Maybe you just need the tool, because you just want to do a spot solution that’s high-impact and easily adopted. So thanks a lot. I’m going to segue back to Amber now for the remainder. Amber, appreciate it.
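The agent/task/tool decomposition Bill walks through can be sketched as a toy in code. None of these names come from any real agent framework or from the Gains platform; the lead times and approval rule are invented for illustration.

```python
# Hedged, toy sketch of the agent/task/tool split described above.
# All names, data, and logic are hypothetical.

def lead_time_tool(item):
    """Tool: a resource the agent can call (here, a canned prediction)."""
    return {"widget": 5, "gear": 12}.get(item, 10)

def approve_po_line_task(item, needed_by_days, predict):
    """Task: a defined workflow -- approve a PO line if it arrives in time."""
    return predict(item) <= needed_by_days

class PurchasingAgent:
    """Agent: the decision-making entity that applies tools to tasks."""
    def decide(self, item, needed_by_days):
        return approve_po_line_task(item, needed_by_days, lead_time_tool)

agent = PurchasingAgent()
print(agent.decide("widget", 7), agent.decide("gear", 7))  # True False
```

In the chef analogy: `PurchasingAgent` is the chef, `approve_po_line_task` is the recipe, and `lead_time_tool` is the frying pan. A real agentic system would choose which tools and tasks to invoke dynamically rather than having them hard-wired as here.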

Speaker 2 (23:01):

Yeah, great. Thanks, Bill. So with what we’ve been talking about for the past 20 minutes or so, we hope that we’ve given you a good understanding of what AI is, what it isn’t, and how it can be used. Because if we think back to that stat that I shared earlier about 70% of AI pilots failing, we don’t want you to fail. We do see AI as a transformational tool if you use it in the way it’s intended to be used. And as we at Gains have been working with our clients to deploy AI, we have observed common things that have led to failure in scaling and implementing AI.

(23:50):

The first things are about how the market is just talking about AI and people falling into the trap of the market hype. Bill was talking about AI agents, agentic AI, and what we’re starting to see is there’s confusion, there’s conflating of what these technologies are, and a lot of supply chain leaders are trying to go after these new shiny objects without really knowing what these technologies do. And then you add to that, the technology vendors themselves might actually not have strong capabilities for them right now. So you start to build an initiative to bring in some of the technology that you’ve been hearing about, but you really can’t scale it because you’re trying to apply it in the wrong use case. We have also started to see more in the market of this notion that AI is the answer, when it’s not necessarily the answer.

(24:51):

You heard Bill mention other technology approaches: Bill mentioned RPA, Bill mentioned optimization, and those tools in many instances are a good-enough tool to solve the problem that you’re trying to solve, or to help with improving a specific decision that you’re trying to improve. So you really need to be aware of what the players in the market are saying about AI and how it could solve a certain class of problems, when maybe it’s not the best approach to solve that class of problems. What we also observe is a lack of a structured business case for bringing AI into the environment. At Gains, we’ll talk to companies evaluating technologies who will just make a blanket statement that says, I need a technology that has generative AI. And then when you ask them, well, why do you need generative AI? You don’t hear a response back

(25:56):

that demonstrates awareness of what something like gen AI could bring to the environment. So if there’s a business case developed, you at least have an outcome that you’re trying to move towards. You can start to identify how you would measure ROI, and then that can help you identify what type of technology, AI or something else, might be the best approach to support that business case. And this also then goes into the idea that sometimes the problem that is trying to be solved is an ambiguous problem. The problem just might be something like, I want to use AI to increase EBITDA. Okay, well, there are a lot of components to that, and there are probably lots of decisions and tasks beneath that that can be supported by AI, but you need to really identify the discrete problem that you think AI can help you solve.

(27:01):

And this also ties into the idea of seeing companies trying to automate the wrong thing. As an example, I was talking to someone I know who has an AI consulting business, and he had a client that said they wanted to apply AI to their customer service activities. So he went in and observed what was going on in customer service, and what he found was that the problem wasn’t that there were repeatable tasks that could be improved with AI; it was that they had very high employee turnover, because employees were actually better at making decisions than an AI would be. So the response was, pay employees more so that you don’t have as high turnover. This isn’t an AI problem, this is a human resources problem. So it’s really understanding, what is the problem that you’re trying to automate? Well, at Gains, we have created a couple of frameworks that companies can use to help identify what they should be automating and what things to consider as you build your business case.

(28:08):

First, there’s this concept of supply chain awareness, which is a three-pillar approach that we espouse: thinking of know thyself, so understanding the strengths of your supply chain and where you’re leaking value; know thy network, where you build transparency beyond your first-tier partners; and know thy why, where you gain clarity on decision-making impacts. We at Gains also espouse a framework that we call the seven principles of decision engineering. And those principles speak to a few things, but there’s one in particular that speaks to using the right tool to solve a problem. And again, the tool might not be AI; it could be mathematics, it could be optimization or some other type of approach. But it’s, again, understanding what decision you’re trying to improve, which decisions have high impact, and whether they are a good candidate for applying AI. And lastly, thinking of what these good decisions are.

(29:09):

Well, we see them as being repeatable decisions. So, decisions that might be made multiple times a day, where one miss is small: if it’s a lead time, and the lead time range is between four and 10 days, and you approximate what that lead time is but you’re off by a day or two, well, that one decision might not have a huge impact. But if you’re making that same decision multiple times a day, it can end up being thousands of times per year, and that can ultimately lead to very high impact over time. So, the Gains approach to AI: in addition to what Bill and I have been talking through about thinking about your decision and finding the right tool, at Gains we’ve put a big effort towards rethinking how we even architect the technology, to allow for AI to be more easily integrated into a technology environment.
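Amber's repeatable-decision arithmetic can be sketched as a back-of-envelope calculation. Every figure below, decision volume, miss size, and per-day cost, is a made-up illustration, not a Gains benchmark.

```python
# Hedged back-of-envelope sketch: a one-to-two-day lead-time miss on one
# decision is small, but repeated thousands of times a year it compounds.
# All figures are hypothetical illustrations.
decisions_per_day = 50
working_days_per_year = 250
avg_days_off_per_decision = 1.5  # typical miss within a 4-10 day range
cost_per_day_of_error = 40.0     # hypothetical dollars per day of miss

annual_decisions = decisions_per_day * working_days_per_year
annual_cost = annual_decisions * avg_days_off_per_decision * cost_per_day_of_error
print(annual_decisions, annual_cost)  # 12500 750000.0
```

One decision costs about $60 of error here; twelve and a half thousand of them cost three-quarters of a million dollars a year, which is why repeatable decisions are the attractive AI targets.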

(30:10):

Because we look at AI as a tool to support your overall strategy. So we use a composable approach where companies can plug AI services as needed into any technology stack. This allows you to pinpoint a specific need that you might have that could be solved with AI. We have out-of-the-box models to support that need. Bill mentioned lead time prediction: a company can take our service for machine-learning-based lead time prediction, plug it into their environment, and see a remarkable improvement in the accuracy of the lead time predictions, which can lead to millions of dollars in cost savings. And then there’s being able to improve these decisions and evaluate your supply chain in real time, so you can continue to see which decisions have the biggest impact on achieving your goals, and then plug in an AI approach that can help to improve the outcome of that decision.

(31:28):

So as we think about what Bill and I have been sharing with you in the audience, we want to again reinforce this idea that AI is a tool. It’s not magic. It’s not like, if your senior leadership says we’re going to be AI-first, and you start to invest in all of these AI tools, that you magically have this supply chain that can do all these amazing things. No, it’s not like that. There are other considerations that you have to account for in order for AI to add the big impact that it really can add. So some of those things that you have to consider, things that we at Gains think you should start to think about and start to do next, are really understanding where AI adds value: identifying these decisions with a lot of impact, these repeatable decisions, and seeing where AI adds the most value.

(32:26):

And again, sometimes the better value is achieved using optimization techniques or maybe some other mathematical technique, but finding where AI adds value. Then, getting your teams ready for AI. We always hear the market talking about this human-machine interaction, and that’s real. There are some things that AI will not be able to do. AI doesn’t have as much judgment as humans do. It’s very hard for it to manage around some type of event that it’s never seen before, because it looks at patterns over time. So there’s still going to be a need for humans. Their roles will change, and they have to learn how to incorporate AI into how they work. But start laying the groundwork now for making your team see that AI is not scary, but that it’s going to be a tool to help them be more productive over time.

(33:27):

Then there’s the data quality piece and the data volume piece. I don’t think I need to go too deep into that one, because I know the market talks about this all the time. And lastly, thinking of building these systems that adapt, and not just automate. When I think about the systems adapting, adapting to real-world environments, it’s moving from a traditional approach with a monolithic environment that doesn’t allow for a lot of flexibility to adapt to changing business conditions, and shifting towards a composable approach where you can plug in your components as necessary to gain more value. So I thank you for your attention. I’m going to pass things over to Bob to start the panel discussion.

Speaker 1 (34:15):

Thank you so much, Amber, and thank you Bill as well. That was one of the clearest and most realistic descriptions of AI, and especially agentic AI, I have ever heard. That was fantastic. Let’s move, as Amber said, into the panel discussion, where I have the privilege of asking a few questions before we get into audience questions. And audience, do remember there will be an audience question and answer session at the end, so continue to submit your questions by clicking on that Q&A button at the bottom of the screen while we get into the panel part. So Amber, to this idea of further demystifying AI: you’ve seen a lot of vendor claims across the market. Bill talked to us about these so-called red flags. What, in your opinion, are the biggest red flags that supply chain leaders should watch out for when someone says, we use AI?

Speaker 2 (35:02):

Yeah, so a big red flag is discerning whether or not they are just repackaging what they’ve already had, maybe scripts or rule-based approaches, and wrapping that in an AI wrapper, taking advantage of the market’s lack of awareness of what AI really is. So we need to educate ourselves on the true capabilities of AI.

Speaker 1 (35:37):

Bill, we talk a lot about the difference here between automation and true AI. In your view, why has this confusion persisted for so long in the supply chain space?

Speaker 3 (35:50):

Well, part of it is marketing hype, right? So I think the ambiguity around it is engineered into the process, so that things that aren’t truly AI can be repackaged and claimed as such. When I mentioned PowerPoint parity, there are a lot of claims out there that are exaggerated. So I think it’s been convenient for people whose marketing is ahead of their ability to deliver.

Speaker 1 (36:21):

Okay. I do want to drill a little bit more into this idea of so-called real-world value versus buzzword promises, because there’s so much out there right now. And Bill, you’ve always emphasized measurable ROI over flash. How do you help customers separate real AI that can drive impact from features that just sound intelligent?

Speaker 3 (36:44):

Sure. Going back to something mentioned earlier: think of the agent taking action, using tools to perform multiple tasks, right? So if you’re hitting all three of those, and you’re doing it in a dynamic fashion, meaning what tasks are performed and which tools are called is something that the model is deciding, and the model can execute at the back end of that, that’s a full agentic AI solution. If it’s lacking one or more of those, it’s something less. And that doesn’t mean it’s bad per se, because you don’t want to overkill the problem. Agentic AI has a cost to it. There’s an incremental cost; these data centers consume a lot of energy, for example, and that’s not free. So you don’t want to smash a mosquito with a sledgehammer, but in some cases you need a sledgehammer because you’re cracking concrete. So matching the tools, the tasks, and the agents to the scale of the problem is important. But you do want to achieve this, because you want to build, as Amber mentioned, training, right? The best way to train is to do, right? So you probably want to get one of these running sooner than later so you understand where they can apply, and that would be ticking off those boxes. Bob,

Speaker 1 (38:06):

Amber, you mentioned on an earlier slide that many AI pilots fail. What do you think are the most common traps that companies fall into, and how can they avoid them?

Speaker 2 (38:17):

Yeah, so a few common traps are some of the things that were messaged earlier: not solving the right problem, not using the right AI technique. But another thing that plagues projects, to piggyback a little bit on what Bill was stating, is the cost of AI. AI is really expensive; the compute cost for AI is very, very high. So as an example, think about forecasting and the use of, say, generative AI to improve forecast outcomes. Data has shown that there’s maybe marginal improvement in creating a forecast using generative AI, but the amount of processing power and the cost associated with that actually negates any of the value gained, from a cost perspective. So there’s this notion that these projects start and end up being really expensive, and then the outcomes coming out of these pilots are not high enough or significant enough to justify the cost in compute for AI. And the last thing I’d say about why these projects fail is there’s no ROI or business case attached to them. So they sometimes end up being long, drawn-out projects with no end in sight. It’s as if you’re trying to chase your tail, but you’re solving the wrong problem from the beginning. So you’re just going around in circles and circles and circles, and you say, oh, the project failed, when maybe you should have just cut bait and looked for a new problem to solve.

Speaker 1 (40:13):

I love your reference earlier, the customer service example, where the particular solution might actually be hiring more humans, if you're not going after the right...

Speaker 2 (40:21):

No, paying them more. Paying them more, I’m sorry,

Speaker 1 (40:24):

Paying them more. What a concept there, huh? Right. Okay. What is next with agentic AI? Bill, agentic AI is a newer concept to many supply chain leaders; we've only been hearing about it for the past year, or really just a few months in a public sense. Help us understand how it differs from the gen AI that people might already be experimenting with.

Speaker 3 (40:46):

So gen AI essentially allows text- or voice-based interaction with a solution, where it can generate responses to questions, and that's very helpful. It can synthesize information, right? We use it internally ourselves to interrogate our legacy code base. When we talked about composable solutions, we had to move from an end-to-end planning solution to a series of composable services, so if somebody just wants to, say, optimize how to build a container, or maximize lead time prediction accuracy, they can do that alone. We use gen AI tools ourselves to refactor software code; it's very good at this stuff. However, agentic AI is where you're giving AI agency to make decisions. And the next step, if I were to choose one, would be where you have multiple AI agents interacting with each other. I alluded to this briefly when I discussed your agent talking to your supplier's agent to arrive at an adjusted delivery date. So if you've decided that you have a shortage you didn't expect, because demand exceeded your forecast, and you'd like to bring something in sooner, right now there are probably a dozen emails between you and somebody at your supplier, or internally with your capacity scheduling team, to try to negotiate a viable accelerated date. The next step will be agents doing that, coming to one or two proposed solutions and providing that back for final approval. So instead of an eight-step process, it's a one-step process, because they've already negotiated most of it.
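[Editor's note: a toy sketch of the agent-to-agent negotiation Bill describes, where a buyer's agent and a supplier's agent exchange proposals and either converge on an expedited date for human approval or escalate. The agent rules and all parameters are hypothetical, invented for illustration.]

```python
# Two agents negotiating an accelerated delivery date (days from today).
# Real agents would reason over contracts, capacity, and cost; these
# placeholder rules just accept or counter based on simple limits.

def buyer_agent(needed_day, proposal):
    """Accept any date at or before the day the buyer needs the stock."""
    if proposal <= needed_day:
        return ("accept", proposal)
    return ("counter", needed_day)

def supplier_agent(earliest_day, proposal):
    """Accept any date at or after the supplier's earliest feasible day."""
    if proposal >= earliest_day:
        return ("accept", proposal)
    return ("counter", earliest_day)

def negotiate(needed_day, earliest_day, opening_offer, max_rounds=5):
    proposal = opening_offer
    for _ in range(max_rounds):
        verdict, proposal = buyer_agent(needed_day, proposal)
        if verdict == "accept":
            return proposal  # proposed date, pending final human approval
        verdict, proposal = supplier_agent(earliest_day, proposal)
        if verdict == "accept":
            return proposal
    return None  # no overlap found; escalate to the humans
```

When the windows overlap (e.g. buyer needs day 10, supplier can do day 7), the agents converge in one round; when they do not, the loop exhausts and hands the case back, which mirrors the "final approval" step Bill mentions.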

Speaker 1 (42:35):

Alright, let’s talk a little bit more about how you can operationalize AI from a strategic standpoint. Amber, you said that companies should not start with the tech, but with the problem. What is your framework for when you’re helping supply chain leaders to identify what are actually AI worthy use cases?

Speaker 2 (42:53):

So it starts with the decision: identifying those small decisions, we might say the underlying problems that you are trying to solve. The next stage is the data piece: is there enough data to actually train a model to solve the problem you're trying to solve, and is your data of good enough quality to actually do it? Next comes this notion of experimentation. You have to be comfortable with failure. I would tell people that your failure rate should actually be pretty high with AI. Some might say that if you aren't failing enough, you're actually not pushing the envelope far enough with how you're trying to deploy AI. I guess that kind of counters the sobering statistic that 70% of AI projects fail, because when you think of it in those terms, is 70% failure bad? Actually, it's probably not, because it means that you're doing a lot of experimentation. You're going to get some successes, but you're also going to find out where it's not going to work.

(44:13):

So you can focus your time and effort on where it does work. And then, as you do these experiments and you fail, it's about scaling out what works and doing all the work of training the teams to embrace the new way of working. And I'll add here: the nice thing about this is it does allow you to rethink how you are managing your supply chain, because you now have more resourcing available to take a step back and think, oh, are there things that we could be doing better in our supply chain that we haven't had the time to really focus on? Because we need human judgment to figure out how to do things a little bit differently.

Speaker 1 (44:52):

Okay, there’s that word human. I want to very quickly before we turn to the audience, just get in this question about as AI takes over more analytical tasks, where do people still matter in the loop? Bill, you want to handle that?

Speaker 3 (45:05):

Yes. What we need to pivot from, as a mindset in supply chain, is repeating processes, not exactly ad nauseam but predominantly ad nauseam, and move towards how we continuously improve a process, which, to Amber's point, might be stepping back and understanding what additional information or data we can be providing to improve it. So what we've done, partnering with people, is shift their focus. If you think of any process at a high level as inputs, process, and outputs: don't focus on manipulating the outputs, and don't focus on building an Excel model to do the process. Focus on how you can feed novel or newly engineered data into it. For example, we have a great customer who creates a lot of sustainable packaging for takeaway and delivery service. Who would have thought in 2019 that the number of passengers going through each airport's TSA security check line would be a great predictive input for supply chain models? But it was very representative of how much takeout business there was. It was an inverse correlation: fewer people going through TSA checkpoints meant a higher degree of shutdown, which meant more people dialing up DoorDash. That's one simple example, but these are the types of things: augmenting at the front end rather than tinkering in the middle or manipulating the output, Bob, in short.

(46:51):

And then tackling problems that aren't tackled right now, like resiliency. Amber mentioned anti-fragility. If people look at their supply chain, do they have redundancy now? A lot of people don't, right? They don't have time to do it, because they have to do their day jobs, a lot of which, 80%, is repetitive.

Speaker 1 (47:10):

Anti-fragility, where you don't just survive the disruption, you actually thrive as a result of it. It's an interesting concept; we could wish we had more time to get into it, but let us bring the audience in now for our question and answer session. We've had some good questions already submitted. Audience, you can continue to submit those questions by clicking on that Q&A icon at the bottom of your screen, and we will get to as many of your questions as we can, time permitting. So here's a question, I guess, for both of you. Maybe I'll start with Amber. Can you share an example of a customer who used AI to solve a real challenge, and what results they saw?

Speaker 2 (47:48):

Yeah, so we have one customer in the distribution space that has been using our machine learning based lead time prediction service. They were challenged with increasing variability in lead time, starting around the 2020 time horizon. And as things in the global environment started to stabilize, they were still seeing lots of variability in lead times. In addition, they noticed that the predictions they were making for lead time were becoming less accurate. So, a combination of more variability and less accuracy. They used AI and our machine learning service to more accurately predict lead times by looking at different features. And going to Bill's comment about the humans bringing in more data, we call this a data-driven approach, where you're using all the data available to you to drive the decision that you're making.

(48:57):

In this case, the role of the human was to gather as much data as they could from their entire environment, to see whether there is actually a correlation that maybe the humans couldn't have figured out but our machine learning model can help them figure out. By doing that, they were able to come up with more accurate lead time predictions, which has led to fewer stockouts, reduced inventory overall, and $20 million in savings. And if we again think about the role of the human: what they also do with this data is track the prediction probability, which pretty much tells them, we think the lead time will be this number of days, and the probability of this being true is between 90 and 100 percent. Well, if the probability is below a certain threshold, we'll just say 60%, then the humans will take a step back and say, hey, maybe there's something else going on here that we need to look into. Maybe we need to look at our contracts and see if we need to review and renegotiate them. Maybe we just need to talk to the supplier and learn if there are other elements that we just haven't discovered yet that are influencing the lead time prediction coming out of the machine learning model. And again, this was a composable service. They were able to plug it into their existing environment and get value out of it in a very short period of time.
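[Editor's note: a small sketch of the probability-gated workflow Amber describes, where high-confidence lead time predictions flow through automatically and low-confidence ones are flagged for a planner to investigate. The 60% threshold comes from her example; the field names, SKUs, and probabilities are illustrative, not GAINS specifics.]

```python
# Route each ML lead-time prediction either to automatic use or to
# human review, based on the model's reported probability.

REVIEW_THRESHOLD = 0.60  # below this, a planner digs into the cause

def route_prediction(pred):
    """pred: {'sku': ..., 'lead_time_days': ..., 'probability': ...}"""
    if pred["probability"] >= REVIEW_THRESHOLD:
        return {"sku": pred["sku"], "status": "auto",
                "lead_time_days": pred["lead_time_days"]}
    return {"sku": pred["sku"], "status": "human_review",
            "reason": (f"confidence {pred['probability']:.0%} is below "
                       f"{REVIEW_THRESHOLD:.0%}; check contracts / supplier")}

predictions = [
    {"sku": "A-100", "lead_time_days": 21, "probability": 0.93},  # trusted
    {"sku": "B-200", "lead_time_days": 35, "probability": 0.48},  # flagged
]
routed = [route_prediction(p) for p in predictions]
```

The point of the gate is exactly the division of labor in the transcript: the model handles the confident cases, and the humans spend their time only where the model signals it may be missing something.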

Speaker 1 (50:34):

Bill, what occurs to you as an interesting customer use case or some benefits that are really derived from this initiative?

Speaker 3 (50:41):

So one area is predominantly automating the purchase decision process. Think of a purchase order as having multiple lines, having to conform to certain constraints, whether that's weight, cubage, or dollars, or multiple constraints, and then making an overall PO decision, including transport method, et cetera. Have an agent that can call tools to make these decisions along the way, with predictions like Amber mentioned: what's the probability that our best performing planner/buyer would have approved this particular supply line? Would they have changed it? Would they have changed it up or down, and by how much? Would they have gone from a carton to a layer to a pallet, or vice versa? Doing that type of thing allows us to refocus 90% of a purchasing or supply planning effort towards things like anti-fragility, but not a hundred percent, right? Going back to the question of the role: the role is now looking at how we continuously augment that model with new inputs that may not have been relevant a year ago but are relevant today. That has also led to fewer stockouts, because there's no delay in placing orders; they're almost instantaneous. And secondly, a redirection of FTEs towards higher value activities, or in some cases some attrition in those groups. So those are generally 10x recurring ROI opportunities.

Speaker 1 (52:30):

Well, what a great discussion this has been. I’m so sorry that we’re just about out of here. We could talk about this for hours more, but unfortunately folks, we just have time for one final question. I’m going to pose it to both of you all. Starting with Bill. Here’s the question. If our audience takes just one immediate step to advance their AI journey in the supply chain, what should it be?

Speaker 3 (52:55):

Identifying problems whose resolution would have high impact, then working backwards to determine what types of tools and decisions are required to make those improvements, and then finding three that can be done quickly, deliver high impact, and wouldn't be subject to cultural adoption issues, technology adoption issues, or a lack of data.

Speaker 1 (53:23):

Amber, what’s your take on that?

Speaker 2 (53:25):

Yeah, I'd say you need to really understand your supply chain: understand where the pain points are, understand what's causing variability, and really understand where the decisions are being made today. Once you know that, prioritize your decisions so you know where to improve, and then focus on how to solve them effectively.

Speaker 1 (53:47):

Well, again, this has been a fantastic discussion. The two of you have done such an excellent job of giving us a realistic perspective on AI and agentic AI, where it fits and where it doesn't fit. It's the beginning of a great conversation, and I'm sure there's more to be said going forward, but for now, I want to thank you both for that great presentation, and the audience for your participation and great questions as well. Here is a complimentary white paper for everyone. It's called "A Call for Awareness in the Age of AI," and it's from Gains. If you click on or take a picture of that QR code, you can access it. But if you don't have time to do that in these final few seconds of the webinar, don't worry about it; we've provided it to all attendees of this webinar. And by the way, stay tuned for our next webinar, "AI Wars: The Data Awakens." That is Wednesday, August 13th at 11:00 AM Eastern. Be there, don't miss it. Once again, everybody, thank you so much for this great presentation. Audience, thank you as well for your attendance. Everybody have a great day.

Speaker 2 (54:51):

Thank you, Bob.