If you’re looking to gain fresh insights into innovative supply chain solutions, watching Bill Benton’s presentation at NASCES is a must. In his engaging talk, Bill delves into the concept of composable supply chain solutions, emphasizing a problem-centric approach that prioritizes quick wins and tangible benefits. He shares practical examples, like improving lead time predictions and automating replenishment, showcasing how companies can achieve rapid impacts without the need for complete overhauls. By watching the full video, you’ll discover how to navigate and optimize supply chain processes efficiently, enhancing your understanding and potentially inspiring impactful changes in your organization. Don’t miss this opportunity to learn from a leading expert in the field.
Bill Benton (00:05):
Good morning. Thanks again for making the time. John, thank you for the gracious introduction. Really appreciate it. So I'm going to talk to you today a little bit more about how things can be done rather than what precisely to do. We'll touch on both a little bit, but this idea of a composable supply chain solutions approach is really mostly about being problem-centric. So rather than, for example, focusing on transforming end-to-end functions, which is important, how can we look at particular problems and point solutions to create value and build momentum either prior to a transformation or even as a complement to it? So the idea is how do you build morale, build momentum, and create tangible benefits along the way, or prior to other larger-scale projects that are going to be a longer haul and maybe a little bit more challenging when it comes to delivering things that are tangible and acknowledged as clear benefits by the organization.
(01:12):
So more particularly, one of the things that sort of differentiates a composable solution versus a more traditional function-centric approach is looking at a single problem. So one of the examples we'll talk about here is that we have a lot of customers that have a very hard time predicting what their lead times are going to be. And it seems like a single number, right, a single number, but how does that change across time? How does it change across season? How does it change if you change your flows in the network? What about new products? How do you predict lead time for those? So this is the sort of thing where there's nobody out there who says lead time prediction is our primary solution, but it's a very common problem. And we even found internally, with things like transfers in your own network, this can be a wildly inaccurate piece of data that's presumed accurate.
(02:12):
Or even if you're measuring the error and trying to buffer, that's expensive too. It's much less expensive to get it right than to hold extra stuff just in case. So I'll talk more about that, but that's the type of thing: what's the problem to focus on, as opposed to an overall functional transformation. I'll talk about how it leverages combinations of independent methods, solvers, and functions. What I want to prevent is you leaving with the impression that a composable solution is a toolkit you assemble to order as needed. That's not the case. While it comprises a toolkit, these are use cases that have already been composed, tested, and productized. So you're not having to invent something, which sounds time-consuming and expensive, and it often can be. And then the other key element here is that it's system and architecture agnostic.
(03:15):
So this isn't about ripping and replacing current or old solutions. If you're going to do it quickly, you can't have weeks and weeks of culture-change debates, or "I'm not comfortable without these reports that you're taking away," or whatever it is that often has to be dealt with when you're doing a complete transformation. Here the goal, and I'll talk more about that, is avoiding a rip-and-replace mentality; it's more of a complementary approach. So reiterating what it is and why we focus on this: one is high speed to impact. On deployment complexity, there are two areas here; I alluded to one already in terms of lower risk in cultural adoption and training. You want to minimize that, but you also want to minimize the IT burden, and in fact maybe avoid it altogether. So how do you use APIs for the art of the possible?
(04:09):
And that's the application programming interface, to pull data in a standard way from everything from Federal Reserve economic data, if those are useful for you, like building starts or interest rates, to your internal solutions. So if you have one of the top three ERPs, how do we pull transaction data from that in an unobtrusive way, without having to create a lot of integration risk and effort for IT, who are probably already fully booked, understandably? (A small sketch of that kind of API pull follows below.) And then most importantly, how do you create self-funding exercises where you can achieve quick wins, and win one funds wins two and three, et cetera. So the first concept here underlying all of this is this framework. At the top we have what you can think of as functional processes. Everybody knows demand, inventory optimization, design, et cetera. And then that's underpinned by different methods, here in gray.
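A minimal sketch of the kind of exogenous API pull mentioned above, using the public FRED (Federal Reserve Economic Data) endpoint. The endpoint and series IDs are real; the API key, the date range, and how you would merge this with internal ERP data are placeholders.

```python
# Sketch: pull one exogenous economic series from FRED via its public API.
import requests

FRED_URL = "https://api.stlouisfed.org/fred/series/observations"

def fetch_fred_series(series_id: str, api_key: str, start: str = "2020-01-01"):
    """Return a list of (date, value) pairs for one FRED series."""
    params = {
        "series_id": series_id,        # e.g. "HOUST" = housing starts, "FEDFUNDS" = fed funds rate
        "api_key": api_key,            # placeholder: obtain a free key from FRED
        "file_type": "json",
        "observation_start": start,
    }
    resp = requests.get(FRED_URL, params=params, timeout=30)
    resp.raise_for_status()
    observations = resp.json()["observations"]
    # FRED returns "." for missing values; skip those rows.
    return [(o["date"], float(o["value"])) for o in observations if o["value"] != "."]

# Usage (hypothetical key): housing_starts = fetch_fred_series("HOUST", api_key="YOUR_KEY")
```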
(05:13):
So that can be digital twins, which support design concepts, machine learning prediction for demand or even lead time prediction, genetic algorithms for inventory. So these are the different methods that underpin this. And then these are the tools that help us take these methods and make them usable. In green here, this is sort of a categorization, excuse me, framework, which is: what are we trying to do here? Are we looking at diagnosing things? Are we looking at prediction? Are we trying to understand what we learned from automation functions? So this is the overall framework, and we architect these as services. So it's not a large solution that you have to implement end to end. These can be deployed independently, and they use a shared data model so that you don't have independent data efforts to make each of these methods work. And then lastly, it's not a toolkit, as I mentioned earlier.
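A purely illustrative sketch of the "independent services over a shared data model" idea described here. The class names, fields, and the toy predictor are invented for this example; they are not the actual product architecture.

```python
# Sketch: independent method services that all read from one shared data model.
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class SharedDataModel:
    """One data model that every method service reads from (illustrative fields)."""
    demand_history: dict = field(default_factory=dict)      # sku -> list of demand observations
    lead_time_history: dict = field(default_factory=dict)   # (vendor, sku) -> list of observed days
    network_lanes: list = field(default_factory=list)       # (origin, destination, cost)

class MethodService(Protocol):
    """Each method (ML prediction, genetic algorithm, digital twin...) deploys on its own."""
    def run(self, data: SharedDataModel) -> dict: ...

class LeadTimePredictor:
    def run(self, data: SharedDataModel) -> dict:
        # Toy stand-in: predict with the mean of observed lead times per (vendor, sku).
        return {key: sum(obs) / len(obs) for key, obs in data.lead_time_history.items() if obs}

def compose(services: list[MethodService], data: SharedDataModel) -> list[dict]:
    """A 'composed' use case is a tested blend of services run against the shared model."""
    return [service.run(data) for service in services]
```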
(06:18):
So we're using blends of these, proving them out, and then deploying them at speed. So that's the idea, as opposed to a complete product, if you will. It's a point solution to solve a particular problem. The first example we wanted to talk about here is predicting lead time. This is one of the classic high-leverage, high-impact, quick-to-implement solutions. I'll talk about 10 of these types of use cases at the end here, but this is one of them, and often one that makes sense to start with. And let me describe it a little bit. The concept here uses both internal data, like your observed lead times by lane for different products, down to the SKU but also up at the vendor level, et cetera, and exogenous data, which can include everything from any data the vendor shares. So in some of the more advanced cases, the vendor is providing their current on-hand to the recipient, and their manufacturing capacity utilization rate could be another factor that's used in here.
(07:27):
So not what they're predicting their promise date is, but actually looking at the root causes of their promise dates, for example. And without getting into too much detail, there are some interesting, counterintuitive things here. So to come up with this particular lead time prediction of 13 weeks, we started with a population average. That starts with the lane from our manufacturing plant to this endpoint or initial point of receipt, or from a vendor to that particular endpoint. And then what are the other factors we look at? Well, we look at what was actually observed for this SKU, right? And obviously the longer the observed lead times, the more they add to the prediction. So the lead time goes from here to here with that one factor considered. And typically this is all people consider, right? We look at what we actually observed as our lead times.
(08:21):
However, there are other things. As the quantity goes up, sort of counterintuitively, that contributes to a declining lead time; it's negatively correlated with lead time. As the dollar value goes up, that also contributes to a lower lead time. And while it's counterintuitive at first, when you think about it, people take more care to ensure that larger, higher-value orders get delivered on time, and the long tail of smaller ones often gets deferred or delayed. So going through this as just an example, again, I didn't want to spend more than half of the time on the what, because that's our tendency, and that's where I feel passionate about these types of solutions. But the idea here is that this is something we can deploy within one quarter that can help on both sides: when you underestimate lead time and things come later than expected, you have stockout risk, and when you overestimate lead time, things come in early, and we've seen a lot of that.
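An illustrative sketch of the additive "waterfall" described above for a single lead-time estimate: start from a lane-level population average, then adjust for observed SKU history, order quantity, and dollar value. The coefficients and the example numbers are invented; the talk describes a trained machine-learning model, not hand-set weights.

```python
# Sketch: baseline-plus-adjustments view of a single lead-time prediction.
def predict_lead_time_weeks(lane_population_avg: float,
                            observed_sku_avg: float,
                            order_quantity: float,
                            order_value_usd: float) -> float:
    prediction = lane_population_avg                      # e.g. 11.0 weeks for this lane

    # Observed SKU-level lead times pull the estimate toward what was actually seen.
    prediction += 0.6 * (observed_sku_avg - lane_population_avg)

    # Counterintuitive but observed: larger quantities and higher dollar values are
    # negatively correlated with lead time (big orders get expedited; the long tail
    # of smaller ones gets deferred).
    prediction -= 0.0001 * order_quantity
    prediction -= 0.00001 * order_value_usd

    return max(prediction, 0.0)

# A lane averaging 11 weeks, a SKU that has been running long (15 weeks), on a
# fairly large, higher-value order lands around 12.9 weeks, near the 13 in the talk.
print(round(predict_lead_time_weeks(11.0, 15.0, 2000, 30_000), 1))
```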
(09:23):
So pandemic and near post-pandemic, your history was really skewed. You had all of those observations of these terrible delays and events, and now the converse is happening: things are getting received early and it's clogging some warehouses, for example. So in this case, we're using machine learning as one of the methods, along with some heuristics, prediction, and exogenous data like we mentioned, and it's essentially a supply-function problem. The next example talks about how much to stock and how to flow it. So one of the things that we believe needs to be re-envisioned is this idea of design as a periodic project as opposed to a continuous process. Right now people often look at design as periodic; it could be as infrequently as once every two or three years, maybe as frequently as annually. But in reality, the underlying drivers of what makes an optimal flow are changing frequently, and often changing by season too, if that's in effect as well.
(10:33):
So how do we integrate that with inventory? Because the other thing that we've found in design is that there are a lot of assumptions, like we're just going to use weeks of supply, for example, that don't really apply well, and they can be a major part of the cost model. And if those assumptions are wrong, the whole design can be off. So here's one that blends inventory and design to understand how we best use current assets. The other thing about design is that it often gives you the as-is and it gives you the desired to-be, without real regard to how to bridge that gap. It can be very difficult to redesign your racking in a warehouse. It can be very difficult to create new lanes of transport. So in a composable perspective, what we're trying to do is say, look, there's a lot we can do by just taking a Pareto approach, meaning give me the top 15% of design changes, in terms of flow, or where we stock, or major changes in how much or whether we stock something somewhere, and make those changes feasible.
(11:41):
So everything is essentially prioritized based on what can be done versus the impact it has, as opposed to a shotgun approach where you've got to do this whole redesign, which requires a lot of collaboration and coordination and maybe even asset changes, new facilities, et cetera. The other thing, though, is that to be feasible, you have to adapt to dynamic freight and seasonal factors. So dynamic freight brings in, for example, the exogenous data. A lot of people use assumptions that might be averages that are old and outdated. So when we're doing this in a continuous fashion, you want to have a data model that's continuously refreshed and can utilize those key parameters. This is a good moment, because we're doing well on time, to just ask if there are any questions. I know usually that's saved for the end, but given it's a small room and we can probably hear everybody, does anybody have any questions at this point before I continue?
(12:42):
Okay, great. So how do we look at prioritizing and sort of orchestrating a pathway? What's the approach that we've seen be successful in doing this? We look at an impact matrix concept here, where we have the value or the impact on the Y axis, from lower impact to higher impact, and then the inverse of effort and time on the X axis. So the right edge here is low effort, low time to impact, and then there's this construct of an ROI frontier. At some point it's either too low an impact or too high an effort to really consider in this model. So then if you look at the viable areas, we've got the upper right quadrant, and that can be our current-year targets. What's in blue might be year-two candidates, and then longer-range elements are down here in the lower right, in the longer-range category.
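A hedged sketch of that impact-matrix triage: score candidates by impact per unit of effort, drop anything below an "ROI frontier" cutoff, and bucket the rest into current-year, year-two, and longer-range groups. The candidate names, scores, and thresholds are illustrative, not figures from the talk.

```python
# Sketch: impact-matrix style prioritization of composable candidates.
candidates = [
    # (name, impact 1-10, effort 1-10)
    ("Lead time prediction",            8, 2),
    ("Inventory policy reoptimization", 7, 3),
    ("Shortage allocation logic",       6, 3),
    ("Automated replenishment",         9, 5),
    ("Continuous network design",       8, 7),
    ("Full planning transformation",   10, 10),
]

ROI_FRONTIER = 1.0   # below this impact-per-unit-effort ratio, the candidate is out of scope

def bucket(impact: int, effort: int) -> str:
    if impact / effort < ROI_FRONTIER:
        return "below frontier (skip)"
    if effort <= 3:
        return "current-year target"
    if effort <= 6:
        return "year-two candidate"
    return "longer range"

for name, impact, effort in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    print(f"{name:32s} impact={impact} effort={effort} -> {bucket(impact, effort)}")
```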
(13:54):
So for example, we talked about lead time prediction, but another thing that can usually be done pretty quickly is reoptimizing our inventory policy. Usually either it's not multi-echelon, or it's still based on time supply, or it doesn't consider supply variability, or it doesn't optimize service level from item to item. It just gives you broad categories, like inventory classes of A, B, and C, when it can be very expensive to stock some Bs at a high level and very inexpensive to stock some As. So you want to optimize that, for example. And simple things like, if you have a distribution network, how do we optimally allocate during shortages, right? Fair share is a very crude method; it doesn't look at risk or cost, right? So that's another example. Now this is of course just one client's example. Everybody is different, and effort and time can be different based on your system architecture as well as your data availability.
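A small illustration of why plain fair-share allocation is called crude here: it splits scarce supply proportionally to demand only, while an alternative could also weight by stockout cost or risk per location. The weighting scheme and the numbers are assumptions for illustration, not the method from the talk.

```python
# Sketch: fair-share allocation versus one simple cost-weighted alternative.
def fair_share(supply: float, demand: dict) -> dict:
    """Allocate proportionally to demand only."""
    total = sum(demand.values())
    return {loc: supply * d / total for loc, d in demand.items()}

def weighted_share(supply: float, demand: dict, stockout_cost: dict) -> dict:
    """Allocate proportionally to demand * stockout cost (leftover redistribution omitted)."""
    weights = {loc: demand[loc] * stockout_cost[loc] for loc in demand}
    total = sum(weights.values())
    return {loc: min(demand[loc], supply * w / total) for loc, w in weights.items()}

demand        = {"DC-East": 100, "DC-West": 100, "DC-South": 50}
stockout_cost = {"DC-East": 5.0, "DC-West": 1.0, "DC-South": 1.0}   # penalty per unit short

print(fair_share(150, demand))                      # splits by demand, ignores cost and risk
print(weighted_share(150, demand, stockout_cost))   # skews units toward the costly-to-short DC
```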
(14:56):
So that all has to be taken into account as you evaluate this, and we can help evaluate it. We've seen people work with third-party consultancies that help do this as well, and we'll partner with them to make sure that we're able to deliver on this right edge of low effort and time. So not overpromising is another major element here, but also not underpromising. This idea of underpromise and overdeliver isn't really consistent with this, because the whole goal here is that we're going to make commitments that we feel confident we can deliver and that drive value. Anyway, just a quick side note there. Then going into some year-two candidates: how can we automate replenishment? One of our customers that has done this, Graybar, if you've heard of them, a large distributor, is at 98% automated purchasing. So we've been able to work with them to fine-tune the model to the point where they're only touching 2% of their lines.
(15:58):
They still need to do that, we think probably forever, because you have to keep retraining the model as new features become important. For example, working through the pandemic, nobody ever thought that the number of people passing through TSA security checkpoints would be the number one leading indicator for lead time and demand sensing by location, but it was a perfect proxy for the degree of shutdown in that particular region. So that's the type of thing that last quarter's model isn't going to have. So you need to keep retraining it, and you need to have your best planners feeding this model. We push out these ideas sort of like a spam filter in email: you still get some spam surfaced for review, because somebody's got to say, look, is this really spam or is this potentially real? Right? Same concept. So 2% of these things get pushed out for manual review to make sure we're not missing something.
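A hedged sketch of the "keep retraining and keep trying new feature data" loop: compare model accuracy with and without a candidate exogenous feature and keep it only if accuracy improves. The TSA-throughput example above is the inspiration, but the data here is synthetic and the model choice is an assumption.

```python
# Sketch: evaluate whether a candidate exogenous feature improves lead-time prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
base_features = rng.normal(size=(n, 3))      # stand-ins for quantity, value, vendor history
candidate = rng.normal(size=(n, 1))          # stand-in for a regional throughput proxy
lead_time = (base_features @ np.array([1.0, -0.5, 0.3])
             + 2.0 * candidate[:, 0]
             + rng.normal(scale=0.5, size=n))

def cv_error(X):
    """Cross-validated mean absolute error for a quick accuracy comparison."""
    model = GradientBoostingRegressor(random_state=0)
    return -cross_val_score(model, X, lead_time, cv=5,
                            scoring="neg_mean_absolute_error").mean()

print(f"MAE without candidate feature: {cv_error(base_features):.2f}")
print(f"MAE with candidate feature:    {cv_error(np.hstack([base_features, candidate])):.2f}")
# Keep the candidate feature only if the second number is meaningfully lower.
```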
(16:56):
Sorry, again, a bit of a tangent. I can't help myself. I get excited about this stuff, which usually puts my poor wife to sleep. The other element here, we talked about continuous supply chain design. For them that was quite important, but there was a little bit more effort in there. They didn't really have capacities for their locations. Their freight data wasn't great. So while it was high value, it was something that was deferred until a little bit later, et cetera. The last one I'll mention here is strategic probabilistic scenario management. I don't know if people had the opportunity to hear Ernest Nicholas speak yesterday. How many saw that session on resilience? Yeah, Ernest is a great guy. His work at Rockwell was done with us; GAINS was doing composable improvements. They had a huge S/4HANA project that he mentioned in that speech yesterday, and that was a years-long approach, but he said, I need to deliver some results, not in Q10, but in Q2 and Q4.
(17:59):
So they worked on things like how can we do multi-echelon inventory optimization. Then they worked on, geez, we have real deployment issues and we're buying when we could redistribute, but we need to do it optimally; we have scarce resources for handling and redistribution in our internal fleet. So those were some things that we worked on with Ernest when he was CSCO at Rockwell Automation. So again, another example, and he's an amazing speaker. I always hate going after him; I wish he were the last speaker at the conference. But anyway, a really practical and innovative leader. So this gives you a sense of this impact matrix approach. The next slide talks a little bit about a conceptual comparison, and I don't like the term "versus" here; I'll talk about this in a second. But what we're looking at here is each of these dots represents one quarter's time in a 10-quarter timeline from left to right. So we have a transformation and a composition, and the "versus" implies that these are mutually exclusive concepts, and that's not the case.
(19:11):
They can be sequential. Maybe you're not in the midst of a transformation, or you're just after one, and this is really all you're going to focus on. But we have situations where companies implementing a Kinaxis or even o9 or something, or IBP, have a long and very well-planned path for transforming the company, but need wins. And that's where composability plus transformation would make sense. So how do we pick out those that can complement these efforts and are feasible, given people are quite busy already? As you look at this, and perhaps this is not a rosy scenario, you're spending time budgeting, on RFI/RFP, selection, contracting, designing the integration, change orders in red, not a fun thing, training and testing. And then usually, and Ernest mentioned this, they had to scale back their transformation as they went quarter to quarter. What they aspired to do diminished to what he called an MVP, right?
(20:20):
So that's not a most valuable player, that's a minimum viable product. As you temper your ambitions to make sure that you get something meaningful within the time allocated, that creates more gaps between what you need and what you're delivering. And that's another concept here. This composition is: how can we fill gaps? How can we complement those types of efforts? And if it's not a complement but the primary focus, then you can do more; you can compose solutions faster. So in a composition timeline, usually there are pilots, and you don't start with RFI/RFP; we go in on shared risk, right? Let's see if this works. If it works, you keep it. If it doesn't work, you don't keep it. And there are no long-tailed commitments in a pilot. So try it, refine it. If it works, great. You go live, you start generating benefits, and that funds the next pilot for composition number two, and then composition number three by Q6.
(21:27):
And then you get into, if you remember, that impact matrix. Some of those might require not low but low-to-moderate effort, so now it's not every other quarter; it might be every third or fourth quarter that you're composing a new solution along the timeline. So this is a conceptual framework for either contrasting with or complementing what is often a fairly long period before you start trying to deliver something that's usually only a part of what the original design was. This is also about how you keep morale up. That's another thing: how does your team feel like they're not just asking for more extensions or more IT resources than were budgeted and arguing about change orders, but actually chalking up some wins that support potentially the big missions, or stand alone either way? Great. So look, I don't try to fill every minute, because I do like questions. I think they're more interesting than me talking at you more. So at this point I'd like to open it up for any questions, for myself, or I'm sure John wouldn't mind fielding some if you'd prefer. Anybody? Great.
Speaker 2 (22:47):
Jenny Harris, Duracell. Do you have any examples, if you go back to your framework, of how you have built or composed solutions for procurement?
Bill (23:01):
Yes. So there are a couple of primary procurement-oriented solutions. One is the automation that I mentioned with the Graybar example. That's using machine learning to predict, given a suggested PO or order line in procurement, what's the probability that the planner or the buyer would reject that recommendation, approve it unchanged, or change it? Okay, so let's say we then categorize each line probabilistically based on what would happen. And then we ask ourselves, well, the tricky bit: what if they change it? So change gets into things like supervised learning. What we look at there with a neural net is a prediction of whether the change is going to be positive or negative and how big it's going to be. We shoe-size it, because it can be very difficult to pinpoint. So you could say an extra-small, small, medium, large, or extra-large change.
(24:07):
So once you know the direction of the change and the scope of it, you can make a recommendation, and that's what we tend to automate as the probability of certainty goes up. And what's interesting is that what defines an exception in this framework is not that it crosses some arbitrary threshold; you use the model's probability of certainty. So people will start with, I want to see everything where the model is less than 98% certain. That might mean reviewing only 50% of the lines, because there's that disproportion: there are going to be 50% of lines where the model is 98% certain out of the box. But then, as people grow to trust the system more, the system is fed more exogenous data. So there's no single data model for this. You keep trying what's called feature data in machine learning, new inputs, to see which ones improve the accuracy of the model.
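A hedged sketch of the routing logic described in this answer: each PO line gets a predicted class (approve unchanged, reject, or change), a "shoe-sized" change bucket when relevant, and the model's certainty decides whether the line is automated or pushed out for manual review. The probabilities below are hard-coded stand-ins for real model output, and the threshold value is just the starting point mentioned in the talk.

```python
# Sketch: certainty-based routing of procurement recommendations.
CERTAINTY_THRESHOLD = 0.98   # planners often start here, then relax it as trust grows
CHANGE_BUCKETS = ["XS", "S", "M", "L", "XL"]

def route_line(line_id: str, probs: dict, change_direction: str = None, change_bucket: str = None):
    assert change_bucket in (None, *CHANGE_BUCKETS)
    predicted_class, certainty = max(probs.items(), key=lambda kv: kv[1])
    if certainty < CERTAINTY_THRESHOLD:
        return f"{line_id}: manual review (model only {certainty:.0%} certain)"
    if predicted_class == "change":
        return f"{line_id}: auto-apply a {change_bucket} {change_direction} adjustment"
    return f"{line_id}: auto-{predicted_class}"

print(route_line("PO-1001", {"approve": 0.99, "reject": 0.00, "change": 0.01}))
print(route_line("PO-1002", {"approve": 0.10, "reject": 0.05, "change": 0.85},
                 change_direction="increase", change_bucket="M"))
```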
(25:03):
So you've got this sort of dual synergy: we build more data in as time allows, and we build more trust, and then maybe they reduce it to, now I want to review only those things where the system is less than 90% certain, and with the improved data models that might cover 94% of all lines. So that's the nature of the automation. Is that a relevant example? Okay, great.

Yeah, the other thing that we try to talk about besides automation is optimizing in supply. So there's optimization around things like, do you need to fill full containers? Do you have volume discounts with free-freight allowances? Do you have discounts by line? And how do you trade those scenarios off to say, do I order a little bit less and not take that discount but avoid a stockout? Or is it worth it to invest in that extra inventory in order to meet those thresholds? So that would be an optimization example in procurement that we've worked through. Great. Any other questions?

I have a question. Who found this useful? Be brutally honest, or maybe just honest. Alright, great. Even only one of my teammates. So that's honesty, I love that. Alright, this is usable data. Well, that's great. Well look, everybody, thank you very much. You'll have time for some coffee. Really appreciate you taking the time today. We're at Booth 41, sort of in the center. If you'd like to come chat more, we'd love to brainstorm what might be some composable opportunities for you. Thanks again.
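A small worked sketch of the discount-threshold trade-off mentioned in that last answer: is the volume discount worth the cost of carrying the extra units needed to reach the threshold? All numbers are invented, and it assumes the extra units will eventually be consumed; the stockout side of the trade-off is left out for brevity.

```python
# Sketch: compare discount savings against the cost of carrying the extra units.
unit_cost           = 10.00
discount_rate       = 0.05     # 5% off the whole order once the threshold quantity is hit
threshold_qty       = 1_000
needed_qty          = 850
annual_holding_rate = 0.20     # carrying cost per year as a fraction of unit cost
months_extra_held   = 3        # how long the extra units would sit before being used

extra_units      = threshold_qty - needed_qty
discount_savings = threshold_qty * unit_cost * discount_rate
holding_cost     = (extra_units * unit_cost * (1 - discount_rate)
                    * annual_holding_rate * months_extra_held / 12)

print(f"Discount savings if we order up to the threshold: ${discount_savings:,.2f}")
print(f"Cost of carrying the extra {extra_units} units:          ${holding_cost:,.2f}")
print("Order up to the threshold" if discount_savings > holding_cost else "Order only what's needed")
```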