Enhance your B2B sales forecasting for Engineering with airSlate SignNow
See airSlate SignNow eSignatures in action
Our user reviews speak for themselves
Why choose airSlate SignNow
- Free 7-day trial. Choose the plan you need and try it risk-free.
- Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
- Enterprise-grade security. airSlate SignNow helps you comply with global security standards.
B2B Sales Forecasting for Engineering
B2B Sales Forecasting for Engineering: How-To Guide
Experience the benefits of using airSlate SignNow for B2B sales forecasting in the Engineering industry. Simplify your document management process, increase efficiency, and enhance collaboration among team members. Start using airSlate SignNow today and see the positive impact it can have on your business.
Sign up for a free trial of airSlate SignNow now to improve your B2B sales forecasting for Engineering.
airSlate SignNow features that users love
Get legally-binding signatures now!
FAQs: online signature
- What is the future of B2B sales?
By 2025, Gartner expects 80% of B2B sales interactions between suppliers and buyers to occur in digital channels. B2B buying behaviors have been shifting toward a buyer-centric digital model, a change that has been accelerated over the past couple of years.
- What is forecasting in engineering?
Forecasting in engineering is one of the most important topics in optimisation, relating to energy savings, material savings, increased efficiency, and sound decision-making at the level of a company, institution, city, or region.
- What is the main focus of forecasting in B2B market research?
Demand forecasting is a method for evaluating and predicting future demand for a product or service using predictive analysis of historical data. It assists a company in making better-informed supply decisions by estimating total sales and revenue over time.
- How to forecast B2B sales?
Scalable strategies for accurate sales forecasts include: choosing the right forecasting method; leveraging technology and tools; length-of-sales-cycle forecasting; opportunity stage forecasting; historical forecasting; multivariable analysis forecasting; setting clear expectations and goals; and regular communication and feedback.
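One of the methods listed above, opportunity stage forecasting, reduces to simple arithmetic: weight each open deal's value by the historical win rate of its current pipeline stage, then sum. A minimal Python sketch follows; the stage names and win probabilities are illustrative assumptions, not figures from this article.

```python
# Opportunity-stage forecasting: weight each open deal's value by the
# historical win rate of its current pipeline stage, then sum.
# Stage names and probabilities below are illustrative assumptions.

STAGE_WIN_PROBABILITY = {
    "prospecting": 0.10,
    "qualification": 0.25,
    "proposal": 0.50,
    "negotiation": 0.75,
    "verbal_commit": 0.90,
}

def weighted_pipeline_forecast(deals):
    """Expected revenue: sum of deal value x stage win probability."""
    return sum(value * STAGE_WIN_PROBABILITY[stage] for value, stage in deals)

open_deals = [(100_000, "proposal"), (40_000, "negotiation"), (250_000, "prospecting")]
print(weighted_pipeline_forecast(open_deals))  # ~105000.0
```

In practice the stage probabilities would be calibrated from a CRM's historical win/loss data rather than hard-coded.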
- What is the main goal of forecasting?
The primary goal of forecasting is to identify the full range of possibilities, not a limited set of illusory certainties.
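One concrete way to surface a "full range of possibilities" rather than a single illusory number is to publish quantile forecasts instead of a point forecast. The sketch below, using purely illustrative demand data and a naive last-value forecast, derives p10/p50/p90 bands from the empirical distribution of historical one-step errors.

```python
# Point forecasts hide uncertainty; quantile forecasts expose a range of
# possibilities. Sketch: p10/p50/p90 bands built from the empirical
# distribution of historical one-step errors around a naive forecast.
# The demand history below is illustrative.

history = [102, 98, 110, 95, 120, 105, 99, 130, 101, 97, 115, 108]

# One-step-ahead errors of a naive (last-value) forecast.
errors = sorted(b - a for a, b in zip(history, history[1:]))

def empirical_quantile(sorted_vals, q):
    """Nearest-rank quantile of a pre-sorted list."""
    idx = min(int(q * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

point = history[-1]  # naive forecast for the next period
band = {q: point + empirical_quantile(errors, q) for q in (0.1, 0.5, 0.9)}
print(band)  # {0.1: 93, 0.5: 104, 0.9: 133}
```

The spread between the p10 and p90 values communicates how uncertain the forecast is; a planner can stock toward p90 when stock-outs are costly.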
- What are the methods of B2B sales forecasting?
One of the fundamental techniques used in B2B sales forecasting is historical data analysis. By analyzing past sales performance, businesses can identify trends, patterns, and seasonality factors that help predict future sales.
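As a concrete baseline for historical data analysis with seasonality, the seasonal-naive method simply repeats the value from the same period one season earlier. A minimal sketch, with illustrative monthly figures:

```python
# Seasonal-naive baseline: forecast each future period as the value from the
# same period one season earlier. A common first model when historical sales
# show clear seasonality. The monthly figures below are illustrative.

def seasonal_naive(history, season_length=12, horizon=3):
    """Repeat the last full season of `history` for `horizon` steps."""
    if len(history) < season_length:
        raise ValueError("need at least one full season of history")
    last_season = history[-season_length:]
    return [last_season[h % season_length] for h in range(horizon)]

monthly_sales = [90, 85, 100, 110, 120, 140, 150, 145, 130, 115, 100, 95,
                 95, 88, 104, 115, 126, 148, 158, 150, 137, 120, 104, 98]
print(seasonal_naive(monthly_sales, horizon=3))  # [95, 88, 104]
```

More sophisticated methods (exponential smoothing, gradient boosting, neural forecasters) are usually judged against a baseline like this one.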
Trusted e-signature solution — what our customers are saying
How to create outlook signature
Moderator: Hello everybody, a warm welcome from my side on the Grace Hopper stage on the second day of the Big-Data.AI Summit 2022. My name is Lucas Klinghartz; I'm responsible for cloud and AI at Bitkom, and I'm your moderator for the next three and a half hours. I'm very happy to introduce the first speaker on the Grace Hopper stage today, Kupaka Singhampali. He is practice lead for advanced analytics at Infosys, and he will be talking about AI-based forecasting for B2B manufacturers using Amazon Forecast. The stage is yours.

Presenter: Thank you. I'll just start sharing my screen here. I hope you are able to see it?

Moderator: Yes, we can see the screen.

Presenter: Perfect. Good morning, everyone. I'll be speaking about a recent program that we executed for a manufacturer in the US. Let me jump directly to the context of the case. The client is a global HVAC company, a leading manufacturer of air-conditioning chillers, and they were looking to execute an AI-led supply chain transformation starting with demand forecasting. The intention was that improving forecast accuracy creates greater opportunities for inventory optimization, reduced lost sales, and further optimization at the factory level in terms of resource planning. This stage of the program required us to build a fully automated, at-scale, AI-based solution for demand forecasting across 34 different product groups and hundreds of stock keeping units (SKUs).

Let me give you a few more details about the business problem and the value proposition. Before the program, demand forecasting across a variety of very distinctive brands, each with very different demand drivers, was manual: it was done by a couple of demand planners in Excel sheets. It was driven by qualitative judgment; the planners used industry parameters and various indices, but how they used that information was up to their judgment, and the forecast was primarily driven by historical sales trends. As a result, the demand plan that emerged was inaccurate, with an average error of about 35 percent across all of these brands and SKUs. That meant a lot of waste at the manufacturing centers, because raw materials were being over-ordered or under-ordered. Many of these products were also made to order, and because raw material procurement was driven by this demand plan, fulfillment cycles for made-to-order products were on the order of months: if a customer placed an order, it took several months for the product to be delivered.

From there we moved to forecasting that uses not only historical sales as quantitative data but also demand drivers beyond the trend and seasonality evident in the historical sales. The external factors that had previously been considered only qualitatively now became quantitative inputs into an AI model, the details of which I'll cover on subsequent slides. As a result, we improved the average forecast error from 35 percent to 20 percent. I should also mention that the 35 percent average was at the category level, whereas we achieved the 20 percent average at the SKU level, one level lower, which is always more advantageous. Broken down into made-to-stock versus made-to-order, the forecast improvements were on the order of 44 percent and 11 percent respectively. The outcomes we expect are greater predictability and, ultimately, the business value the client is looking for.

Before I go into the modeling approach, I want to mention a few interesting challenges we had to deal with. As I said, there was a good mix of made-to-order products, and made to order is inherently harder to forecast because products are tailor-made to customers and demand is lumpy and intermittent. Also, all of this forecasting was in the context of B2B sales to industrial customers, not retail, so aspects such as promotions and competitor activity are much harder to gauge and account for. The demand profiles of the products were very different from one another: some were frequent and smooth, some erratic, some lumpy (demand arriving in big lumps with long gaps), some intermittent. And we had to account for COVID effects, because the program was executed during the pandemic, which meant demand was behaving quite erratically; the actual sales pattern across the entire year of 2020 was very erratic.

What was our solution approach? The inputs were historical sales data, provided to us at the SKU level and covering the last three years, plus a variety of external indices to inform the model. AHRI is an industry-level index that tells you, for the air-conditioning industry, how many products are sold month by month. Dodge tells you the construction awards in a given month, i.e., the new buildings going up, which correlates with the industrial air conditioning and chillers that will go into production. HSPU gives you the existing non-commercial construction. We also used the wholesale price index, the index of industrial production, and GDP, whose relevance I assume is obvious. We considered factors such as the percentage of cold and warm areas as well, but those turned out not to be relevant, because demand for industrial air conditioners is not driven by local weather patterns. The most correlated factors were AHRI and Dodge, but all of these factors ultimately went into the model.

As for the models themselves, we used Amazon Forecast, a service that provides a suite of models out of the box, some neural networks and some time-series models. I have a slide that explains this in more detail, but essentially Amazon provides a suite of models against which you can train this data and get predictions. The features we used included: price category (a premium product versus a basic product is evident from the price); marketing category, which represents the broad nature of use of the product, for instance whether it is a standard chiller or a gas-based chiller; the external indices I already mentioned; demand profile (smooth, erratic, lumpy, or intermittent), which has a big bearing on which prediction model is selected; and, for each product, the dominant and secondary customer industries that consume it, such as education, hospitals, or malls.

On top of the forecasting models that Amazon provides, we did feature selection, which we had to do discretionarily, as well as champion selection, which Amazon Forecast does not provide. That means at any point in time we are training all six models on all of the historical data and selecting the best one based on a validation period for each individual SKU. So for each SKU, a different model may be used: the one most suitable for the demand profile of that product. That allows us to get the best out of these models. We also did a lot of hyperparameter tuning, and the output was an SKU-level forecast.

A quick introduction to Amazon Forecast: DeepAR+ and CNN-QR are two neural-network-based models. They are very useful because they learn from a group of SKUs. Normally a machine learning model is trained on an individual SKU's time series, whereas these look for patterns across the group of SKUs and learn for each SKU from the group. Hence they can support new product introduction scenarios: if you introduce a new SKU, you can still forecast for it, because the model learns from the patterns of the group. They have powerful built-in lag and seasonality features: depending on the frequency of your time series (in our case weekly), they generate derived lag features internally that allow you to forecast better. They are auto-tuning, and they also support related time series data, such as the indices, and various categorical inputs like the ones I mentioned, which can be used to group the SKUs further. Apart from these, there are classical machine learning and time-series models as well; I will not go into those details now.

What I have described so far is the core forecasting approach, but the program went well beyond forecasting: we needed an end-to-end automated ML pipeline, so we created a multi-stage pipeline. The main intentions of this multi-stage ML pipeline were, first, provenance tracking, so that for any forecast we are very clear about which transformations of the historical data were performed to arrive at it, in case we need to reverse-engineer and explain why a certain forecast behaves the way it does; second, reuse, because we wanted the pipeline to be easily scalable to other business units and geographies, so we made it as clean and modular as possible; and third, maintainability. We did a staged transformation: a data processing pipeline handles data cleansing, converts the raw data to the enterprise data format, and merges the external data with the sales data. The result is enterprise-level formatted data, which we then convert into an analytical layer that is ready for processing, in exactly the format Amazon Forecast expects. From there, the forecast pipeline does a train/test split, champion selection, and rolling forecast generation, and then persists the forecast. Users interact with the forecast through a workbench and a workflow, and a MAPE (performance) calculation module lets us continuously monitor how the forecasts are doing.

This end-to-end pipeline was implemented entirely on AWS: the raw data lake was in S3, the enterprise data layer was in Snowflake, the analytical data lake feeding Amazon Forecast was again in S3, and the Amazon Forecast service itself is an API call. We store the results back in the enterprise data lake so they can be used for further analysis, perform a performance assessment of the forecast results against the sales data coming in from SAP, and surface everything in an AngularJS application that allows the user to view and update the forecasts.

That was a quick overview of what we did. Instead of talking further on the content, I'd like to open up the session for questions.

Moderator: We have a few questions in the chat. The first is from Sophia Trojanovska, who asks: what is the most important product that Infosys is providing in Germany and Europe, and how do you see the development of IT demand from German manufacturers and German chemical companies?

Presenter: Infosys provides a variety of analytical products. I assume, Sophia, that we are talking about products relevant to the kind of forecasting I presented here, or to AI services generally. We have a variety of accelerators, such as the champion logic I described on the previous slide, and a variety of computer-vision solutions that manufacturers can use for defect identification and other use cases on the factory floor. We also have platforms that allow you to do data science at scale using standards; we call one of them the Analytical Workbench. Using it, you can execute all aspects of the data science life cycle in one platform.

Moderator: Thanks a lot. The next question is about the database you used for your sales prediction and for training your models, and concretely: do you have examples or insights into how much sales data is needed to develop models with a good prediction level?

Presenter: I think there are two questions there: one about the databases, and one about the amount of data you need. The databases I'm already showing on screen: in this implementation we used a combination of S3 buckets and Snowflake. We chose S3 because it is the default data lake of AWS: cheap, easily scalable, and a perfect fit. We chose Snowflake because it was the client's enterprise standard for an enterprise data warehouse, and they wanted all the key measures generated by this forecast stored there. As for how much data you need for an accurate forecast: in this case we used three years of data, and that is ideal. Anything beyond three years is not very useful, because sales trends change too much over time. Less than three years is manageable; for other clients we have done forecasting even with one year of data, but two to three years is ideal for capturing seasonality patterns.

Moderator: A follow-up question from my side: are you agnostic if the customer wants to use another storage or data lake approach, or are you always aligned to S3 and Snowflake?

Presenter: No, absolutely not. We have executed similar solutions on other clouds as well. For instance, for another client we built a very similar solution, in that context for returns forecasting rather than demand forecasting, on GCP, using the options GCP provides, such as BigQuery and Cloud SQL. We have similar implementations on Azure as well. All of the cloud providers offer excellent options in terms of storage, as well as powerful forecasting solutions; we mix and match what is necessary for the client's context and requirements.

Moderator: I have two other questions which I will combine into one. The first is about team setup: how many people do you need for the development of such a prediction model, and which roles do you see in such a project? And combined with that: how much time did you need for fine-tuning the parameters of the model and selecting the best model?

Presenter: Team-structure-wise, we had a program manager, because this program required extensive coordination with the business users, the customer's infrastructure providers, and their data science team, so program management was a key element. Then we had a core team of data scientists, about three to four people: one senior data scientist and three data science analysts. We also had a team of about three people for the front-end web development of the AngularJS application and its fairly complex workflow, and one or two more people taking care of the data engineering aspects of the end-to-end flow. So overall about ten people went into this kind of program. Sorry, I missed the second part of the question; could you repeat it?

Moderator: No problem: just a few considerations regarding the time you needed for fine-tuning the parameters of the model.

Presenter: That's a good question. We developed the core models using Amazon Forecast within two months from the start of the program. We then spent about the next three months on fine-tuning, because we first wanted to make sure that Amazon Forecast was indeed the best choice: we created other models, such as XGBoost and random forest, which are not available in Amazon Forecast, just to check that we indeed had the winner. We then spent a lot of time identifying the features I talked about and fine-tuning the models. So I would say two months for the base model and another two to three months for fine-tuning.

Moderator: Thanks a lot, Mr. Singhampali, for your insights, and thanks a lot for the discussion.
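The champion-selection step described in the talk — train every candidate model on the history and, per SKU, keep whichever scores best on a validation window — can be sketched as follows. This is a simplified illustration, not the Infosys implementation or the Amazon Forecast API; the two candidate "models" here are trivial baselines standing in for the six real ones.

```python
# Per-SKU champion selection: score every candidate model on a hold-out
# validation window and keep the one with the lowest MAPE for that SKU.
# The two candidate models are trivial baselines used only for illustration.

def naive(train, horizon):
    """Repeat the last observed value."""
    return [train[-1]] * horizon

def seasonal_naive(train, horizon, season=4):
    """Repeat the last full season."""
    return [train[-season + (h % season)] for h in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

CANDIDATES = {"naive": naive, "seasonal_naive": seasonal_naive}

def select_champion(series, validation_len=4):
    """Return (best model name, per-model MAPE scores) for one SKU."""
    train, valid = series[:-validation_len], series[-validation_len:]
    scores = {name: mape(valid, model(train, validation_len))
              for name, model in CANDIDATES.items()}
    return min(scores, key=scores.get), scores

# A seasonal, lumpy SKU: the seasonal baseline should win on validation.
sku_demand = [10, 40, 15, 50, 12, 42, 16, 52, 11, 41, 15, 51]
champion, scores = select_champion(sku_demand)
print(champion)  # seasonal_naive
```

Running this per SKU, as the talk describes, lets each product's demand profile pick its own best model rather than forcing one model across the whole catalog.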