Collaborate on Invoice Template AI for Support with Ease Using airSlate SignNow
Move your business forward with the airSlate SignNow eSignature solution
Add your legally binding signature
Integrate via API
Send conditional documents
Share documents via an invite link
Save time with reusable templates
Improve team collaboration
See airSlate SignNow eSignatures in action
airSlate SignNow solutions for better efficiency
Our user reviews speak for themselves
Why choose airSlate SignNow
- Free 7-day trial. Choose the plan you need and try it risk-free.
- Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
- Enterprise-grade security. airSlate SignNow helps you comply with global security standards.
Learn how to streamline your task flow on the invoice template ai for Support with airSlate SignNow.
Searching for a way to streamline your invoicing process? Look no further, and follow these quick guidelines to effortlessly work together on the invoice template ai for Support or ask for signatures on it with our easy-to-use platform:
- Create an account by starting a free trial, and log in with your email credentials.
- Upload a document of up to 10 MB that you need to eSign from your device or the cloud.
- Proceed by opening your uploaded invoice in the editor.
- Perform all the necessary steps with the document using the tools from the toolbar.
- Select Save and Close to keep all the changes made.
- Send or share your document for signing with all the needed recipients.
The invoice template ai for Support workflow has just become easier! With airSlate SignNow’s easy-to-use platform, you can upload and send invoices for electronic signatures in minutes. No more printing, signing by hand, and scanning. Start our platform’s free trial and let it streamline the entire process for you.
How it works
airSlate SignNow features that users love
Get legally-binding signatures now!
FAQs
- How do I edit my invoice template ai for Support online?
To edit an invoice online, just upload or select your invoice template ai for Support on airSlate SignNow’s platform. Once uploaded, you can use the editing tools in the toolbar to make any required modifications to the document.
- What is the most effective platform to use for invoice template ai for Support processes?
Considering various services for invoice template ai for Support processes, airSlate SignNow is distinguished by its easy-to-use layout and comprehensive capabilities. It optimizes the whole process of uploading, modifying, signing, and sharing forms.
- What is an electronic signature in the invoice template ai for Support?
An electronic signature in your invoice template ai for Support refers to a safe and legally binding way of signing documents online. This allows for a paperless and effective signing process and provides extra data protection.
- How do I sign my invoice template ai for Support online?
Signing your invoice template ai for Support electronically is quick and easy with airSlate SignNow. To start, upload the invoice to your account by selecting the +Create -> Upload buttons in the toolbar. Use the editing tools to make any required modifications to the document. Then, select the My Signature option in the toolbar and choose Add New Signature to draw, upload, or type your signature.
- How can I make a specific invoice template ai for Support template with airSlate SignNow?
Making your invoice template ai for Support template with airSlate SignNow is a fast and convenient process. Simply log in to your airSlate SignNow account and click on the Templates tab. Then, choose the Create Template option and upload your invoice file, or select the existing one. Once modified and saved, you can conveniently access and use this template for future needs by picking it from the appropriate folder in your Dashboard.
- Is it safe to share my invoice template ai for Support through airSlate SignNow?
Yes, sharing documents through airSlate SignNow is a safe and reliable way to work together with colleagues, for example when editing the invoice template ai for Support. With features like password protection, audit trail tracking, and data encryption, you can trust that your files will stay confidential and protected while being shared online.
- Can I share my files with others for collaboration in airSlate SignNow?
Certainly! airSlate SignNow provides multiple teamwork options to help you collaborate with others on your documents. You can share forms, set permissions for modification and viewing, create Teams, and track modifications made by team members. This allows you to collaborate on projects, saving effort and optimizing the document signing process.
- Is there a free invoice template ai for Support option?
There are multiple free solutions for invoice template ai for Support on the web with different document signing, sharing, and downloading limitations. airSlate SignNow doesn’t have a completely free subscription plan, but it provides a 7-day free trial to let you test all its advanced capabilities. After that, you can choose a paid plan that fully meets your document management needs.
- What are the pros of using airSlate SignNow for electronic invoicing?
Using airSlate SignNow for electronic invoicing accelerates document processing and decreases the chance of human error. Additionally, you can track the status of your sent invoices in real-time and receive notifications when they have been viewed or paid.
- How can I send my invoice template ai for Support for electronic signature?
Sending a file for electronic signature on airSlate SignNow is fast and straightforward. Simply upload your invoice template ai for Support, add the needed fields for signatures or initials, then tailor the message for your invitation to sign and enter the email addresses of the recipients accordingly: Recipient 1, Recipient 2, etc. They will receive an email with a link to securely sign the document.
What active users are saying — invoice template ai for support
Related searches to Collaborate on invoice template ai for Support with ease using airSlate SignNow
Invoice template ai for Support
Hello everyone, and thank you for joining us for today's session. My name is Es Skia, and I'm one of the event planners for the Redmond Reactor space. Before we get started today, I have a few things we'd like to go over. Please take a moment to read our code of conduct. Basically, it says that we seek to provide a respectful environment for both our audience and presenters. We definitely encourage engagement in the chat, but we ask that you be mindful of your commentary, remain professional, and try to stay on topic if possible. We're going to be sharing useful links throughout the chat, so please keep an eye out for those, and know that this session is being recorded and will be available on demand in about 24 to 48 hours on the Microsoft Reactor YouTube channel. Which brings us to today's session: it's going to run approximately one hour, and there will be time for questions throughout. With that said, I'll now hand it over to our speakers.

I have good news and I have bad news for everyone here. The good news: we have another Make Azure AI Real session. It's going to be exciting. We have Nitya here, and we're going to be talking about customization and fine-tuning; I think it'll have a lot of energy. The bad news is that this is our final Make Azure AI Real of this season, season two. We've done a lot: we've built AI agents, we've worked with small language models, we've worked with the Mistral models and the Cohere models. We've done it all, almost, in terms of what you can do with generative AI, except fine-tuning. So I'm super excited. Nitya, welcome to the Make Azure AI Real show as our final guest.

Oh, thank you so much. This is the best way to end the year, and the stuff you do is amazing; the amount of community engagement around this is awesome.

I appreciate that. We could talk about this for hours, but we only get one, and we're going to focus on fine-tuning. If this is your first time joining us, we have a collection; we'll throw that link at the bottom of this page, and it has all the material we've covered throughout the whole season. What we want to focus on in this show is not just talking AI but doing AI, through code samples, documentation, and hands-on learning modules, all in this collection we put together over the season. But the floor is yours, Nitya. We'd love to end it with a bang in the world of fine-tuning. No pressure, but tell us: what is fine-tuning, and how do we get started?

Yeah, so you know me, I love this stuff, so I'm going to jump right in. I deliberately called this an AI engineer's guide to AI customization and fine-tuning, because I think we're all seeing a lot of hype around the AI engineer, and I thought I'd structure the talk in an open-ended way and see where it goes. First I want to start with: what is this AI engineer? There's a rise of this engineer, and I see it as a role where things like AI customization become key parts of the role's toolkit. So I'll quickly set the stage for who you might be and how you might skill up as an AI engineer, what AI customization is, how it fits into that toolkit, and why it's useful. Then we'll dive into model fine-tuning as one of the elements of the AI customization journey.
Then, of course, we've got to talk about AI Foundry, so we'll look at what Azure AI Foundry is doing to help you do AI customization through various tools. And last but not least, we'll look at some of the new things that came out at Ignite and leave you with resources you can play with.

Now, if you know me, you know I love visuals; I love telling stories in ways that help you remember, so some of these slides might be familiar if you've done Generative AI for Beginners. Hopefully you'll still find them interesting. So, the rise of the AI engineer. I really liked this graphic shared by Amanda Silver, the CVP of DevDiv; I blocked out the little card in the middle to make the case about where the AI engineer fits in. Traditionally, we had two different groups of people in the AI/ML space. There were the people who made the model, the data scientists and machine learning engineers, who had a lot of depth of experience in research tools, algorithms, and data gathering. And on the other side you had the software engineer, who said, "Fine, when you finish, give me the model and I'll deploy it, and then I'll build the software stack around it and build applications." But generative AI threw us all for a loop, and it creates this bridge persona that we look at as the AI engineer. They can talk to both sides: on one hand the people who are building the foundation models, on the other the people who want to ship products around those models. The AI engineer becomes the bridge. We need to know a little bit about both worlds, and then we also need to learn and skill up on all the things that make this generative AI space different.

There's a whole bunch of things in that space, but here's how I personally think about it. My mentality these days is: I need to build myself an AI engineer toolkit. What would I have in there? I like to think of it as a journey from catalog to code to cloud. The catalog is where we all start: the brains of every AI application is a model, and a year ago, hard as it is to believe now, we had maybe a handful of models to choose from. Now we have 1,800 models in the Azure catalog and a million variants on Hugging Face, so there's decision fatigue: how do we pick? Once we get past that, there's a code loop where we look at how to improve the model's performance for our particular application and how to assess it. And last but not least, you need to deploy it, which is where the cloud part comes in, because there are so many things to do that a seamless end-to-end platform just makes life easier.

So today I really want to talk about the fine-tuning part of that stack, and we're seeing this term "AI customization," so let me start there.
If you haven't checked it out (the QR code came and went), there's a talk from Ignite this year that I hugely recommend you watch, because it helped me get a really good sense not just of where AI customization fits in, but of where the Azure AI Foundry platform is evolving to make this almost a first-class capability for all of us. I'll skip the talk itself, but I took some of its slides to set the stage.

So let's talk about what fine-tuning is. Many of us come at fine-tuning from different angles. In traditional data science and machine learning, we think about predictive models with very discrete datasets. But today, fine-tuning means customizing a pre-trained large language model by giving it additional training on a specific dataset. I might want a domain-specific model, or I might want to tune it for a particular task or set of instructions: "hey, become a better tech-summarization model," as an example. The Azure OpenAI service uses low-rank adaptation (LoRA) to fine-tune models: it approximates the high-rank weight matrix with a low-rank one, making it possible to fine-tune with a much smaller set of parameters. You're faster because you're tuning just the subset of capability you actually need, and that makes training both faster and more affordable.
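To make the LoRA idea concrete, here is a minimal numerical sketch of the low-rank trick just described. The dimensions and values are illustrative, not taken from any Azure service internals:

```python
import numpy as np

# LoRA-style update: instead of training a full d x k weight delta, train
# two low-rank factors B (d x r) and A (r x k), with r << min(d, k).
d, k, r = 1024, 1024, 8

full_delta_params = d * k        # parameters in a full fine-tune of this layer
lora_params = d * r + r * k      # parameters LoRA actually trains

print(f"full update: {full_delta_params:,} params")  # 1,048,576
print(f"LoRA update: {lora_params:,} params")        # 16,384 (~1.6%)

# Forward pass with the adapted weight: W' = W + B @ A (scaling omitted).
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k)) * 0.01   # frozen pre-trained weights
B = np.zeros((d, r))                     # zero-init so W' == W at the start
A = rng.standard_normal((r, k)) * 0.01
x = rng.standard_normal(k)

y = (W + B @ A) @ x                      # adapted layer output
```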
If you remember the AI engineer journey and map it onto the generative AI path, it turns out that when we think about AI customization, we're really seeing a path that takes a basic pre-trained model and customizes the way we use it, balancing accuracy against cost and complexity. We all know prompt engineering; the very first step is to say, "I'll just take the model, but I'll design how I ask questions." That can be as simple as: I understand my domain, I want the format to be a certain way, I know the keywords, and so on. It's crafting the prompt: taking what the user asked and framing it so the model gets more context.

But then you reach a point where you say: these large language models are trained on a lot of data and are very good at generic stuff, but I want the responses grounded in my data. So you get the next step, retrieval-augmented generation, which I think of as prompt engineering plus data, because you're still not modifying the model. You take the prompt, but you do a couple of extra steps: you use tools to grab contextual knowledge that you embed along with the original prompt, so the large language model uses it as higher-value grounding context. This is really useful for most enterprise applications. You might have heard of Contoso Chat; it's one of the applications I help maintain and that we've done workshops on, and it's really the signature RAG app for AI Foundry. What you realize quickly is that RAG is a great starting point for bringing in data, but at some point you see trade-offs: with few-shot prompting you're giving the model examples, and you're pulling lots of data from a growing knowledge base, which grows the size of your prompt, yet you have constraints in the context window.

This is where we progress toward fine-tuning. At some point the trade-off is: prompt engineering and RAG alone are not good enough; I want to retrain the model, or adapt it with my own data, and teach it new skills. And if, once you get past fine-tuning, quality is still not improving, you reach the really hard part where you say, "Fine, I'll build my own model from scratch." This continuum takes us from a very lightweight cost factor, where quality may only improve iteratively, all the way to something where I control everything.

So where does fine-tuning fit? I like that they used this example, because Contoso Chat, if you're familiar with it, is about an enterprise retailer who sells hiking and camping equipment, and that scenario fits well. Say someone asks, "Hey, will my sleeping bag work for a trip to Patagonia next month?" That's the prompt coming into the large language model. If I did nothing else and used the default LLM, maybe GPT-4, I could put this question into ChatGPT, and what happens? It won't give any responses based on my retailer catalog. So first comes basic prompt engineering: this is a customer service bot, so I need a certain tone and style, a very customer-centric, polite, courteous bot; I need to give it examples of how to respond (address customers by name, maybe bring up other things they've bought as a way to upsell); and I need intent mapping: what were they actually trying to do, what instruction am I extracting?

Once you've done the basic prompt engineering, which we think of as ideation, the next part is shaping the data. Default prompt engineering sets the stage for the question being asked of the final LLM; retrieval (RAG) is where you plug in third-party integrations. Maybe I need a tool that fetches next week's weather from Bing; maybe I want to look up the customer's purchase history and find things that align with what they've already bought; perhaps there's a discount I want to promote this month. That's the retrieval/RAG part. But once you do that, you've bloated up your prompt. At some point you say, "Okay, I'll fine-tune this LLM," because fine-tuning lets you do several things: you can shrink the prompt where you were pulling in all that information; you can show the model instead of telling it, literally giving it more examples in training than you could ever fit into a prompt; you can improve accuracy by running more rigorous evals while fine-tuning and only releasing the model when it's ready for production; and you can shape how it uses the retrieved data during retraining. The idea is that the output, when you ask that question, comes back as a much richer response.
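As a concrete illustration of the prompt-engineering-plus-RAG step just described, here is a minimal sketch of building a grounded prompt for a scenario like the one above. All names here (search_catalog, get_weather, the product strings) are hypothetical stand-ins, not the actual Contoso Chat code:

```python
# Hypothetical sketch: assemble a grounded chat prompt for a retail assistant.

def search_catalog(query: str) -> list[str]:
    # Stand-in for a vector-search call against the retailer's product index.
    return ["TrailMaster Sleeping Bag: rated to -5 C, 3-season"]

def get_weather(location: str, timeframe: str) -> str:
    # Stand-in for a weather-tool call (e.g., a Bing grounding plugin).
    return "Patagonia next month: lows around 3-8 C, strong winds"

def build_prompt(user_question: str, customer_name: str) -> list[dict]:
    context = "\n".join(search_catalog(user_question))
    weather = get_weather("Patagonia", "next month")
    system = (
        "You are a polite, customer-centric assistant for an outdoor retailer. "
        "Address the customer by name, answer only from the provided context, "
        "and suggest relevant add-on products where appropriate.\n\n"
        f"Catalog context:\n{context}\n\nWeather context:\n{weather}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{customer_name}: {user_question}"},
    ]

messages = build_prompt(
    "Will my sleeping bag work for a trip to Patagonia next month?", "Ana"
)
```

Notice how quickly the system prompt grows as more context gets stuffed in; that growth is exactly the pressure that pushes teams from RAG toward fine-tuning.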
So why is AI customization useful? We've talked about what it is: a continuum of steps where you take a pre-trained model and add things to it so that the response quality gets better. But why do it? There are several reasons. The first is scalability. When you start prototyping, you're fine using an existing model: a couple of calls, a small prompt, testing things out. But as you scale to production, across all the possible users and all the possible data inputs or questions that could occur, you need a way to customize and scale to your vertical domain or your broader customer base. Second is hallucinations. Large language models are trained on a certain amount of knowledge, potentially with an expiry date, and they don't know the information inside your enterprise, because that's private data. By tailoring the model with your data, it's less likely to give irrelevant responses. You also increase reliability, because you can actually tune and evaluate those fine-tuned models. And you can improve efficiency: you cut the cost per prompt, and, with trade-offs of course, you potentially use fewer tokens while getting higher-quality responses, which in turn makes your users more productive.

So what are the use cases? We've mentioned reducing prompt length; consider also teaching the model a new skill. There was an interesting example I saw where the new skill is understanding cultural nuances: the same question asked in America may warrant a different answer than one asked in India, because the wording carries different nuances in different countries. You might want to improve tool use. You might want domain adaptation, in areas like healthcare, where you can't afford to send out unreliable information, or where you really need to understand the nuances of the symptoms and causes raised in a question. The vertical use cases they call out include natural language to code. If you've used Copilot or any of today's code-generation models, you know they're trained on popular languages like Python and JavaScript, but they may not be as good at an edge case, say Kotlin or Rust. If you want a model that understands that specific language and codes in it, fine-tuning is the way to go. The same applies to spoken languages: translations, dialects, cultural nuances. There's also style and formatting, a use case they discussed at Ignite: in doctor-patient consultations you collect a lot of data, and doctors want answers in a very clear presentation format. They don't have time to read reams of text; they want sharp information with the right format, the right template, and actions. And of course, customer-specific knowledge and intuition.

So we've covered the AI customization journey: prompt engineering, then adding data for RAG, and, if that's not enough, going in and fine-tuning the model with your own data.
But what does that process actually look like? Before we get started, I want you to understand the bigger picture of the end-to-end flow, so I'm going to switch over; you can grab those two URLs if you like. This is a notebook, and at the end I'll make sure we have the link to the repo if you want to play with it yourself. It's a repo with a very simple tutorial, also available on our Microsoft Learn site, that walks you through fine-tuning GPT-4o mini.

First, the big picture. When you're fine-tuning, the first thing to understand is: what is my reason for fine-tuning? Am I doing it for the right reasons, or am I just following a trend? Because when you fine-tune a model, you take on trade-offs, starting with cost. Without fine-tuning, you pay for the additional prompt length; with fine-tuning, you actually have three kinds of cost: the compute cost of fine-tuning the model itself, the cost of hosting that model (which can be non-trivial), and the cost of using the model. So think about your cost-quality trade-off.

Once you've decided, the process consists of four steps. First, you retrain an existing pre-trained model with your data: where is the data coming from, what does it look like, and how do we make sure it's representative, correct, and valid for your use case? That's data preparation; once you have it, you upload it into Azure. Next, find the model: is there a model for your use case that you can fine-tune, and is it available in the region you want to use it in? Then you upload the data, run the compute, train it, and evaluate whether the retrained model meets your quality criteria. At this point you realize you may want to train several times over multiple rounds, so you want to checkpoint those results, and you also want to be able to consistently repeat the same weights and parameters when you fine-tune; there are ways to do that. Last but not least, you deploy and use it.

So let's look at what happens. The first step is data preparation, but before that, our actual first step (in this case we're using an Azure OpenAI model, so we don't think about it much, but in reality you need to know whether the existing pre-trained model allows fine-tuning at all) is the model catalog. If you haven't visited the Azure AI Foundry portal, you can explore the model catalog without an Azure account; just go to azure.com and explore models. You'll find a drop-down that says fine-tuning tasks; for example, if I want chat completion, I can look at the models that let me fine-tune them for that inference task. In this case I want to use GPT-4o mini.
I can then look at what that model provides, and also which regions allow me to fine-tune it, so that when I create my project I can deploy it and make it available in the right region. (That page is taking a while to load, so let's look at the code instead.) If you want to follow along, the slides have two URLs: the first takes you to the model catalog so you can explore it, and the second takes you to the tutorial I'm walking through, which fine-tunes GPT-4o mini on Azure AI Foundry. The difference between this and other tutorials you might see is that this one is code-first.

Before you begin, we ask three questions. Is AI customization justified? In this demo we use a toy example, but really you want to evaluate benefits and trade-offs. Is it viable? You want to see whether there's a model that fits your needs and whether you have data available to train it. And is it successful? For that you look at metrics, insights, and evaluation.

So let's switch into the demo. I ran this ahead of time, so we can flip back and forth and see what it looks like in the portal. It starts with a very simple first step: install the openai package along with a few other Python packages. We're doing this code-first, and you need two things. In this walkthrough I'm using the approach where you set up an Azure AI project and an Azure AI hub on the Azure AI Foundry platform via the portal. I have that set up already; in the Foundry you'll see I've got a project and a hub, and this can be done essentially low-code: go in, create a new project, give it your details, and you're done. The main thing is to pick a region your model is available in; GPT-4o mini, for instance, is available in North Central US and Sweden Central, so make sure your resource group and everything else are set up in the right region. Next, you get an API key and an endpoint. Once you've set up the hub and resource, the API key and endpoint are available in the overview; grab them and put them into your environment variables. You'll notice I have an environment file here, and this sets your environment up to be ready for fine-tuning.
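Here is a minimal sketch of that setup step, assuming the endpoint and key were copied from the Foundry overview page into environment variables (the variable names below follow the Azure OpenAI tutorials; your own .env file may use different ones, and the API version is an assumption, so pin whatever your tutorial specifies):

```python
import os
from openai import AzureOpenAI

# Client for all subsequent fine-tuning calls in this walkthrough.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-08-01-preview",  # assumption: use your tutorial's pinned version
)
```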
The next step: we've picked GPT-4o mini, but we want to fine-tune it with a dataset, and this is where it's important to format and collect the data in a way that can be validated. Here we use the format known as JSONL: each line of the file contains one example of fine-tuning data that matches the OpenAI specification. We give it two files, a training set and a validation set. Each JSONL line is a combination of an input and a response; you're giving it pairs, and those pairs mimic the format accepted by OpenAI. This one riffs on the OpenAI sample: instead of Marv the sarcastic chatbot, it's Clippy, but you can take the sample and change the input-response pairs to match anything else you want. If you go to our Generative AI for Beginners chapter 18 sample on GitHub, you'll notice we do something different there: the examples are limericks, so every time you ask a question, the model responds in a limerick. For now, you can use the sample that's in here.

Once you've created this dataset, you split it in two: a training set and a validation set. Here I have only about 10 to 12 examples; in reality you'll need hundreds, if not thousands, depending on the specialization, accuracy, and skilling you want. This is a toy example, and even with 10 to 12 examples it takes about an hour to fine-tune everything, but you can play with other ideas afterward. So you create two data files, a training set and a validation set (the examples are given to you, but you can write your own), put them both in the same directory as this notebook, copy over the contents from the tutorial, and run it. Walking through what the code does: first we load the training dataset and validate that we have everything we need, then we load the validation dataset and make sure we have valid data in both; currently you'll see 10 samples in each.

Next, and this is a completely optional step, you can get a rough estimate of the token count for the whole process, code-first. It's not perfect, but it gives you a metric to estimate what your cost could be. We have only 10 examples of each, and it uses OpenAI's tiktoken package to estimate the token counts for each of those input-output pairs in training. If you run it, it comes back with the distribution of total tokens, the assistant token count, and an estimate of what your run will use.
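To make the data format and token estimate concrete, here is a sketch of two JSONL training lines plus a rough tiktoken count. The example contents are illustrative stand-ins for the tutorial's Clippy-style dataset, not the actual file, and the encoding name is an assumption (GPT-4o models use the o200k_base encoding):

```python
import json
import tiktoken

examples = [
    {"messages": [
        {"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."},
        {"role": "user", "content": "Who discovered Antarctica?"},
        {"role": "assistant", "content": "Some chaps named Bellingshausen and Lazarev. Thrilling, I know."},
    ]},
    {"messages": [
        {"role": "system", "content": "Clippy is a factual chatbot that is also sarcastic."},
        {"role": "user", "content": "What is the deepest ocean trench?"},
        {"role": "assistant", "content": "The Mariana Trench, in case you were planning a swim."},
    ]},
]

# Write one JSON object per line -- that is all "JSONL" means.
with open("training_set.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Rough per-example token count (content only, so it undercounts slightly).
enc = tiktoken.get_encoding("o200k_base")
for ex in examples:
    n = sum(len(enc.encode(m["content"])) for m in ex["messages"])
    print(f"~{n} tokens")
```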
The next thing you do is upload your fine-tuning files to Azure AI Foundry. You've already set up a hub and a project, so at this stage you just create an Azure OpenAI client using the endpoint and API key from the environment variables and the default API version, give it the two file names you want to upload, and upload them. When it's done, you get back the two IDs of the files, which now sit in blob storage for you to use in fine-tuning. At this point you're ready to begin the fine-tuning process.

Let me switch over to the portal for a second to show you what that looks like. I ran this earlier this morning, and by the way, one thing to know: when you do fine-tuning, run the models, and leave jobs sitting there, you pay twice. You pay for the compute, but you also pay for hosting. So if you follow along at home, don't forget to delete your deployments when you're done, because you'll keep paying to host that model, and that can be non-trivial. When you upload the data and start the fine-tuning process, go to the fine-tuning tab; you'll notice there's nothing under AI service fine-tuning, but under the generative AI fine-tuning tab you'll have a job that gets started, and the logs tell you step by step what's happening in your fine-tuning task. The very first phase is pre-processing.

Back in the code: when we begin fine-tuning, we use the client we set up previously, provide the training file and the validation file, specify the model we want to use, and do one interesting thing: we provide a seed. The seed is a parameter the service can use to give you an almost reproducible set of fine-tuning jobs, so if you repeat this again and again, you're likely to get a similar setup. Once you've created and submitted the fine-tuning job, it runs for a fairly long time, and you'll want to check its status at any given point. The API gives you back the job ID and status, and you can track the job interactively by polling it regularly; that's how you see from your code environment whether the job is still running. What I actually like to do is watch it in the portal, because there you get the step-by-step of what's happening. I started this at 8:30 in the morning, and initially there was a flurry of activity as it uploaded things and pre-processed; by the way, this is the point where any issues with your data that fail validation get discovered, so you can fix them. Once training starts, it took almost 40 minutes with very little feedback on the client side, but the portal gives you a sense of what's happening: training started, it created a results file to store the outputs of the training job, and you can see it runs in epochs, a certain number of steps at a time.
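Here is a minimal sketch of the upload, job-creation, and polling loop just described, reusing the client from earlier. The file names and seed value mirror the toy tutorial; the exact model-version string is an assumption, so use the fine-tunable version available in your region:

```python
import time

# Upload the two JSONL files; the returned IDs point at blob storage.
training = client.files.create(file=open("training_set.jsonl", "rb"), purpose="fine-tune")
validation = client.files.create(file=open("validation_set.jsonl", "rb"), purpose="fine-tune")

# Create the fine-tuning job; the seed makes reruns almost reproducible.
job = client.fine_tuning.jobs.create(
    training_file=training.id,
    validation_file=validation.id,
    model="gpt-4o-mini-2024-07-18",  # assumption: region-available fine-tunable version
    seed=105,
)

# Poll until the job leaves the in-progress states (the portal shows richer logs).
while True:
    job = client.fine_tuning.jobs.retrieve(job.id)
    print(job.status)
    if job.status not in ("validating_files", "queued", "running", "pending"):
        break
    time.sleep(60)

print("fine-tuned model:", job.fine_tuned_model)
```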
For each of those steps, it runs one iteration and validates against your validation dataset to see whether the accuracy and other parameters are acceptable, and if not, it goes again. One of the nice things, which you don't see until the entire job is done, is that in the AI Foundry portal you can then see a visualization of this process. Every one of the little red dots corresponds to one of those checkpoints, and as it keeps going you can see the loss reducing, so it's getting better, until it reaches a point where there isn't much differentiable improvement happening, and it says, okay, I'm done; this is the version of the model you have. You can also see that it creates checkpoints periodically and keeps the last three. So although the metrics show a whole bunch of checkpoints (all the little pink dots), it keeps the last three, and one valuable aspect is that you can deploy any one of those checkpoints and test against it. That lets you roll back to a previous checkpoint if you find the last one wasn't right for you, or go back to a previous checkpoint and retrain from there.

Back in the code: after you submit the job, you're essentially letting the compute job run to completion. It uses the training data you gave it to retrain the model, it uses the validation data to test accuracy, and once it reaches an acceptable level, it comes back with the ID of the fine-tuned model. If you want to see the details, the actual events that happened, there's an API for that as well, so you can see pretty much everything I showed you in the portal via code, for those who prefer a code-first approach.

But let me jump to the end result. When the fine-tuning job finishes, you get back a valid model ID. That fine-tuned model ID is now just like any other model in the catalog; it's not deployed yet, but it's a fine-tuned model ready to be deployed, almost like your own instance in the catalog. So the last thing you need to do is deploy it, so you can test it or validate it with a real-world application. If you've been following the tutorial on Microsoft Learn, there are a couple of tiny tweaks you'll have to make, but the easy way is to go into the portal and hit deploy. Once fine-tuning is done, your model shows up; you can look at it, open it in a playground, test it out, see the versions of your models, pick one, and deploy it directly from the portal. Once deployed, it shows up under your models and endpoints. In our case we're doing it through code, but either way, once training and deployment are done, you'll see the model there and can open it in a playground just as you would with any other model.
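For the code-first route mentioned above, here is a sketch of inspecting the job's events and checkpoints from the notebook instead of the portal. Both calls exist in recent versions of the openai Python SDK, though field names are worth double-checking against your installed version:

```python
# Event stream: the same step-by-step log the portal shows.
events = client.fine_tuning.jobs.list_events(fine_tuning_job_id=job.id, limit=10)
for event in events.data:
    print(event.created_at, event.message)

# Checkpoints: each retained checkpoint carries its own model ID, so any of
# the last three can be deployed in place of the final model.
checkpoints = client.fine_tuning.jobs.checkpoints.list(job.id)
for cp in checkpoints.data:
    print(cp.step_number, cp.fine_tuned_model_checkpoint)
```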
To deploy the model, you need to know the subscription and the resource group in which this particular Azure project was created. And here's the one thing to keep in mind: when you look at the Azure AI Foundry tutorials, you'll find two kinds, ones that focus on Azure OpenAI specifically and others that let you use non-OpenAI models as well. In my case, I'm set up to use the non-OpenAI route even though the model we picked for this tutorial is an OpenAI one; if you try the Azure OpenAI-specific approach, you'll find it unlocks other capabilities in the Foundry as well. The resource name you give is that of the Azure AI Services resource, which underneath actually contains the Azure OpenAI service endpoint too. To deploy the fine-tuned model, you get a temporary auth token; the instructions tell you how, but basically you log into Cloud Shell in your account and get a temporary token your code can use. Then you give it the subscription ID, the resource group, and your resource name, along with the model deployment name. And here's the most important part: you specify the ID of the fine-tuned model you want to deploy, so if you want to try a different checkpoint, you can do that here as well. Once you deploy, you get back a resource URL that you can use from your code.

In the next step, we use that endpoint to try an interactive request right from Visual Studio Code. There's a sequence of conversational history, giving it a couple of examples, and then we say, okay, I have this question for you, and it comes back with the response we're looking for. You'll notice it's been tuned to be a polite but semi-sarcastic chatbot. You can try other examples if you want (and remember to delete the deployment later). Back in the portal, once it's deployed, you can go into your models and endpoints, see it there, and do two things: go into the metrics and see how you're performing in terms of requests, latencies, failures, and so on; or open it in the playground, just as you would any other model you deploy, and try out queries. For example, we can ask a generic question; this isn't RAG, but the point is that we're able to use this fine-tuned model, get responses back, and then iterate on the prompt templates, or decide it needs fine-tuning with a different dataset to change the behavior.

So that was a fast tour of using a code-first approach, the Azure SDK and the Python SDK in particular, to run your entire fine-tuning workflow end to end from a notebook in Visual Studio Code. You can also do all of this via the portal or via the CLI.
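Here is a hedged sketch of that deployment step under stated assumptions: the control-plane REST call below mirrors the Azure OpenAI fine-tuning tutorial's pattern, the token comes from Cloud Shell (for example, az account get-access-token), and the deployment name, API version, and SKU details are placeholders you should check against current docs:

```python
import requests

token = "<temporary-auth-token>"        # from Cloud Shell, per the tutorial
subscription = "<subscription-id>"
resource_group = "<resource-group>"
resource_name = "<azure-ai-services-resource>"
deployment_name = "gpt-4o-mini-ft-demo"  # hypothetical deployment name

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{resource_name}/deployments/{deployment_name}"
)
body = {
    "sku": {"name": "standard", "capacity": 1},
    "properties": {
        "model": {
            "format": "OpenAI",
            # The fine-tuned model ID from the job, or a checkpoint ID instead.
            "name": job.fine_tuned_model,
            "version": "1",
        }
    },
}
r = requests.put(url, json=body,
                 params={"api-version": "2023-05-01"},
                 headers={"Authorization": f"Bearer {token}"})
print(r.status_code, r.json())

# Once deployed, call it like any other chat model via the data plane.
response = client.chat.completions.create(
    model=deployment_name,
    messages=[{"role": "user", "content": "Who discovered Antarctica?"}],
)
print(response.choices[0].message.content)
```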
If you're interested in trying different routes, these are the three resources to know. The first is guidance on when you should do fine-tuning; there may be reasons why it's not the appropriate solution for you, so check out the best practices for when to use Azure OpenAI fine-tuning. The second link takes you to the Azure OpenAI model availability page, which shows which versions of the OpenAI models are available for fine-tuning and in which regions. And the last is a tutorial showing three different approaches: do the entire fine-tuning in the portal, do it from the SDK (the example you just saw), or use the REST API from the command line.

Before I close out, let's talk about what we're seeing in AI Foundry, the things released at Microsoft Ignite. First, we can now fine-tune GPT-4o and GPT-4o mini in AI Foundry. If you have an account, go check it out today; the notebook I'll publish uses the mini model, but you can retrofit it for the others. More interestingly, and I found this very cool, we can also do vision fine-tuning with Azure OpenAI. The demo and notebook were published in a blog post (the URL is on the slide), and it was also the default demo in the breakout session we looked at earlier. It lets you do vision-based fine-tuning, refining how questions about the images in a dataset are asked and answered: the dataset is not just question-answer pairs but also includes image URLs, so you can refine a multimodal application. This is super cool; I haven't tried it yet and I really want to, but the tutorial is at that link.

Now, a look at what's new in AI Foundry overall. There's a video here; paraphrasing its narration: AI Foundry brings models to your business with prompt engineering, retrieval, fine-tuning, and custom models, without complex GPU setups or Python expertise. Serverless fine-tuning makes advanced techniques more accessible, and a simplified process lets you train models with smaller, task-specific datasets so they know the nuances of your domain. You can initiate and compare fine-tuning jobs simultaneously to produce AI apps that are as relevant, efficient, and accurate as possible. Even the latest models are generalists, and to get the most out of them we need to teach them about our business; digital distinction is now about the quality of our AI experiences, not raw materials alone but the craftsmanship.

What was really exciting to me about these announcements is not just that Azure AI Foundry is now a one-stop shop for all our fine-tuning, prompt engineering, and RAG needs, but that a whole bunch of new capabilities have come in. You saw the vision fine-tuning demo; definitely go check it out in the breakout session.
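To make the vision fine-tuning data shape concrete, here is an illustrative sketch of a single multimodal training line, following the chat format with image_url content parts that the Azure OpenAI vision fine-tuning docs describe. The URL and labels here are hypothetical:

```python
import json

vision_example = {
    "messages": [
        {"role": "system", "content": "You identify camping gear in photos."},
        {"role": "user", "content": [
            {"type": "text", "text": "What product is shown in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/images/tent-001.jpg"}},
        ]},
        {"role": "assistant", "content": "A 2-person dome tent (TrailMaster series)."},
    ]
}

# Appended to the same one-object-per-line JSONL format as the text examples.
with open("vision_training_set.jsonl", "a") as f:
    f.write(json.dumps(vision_example) + "\n")
```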
There are also Weights & Biases and Gretel integrations, which I'll only briefly allude to (the demos are in the breakout session as well), plus provisioned global standard deployments. But let me talk about distillation. One of my colleagues, Cedric Vidal, has a really nice blog post, a webinar, and a repo you should look at if you want to learn how to do distillation, but here's the bigger concept behind it. If you want to fine-tune something today, you've got a pre-trained large language model and you're trying to fine-tune it with your dataset, but there can be a lot of cost involved. What if you could transfer that knowledge to a smaller, cheaper model: take a large language model and transfer its expertise to a small language model? Distillation is the process of using the large language model as the teacher and a smaller model as the student. The teacher runs the questions and produces the responses, and those question-answer pairs are then fed as the training data for fine-tuning the smaller model, so the student is effectively trained only on the knowledge given to it by the teacher. The reasons are multiple. First, cost: you end up with a much smaller model, trained specifically on the things relevant to your domain. And because it's smaller, you can run it at the edge, on a phone, or in multi-agent architectures and complex orchestrated workflows.

Doing distillation takes three steps. First, pick a small language model you can fine-tune, and a large language model capable of generating those input-output response pairs. Then take the stored completions (a feature OpenAI now provides): run the large language model, store the data coming out of it, and provide that as training data to the small language model. And last but not least, put this into a loop with an evaluation strategy that looks at how well the fine-tuned small language model performs compared to the original. I think this is going to be a really interesting place for all of us to play, particularly for specialized models in healthcare, retail, finance, and so on.
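Here is a minimal sketch of that teacher/student loop. The model names are placeholder deployment names, and for simplicity the "stored completions" step is simulated by writing the teacher's outputs straight to a JSONL file rather than using the stored-completions feature itself:

```python
import json

TEACHER = "gpt-4o"        # hypothetical large "teacher" deployment
STUDENT = "gpt-4o-mini"   # hypothetical smaller "student" to fine-tune

questions = [
    "Summarize the key risks in this loan application.",
    "Explain APR to a first-time borrower in two sentences.",
]

# 1. The teacher generates the responses that become the training signal.
with open("distillation_set.jsonl", "w") as f:
    for q in questions:
        answer = client.chat.completions.create(
            model=TEACHER, messages=[{"role": "user", "content": q}]
        ).choices[0].message.content
        f.write(json.dumps({"messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": answer},
        ]}) + "\n")

# 2. The student is fine-tuned only on the teacher's knowledge.
data_file = client.files.create(file=open("distillation_set.jsonl", "rb"),
                                purpose="fine-tune")
student_job = client.fine_tuning.jobs.create(
    training_file=data_file.id, model=STUDENT
)

# 3. Evaluate the fine-tuned student against the teacher before adopting it.
```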
So where can you learn more? I want to wrap up by pointing at some of the really interesting tutorials and resources that came out of Ignite. I'm a big fan of markmaps, so I'm going to switch over and show you an interactive markmap; if you grab this URL, it's basically the gist of everything from the past month that's worth looking at. This is what a markmap looks like: you can scale it up and down and browse the news. September 2024 was the first time we talked about customization of LLMs; new models were introduced for Azure OpenAI, and we started down this AI customization path with fine-tuning for GPT-4o.

Next, if you look at the fine-tuning collaborations announced at Ignite, there are three things. First, Weights & Biases: with Azure OpenAI, the Weights & Biases integration lets you plug the W&B dashboard directly into your fine-tuning workflow, so as you fine-tune, the results can be pushed to the dashboard, where you can compare and contrast different versions of your application and get more detailed analysis and metrics to help you judge whether it's working well. There's also synthetic data; I think that's Gretel. There's an example with Gretel showing how to use RAG with fine-tuned data to create synthetic data pairs you can use in your fine-tuning (I need to go look at that again). And finally, there's the link to the Ignite session you should watch, with demos of Weights & Biases, the new distillation workflow, custom vision models, and the new Azure AI Foundry features and capabilities.

From a docs perspective, the main things to know are how to do fine-tuning with Azure AI Foundry, and the link to discover tunable models in the AI Foundry model catalog. What's interesting to me is that you now have tutorials for both managed compute, so you can fine-tune managed compute models from the model catalog, and pay-as-you-go (serverless) options. In the catalog you can filter by both fine-tunable models and deployment types: if we look at serverless APIs, there's a bunch of models you can use pay-as-you-go, so your costs will be slightly lower. And last but not least, there's a whole set of examples beyond Azure OpenAI; you can look at fine-tuning in OpenAI's cookbooks as well, including the distillation sample.

I'm also interested in how we fine-tune models outside the OpenAI provider space, so if that interests you, keep an eye out for more notebooks I'll be adding to this repo. I'm specifically interested in Hugging Face, which has a small language model called SmolLM; there's a really nice tutorial that teaches you how to fine-tune that model on the Hugging Face Hub. We have the SmolLM model in our catalog too, so if you try it on Hugging Face, you might be able to figure out how to do the same with the version in our catalog; and there's a much deeper set of fine-tuning tutorials and tools on Hugging Face itself. One thing I didn't cover today, but that you should check out, is a startup called Unsloth that provides fine-tuning notebooks, including fine-tuning examples for Phi-3, that are more efficient: they take less memory and run faster, and you can start from them and bring your own dataset. We also have an Unsloth fine-tuned model in the Azure model catalog today if you want to try that before you dive in.

And then, I think the last thing I'll leave you with is my own journey into this. Believe it or not, I focus a lot more on RAG and prompt engineering, and my whole journey here started
with a really nice course we have, an open-source curriculum that Corey spearheaded, called Generative AI for Beginners. If you want to look not just at fine-tuning but at the whole continuum of your AI engineer journey, definitely go check it out. Chapter 18 is fine-tuning, but if you go into the curriculum itself, you'll find earlier chapters on prompt engineering, RAG, agents, and so on.

With that, let me wrap up. We did a bunch of things today: a whistle-stop tour of what AI customization means. What I really want you to think about now is how you build your own AI engineering toolkit, taking these different skill sets and building on them with your data and different models, deploying serverless versus managed compute, and trying multiple design patterns, whether agents, RAG, or others. Start from Generative AI for Beginners, and when you're done, publish back: contribute your own insights and notebooks to the course, and come tell us what kinds of data, domains, and challenges you faced in fine-tuning. So with that, I'll end; I think I have a few minutes for questions. My goal was to take you through the idea that there's an AI engineering skilling journey; that AI customization consists of prompt engineering, RAG, and fine-tuning; that fine-tuning is where you go when you've exhausted the other two options; and that the model fine-tuning process can be done in Azure AI Foundry via the SDK, the CLI, or the portal. A bunch of new tools and techniques were integrated into the Azure OpenAI service on Azure AI Foundry at Ignite (Weights & Biases, Gretel, and more), and last but not least, there's the Generative AI for Beginners course if you want a grounding in fundamentals before you dive into fine-tuning. I'll also give another shout-out to the work done by my colleague Cedric Vidal, who does much more specialized work in fine-tuning; look for his blog post and repo on RAFT, which is RAG with fine-tuning, using Llama. And with that, I think I'm done. Let me see if there are any questions.

We have just over ten minutes left, so we're good. There was a question about sharing the Git link, which is a classic. I believe that's what's behind this QR code? No, that was just my LinkedIn, but I'm going to publish this there. Or we could put it into your Generative AI for Beginners course too? I guess I could, yeah; that would probably be good as well.

Great work. I really love how you can see the change over time: we all started this world of generative AI collectively a few years ago, and initially everyone said, "Oh, fine-tuning, don't do it, just do RAG; you'll find a way to fix it with RAG." Now, with the AI craftsmanship idea (you saw that little diamond diagram), it's great that we're not just saying "don't do it" but laying out the reasons in a very clear way for people to understand.
And with that, I think I'm done. Let me see if there are any questions; if not, Korey?

Korey: We have just over 10 minutes left, so we're good. There was a question about sharing the Git link, which is a classic. I believe that's what's behind this QR code?

Nitya: No, that's just my LinkedIn, but I'm going to publish the link there. Or we could put it into your Generative AI for Beginners course too.

Korey: I could do that, yeah; that would probably be good as well. Great work. I really love how you can see the change over time: we all started this world of generative AI collectively a few years ago, and initially everyone said, "Fine-tuning? Don't do it, just use RAG; you'll find a way to fix it with RAG." Now, with the AI craftsmanship idea (that little diamond diagram you showed), it's great that we're not just saying "don't do it" but laying out the reasons in a very clear way for people to understand. One question I have: now that we're making fine-tuning much more accessible through the Foundry, where it's just a clicking experience, that will probably lead to more people fine-tuning. You covered some of the reasons why, but where does fine-tuning go wrong? Before you click that button and commit the resources and funds, what is the one more question you should ask yourself?

Nitya: The main thing has to be the evaluation steps (I'll put that link back up), because if you don't do this right, fine-tuning will actually make your model worse: you're overriding some of the instincts of the original model. The two cases for me are these. First, ask yourself whether the cost is worth it; that is the number-one question: is it justified? The minute you fine-tune a pre-trained model, it's now your baby. If the fine-tuned model breaks, too bad, so sad: you have to fix it. If the base model gains new capabilities, you need to retrain your model. So you're paying the cost of the original training, the cost of hosting, and the cost of maintenance, because you not only have to maintain it for your own app, you have to keep it on par with the original model. Second (and I don't know if this has changed, Korey, you can tell me), these fine-tunable models are typically tied to certain regions. I'm a huge proponent of model choice; I really think we need to be able to switch models quickly, and one problem with fine-tuning is that you've invested so much time in one model that it becomes very hard to switch to something newer that comes on the market. That said, we should tip our hat to the inference API: as long as you fine-tune a model that supports the Azure AI model inference API, the same code works with both your fine-tuned model and the original one, so you could still swap your fine-tuned version out for something newer (sketched below).
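To make that swap concrete, here is a minimal sketch using the `azure-ai-inference` client. The endpoint and key environment variable names are placeholder assumptions; the point is only that identical calling code can target either deployment, as long as both expose the Azure AI model inference API.

```python
# Sketch: the same chat-completions code works against a fine-tuned
# deployment or the base model, since both speak the Azure AI model
# inference API. Endpoint/key names below are placeholder assumptions.
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

def ask(endpoint: str, question: str) -> str:
    client = ChatCompletionsClient(
        endpoint=endpoint,
        credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
    )
    response = client.complete(
        messages=[
            SystemMessage(content="You are a helpful support assistant."),
            UserMessage(content=question),
        ]
    )
    return response.choices[0].message.content

# Swapping models is just a matter of pointing at a different deployment.
print(ask(os.environ["FINETUNED_ENDPOINT"], "How do I escalate a ticket?"))
print(ask(os.environ["BASE_MODEL_ENDPOINT"], "How do I escalate a ticket?"))
```

Keeping your application code on the common inference API is also a reasonable hedge against lock-in: the exit door stays open if a better base model arrives later.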
Nitya (continuing): Last but not least (and I don't know if we're still recording, but I'll say it anyway), Hugging Face is to me the most interesting part of this, because almost everything you see on Hugging Face is a variant: a fine-tuned variant of someone else's pre-trained model. There are people who retrain models to remove things that were deliberately put in. For example, someone might say, "This model puts too many guardrails on speech; I want it to be able to curse, so I'll fine-tune that out." That can be dangerous. On the other hand, fine-tuning has been a really big game changer, in my opinion, for multilingual support and localization. These are language models, so much of what we're seeing is grounded in linguistics: what people ask and how they ask it. That's where I see the real value. SEA-LION was a model that came out for Southeast Asian languages, and at Ignite we saw a Japanese bilingual fine-tuned model announced as well. So for me, the balance is: should you do it at all? The first question you should ask, for almost everything, is whether this is really an AI problem and whether you really want to take on the cost of fine-tuning. Once you choose, though, know that you now own the baby: you have to keep checking that it continues to be relevant, keep evolving it, and have a backup plan for moving away from your fine-tuned model if you need to.

Korey: Nice. If only we could get the Foundry team to pop up the link to this video before you click the fine-tune button and make people review it; that would be amazing. But hopefully everyone takes this knowledge away, because, to your point, I really like this idea of responsibility for the model after you fine-tune it. It's not a silver bullet, and you could make the model worse for your use case, so please consider all of those things. Nitya, you've done a great job closing out this series, very clear in terms of application and where things are going. If we do have a season 3, you're welcome aboard.

Nitya: If you have a season 3, I think you should get the partners who build developer tools; that's where this is going. I'll also put in a request: it's not just the fine-tuned models, it's the datasets. If you're thinking about this and you have a domain and a cleaned-up dataset, try publishing it on Hugging Face, or otherwise make it available so others can build off the same thing. That kind of knowledge sharing in the open-source community is what's going to make the difference.

Korey: Beautiful, I'm going to end it there. Thank you, Nitya, for joining, and thanks to everyone who stayed for about an hour. You could have been anywhere on the internet right now, but you spent a whole hour with us talking about fine-tuning, and for that I'm truly grateful. Thank you, Nitya, for closing out season two with this great lesson. Thanks, everyone. Bye!
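As a postscript to that dataset-sharing request, here is a hedged sketch of publishing a cleaned-up fine-tuning dataset to the Hugging Face Hub with the `datasets` library. The file name and repository ID are placeholder assumptions, and you would need a Hugging Face account with a write-access token.

```python
# Sketch: share a cleaned-up fine-tuning dataset on the Hugging Face Hub.
# Assumptions: "train.jsonl" and the repo id are placeholders; requires
# prior `huggingface-cli login` (or an HF_TOKEN) with write access.
from datasets import load_dataset

# Load your cleaned-up local data, e.g. one JSON record per line.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

# Push it to a (placeholder) repo so others can build on the same data.
dataset.push_to_hub("your-org/your-domain-finetuning-dataset")
```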