Empower Your Animal Science Pipeline with airSlate SignNow
See airSlate SignNow eSignatures in action
Our user reviews speak for themselves
Why choose airSlate SignNow
-
Free 7-day trial. Choose the plan you need and try it risk-free.
-
Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
-
Enterprise-grade security. airSlate SignNow helps you comply with global security standards.
Managing your pipeline for Animal Science
By using airSlate SignNow, you can save time and resources, ultimately increasing your workflow efficiency. Try airSlate SignNow today and see the benefits for yourself!
Sign up for airSlate SignNow and start managing your pipeline for Animal Science like a pro.
airSlate SignNow features that users love
Get legally-binding signatures now!
FAQs: online signature
-
How do you effectively manage your pipeline?
These five tips will help you manage your pipeline effectively:
- Build and Maintain a Clearly Defined Sales Process. ...
- Forecast Like a Pro. ...
- Eat Your Key Metrics for Breakfast. ...
- Implement Effective Sales Rep Tracking. ...
- Conduct Regular Sales Pipeline Reviews.
-
What is the pipeline management principle?
Each stage of the pipeline should be built with the intention of making it easy to: Visually manage the various events that make up your sales cycle. See where your potential buyers are at all times on their journey from cold lead to customer.
-
How can I improve my pipeline?
10 Tips for building a stronger sales pipeline:
- Use LinkedIn for prospecting. ...
- Look a level deeper when identifying decision-makers. ...
- Ask for referrals. ...
- Take time to make discovery calls. ...
- Take another look in your CRM. ...
- Strengthen your personal brand. ...
- Be a thought leader. ...
- Replicate success with templates and workflows.
-
What are the 5 stages of a sales pipeline?
Stages of a Sales Pipeline:
- Prospecting. ...
- Lead qualification. ...
- Meeting / demo. ...
- Proposal. ...
- Negotiation / commitment. ...
- Closing the deal. ...
- Retention.
-
What do you mean by pipeline management?
Pipeline management is the process by which companies identify where their cash is flowing and then direct that money where it's most productive. There are many ways to go about this; the most basic is to track the movement of cash in and out of your business.
-
How would you manage the pipeline and nurture process?
Best Practices for Sales Pipeline Management:
- Pressure-test your Sales Pipeline to Eliminate Sales Pipeline Risk. ...
- Track the Health of Sales Opportunities to Help Sales Reps Develop Better Habits. ...
- Rethink How to Nurture Sales Leads. ...
- Concentrate on the Best Leads. ...
- Keep Your Pipeline Up to Date.
-
What is pipeline management?
Pipeline management is the process of identifying and managing all the moving parts within a supply chain, from manufacturing to your sales team. The best-performing companies learn how to identify where their cash is flowing and then direct that money where it's most productive.
Trusted e-signature solution — what our customers are saying
Okay, let's go ahead and get started. I'd like to thank everybody for joining us today, and welcome to today's webinar, Pivoting Your Pipeline for Microservices. I'm Julius Rosenthal, and I'll be moderating today's webinar. We'd like to thank our presenters today, Nathan Martin, CEO at SageCore Technologies, and Tracy Ragan, CEO at DeployHub.

A few housekeeping items before we get started. During the webinar you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen; please feel free to drop your questions in there, and we'll get to as many as we can at the end of the presentation. This is an official webinar of the CNCF, and as such it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful of all of your fellow participants and presenters. Please note that the recording and slides will be available later today on the CNCF webinar page. With that, I'll hand it over to Nathan and Tracy to kick off today's presentation.

Thank you so much, and I want to thank the CNCF for hosting this webinar and having us present. This is a fun topic for me. I have been doing what you'd call the software delivery lifecycle for pretty much most of my career, and I'm part of the CD Foundation. I'm on the CD Foundation board, which is a sister organization to the CNCF; if you don't know about it, go learn about the CD Foundation, because we're doing some interesting things there around continuous delivery. I was also on the founding board of the Eclipse Foundation, I'm a DevOps Institute ambassador, and I love to talk about microservices. Nathan is going to take the second half of this presentation, because he's going to dig down into some of the things he's doing around Istio, and I want to say he's one of those customers that you love to geek out with. So thank you, Nate, for agreeing to be on this and showing what you're doing
at SageCore.

Absolutely. So, just to get started, here's what we're going to cover. You know how they tell you to tell your audience what you're going to tell them, then tell them, then tell them what you told them? We're going to do it that way, because there's a lot of information and this stuff can be really confusing. We're going to cover the cloud native journey in what is now called a CD pipeline. Kubernetes and microservices are really moving us away from a monolithic style of software development to a service-based approach, and that is really going to require that you pivot your pipeline. There are some problem areas we're going to cover so that you can at least start thinking about them. We're also going to take a look at a use case where Istio is brought into the broader picture to do some routing. It may not seem intuitive that Istio would be part of your CI/CD pipeline, but because Istio does the routing, it does become part of your pipeline. There's a lot to learn about how the pipeline is pivoting, and I'm hoping that this presentation will get you thinking about what you need to do for the future.

So again, things to think about: when you're working in a microservice world, within your CD pipeline every workflow will represent one microservice. A microservice will be related to a repository, and that repository is going to be related to a workflow, which means you're going to need lots of workflows running, and we're going to need to talk about templates. Configuration management is going to be lost, and we're going to talk a little bit about that; it's lost mainly because we don't do builds anymore. And then the way we deploy software is going to be different, and service mesh comes into that picture. Service mesh has the potential of finally getting rid of a waterfall approach, where you don't necessarily have dev
environments, test environments, and production environments. Now, I know that sounds crazy, but it's true. I know that most of you, even if you're working in Kubernetes, probably have clusters, many clusters; we talked to one customer recently who said they had over 70 clusters. So we know that you have a lot of clusters now, but service mesh has the ability to help you reduce the number of clusters you have to manage.

Let's just go quickly over what functions are, and I know this is a CNCF group, so you should really know what functions are. Microservices are functions; you have to think functions when you think microservices. The key to really understanding how microservices can be leveraged is to think about them in terms of functions. And microservices are immutable. This is going to be an interesting fact later, when we talk about Istio, but think about the fact that you can push a microservice to your cluster and that particular version can go on living, with a new one living right next to it, which means that you could have applications connecting to both microservices. I know that's not what we're used to in monolithic, because in monolithic you take your binary, you compile it, you push it out into certain areas, and it's copied over; it's not immutable.

So what you need to think about in terms of your journey, especially on the CD front, is that you're taking your application, taking a hammer to it, and breaking it into lots of little pieces. Lots of little pieces: Netflix has somewhere in the ballpark of forty-five hundred microservices that they're managing. So it's like taking a wine glass and taking a hammer to it, and now you have a pile of glass that you call your application. The application is still there; it's just lost. But it's still there, we still need to represent it, and it's still
what your end users are executing, right? They're not running a microservice; they're running a version of your application.

So with microservices your overall landscape begins to change. When you shift to a modern architecture, you have to think about how you're going to manage individual microservices that are deployed independently. Now, while we talk a lot about managing the Kubernetes environment, and I know a lot of money, investment, and time has gone into managing at that level, what we're really talking about today is the CD pipeline as it relates to managing the applications you're building in your microservices environment: what's running on those clusters, not necessarily how to orchestrate or manage those clusters, but what's running on them and how your customers are executing what you're putting out there.

And why is the CD pipeline disrupted? What is it about microservices that changes the way we actually manage a CD process? Isn't there still a build, a test, and a release? Because microservices are deployed independently, it kind of changes everything we do; it really disrupts our basic understanding of how a CD process operates. And your dev, test, and prod environments are morphing, seriously morphing; Nate's going to talk a lot about how they're morphing and how Istio allows them to morph.

In order to understand the problem, sometimes it's easier to look at our history. If we look at how builds are put together in the monolithic world, what we do today in monolithic is have a buildmeister who pretty much focuses their entire day on making sure that, in your CD pipeline, when you do a check-in, your code compiles. What that means is they make decisions about what code needs to be checked out for that compile, and what versions of the code. They also determine what versions of libraries may be used; they may use something like Artifactory or Maven to try to look at
transient dependencies. The point is, they make some really important decisions about pulling together what your binary looks like. Now, in monolithic we did the linking really early, right? It's the first step in our CD pipeline. In a microservice environment we do it last. So we go from linking in dev, at the very first part of our CD pipeline, to managing that linkage through APIs in a runtime environment. And that runtime environment can be very different: you can have many clusters, and you're doing the linking at every cluster you deploy to, because each one could have a different configuration of microservices. So our builds have shifted. You'll still have a CI process, but your compile, if you do compile at all, will be small. If you're using something like Go you may compile, but if you're using Python you're not. You are, however, creating a build image and registering a build image, and that's what our new build will look like.

Workflows, I think, will cause most people moving into this new area the biggest problem in terms of how your pipeline needs to shift to support this new model. We have to think in terms of a single microservice that is independently deployed, so it needs its own workflow. While most of you probably have some very nice CI/CD pipelines, you're probably going to have to look at restructuring them in some way, because what you need is a templated design. I just learned about Jenkins X not too long ago and started putting my head into it, and a containerized pipeline solves some problems just for your pipeline itself, in terms of plugins and encapsulating them so a new plugin doesn't break everybody else. That change in thinking, containerizing your pipelines, means that you could actually template based on those containers. So think about, as you move forward, how you're going to manage hundreds of
workflows and potentially make a change in a single workflow that will replicate down, or be automatically picked up by, the other workflows, because what you don't want is to be managing a thousand workflows one at a time.

Now, the two core pieces of a CI/CD pipeline, for today, are build and release, and those are the two pieces that are going to be impacted the most. When you have separate microservices running through these separate workflows and you're pushing them out independently, you really begin to understand the loss of the application version and view. This is a configuration management issue that was resolved when you did the build in monolithic: you made those decisions, you created a build number, and you had an application version number that you were running off of. All of that goes away. So while you have all of these workflows running, you're going to need to start aggregating them back down to what it is they're pushing out. And keep in mind that microservices, in a really proper implementation, are reusable, which means one API developer could update a microservice, get it pushed out to the cluster, and you, as an application team, now have a new version of the application. Again, we're talking about basic configuration management problems that the microservice environment creates and that we have to start solving in the CD pipeline.

I love to call these images death stars, because that's what they very much look like; I saw somebody reference that on a Google post somewhere along the line a couple of years ago. Again, Netflix alone has about 4,500 microservices. Thank goodness they have microservices, because I know we all want to watch the fifth season of The Last Kingdom as soon as it comes out; if you haven't watched seasons one through four, you'd better start, because it's really good. But thank
goodness they can support that kind of usage, and they can support it because they are built around a microservice environment, allowing these things to spin up and spin down as they need to. But that doesn't mean it comes without issues, and configuration management is one of those issues. I'm hearing new terms like haunted graveyards and Frankenstein clusters, questions about when you deprecate a microservice. A haunted graveyard means you might have a node out there running a bunch of containers, and there's one container you've no idea what it does, but it has transactions going to it, so if you take it away, somebody's going to haunt you. Frankenstein clusters just kind of grow and get bigger and bigger, and you really don't have a good understanding of them. There are observability tools working to help us find this kind of data based on what's running in a cluster, but we should think about how we solved that problem in monolithic: we did it at the build, and we had a lot of control over what went into a particular binary. We need to recognize that that is what we need to bring back into the CD pipeline in order to have visibility at the application level, regardless of the cluster it's running in. So it's a versioning issue.

The other thing you're going to see, and this is an amazing topic if you're moving into microservices and I highly recommend you read about it and learn as much as you can, is adding domain-driven design into this entire process. That means that instead of having a kitchen drawer full of microservices that you just throw in there, this junk drawer, and everybody has a kitchen junk drawer, you know, where are the reading glasses, you dig through it, that is not what you want to do with microservices. Every microservice needs to have its own drawer, so you know where to go find one if you're looking for it. If you're
writing a really good security service, single sign-on, and you're sitting on the eighth floor, you want to make sure that the teams on the fourth floor know where that is. A domain-driven design helps you do that. It is what creates a platform for defining the patterns within your organization, and your patterns are going to determine what kind of microservices you need to develop and where the value is in those microservices. So understanding how to start building out domains, and how those domains relate to each other, will trickle down and allow you to start understanding how you decompose your application. In some cases you might start by decomposing your applications, understanding those patterns, and seeing if you can start defining those domain structures: database access routines, centralized logging, security routines like logon. There are ways you can start seeing patterns even by decomposing one application; when you get to the second one, those patterns really start showing themselves, and that's how you begin building a domain-driven design. That domain-driven design needs to be part of what you're doing in terms of understanding where the pieces and parts come from, what the versions are, and where we go if we need to understand whether a domain still has value.

So we're going to go through a few things that are changing. I think Helm has certainly become everybody's favorite for creating your new containers and applying those containers to the cluster, so if you haven't looked at Helm, you should be taking a close look at it, because it probably will become one of the core pieces of your CD pipeline. Ortelius is an open source project near and dear to my heart; DeployHub is one of the main contributors, and we're working with companies to help them catalog and organize microservices and track the visualization of those applications. Istio is what we're really going to focus
on today, and Istio makes for an interesting part of the CD pipeline, because it really is going to change the way we think about doing releases, and that's why it's important. Remember the two main functions of your CD pipeline: build and release. There are other things in there, security scanning and testing, and there are certainly other parts of the ecosystem, but the reason we did this in the first place was to be able to compile on a check-in and get it out to the customers as quickly as possible. Those two key components are morphing, and Istio is one of the reasons why.

So let's just think about what that CD pipeline might look like in the future. Your build is going to switch from building binaries to creating container images. You're going to have to have a way to track what microservices exist and how they're being executed, not just in one cluster but across many clusters; it's a typical configuration management and release problem set that we need to solve. Helm will be helping us with those container images, pushing them out to the clusters, and then we're going to talk about service mesh and the ability to route. And then that brings us back through the feedback loop, to understanding where problems are and where we need to start again.

So if we go over this: you're going to have container images instead of compiles. Your BOM reports and your difference reports are going to have to come from some other place, and that's tools like DeployHub and Ortelius doing the configuration management. Helm is going to be the piece that pushes your container out to your cluster, and it does it so easily that, to be quite honest, you don't need a big, heavy ARA tool to do that. And when it comes to where it's going and how it's being routed, Istio becomes that piece. The result is that you're going to get rid of these monolithic compile scripts; everybody will be able to do a ten-minute build.
It will finally happen. Configuration management, however, is going to become more critical, because we lost the BOM reports, the difference reports, and the impact analysis when we threw away the build. The CD pipeline needs to be able to support literally hundreds of workflows, so look at tools that provide a good templating service for workflows, or even ones that are event-driven. Tekton is part of the Kubernetes core at Google, and it is based on CRDs, so you can execute a CRD to trigger the events that the pipeline would normally have pushed. And then routing and feedback are going to come from Istio.

This is my last slide, and then I'm going to pass it over to Nate, who's really going to dive down into how he implemented Istio. Istio can perform user routing, and right now we are working with SageCore to beta test this out. What it has the ability to do is to be the central point for passing off a change in a microservice. Let's say we push email-service v1.7 out to the cluster. Istio has the potential of saying, hey, you're a developer; when you execute this application, you need to execute the new version of the application, which uses this new version of the email service. Now, that's different from installing it into a development cluster and then executing the new microservice in the dev cluster; it means it's going to be using all of the other stuff from the production level. And when it comes to a release, it's just routing again. When I first realized this was possible, I spoke to some people and they looked at me like I was green, seriously, like, "you're insane." And then recently we did a meetup with a company out of Santa Fe called Descartes Labs. They're big Spinnaker users, and their speaker had a pretty detailed presentation around Spinnaker; towards the end of the session he said, now I'm going to tell you guys something and you're going to think I'm crazy, but
we have gotten rid of all of our dev and test environments, and now we just have a dev/test cluster. We have a dev/test cluster and we have a production cluster; everything we do for dev and test is in a single cluster. They did that because they had such a bad cluster-sprawl problem that they needed to figure out a better way, and when they put their heads together, some really sharp guys from Los Alamos, they realized that was the best way to do it. And on that, I am going to pass it over to Nate, who's going to go through a real-world GKE use case using Istio in this way.

Thank you, Tracy, and thank you, everyone, for being here today. I'm going to dive into some of the things Tracy mentioned: the challenges we had as a company, what it is about our software that ultimately led to those challenges, how we solved them, and some good examples that will help you think about your architecture moving forward, because sometimes knowing these examples will help you work on your domain-driven design, your routing, your dev/test cluster, all that kind of stuff.

We started a couple of years ago in the truly dynamic enterprise resource planning realm. What I mean by that is that traditionally, for the last eight years, SageCore Technologies has been custom-developing enterprise software to essentially manage, track, and report on data for various organizations. We started developing TetraCore to help us solve the problem of scalability within that sector, and ultimately we decided to move to a managed Kubernetes platform; obviously GKE is our choice, with Istio as the service mesh for the routing. Our main challenges really came down to deployment staging: being able to see a microservice, test it in a very specific context, and then release it to another specific context. The same thing with versioning: if you
deploy a microservice that you need to change, you have to be able to version it appropriately; we'll dive more into that. And our request routing: being able to actually route to a specific microservice for a particular context. I'm going to use "context" a lot in my slides, only because in the dynamic enterprise realm everything is context; nothing is really set in stone. So hopefully I don't confuse people with this.

Our host-based routing is really a good example of the domain-driven design that Tracy got into earlier. In this example we're routing all traffic from myapp.com, which defaults to production, and then we also have prod.myapp.com, which is our direct production. Both of them are going to route to the same spot; however, in the future I could choose to split those if I wanted to. In this YAML you'll see that they're both routing to my production version, and we'll get into how that's actually delegated in a couple of slides.

What we realized was that, yes, we can do some host-based routing with Kubernetes ingress objects and whatnot; however, there were limitations when we actually used that in production. We also really needed a much more dynamic lifecycle to test each service, to test the routing on different domains, given that our solutions aren't locked down to fixed domains; we actually have to be very, very structured with our domain-driven design. And we also needed to automate our CD pipeline, so that when we roll out or push a new image, it's very easy for us to put it into the right spot, deploy it in our cluster, and route to it accordingly, without tons of heartache and work. We'll dive into how we do that next.

One of the things we do for the routing of different destinations, different microservice versions I should say, is setting up Istio destination rules. For us, this has really made host-driven
deployments much simpler and much quicker than the Kubernetes ingress. So what really makes this different? Obviously I can update my Kubernetes ingress object, and you'll see that your routing updates accordingly and everything's fine. The problem is that, especially in a managed scenario like GKE or AWS, you're going to have a lag time while they set up all the abstractions to their routing rules; on GKE you're talking a couple of minutes. That can be a problem when you're talking about a massive organization that we might be serving; a couple of minutes can be the difference between, I almost want to say, life or death. We're talking big data; anything can happen when you're processing huge amounts of data. So you have to have the ability to update things instantly, to rectify things instantly, whether it's in your routing at the host level, the path level, etc. For us, Istio really solved this problem nicely. We've looked into some of the other service meshes, and I won't speak to the validity of them, I'm sure they're all great, but Istio seemed to be the easiest to implement within the Kubernetes environment, quite frankly, so that's why we chose it.

Another problem we really had in our case was that our request routing is not just static. Obviously we're using a whole combination of host-based and path-based routing, cookie routing, etc., but we really needed that core, fundamental architecture to be able to define these things once, across multiple clusters, and have all of those routes persist. That way it becomes more an issue of versioning than of "am I actually routing to the right microservice within my application?" You can hit a limit quite quickly on Kubernetes ingress routing, and that's really the biggest
limit that I think we saw. It's great for your top-level network design, very top level, but there's always a limit; I think it's up to something like 13 rules in GKE, but I won't speak to that. Let's see what's next here.

Okay, so like I said before, we're combining different methods of routing, headers, domain-based, cookie-based, even custom headers, to help us do different things with different entry points, we'll say. And then our path-based routing: different URLs, essentially, within a domain. Without Istio we really didn't have much of a way to connect these things in a way that would scale across the US, or even across the world, and, better yet, without needing tons and tons of developers to try to organize this stuff. So I'll walk through a couple of examples of these routings. These are meant to be broad-scope YAML examples, so obviously there are some things you'll have to look into in the documentation yourself; these aren't necessarily full examples, but they do highlight the core components of how the routing is defined.

In this example you can see that we've defined our host, beta.myapp.com, very similar to the one we defined for production; however, in this one we're now routing to the destination of beta, which is still my app, but another version. So this might be for testing your beta. Let's move on here.

In your CD process, we've talked about having different labels to track your various releases. Obviously, when you release something, none of these things overwrite an old version; however, there is a human need for labeling a deployment or a version, in the sense of "which version, in human terms, am I working off of?" Because in monolithic we're all used to having a version number that we work on for a period of time, and at some point we all start working on the next version number. In
this case, that's not inherently possible within Istio or within microservices, because every deployment, every container you build, is a unique version. So what we do is label the deployments based on the Istio subsets. You'll notice in these YAMLs I've got two versions: one is my app version 2.1, the other is my app version 1. On the very right, my deployment for production is version 1, and it's almost identical to the alpha deployment, you get the point, except for the different version label.

If we move on to the next slide, this correlates to something called a destination rule. The destination rule, in my mind, when I was learning this stuff, really abstracts away a problem we inherit from monolithic thinking, which is that we're still "working on a version." You don't want to have to increment a version every time you build a new image; that's very cumbersome to manage. So in our destination rule we define a subset that correlates to the version label from the previous slide. The previous slide was your deployments; these destination rules attach a human-friendly name to those version labels. You'll see I put a production label on version 1, a beta label on version 2.1, and an alpha label on version 2.2. Now let's move on to the next slide.

Okay, so now what we can do is define virtual services to route to those appropriate subsets. Let's just take myapp.com, for example. Instead of routing it every time to version 2.3, and then, when I release a new version, having to go and update every single virtual service to point at the new version, with this method we simply define that this host, myapp.com, is going to go to the subset called production. Then, in our destination
rules, we then define what production means in Istio terms. So now that changes our pipeline a little bit for deployment; let's go to the next slide, please. In this case, obviously, we're showing beta going to the beta subset, so this is another host-based routing example. And what we'll do here, let's show the next slide please, is actually update the destination rule to change which version a subset actually points to. In this case you'll notice that version 1 is no longer the production version anymore; we've now elevated version 2.1. That's now the most stable version, everything's ready, so we update our YAML file, push this out, and now every single production routing will go to version 2.1. Another thing that's really nice with this, and we won't get into it too much in this conversation, is being able to route, say, 50% of your new traffic to production 2.1, et cetera, but we'll save that for another time. So the biggest challenge that we really had with migrating from the monolithic mentality is being able to see what's happening. When you had a server, you could very easily go look at, or even design, the logs to spit out exactly what you need to know about a given function within your application. Now, however, everything's been exploded, and we're talking about real microservices, all with their own complexities and the different environments they may be running in. So how do we actually visualize what our cluster is doing, what Istio is routing to, and help ourselves solve the problems that come with converting from a monolithic mentality to microservices? So one of the things here, and I'm sorry, Tracy, you might have to jump in on this one. Yes, so for helping with this routing: what Ortelius, which is an open source project, does is help with what Nate just described; it helps with the mapping of
the services over to the clusters, basically. It's doing the configuration management needed to update all of that variable data that Nate was pointing out; it tracks the routing and updates, basically, your deployment. So it sounds like it makes it much easier for the end developer trying to come into this world of microservices, correct? Yeah, they don't have to go through everything that you just did. Awesome. Well, as far as visualization goes, and this is actually even more pertinent when you have managed configuration, you'd want to see how everything is actually performing: which services are actually in this mesh, how are they connected, how are they performing, are they healthy, can I operate on them, and how? And you want to be able to see that in an easy way that's not just YAML files, right? So Kiali really solves this problem very well, and there are some other tools out there that I urge you to look into, but Kiali was the one that jumped out as far as visualization goes; it's super simple. You can get the instructions to install it in your cluster on the Istio website as well as from Kiali itself, but I believe Istio's documentation is going to be better in this context. You can see here that we have a list of our microservices, you can see that they're healthy, and you can see the various labels attached to them. A lot of this stuff really would be managed by Ortelius, I would imagine, as far as the labels and whatnot go, so that's another tool that would really help with this. So we've got our microservices in the mesh; now, on the next slide, we can see how they're performing, in the sense of both health and namespace, which is actually a very big deal in Istio: be very aware of your namespaces and how they relate, and this helps with that. And next, how are they connected? This actually solves problems for me all the time; you'd be surprised, you know,
sometimes you may do a wildcard host or something, and you might have done it twice on two different gateways in Istio. What will happen then is you'll be able to see exactly where the request is coming in, which microservice is routing to it, and a lot of times even that there's no possible destination for the request to go to. Kiali has a very good UI for that; it basically will mark things red or alert you if there's something wrong, and a lot of times it's very easily fixed, but you have to be able to visualize it, right? Next slide, please. So thank you, Nate, very much for going through that; I think you gave a really great example of how you're using Istio to manage that routing. This is just a kind of word from your sponsor: we do have Ortelius out there, helping with the configuration management to create those destination rules that Nate covered. If you're interested, we would love to have you join the project and start talking about what that configuration management looks like and how to make those connections. Ortelius was the first mapmaker, so that's what Ortelius is doing: we're building maps, and we see, as Nate pointed out, that trying to adjust culturally from a monolith to microservices can be really challenging; this particular tool is helping solve that problem. So visit us on the Ortelius GitHub site, and there is our contact information for anybody. I'll leave this screen up here, so if you want to reach out to Nate and ask any questions about how he set up those destination rules, or if you want to reach out to me, that is how to do it. And on that, we will open it up for questions; I'm going to bring up the Q&A here. Great, thanks Tracy, here we go. So, first question: do Ortelius and Kiali solve the same types of problems, or are they different? They're different. Kiali is looking at what's occurring in the cluster on your behalf, based
on the routing that Istio did. Ortelius is happening before you ever get to the cluster, so it is the way you can manage your domains and structure your domain design. It allows your application teams to create what we call an application-based version, so that as the microservices change underneath it, you get a new version number associated with it, and it tracks which clusters it's being routed to. Once it's been routed, Kiali can help you understand what's actually occurring between your transactions. Great question; thank you, whoever asked that. Any other questions? "I was wondering, can Grafana be used?" Can you answer that one? Yeah, so the question is whether Grafana can be used, and yes; I had a feeling someone would ask this. Grafana is actually very popular, and it can be used; I believe it consumes the same underlying data anyway. What it does, though: I think Grafana is a lot more metrics-based, so if you want to dive into certain things based on labels or whatnot, or have a dashboard that you'd view more often, I think that's when Grafana would be used more, but that's more personal preference than anything. I haven't seen, though, the kind of relational microservice capability in Grafana that Kiali offers. And I'm going to let you take the next one too, Nate; you're on a roll. Sure, so the question is: if people are able to consolidate dev/test and production clusters, how do they segregate data? I love this question; I'm going to try not to go too deep, but essentially it depends on your design. If you have a single cluster and you're, I'll say, a SaaS model, and your test data isn't actually sitting in the same microservice as the production microservice that holds your data, whether that's a hosted SQL database or whatever, then you're separating the data out while keeping the routing within a single cluster. That's the simplest answer I can provide without going too deep; if you want more
on that, send me an email; I hope that's enough. And I'll take this one. The question is: are there cases where Kiali and Ortelius are both used in the same cluster? Ortelius does not run in a cluster at all; again, it sits outside of the cluster and looks at what's being pushed across all clusters. So you could look at Ortelius and say, "I know what version of my application is running in any cluster." With Kiali, you would be running it at the cluster level and using it to visualize the routing and the transactions. So you would be running Kiali in multiple clusters, and Ortelius would be pushing the destination rules before anything ever gets to the cluster; it's doing the configuration management, and Kiali is watching what's actually occurring. And I think we should get Nate to write a blog on the data question so he can dig into it; I'd love to hear a really complete answer, because that is not the first time that question has come along. We've gone through so many iterations ourselves to find the best production method, and a lot of it does come down to what you're actually doing, what your application does. I agree, and on this question, I don't think I can answer it; I know that everybody has a different journey when they're migrating over to a microservices and Kubernetes environment. Everybody that I have spoken to ends up saying, "I wish I had started using Istio, or some kind of service mesh, sooner"; I think it's a very important part of the journey. Nate, what are your thoughts? You know, I would say that, especially at the managed Kubernetes level, if you have more than, I'd say, three to five host-based routing configurations that you're using on a traditional ingress, you're going to want to consider Istio pretty quickly. That's really probably the best indicator, on a broad level, that I can
provide. Great, there's another one yet. The question is: does Ortelius have a dashboard? I understand Kiali can be used for a per-cluster view, but is there a higher-level UI view? Yes, Ortelius has a dashboard. Again, it has a domain structure that allows you to publish microservices, and when you publish a microservice, you publish it with all of its deployment data: the chart it needs to use, what the SHA number is, what the git commit is, to make that particular microservice a unique version. As Nate pointed out as he was talking, you need to understand what version of a particular microservice you're referring to, so it does that versioning. Then it allows application teams to consume those, and through the CD process we track the versions of the applications and their configurations; think about it as an SBOM report, a difference report, or your impact analysis. Then Kiali is sitting there on the cluster, actually watching what's occurring. There is probably some good integration that Ortelius and Kiali could have between the two, especially when it comes to things like deprecation, because deprecation seems to be a problem for a lot of microservice environments. It's good to know who's using a microservice, what teams are using it, and what version; I think we could probably be pulling some of that information back, but we'd really just be pulling it back from Istio. Great, thank you. Thanks, Tracy and Nathan, for a great presentation, and thank you, everybody, for joining us today. The webinar recording and slides will be online later; we look forward to seeing everybody at future CNCF webinars. Everybody have a great day. Thanks. Thank you. Thank you, Nate, thank you.
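
As an editorial addendum: the host-based routing Nate describes, where beta.myapp.com goes to a separate subset of the same app, can be sketched roughly like this. This is a minimal sketch, not the actual demo files; the hostnames, the `my-app` service name, and the `my-app-gateway` name are illustrative assumptions.

```yaml
# Hypothetical VirtualService: host-based routing for the beta entry point.
# Traffic for beta.myapp.com is sent to the "beta" subset of the same
# my-app Kubernetes Service that production uses.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app-beta
spec:
  hosts:
    - beta.myapp.com
  gateways:
    - my-app-gateway        # assumed shared ingress Gateway
  http:
    - route:
        - destination:
            host: my-app    # the Kubernetes Service name
            subset: beta    # resolved by a DestinationRule
```

The production VirtualService would look the same except for the host (myapp.com) and the subset (production), which is exactly why promoting a version only touches the DestinationRule, not the routes.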
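
The version labels on the Deployments and the human-friendly subset names in the destination rule, as described in the talk (production on version 1, beta on 2.1, alpha on 2.2), might look like this. The image registry path and resource names are hypothetical placeholders.

```yaml
# One Deployment per running version; pods carry a version label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  selector:
    matchLabels: {app: my-app, version: "1"}
  template:
    metadata:
      labels: {app: my-app, version: "1"}
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1   # hypothetical image
---
# DestinationRule: maps human-friendly subset names to version labels,
# so VirtualServices never need to reference raw version numbers.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
    - name: production
      labels: {version: "1"}
    - name: beta
      labels: {version: "2.1"}
    - name: alpha
      labels: {version: "2.2"}
```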
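
Promoting version 2.1 to production, as Nate walks through, is then a one-line change to the DestinationRule, and the 50% traffic split he mentions in passing would be a weighted VirtualService route. Again, this is a sketch under the same assumed names, not the demo's actual configuration.

```yaml
# After promotion: "production" now maps to version 2.1, so every
# VirtualService routing to the production subset picks it up unchanged.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app
  subsets:
    - name: production
      labels: {version: "2.1"}
    - name: beta
      labels: {version: "2.2"}
---
# Optional canary step before full promotion: split traffic between
# subsets by weight instead of flipping the subset label all at once.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - myapp.com
  gateways:
    - my-app-gateway
  http:
    - route:
        - destination: {host: my-app, subset: production}
          weight: 50
        - destination: {host: my-app, subset: beta}
          weight: 50
```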