Introducing the Pipeline Tracking Tool for Animal Science

Efficiently track, manage, and streamline your Animal Science pipeline with airSlate SignNow. Boost productivity and collaboration like never before.

airSlate SignNow regularly wins awards for ease of use and setup

See airSlate SignNow eSignatures in action

Create secure and intuitive e-signature workflows on any device, track the status of documents right in your account, build online fillable forms – all within a single solution.

  • Collect signatures 24x faster
  • Reduce costs by $30 per document
  • Save up to 40 hours per employee per month

Our user reviews speak for themselves

Kodi-Marie Evans
Director of NetSuite Operations at Xerox
airSlate SignNow provides us with the flexibility needed to get the right signatures on the right documents, in the right formats, based on our integration with NetSuite.
Samantha Jo
Enterprise Client Partner at Yelp
airSlate SignNow has made life easier for me. It has been huge to have the ability to sign contracts on-the-go! It is now less stressful to get things done efficiently and promptly.
Megan Bond
Digital marketing management at Electrolux
This software has added to our business value. I have got rid of the repetitive tasks. I am capable of creating the mobile native web forms. Now I can easily make payment contracts through a fair channel and their management is very easy.
Walmart
ExxonMobil
Apple
Comcast
Facebook
FedEx

Why choose airSlate SignNow

  • Free 7-day trial. Choose the plan you need and try it risk-free.
  • Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
  • Enterprise-grade security. airSlate SignNow helps you comply with global security standards.

Pipeline Tracking Tool for Animal Science

Are you looking for a reliable pipeline tracking tool for Animal Science? airSlate SignNow is here to help! airSlate SignNow is a user-friendly platform that allows you to send and eSign documents seamlessly. With airSlate SignNow, you can streamline your document workflows and increase productivity.

How-To Guide: Pipeline Tracking Tool for Animal Science

In conclusion, airSlate SignNow is the perfect solution for your pipeline tracking needs in Animal Science. Try airSlate SignNow today and experience the benefits of a streamlined document workflow.

Sign up for a free trial now!

airSlate SignNow features that users love

Speed up your paper-based processes with an easy-to-use eSignature solution.

Edit PDFs online
Generate templates of your most used documents for signing and completion.
Create a signing link
Share a document via a link without the need to add recipient emails.
Assign roles to signers
Organize complex signing workflows by adding multiple signers and assigning roles.
Create a document template
Create teams to collaborate on documents and templates in real time.
Add Signature fields
Get accurate signatures exactly where you need them using signature fields.
Archive documents in bulk
Save time by archiving multiple documents at once.

Get legally-binding signatures now!

Online signature FAQs

Here is a list of the most common customer questions. If you can’t find an answer to your question, please don’t hesitate to reach out to us.

Need help? Contact support

Trusted e-signature solution — what our customers are saying

Explore how the airSlate SignNow e-signature platform helps businesses succeed. Hear from real users and what they like most about electronic signing.

airSlate SignNow is so helpful for any type of biz
5/5 | Agency

What do you like best?

It’s so easy to use! We upload our agreements, contracts, accounting paperwork, waivers, etc., then add a few quick fill-in or signature spots and send it off to clients or vendors for signature. Easy peasy. And we love that we always have a record of signed docs showing when they were signed. The reminder feature is also great for forgetful or busy signers.

My experience has been generally positive as it has improved efficiencies in my business.
5/5 | User in Banking

What do you like best?

The convenience and user-friendliness of the platform is what I like best. It is extremely accessible for clients who are tech savvy and those who are not. It is an intuitive program overall, and comes at a reasonably low cost for a small business like my own.

The Only Contract Solution I Need
5/5 | Administrator in Photography

What do you like best?

airSlate SignNow’s robust suite of tools allows me to add fields to any type of document for any purpose, and disseminate the documents in any method needed.


Video transcript: animal pose estimation with DeepLabCut

My laboratory works on one of the greatest challenges of neuroscience: how the brain drives adaptive behavior. As part of this quest we combine neuroscience, engineering, and machine vision, and our collective mission is to understand foundational principles of neural computation in order to build next-generation adaptive AI systems. What I'd like to do today is tell you about our machine-vision efforts. As you probably know, adaptive behavior can span a huge state space: anything from more classical trial-based behaviors, which are quite interesting, to taking these questions into the wild and bringing the wild back into the laboratory, which has its own rich history, and of course the complex motor actions we study in our lab. To give a few examples, part of the work in our lab is not just machine vision but experimentation with animals to see how they adapt to the world: here is a mouse we teach to play joystick video games to probe simple questions about timing and uncertainty; there are more ethological tasks for mice, like cricket hunting, where they are actually predators, a beautiful paradigm developed in collaboration with Cris Niell's group; and of course studying the animal in its own ecological niche. Since these are laboratory mice, studying them in their natural habitat is, I think, quite fascinating for neuroscience and medicine as well.

When we started in this field a few years ago, we saw particular challenges unique to animal pose estimation compared to the elegant, beautiful work on human pose estimation. First, and this sounds obvious, animals have really different bodies: we couldn't leverage one skeleton or pose prior across all species. While these are all mice, visually they look quite different in this scenario, and think also of a mouse feeding naturally, or the other animals now studied in computer vision, like the hydraulic legs of spiders, the fly example Michael also nicely showed, or comparative biomechanics across different lizards and geckos. Another challenge is that it's not practical to label really large-scale datasets; when we started, it was unknown how little data you could get away with to make tailored neural networks for these applications. The tools also need to be fast, which I won't touch on much today: to study the brain we often want closed-loop perturbations, where, based on a behavioral event, you trigger, say, an optogenetic perturbation of the neural circuitry, so we need very fast, even super-real-time, low-latency inference. We also saw challenges in multi-animal tracking, because the animals often look identical: two black mice in their home cage can be very hard to distinguish even for a human observer. And lastly, which I'll touch on more today, we're really interested in the vision of making robust, plug-and-play solutions. Many of you in this room are computer scientists, but not all biologists, pathologists, or ecologists have a rich computer-science background, so making these tools broadly accessible takes quite a bit of work. But we can engineer better solutions to give these tools back to the community and really democratize AI in that way. I think of it as standing on the shoulders of giants, many of you in this room, and paying it forward to the rest of the life-science community.

I know this is a diverse audience, so this is probably familiar to most of you, but there is an incredibly rich history in human markerless pose estimation, starting from DeepPose, through incredible algorithms like DeeperCut, OpenPose, and convolutional pose machines, up to benchmark-topping models like HigherHRNet. Many of these algorithms work in a similar way: you have supervised ground-truth data and a predictor, in this case a neural network, and you get out a pose. There are a lot of free parameters, so back-of-the-envelope calculations tell you that you potentially need a lot of labeled data, and we know deep learning really accelerates in performance when you have large amounts of data. When we started back in 2017, we saw these as data-hungry algorithms, and I think we would all agree they still are, but we wanted to know how we could bring this to the laboratory. The short answer is to leverage transfer learning, which you are probably familiar with: taking a trained network and asking it to learn a new task. Most famously, ImageNet has had an incredible influence on the field for a multitude of reasons, and pre-training on ImageNet has become a popular, almost standard technique in many computer-vision applications. ImageNet is a large dataset of over 1.2 million images with many classes: animals, fruits, all kinds of things. Typically you take a network for, say, classification: you show it a picture of a cat and it should give you the label "cat". What we did, building on and inspired by DeeperCut, which in turn leveraged some really nice algorithmic developments in DeepLab, was to take a pre-trained network and leverage it to get away with very little data. You label the keypoints of interest (in our case, mouse reaching, with 13 keypoints on the hand), fine-tune the network end-to-end with new deconvolutional heads, and you can get away with very little data: this network was trained on roughly 140 images, and the mice it now works on were not in the initial dataset, so it generalizes well. That was our first paper on this path, and I highlight a few works we have built in the subsequent years as well. The package has changed dramatically over the years; we try to stay up with the state of the art, with many more networks, ImageNet pre-trained backbones, and a plethora of other tools, some of which I'll highlight today.

To define what I mean by "efficient": in that first paper we built a benchmark dataset of mice where, to give you a sense of scale, the nose is about 10 pixels wide. We showed that very little data, not thousands but tens of images, around 50, gives you a biologically meaningful performance level: if the objective is to label the nose of the mouse, even tens of frames give really reasonable performance, and with a few hundred frames you match the human-level accuracy of the best labeler we could find. As I mentioned, it also generalizes to unseen animals; to the aficionados, black-6 mice can be quite different in size, age, and sex. These networks are also able to detect the poses of multiple animals if you put, say, three mice in a similar setting.

Importantly for life scientists, the ability to know whether a network has done a good job, by leveraging the confidence of the network, is really useful, particularly for highly articulated objects and occlusions, which were quite challenging for many of the conventional or other approaches at the time. This lets you label, say, a fly moving around in 3D space, covering both the left and right sides, without plotting data off the animal when a body part is not seen. And it doesn't just work on animals: there have been many interesting applications, for example tracking how cancer cells lyse in medical applications, work in the larynx, measuring tendons in humans during performance, and of course multi-animal work, which I'll talk about more. We've been really fortunate to have huge uptake in the field; we just hit 230,000 downloads this week, which I'm quite excited about, and maintaining and engaging with this community has been a big part of the software project. We've been lucky enough to be featured, along with our users, in popular media outlets, helping spread the tool to many fields, from ecology to the life sciences. A large part of DeepLabCut's success, aside from the scientific contribution on transfer learning, was the usability of the tool as an entire software package: training networks, developing datasets, applying different data-augmentation types and network selections, and running new inference, all through a graphical user interface. That was one of the things we really wanted our users to be able to do. It has allowed the software to develop in a couple of ways: as a plug-in to neuroscience-specific systems such as real-time pose-estimation platforms, and through a huge community that has built tools on top of the pose-estimation output. I'm primarily talking about DeepLabCut, but as Michael rightly noted, the field is growing and it's really exciting; many nice tools are coming out across many different groups, including Iain Couzin's and others, and it's been very fun as a community to see what people do with the outputs of such pose-estimation packages. A nod to Muybridge, who, as Michael noted, actually had a scientific question about whether the horse lifts all four legs off the ground: hopefully he would be impressed that he no longer has to build a zoopraxiscope to study this type of behavior, but can use computer vision.

In the next few slides I want to give you some case studies. This is work in collaboration with Amir Patel's group at the University of Cape Town, South Africa. He's a biomechanist and roboticist interested in optimal tail locomotion in cheetahs, and this work was just published at ICRA last month. For this community I want to highlight that they also have a poster tonight: a large-scale dataset of over 7,000 frames, and more than 20,000 3D frames, of cheetahs essentially in the wild, so I encourage you to check it out and talk to them for more information.

Recently we've also been working on the problem of multi-animal pose estimation with really identical-looking animals. We developed the four benchmark datasets I'm showing here: swimming fish, multiple mice, marmosets, and parenting mice, where the pups look really identical. In the course of building optimizations around this multi-animal problem we've also been building new network architectures, and we've tested them against state-of-the-art models on COCO and other multi-human pose-estimation benchmarks, using some existing implementations, and we show state-of-the-art performance on these datasets. The data and the networks are now available through DeepLabCut, and Jessy and the other authors will be here tonight if you want more information. One interesting thing to think about in animal versus human pose estimation is body agnosticism: when a network is learning both the body parts, as keypoints, and the limbs, as part affinity fields, much inspired by OpenPose, it's not obvious which limbs should actually be connected. To handle that, we built an adaptive graph-based algorithm that takes what you might consider a baseline or naive skeleton and finds a data-driven skeleton instead. We know this is one of the reasons these networks do a better job: we've optimized them for better graph assembly. What I'm showing here are output metrics of this, namely how many unconnected body parts are left and the purity of the assembly at this stage: the baseline is the naive skeleton, then the data-driven graph-pruning algorithm we developed, and then an algorithm that is essentially a calibration, taking in a pose prior over the learned edge associations, i.e., the discriminability of the different edges along the skeleton.

In the next part of my talk I want to give you a vision of robustness and where we're going with these tools. I hope I've convinced you that DeepLabCut and related animal pose-estimation packages have risen to the challenge of solving this for the single-lab experiment, or even within a lab: you take your mice, you build these networks, and they're reasonably robust in your setting. But how will this look if you really want to share networks or datasets across labs, or even across an entire animal group like quadrupeds, or maybe across all mammals? Is there a way to leverage all the data people are amassing in a more interesting way? Simply put, the larger the scale, the greater the diversity, so we're trying to think about smart ways to leverage this data. Our question is how to create more generalizable, robust pose-estimation networks for users. To do this, as many people do in computer vision, we started with a benchmark. It's not the Weizmann horse dataset; we developed a new dataset we call Horse-10: around 8,000 images of Thoroughbred racehorses at sales across the United States, labeled by an equine expert who knew the keypoints and the anatomy of horses extremely well. All the horses are walking left or right and are roughly the same age at these sales, but they have very different coat colors, different image statistics in the backgrounds, different handlers, and so forth. The challenge: if we train on a small subset of these individuals, how well do the networks generalize to horses we consider out-of-domain individuals? And this is my reminder to tell you that all this data is linked on Papers with Code, so if you're interested in playing with the benchmark or the datasets, it's all available there.

The first thing we asked in this work, published this year at WACV, was how good different backbone architectures are: is there anything inherently better for pose estimation about architectures that perform better on ImageNet? Kaiming He and others have looked at this in the context of object recognition and other challenges, and what we found is largely in the same vein: for pose estimation, networks that are better on ImageNet perform better. I didn't put the network names here, but we tested many MobileNets, ResNets, and state-of-the-art EfficientNets. What you're seeing is that blue is the within-domain test, held-out images of the ten training horses, and the interesting test is out-of-domain, on the unseen horses. You see that the more powerful architectures generalize better, which was maybe a bit surprising given that they have many more free parameters and could be prone to overfitting. These are backbones pre-trained on ImageNet, which will become important in a moment. In general, we found that pre-training really matters; that's the take-home. I'm not showing the data here, but you can train for up to six times less time with half the amount of data and get the same performance, and compared to training from scratch there is a huge gap between the transfer-learning-based networks and the from-scratch networks, especially for the higher-performing architectures. So even just using pre-trained networks for animal pose estimation already gives you quite a boost in the robustness domain.

To contrast this with the domain shift inherent in the Horse-10 task, the out-of-domain individuals, we built Horse-C: following Dan Hendrycks and colleagues (you're probably familiar with ImageNet-C), we took the same corruptions and corrupted the horses to see how well the networks would perform on this challenging set. I'm not showing all the data here, but we tested different domain-adaptation techniques to try to close this gap. The take-home again: the transfer-learning networks, the starred lines, are always better than from-scratch training, again showing a benefit of pre-training, and batch-norm and test-time adaptation techniques worked well too. But what if we think broader: this is horses, but what about across species? There is a nice paper at ICCV from a few years ago that introduced the Animal Pose dataset. We trained on all the sheep data in that dataset and tested on the held-out species, and again, by class (MobileNets, ResNets, EfficientNets), we saw the same trend as on the horses: better ImageNet-performing architectures perform better on this out-of-domain robustness challenge.

Going forward, we want to think broader still: what if we could build datasets better than ImageNet for pre-training, with more of a pose prior, where animals are better represented? We started collecting datasets from others and ourselves: MacaquePose, which is also here tonight and is very cool, Stanford Dogs, the Animal Pose dataset, and others; I think many people in this room are represented in the data we collected. In total we had around 50,000 images with somewhat different keypoints per dataset, so there were some fun problems to solve, like how to train the same backbone without penalizing keypoints that are missing or not expressed in other datasets. We built gradient-masking techniques and tried to map keypoints across datasets, and the short answer is that this does improve pre-training compared to ImageNet. These are the source domains if you're interested, and we have a poster tonight, so you can talk to Shaokai, the first author of this work, in more detail. We also had held-out datasets for testing: we built a new dataset called iRodent based on iNaturalist, which we scraped to label a ton of different rodents, plus more lab-related data. Qualitatively, what we found very early in this test is that you can get away with smaller amounts of target fine-tuning data: even with 16 images of your new domain, this SuperAnimal pre-trained base model is better than ImageNet, and this is of course quantifiable. I won't go through all the details, but we also mixed and matched datasets to see which confer positive transfer and which actually hurt performance, so you can come up with a formula for which datasets to combine for your target domain. Overall, the SuperAnimal models almost always outperformed ImageNet pre-training, and importantly, on the Horse-10 out-of-domain robustness test they gave about an 11 mAP improvement.

This is ongoing work, but I also want to highlight that besides domain-adaptation techniques, we're interested in self- and semi-supervised learning, and we found that these SuperAnimal models are also very good semi-supervised learners: using pseudo-labeling to enhance performance gave about a 2x improvement, especially in the low-data target regime. Coming toward this larger vision of sharing networks and data: last year we launched the DeepLabCut Model Zoo, a tongue-in-cheek name given that we work on so many animals. The idea is that without installing any software, people can use Google Colab, simply click "cat" or "dog", or some amazing user-contributed models like primate faces or the model from MacaquePose, humans of course, then upload a video and run inference for these animals on their own data, and of course the cheetahs, in case you have any really big cats at home. What we're really trying to do now, and this is a call to action for the life scientists and those interested in computer vision, is to help us make better models by building better datasets. We've launched a web app: if you have some free time and want to click on really adorable animals from these cool datasets, go to contrib.deeplabcut.org. This all goes back into a community-driven effort to build better models for these plug-and-play solutions.

I think I'm good on time, so here are the take-homes, and again a nod to this really amazing legacy, which we also cover in a recent piece in Current Opinion in Neurobiology. I hope in this short talk I've convinced you that deep learning has really revolutionized our ability to analyze animal behavior, with higher precision and at a scale never before possible in the life sciences. We think animal pose estimation has particularly interesting challenges, and maybe even novel solutions, so there is a real opportunity to bridge across domains in interesting ways. I showed you that we built adaptive, data-driven assembly, which boosts performance compared to, say, HigherHRNet, the current state of the art on COCO. Pre-trained ImageNet networks offer known advantages, namely shorter training times and lower data requirements, but they also have a novel advantage: robustness to out-of-domain data, the regime a lot of life scientists find themselves in. Of course there is still a gap to close; I didn't solve everything in the last few years, and we're very interested in new techniques, whether transformers, batch-norm and test-time adaptation, self-supervised learning, or better pre-training, and I've hinted at a few ways we can start to close this gap. It's also an exciting time to think about which datasets confer positive versus negative transfer and how we can better leverage all this amazing community data. With that, I want to thank my amazing collaborators and my group at EPFL, my former lab at Harvard, and the people involved in the work today: Jessy Lauer, Shaokai Ye, Tian Qiu, Steffen Schneider, Maxime Vidal, Mu Zhou, and my long-standing collaborator Alexander Mathis, who really spearheaded all of this work with me.
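The talk mentions decoding keypoints from network confidence maps and using the confidence to avoid plotting occluded body parts off the animal. This is a minimal numpy sketch of that decoding step, not the DeepLabCut implementation; the function name and threshold are illustrative.

```python
import numpy as np

def decode_heatmaps(heatmaps, conf_threshold=0.5):
    """Convert predicted heatmaps of shape (K, H, W) into keypoints.

    Returns an array of shape (K, 3): (x, y, confidence). Keypoints whose
    peak confidence falls below `conf_threshold` get NaN coordinates, so
    occluded body parts are not plotted off the animal.
    """
    K, H, W = heatmaps.shape
    keypoints = np.full((K, 3), np.nan)
    for k in range(K):
        flat = np.argmax(heatmaps[k])          # index of the peak response
        y, x = np.unravel_index(flat, (H, W))  # back to 2D coordinates
        conf = heatmaps[k, y, x]
        keypoints[k] = (x, y, conf)
        if conf < conf_threshold:
            keypoints[k, :2] = np.nan          # drop location, keep confidence
    return keypoints

# Toy example: two 8x8 heatmaps, one confident peak and one weak peak.
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 0.9   # visible keypoint
hm[1, 6, 2] = 0.2   # occluded keypoint, filtered out
kps = decode_heatmaps(hm)
```

Real systems typically refine the integer peak with a sub-pixel offset, but the thresholding logic is the part that keeps occluded points off the plot.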
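The data-driven skeleton idea, starting from a naive candidate graph and keeping only the most discriminable edges, can be sketched with a maximum-spanning-tree over edge scores. This is a hypothetical stand-in for the calibrated assembly described in the talk, not the actual algorithm; `prune_skeleton` and the scores are invented for illustration.

```python
def prune_skeleton(num_parts, edge_scores):
    """Pick a data-driven skeleton from a fully connected candidate graph.

    edge_scores maps (i, j) body-part pairs to a discriminability score,
    i.e. how well the learned affinity separates correct from incorrect
    pairings. Keeping a maximum spanning tree leaves every body part
    connected using the fewest, highest-scoring edges.
    """
    parent = list(range(num_parts))  # union-find over graph components

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    skeleton = []
    for (i, j), score in sorted(edge_scores.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:              # edge joins two separate components: keep it
            parent[ri] = rj
            skeleton.append((i, j))
    return skeleton

# Toy graph over 4 body parts; the weak (0, 3) and (0, 2) edges get pruned.
scores = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.7, (0, 3): 0.1, (0, 2): 0.2}
edges = prune_skeleton(4, scores)
```

A spanning tree is the simplest structure guaranteeing zero unconnected body parts, which is exactly the first metric the talk reports for assembly quality.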
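The gradient-masking trick for training one backbone on datasets with different keypoint sets amounts to a loss that simply ignores keypoints a dataset does not annotate. A minimal numpy sketch of that masked loss, under the assumption of a squared-error objective; function and variable names are illustrative.

```python
import numpy as np

def masked_keypoint_loss(pred, target, visible):
    """Mean squared error over only the keypoints a dataset annotates.

    pred, target: (K, 2) arrays of predicted / ground-truth coordinates.
    visible: (K,) boolean mask; False marks keypoints this dataset does not
    define, so they contribute neither loss nor gradient.
    """
    diff = (pred - target) ** 2
    diff = diff[visible]          # drop unlabeled keypoints entirely
    if diff.size == 0:
        return 0.0
    return float(diff.mean())

pred = np.array([[1.0, 1.0], [5.0, 5.0], [9.0, 9.0]])
target = np.array([[1.0, 2.0], [0.0, 0.0], [9.0, 9.0]])
visible = np.array([True, False, True])  # middle keypoint missing here
loss = masked_keypoint_loss(pred, target, visible)
```

Without the mask, the undefined middle keypoint would dominate the loss and penalize the network for predictions that were never actually wrong.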
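The pseudo-labeling result mentioned for the SuperAnimal models rests on a simple selection step: promote confident model predictions on unlabeled frames to extra training labels. A conservative numpy sketch of that filter, assuming per-keypoint confidences are available; the per-frame "all keypoints confident" rule is one possible choice, not the published recipe.

```python
import numpy as np

def select_pseudo_labels(predictions, confidences, threshold=0.6):
    """Keep model predictions on unlabeled frames as extra training labels.

    predictions: (N, K, 2) predicted keypoints on N unlabeled frames.
    confidences: (N, K) per-keypoint confidences.
    A frame is promoted only if every keypoint clears the threshold,
    a deliberately conservative filter for the low-data regime.
    """
    keep = confidences.min(axis=1) >= threshold
    return predictions[keep], keep

preds = np.zeros((3, 4, 2))
confs = np.array([
    [0.90, 0.80, 0.95, 0.70],  # confident frame: kept
    [0.90, 0.30, 0.95, 0.70],  # one weak keypoint: dropped
    [0.65, 0.80, 0.90, 0.99],  # kept
])
pseudo, keep = select_pseudo_labels(preds, confs)
```

The selected frames would then be mixed into the labeled set for another round of fine-tuning, iterating as confidence improves.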


