Create Initials Understanding with airSlate SignNow
Do more on the web with a globally-trusted eSignature platform
Standout signing experience
Trusted reports and analytics
Mobile eSigning in person and remotely
Industry regulations and compliance
Create initials understanding, faster than ever before
Useful eSignature add-ons
See airSlate SignNow eSignatures in action
airSlate SignNow solutions for better efficiency
Our user reviews speak for themselves
Why choose airSlate SignNow
- Free 7-day trial. Choose the plan you need and try it risk-free.
- Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
- Enterprise-grade security. airSlate SignNow helps you comply with global security standards.
Your step-by-step guide — create initials understanding
Using airSlate SignNow’s eSignature solution, any business can speed up signature workflows and eSign in real time, delivering a better experience to customers and employees. Create initials understanding in a few simple steps. Our mobile-first apps make working on the go possible, even while offline! Sign documents from anywhere in the world and close deals faster.
Follow the step-by-step guide to create initials understanding:
- Log in to your airSlate SignNow account.
- Locate your document in your folders or upload a new one.
- Open the document and make edits using the Tools menu.
- Drag & drop fillable fields, add text and sign it.
- Add multiple signers using their emails and set the signing order.
- Specify which recipients will get an executed copy.
- Use Advanced Options to limit access to the record and set an expiration date.
- Click Save and Close when completed.
In addition, there are more advanced features available to create initials understanding. Add users to your shared workspace, view teams, and track collaboration. Millions of users across the US and Europe agree that a system that brings people together in one holistic digital location is what companies need to keep workflows functioning efficiently. The airSlate SignNow REST API allows you to integrate eSignatures into your app, website, CRM, or cloud storage. Check out airSlate SignNow and enjoy quicker, smoother, and overall more effective eSignature workflows!
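For developers, here is a rough Python sketch of what such a REST API integration might look like: it uploads a document and sends a signing invite. The endpoint paths, payload fields, and placeholder credentials are assumptions based on common patterns, not verified calls; consult the current airSlate SignNow API reference before relying on them.

# Minimal sketch of an eSignature workflow over the airSlate SignNow REST API.
# Endpoint paths and payload fields are assumptions; check the official API docs.
import requests

API_BASE = "https://api.signnow.com"           # assumed base URL
ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"      # obtained via the OAuth2 flow
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Upload the document you want signed.
with open("agreement.pdf", "rb") as f:
    upload = requests.post(f"{API_BASE}/document", headers=HEADERS, files={"file": f})
upload.raise_for_status()
document_id = upload.json()["id"]

# 2. Send a signing invite to the recipient; signing order, executed-copy recipients,
#    and expiration would be configured through additional payload fields.
invite_payload = {
    "to": [{"email": "signer@example.com", "role": "Signer 1", "order": 1}],
    "from": "sender@example.com",
    "subject": "Please sign the agreement",
}
invite = requests.post(f"{API_BASE}/document/{document_id}/invite",
                       headers=HEADERS, json=invite_payload)
invite.raise_for_status()
print("Invite sent:", invite.json())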
How it works
airSlate SignNow features that users love
Get legally-binding signatures now!
What active users are saying — create initials understanding
Related searches to create initials understanding with airSlate SignNow
Create initials understanding
All right, so thank you so much for the invitation. This paper is joint with Drew Fudenberg at MIT, and it is about using machine learning to help us achieve not just better predictions but also potentially better understanding of a given domain. There has been a lot of talk about machine learning, and many of the applications have been in the context of prediction problems: there is some outcome of interest that we want to predict, and we have access to a large set of features. The techniques from machine learning are very well suited to the goal of predictive accuracy, but the methods are often notoriously black-box. That is fine for many applications, but from a research or modeler's perspective it may not be fully satisfying. So Drew and I were curious about whether we could use these techniques not just to achieve better predictions but also to teach us something about an underlying domain. Specifically, that might look like using machine learning to help us identify new patterns, or using it to help us build upon existing models. From this perspective, machine learning need not be seen as a substitute or replacement for traditional modeling; it might actually serve as a complement to these techniques.

In our paper we focus on a particular domain in which we can ask these questions, and that domain is predicting initial play in games. I want to mention that there has been some very nice work by Kevin Leyton-Brown and James Wright on this problem as well. The specific prediction task is the following: I give you a three-by-three payoff matrix, those are your features, and I want you to predict the action that is most frequently chosen by subjects acting as the row player. So it is a classification problem, and I will assess the accuracy of your model using the correct classification rate: simply, in what fraction of games is your predicted modal action in fact the true modal action in our data set. Throughout, I will always report 10-fold cross-validated prediction errors, so everything is an out-of-sample test. This is an old problem in game theory, and there are many models one could use for the prediction task I have outlined. What we ask is whether we can use machine learning to improve our understanding beyond those existing models, and potentially to build extensions of them.

The paper unfolds in three parts. I will give you a quick outline of the three parts and then go through them in more detail. In the first part, we look at instances, meaning games, where machine learning predicts well but the existing models do not. Those instances are particularly interesting: the fact that the machine learning algorithm predicts well means there is some regularity, while the fact that the existing models do not suggests that the regularity is not yet captured by those models. We look at these instances, and from them we are able to identify a single-parameter extension of our best model, which then achieves the performance of the black box. After that, we try to break this best model: we go beyond the data set we had in part one and use machine learning to algorithmically design new instances of games that we then collect data for, and we find a model that fits these new games better than the model we identified in part one. Finally, we loop back and use machine learning to bridge these two models in a sort of hybrid model, where the algorithm helps us choose which of the models to use for prediction on each game. Again, that was fast, so let me now go through these parts slowly.
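To make the prediction task concrete, here is a minimal Python sketch of the evaluation described above: a model is anything that maps a 3x3 payoff matrix to one of the three row actions, and it is scored by the 10-fold cross-validated rate at which it picks the true modal action. The array layout and the fit/predict interface are illustrative assumptions, not the authors' code.

# Sketch of the evaluation: 10-fold cross-validated correct-classification rate.
# `games` is a hypothetical array of shape (n_games, 3, 3) holding row-player payoffs,
# and `modal_actions` holds the observed modal row action (0, 1, or 2) for each game.
import numpy as np
from sklearn.model_selection import KFold

def cv_accuracy(model, games, modal_actions, n_splits=10, seed=0):
    """Fraction of held-out games whose modal action the model predicts correctly."""
    hits = 0
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(games):
        model.fit(games[train_idx], modal_actions[train_idx])  # no-op for parameter-free rules
        predictions = model.predict(games[test_idx])
        hits += int(np.sum(predictions == modal_actions[test_idx]))
    return hits / len(games)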
We begin with a data set of 86 symmetric 3x3 normal-form games. This is a meta data set aggregated from six different lab experiments, and the original sources are listed here. We have between 42 and 147 observations per game, and each observation is really initial play: there is no learning and no repetition. The data set was aggregated by Kevin and James, whom I mentioned earlier.

We consider two different models for predicting play in these games. The first model is built on Nash equilibrium: we simply predict an action that is consistent with a pure-strategy Nash equilibrium, and if more than one action has that property we predict uniformly at random from those actions. The second model is a leading model from behavioral game theory, the Poisson cognitive hierarchy model. Let me give you a heuristic sense of what it does. It says that there are potentially multiple levels of sophistication that players can exhibit. Level zero is a maximally unsophisticated player who chooses among his actions uniformly at random. A level-one player is a little more sophisticated: that player acts as if the opponent is level zero and best responds to that level-zero opponent. We can then recursively define higher and higher levels of sophistication. The model assumes a Poisson distribution over these levels, and that distribution has a rate parameter tau, which is the single free parameter of the model and is estimated from the data.

How do these models perform on our prediction task? Throughout, I will show prediction accuracies relative to two benchmarks. One benchmark is completely naive prediction: you can always guess at random, which gives an accuracy of one third. At the other extreme, if you had access to the test set, you could predict the modal action 100 percent of the time. Here are the accuracies of the two models, and both improve on guessing at random: uniform Nash correctly classifies the modal action 42 percent of the time, and the Poisson cognitive hierarchy model does quite well, correctly classifying the modal action 72 percent of the time. Additionally, when we estimate the free parameter tau from the data, the more complex model actually collapses to just predicting the level-one action: it is as if, game by game, we simply found the action that is the best response to uniform play and predicted that action. So from here on I will set aside the more complicated Poisson cognitive hierarchy model and work with the simpler level-one model, which equivalently achieves this accuracy of 72 percent.

Those are the standard models; let's now try a machine learning approach. We take a fairly cut-and-dried approach, describing each game with features that we think are strategically relevant. To give some examples: for each action we define indicator variables that capture certain strategic properties of that action, such as whether it is part of a pure-strategy Nash equilibrium; whether it is part of a Pareto-dominant pure-strategy Nash equilibrium, by which we mean the payoffs in that Nash equilibrium Pareto-dominate the payoffs in all other pure-strategy Nash equilibria; whether it is a level-k action; whether it is part of a profile that maximizes the sum of player payoffs; and so forth. We then train a decision tree ensemble to predict the modal action given these features, and we find that the ensemble improves upon the existing models. It is not a huge improvement, we are getting 77 percent accuracy, but it is an improvement, and so we can ask where that improvement is coming from.
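For concreteness, here is a minimal numpy sketch of the two baseline predictors for a single game, under the assumption that U is the 3x3 row-player payoff matrix of a symmetric game (so the column player's payoff at (i, j) is U[j, i]). It illustrates the rules as described in the talk, not the authors' implementation.

# Baseline predictors for a single 3x3 game with row-player payoffs U (numpy array).
# In a symmetric game the column player's payoff at profile (i, j) is U[j, i].
import numpy as np

def level1_action(U):
    """Level-1: best response to an opponent who mixes uniformly over columns."""
    expected = U.mean(axis=1)          # expected payoff of each row action vs. uniform play
    return int(np.argmax(expected))

def uniform_nash_action(U, rng=np.random.default_rng(0)):
    """Predict uniformly at random among actions that are part of a pure-strategy NE."""
    pure_ne_actions = []
    for i in range(3):
        for j in range(3):
            row_best = U[i, j] >= U[:, j].max()    # row player cannot gain by deviating
            col_best = U[j, i] >= U[:, i].max()    # column player cannot gain by deviating
            if row_best and col_best:
                pure_ne_actions.append(i)
    candidates = sorted(set(pure_ne_actions)) or list(range(3))   # fall back to all actions
    return int(rng.choice(candidates))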
To answer that question, we find the games that are correctly predicted by the decision tree ensemble but not by the level-one model. One example of such a game is the one I have up here. In this game, action a1 is the level-one action: it has the highest expected payoff against uniform play. But action a2 has almost as high an expected payoff against uniform play, and in addition it leads to lower variation in the payoffs you might receive. In this game, although a1 is the level-one action, a2 is the action most frequently chosen by row players, and this is a regularity we find in all of the games whose modal action is correctly predicted by the algorithm but not by level one. It looks quite familiar: it essentially looks like risk aversion, and risk aversion is something we can take and port back into the level-one model. So we extend level one by supposing instead that players have a utility function over payoffs, where x is the payoff and alpha is a parameter that governs the degree of risk aversion. This adds one free parameter to the level-one model. We estimate alpha on the data, test it out of sample, and this extension of level one, which we call level-1(α), actually matches, and even weakly improves upon, the performance of the decision tree ensemble. The takeaway is that even if we do not want the black-box algorithm as the end output, we may still be able to use it to lead us to interpretable parametric extensions of our existing models.
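The talk does not spell out the exact functional form of the risk-averse utility, so the sketch below uses a standard CRRA-style utility as a stand-in assumption (the paper's specification may differ); it only illustrates how a single risk-aversion parameter alpha slots into the level-one rule.

# Level-1(alpha): level-1 play, but ranking actions by expected *utility* of payoffs.
# The CRRA utility below is a stand-in assumption; the paper's exact form may differ.
import numpy as np

def crra_utility(x, alpha):
    """Concave utility over (strictly positive) payoffs; alpha = 0 is risk neutrality."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if np.isclose(alpha, 1.0) else x ** (1 - alpha) / (1 - alpha)

def level1_alpha_action(U, alpha):
    """Best response to a uniformly mixing opponent under risk-averse utility."""
    expected_utility = crra_utility(U, alpha).mean(axis=1)
    return int(np.argmax(expected_utility))

# alpha is the single free parameter; it can be estimated on training data, for example
# by a grid search maximizing the fraction of training games whose modal action is matched.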
We could have stopped here: the level-1(α) model correctly predicts the modal action 80 percent of the time, which sounds pretty good, and we might say we are done, that this is an almost complete model of play. The catch is that the games in this data set are really special: they were designed by experimental game theorists to achieve certain experimental goals, and we might worry that the performance of level-1(α) is due to those properties. To test that, we need to see play in new kinds of games, which raises the question of what kinds of games to look at, since even in the space of 3x3 payoff matrices there is a very large number of games we could try to get data for. The first thing we do is something naive: we generate payoffs uniformly at random. This still turns out to generate special games. They are special in the sense of being strategically very simple: they tend to have a small number of Nash equilibria, and many of them have a strictly dominant action. As a result, we find that level-1(α) performs even better on these randomly generated games than on the lab games.

If we want to discover a part of the game space where level-1(α) does not do so well, we need a slightly more sophisticated game-generation procedure, and that is what we do in the second part of the paper, where we use machine learning to help us find cases that break this model. Our procedure is as follows. We begin with our data set of games with observed play and train an algorithm that takes as input the payoff matrix and outputs, no longer a prediction of the action that is taken, but a prediction of the frequency of play of the level-one action. We then generate new games at random and apply that algorithm game by game to produce a predicted frequency with which players will play the level-one action in each game. We set 50 percent as a somewhat arbitrary cutoff, deciding that more than 50 percent was too much and less than 50 percent was acceptable. We took all the games where the predicted frequency of level-one play exceeded 50 percent and dropped them, repopulated those slots with new randomly generated games, again applied the algorithm to predict the frequency of level-one play, and repeated this procedure until we had 200 games that were all predicted to have relatively low frequencies of level-one play. We then elicited play on those 200 games on Mechanical Turk. Of course there is no guarantee that level-one play is actually low in these games; it is just that the algorithm thinks that will be the case.

So the first thing we verify is that the procedure in fact achieves that goal. Remember that level-1(α) was achieving an accuracy of roughly 80 percent on the lab games and about 90 percent on the first batch of randomly generated games; here we see it really plummet, with performance on the new algorithmically generated games of about 40 percent. That is not to say there is no structure in these games: a decision tree ensemble achieves 73 percent accuracy, so there are regularities; they just are not captured by the level-1(α) model. So what are those regularities? Here we look directly at the decision trees themselves. We do not want to look at the ensemble, because it is huge and difficult to interpret, but we can look instead at a very constrained version of it: a single decision tree that we force to have just two splits. The best two-split decision tree turns out to look like the following. It begins by asking whether action a1 is part of a Pareto-dominant Nash equilibrium; if so, predict a1. If not, it asks whether action a2 is part of a Pareto-dominant Nash equilibrium; if so, predict a2, and if not, predict a3. This is very interpretable: it basically says to predict the Pareto-dominant Nash equilibrium. We did not really like the ordering of actions used here, so we let it motivate a closely related rule we call PDNE, which predicts uniformly at random from all actions consistent with some Pareto-dominant Nash equilibrium, and otherwise predicts uniformly at random from the available actions. This simple PDNE rule turns out to achieve a large part of the improvement of the decision tree ensemble over the level-1(α) model.
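Here is a minimal Python sketch of the game-generation loop described in this part: a regressor trained on the existing games predicts the frequency of level-one play in candidate random games, and only candidates below the 50 percent cutoff are kept. The featurize step, the payoff range, and the choice of regressor are illustrative assumptions rather than the authors' actual setup, and training_games / level1_frequencies stand for the existing data with observed play.

# Sketch of the adversarial game-generation loop: keep only randomly generated games
# that a trained model predicts will have a low frequency of level-1 play.
# `featurize`, the payoff range, and the regressor choice are stand-in assumptions;
# `training_games` and `level1_frequencies` are the existing games with observed play.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def random_game():
    return rng.integers(0, 100, size=(3, 3))         # random 3x3 row-player payoffs

def featurize(U):
    return np.asarray(U, dtype=float).flatten()      # placeholder feature map

regressor = RandomForestRegressor(random_state=0)
regressor.fit([featurize(U) for U in training_games], level1_frequencies)

kept = []
while len(kept) < 200:
    candidates = [random_game() for _ in range(500)]
    predicted = regressor.predict([featurize(U) for U in candidates])
    # Keep only games whose predicted frequency of level-1 play is below the cutoff.
    kept.extend(U for U, p in zip(candidates, predicted) if p < 0.5)
new_games = kept[:200]    # these games are then sent out for play elicitation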
Now, that is not to say PDNE has been discovered to be a better model than level-1(α), because, again, this is a very special set of games. There is sometimes a tendency to run horse races between models and declare that model A is better than model B, but what we really take away here is that there is not necessarily a lot of meaning in that, because the ranking you get is very particular to the specific game-generation process that produced your data set. So instead of forcing either PDNE or level-1(α) to be a universal predictor of play, why not merge them and accept that they are both good models, just good models of play for different kinds of games? That is further evidenced in this table: here we aggregate all of our games, the lab games, the randomly generated games, and the algorithmically designed games, and you see that both models do well but neither is the complete story.

So in the final part of the paper, which I will go through very quickly, we use machine learning to help us decide, game by game, which of these two underlying models to use for prediction. First, we take our training data and, for each model, train a decision tree that takes as input the payoff matrix and outputs a prediction of the accuracy of the associated model. Then, for a new instance, we take that game, run it through the two decision trees associated with the two models, obtain a predicted probability of accurate prediction for each model, and simply go with whichever model is predicted to have the higher probability of accurate prediction. This hybrid model improves on the accuracies I showed you previously: level-1(α) achieves 68 percent, PDNE achieves 56 percent, and the hybrid model really blows these out of the water, achieving 80 percent. I want to point out that this more complex model was not guaranteed to improve upon its two component models, because these are all out-of-sample tests, and the extent of the improvement in particular is not guaranteed. This improvement is also, again, special to this data-generating process, and how much we expect the hybrid model to improve depends a lot on which game-generation process is used.
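A minimal sketch of that selection step follows, reusing the illustrative featurize and level1_alpha_action helpers from the earlier sketches and assuming a pdne_action helper that implements the PDNE rule from the previous part; the tree settings are placeholders rather than the authors' exact choices.

# Sketch of the hybrid predictor: one tree per component model learns, game by game,
# how likely that model is to predict the modal action correctly, and the hybrid
# defers to whichever component has the higher predicted probability of being right.
# `featurize`, `level1_alpha_action`, and `pdne_action` are the illustrative helpers
# assumed above; max_depth and other settings are placeholders.
from sklearn.tree import DecisionTreeClassifier

def fit_accuracy_tree(predict_fn, games, modal_actions):
    """Learn to predict whether `predict_fn` gets a game's modal action right."""
    X = [featurize(U) for U in games]
    y = [int(predict_fn(U) == m) for U, m in zip(games, modal_actions)]
    return DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def hybrid_action(U, tree_l1a, tree_pdne, alpha):
    """Pick the component model with the higher predicted chance of being correct."""
    x = [featurize(U)]
    # Assumes both "correct" and "incorrect" outcomes appear in each tree's training data.
    p_l1a = tree_l1a.predict_proba(x)[0][1]
    p_pdne = tree_pdne.predict_proba(x)[0][1]
    return level1_alpha_action(U, alpha) if p_l1a >= p_pdne else pdne_action(U)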
In the paper we also look at how the model assignment is done, and we learn a few things from that, but let me skip it here. So, to conclude: we can roughly think of models as living in a space where we evaluate them by their interpretability and by their predictive accuracy. Traditional economic models tend to be high on interpretability and reasonably predictive, but maybe not the most predictive. On the other hand, we have black-box algorithms, which are highly predictive but potentially very low on interpretability. So there is some trade-off between these two goals, but potentially there is another model class out there that is roughly as predictive as the black-box algorithms and not that much less interpretable than our traditional models. If there is a model class like that out there, we want to know what it is, and this paper has really been about how we might search for that model class by making use of the black-box algorithm. All right, thank you. [Applause]

[Audience question] I'm not sure I understand the question. We basically took the ensemble that we had trained and looked at the cases it predicted correctly. So essentially, for each game, we can record whether it was correctly predicted by level one and whether it was correctly predicted by the ensemble, and then look at the cases that are correctly predicted by the ensemble and incorrectly predicted by level one.
Frequently asked questions
How can I have someone sign on a PDF file?
How do I insert an electronic signature box into a PDF?
How can I sign emailed documents?
Get more for create initials understanding with airSlate SignNow
- Email Professional Resume byline
- Email Professional Resume autograph
- Email Professional Resume signature block
- Email Professional Resume signed electronically
- Email Professional Resume email signature
- Email Professional Resume electronically signing
- Email Professional Resume electronically signed
- Email Basic Employment Application eSignature
- Email Basic Employment Application esign
- Email Basic Employment Application electronic signature
- Email Basic Employment Application signature
- Email Basic Employment Application sign
- Email Basic Employment Application digital signature
- Email Basic Employment Application eSign
- Email Basic Employment Application digi-sign
- Email Basic Employment Application digisign
- Email Basic Employment Application initial
- Email Basic Employment Application countersign
- Email Basic Employment Application countersignature
- Email Basic Employment Application initials
- Email Basic Employment Application signed
- Email Basic Employment Application esigning
- Email Basic Employment Application digital sign
- Email Basic Employment Application signature service
- Email Basic Employment Application electronically sign
- Email Basic Employment Application signatory
- Email Basic Employment Application mark
- Email Basic Employment Application byline