eSignature Lawfulness for Business Ethics and Conduct Disclosure Statement in European Union

  • Quick to start
  • Easy to use
  • 24/7 support

Award-winning eSignature solution

Simplified document journeys for small teams and individuals

eSign from anywhere
Upload documents from your device or cloud and add your signature with ease: draw, upload, or type it on your mobile device or laptop.
Prepare documents for sending
Drag and drop fillable fields on your document and assign them to recipients. Reduce document errors and delight clients with an intuitive signing process.
Secure signing is our priority
Secure your documents by setting up two-factor signer authentication. See who made changes to your document, and when, with the court-admissible Audit Trail.
Collect signatures on the first try
Define a signing order, configure reminders for signers, and set your document’s expiration date. signNow will send you instant updates once your document is signed.
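
To make the workflow above concrete, here is a minimal sketch of how a signing order, reminders, an expiration date, and two-factor signer authentication might be expressed against a generic eSignature REST API. The host, endpoints, field names, and token are hypothetical illustrations for this sketch only; they are not signNow's actual API.

```python
# Hypothetical sketch of an eSignature invite with a signing order, reminders,
# and an expiration date, sent to a generic REST API. All endpoints, payload
# fields, and credentials below are illustrative placeholders.
import requests

API_BASE = "https://api.example-esign.com/v1"  # placeholder host
TOKEN = "YOUR_ACCESS_TOKEN"                    # placeholder credential

invite = {
    "document_id": "doc_123",
    # Recipients sign in the listed order: step 1 completes before step 2 starts.
    "signers": [
        {"email": "cfo@example.com", "order": 1, "auth": "sms_code"},  # two-factor via SMS
        {"email": "ceo@example.com", "order": 2, "auth": "password"},
    ],
    "reminders": {"after_days": 3, "repeat_every_days": 2},  # nudge unsigned recipients
    "expires_in_days": 14,                                   # invite lapses after two weeks
}

response = requests.post(
    f"{API_BASE}/documents/{invite['document_id']}/invites",
    json=invite,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Invite sent:", response.json())
```

The design point the sketch illustrates: the sending side declares order, reminders, and expiry once, and the service then drives the "instant updates once your document is signed" behavior described above.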

We spread the word about digital transformation

signNow empowers users across every industry to embrace seamless and error-free eSignature workflows for better business outcomes.

  • 80% — completion rate of sent documents
  • 1h — average time from sent to signed
  • 20+ — out-of-the-box integrations
  • 96k — signature invites sent per week, on average
  • 28.9k — users in the Education industry
  • 2 — minimum clicks to sign a document
  • 14.3M — API calls per week

Why choose airSlate SignNow

    • Free 7-day trial. Choose the plan you need and try it risk-free.
    • Honest pricing for full-featured plans. airSlate SignNow offers subscription plans with no overages or hidden fees at renewal.
    • Enterprise-grade security. airSlate SignNow helps you comply with global security standards.

Your complete how-to guide - eSignature lawfulness for Business Ethics and Conduct Disclosure Statement in the European Union

Self-sign documents and request signatures anywhere and anytime: get convenience, flexibility, and compliance.

eSignature lawfulness for Business Ethics and Conduct Disclosure Statement in the European Union

In the European Union, ensuring the lawfulness of eSignatures on a Business Ethics and Conduct Disclosure Statement is crucial for businesses. airSlate SignNow simplifies the process and helps ensure legal compliance.

How to Use airSlate SignNow for eSigning Documents (a hypothetical API sketch follows the steps):

  • Open the airSlate SignNow web page in your browser.
  • Sign up for a free trial or log in.
  • Upload a document you want to sign or send for signing.
  • If you're going to reuse your document later, turn it into a template.
  • Open your file and make edits: add fillable fields or insert information.
  • Sign your document and add signature fields for the recipients.
  • Click Continue to set up and send an eSignature invite.
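
For developers, the same flow can be automated. Below is a minimal end-to-end sketch mirroring the steps above: upload a document, add a fillable signature field assigned to a recipient, and send the invite. The host, endpoints, and payload shapes are hypothetical placeholders, not signNow's actual API.

```python
# Hypothetical end-to-end sketch of the upload -> add field -> send invite flow.
# Endpoints and payloads are illustrative placeholders only.
import requests

API_BASE = "https://api.example-esign.com/v1"             # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}   # placeholder credential

# 1. Upload the document to be signed.
with open("disclosure_statement.pdf", "rb") as f:
    doc = requests.post(f"{API_BASE}/documents", headers=HEADERS,
                        files={"file": f}, timeout=30).json()

# 2. Add a fillable signature field and assign it to the recipient.
field = {"type": "signature", "page": 1, "x": 80, "y": 640,
         "assigned_to": "signer@example.com"}
requests.post(f"{API_BASE}/documents/{doc['id']}/fields",
              headers=HEADERS, json=field, timeout=30).raise_for_status()

# 3. Send the eSignature invite.
invite = {"to": [{"email": "signer@example.com", "role": "signer"}]}
requests.post(f"{API_BASE}/documents/{doc['id']}/invites",
              headers=HEADERS, json=invite, timeout=30).raise_for_status()
print("Document", doc["id"], "sent for signature")
```
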

airSlate SignNow empowers businesses to send and eSign documents with an easy-to-use, cost-effective solution. It offers strong ROI tailored to SMBs and mid-market companies, transparent pricing with no hidden fees, and 24/7 support on all paid plans.

Experience the benefits of airSlate SignNow today and streamline your document signing process!

How it works

User rating: 4.6 out of 5 (1,640 votes)
  • Collect signatures 24x faster
  • Reduce costs by $30 per document
  • Save up to 40 hours per employee per month
Be ready to get more

Get legally-binding signatures now!

  • Best ROI. Our customers achieve an average 7x ROI within the first six months.
  • Scales with your use cases. From SMBs to mid-market, airSlate SignNow delivers results for businesses of all sizes.
  • Intuitive UI and API. Sign and send documents from your apps in minutes.

FAQs

Below is a list of the most common questions about digital signatures. Get answers within minutes.

Related searches to eSignature lawfulness for Business Ethics and Conduct Disclosure Statement in the European Union

eIDAS Regulation
Qualified electronic signature European Commission
EU Trusted List electronic signature
ETSI EN 319 142
eIDAS Regulation (910/2014)
Electronic signature Regulation
signNow
Valid e-signature

Join over 28 million airSlate SignNow users

How to eSign a document: eSignature lawfulness for Business Ethics and Conduct Disclosure Statement in the European Union

uh i'm not going to anticipate any any uh comment or opinion on my end because i want to lose you to really take the floor the only thing i want to say is something that is that i've said already when the proposal was uh uh adopted and published two days ago which is whatever you think of it this is going to change many things meaning there is a before when many governments many private sector representatives were working on frameworks principles and so on and the reason after this proposal when a one institution the european commission has taken a position on uh creating a governance framework and proposing a governance framework for artificial intelligence in a way that is not just a declaration of principles but it's a full-fledged regulatory proposal so i'm sure that this is going to also um steer the direction of the debate from now on not only uh in europe but certainly a lot in europe but not only in europe but also um outside europe and this is why today uh what will be presented by lucy will then be commented upon by academics uh academics that also have a business ads if you wish a corporate hats but also those that will need to work on the proposal from now on uh a member of the european parliament but also an expert that connects from the united states that has been uh extremely um um let's say uh interested and attentive in uh in looking at what uh uh the eu was doing in this field so i will present them them bit by bit uh as we move on um uh what i want to do now as we have already 336 people in the room is uh to uh give the floor to lucy cioli and lucy is the director for artificial intelligence and uh and uh uh digital uh industry uh in the gconnect director a uh she is the person that materially not only has orchestrated and accompanied before the work of the high-level expert group the work of the commission um on ai but also the work that has uh led to the uh a finalization and presentation of this proposal so lucy we have been uh fighting many battles we have been having many meetings till late hours of the past years and uh uh we are extremely excited that this has happened um i don't know if you're relieved or now your communication work starts so maybe you cannot take a break uh that much but we are extremely happy and proud that you have accepted our invitation today so i give you the floor i will share my screen so that you whenever you want me to uh to uh change the slides uh you just tell me next and i will give you the floor for your presentation thanks a lot thanks a lot andrea and thanks a lot to seps for inviting me to to make this presentation on this regulatory framework as andrea said it was published only a couple of days ago so it's very fresh and this is probably one of the first presentations i ever make about this regulatory framework i hope i am actually able to explain it because it's complex it's complex it's the outcome of a number of years of work and although andrea has kindly indicated me has been accompanying all this work i was of course surrounded by an excellent team and a lot of other people including the high level expert group upon whose work this is really developed so let me start with the next slide which simply says a very simple thing uh you know we're gonna be talking about the regulatory framework in the next uh few months and probably years and this is gonna make artificial intelligence look very bad because everybody thinks that we regulate because of course this is a harmful kind of technology so for me the most important thing 
to to start by saying is that artificial intelligence is actually very good for our economies for our societies we've seen the impact it has made during covid it's very versatile it can be applied to many different things and but however it does raise some concerns in particular in terms of uh fundamental rights and in terms of safety and it is because we want to be able to make sure that the use of artificial intelligence does not violate the rights and the high standards of safety that we have acquired in our lives in the european union this is why we think we need to come forward with some rules for artificial intelligence next slide and next slide so basically our starting point was simply to note that there are characteristics of ai like opacity like being probabilistic like being autonomous um that were giving rise to some issue and as i said in particular in terms of safety and fundamental rights and this has created a certain mistrust today if you look at the statistics there are not many companies using ai in the european union and about 65 of them would say that this is because they don't trust artificial intelligence so the way we have to look at the rules on ai is an instrument to enhance trust and having more businesses and organizations use artificial intelligence because artificial intelligence as i said is good for our societies next slide we immediately faced a difficult problem when we started regulating on or writing this piece of legislation the fact that it's probably the first time somebody pretends to write a regulation about a technology and actually this is not a regulation about a technology it's a regulation about the the way this technology is used or applied in certain specific applications but one complexity is the fact that ai is a set of techniques and that these techniques evolve so the way uh ai is developed today is probably not the same in which it will be developed in a few years time so we had to come up with a definition of ai there was kind of future proof like the whole legislative piece had to be a little bit future proof and so one decision we made was to opt for a definition which is a bit general like which like the one you have on the screen which is actually the oecd definition and using this definition allows us also to indicate a list of techniques in annex and our legislative technique exactly is to be able to update these annexes which are more technical over time maybe through delegated access so that we can take into account the changes that take place in the technology next slide we come up with a pyramid of risk you know that um you know maybe the second complexity we had to face was the fact that uh we didn't want to regulate the whole artificial intelligence market because we don't think we need that we think we really wanted to only focus on what we consider to be higher risk regular applications of artificial intelligence so those applications have a higher risk of violating fundamental rights or safety and um because of that we we we we we we had to come up with a definition of what is high risk this is what you will find in the annex to our regulation and um we identified through also the evidence that we had some specific use cases of of high risk now when we look at this pyramid this pyramid summarizes all the different levels of risk that we have addressed most of the artificial intelligence marker is made of either is not risky then there is maybe a very small top of what is actually an is cannot talk about it and then there 
is a group in the middle of high risk and this group can include both artificial intelligence embedded in in products like medical devices or artificial intelligence we call it stand-alones artificial intelligence that is in software it can be recruitment applications it can be applications dedicated to um education or to determine the credit worthiness of a person and then we are going to have another group of ai applications uh to which we only apply transparency obligations and this is to allow people for example to be aware of the fact that they are interacting with artificial intelligence next this is what we i've just mentioned now it's the fact that one important part of the legislation will impose transparency obligations on some ai systems that need to make people aware that they are interacting with ai and so this can happen with the chat about but it can also happen with deep s not so much to make people aware they're interacting in ai here but mostly to make sure that if this is not obvious that people are not deceptive by by by the deep facts and i think another important element is that uh these systems if they use emotional recognition uh systems they should flag it to people because this is something that uh emotional recognition is something that can actually be very good in certain applications but less tolerated in certain other cases next slide let me now focus on the most important part which is the pastor or the high risk applications and here as i said earlier in a way we have two groups and this is how the easiest way to maybe imagine this regulation is that one group is made of artificial intelligence there is a component of products and many of these products that are already regulated in the european union because of safety and i think for example medical devices or think of toys or think of robots for example and then there are other categories of artificial intelligence where it is not embedded in products and which in our opinion represent use cases that can be high risk from the point of view the violation of fundamental rights or safety and when i talk about fundamental rights i talk about the whole group of fundamental rights but the one that normally stands out the most is obviously non-discrimination and what you can see in the slide we're just briefly listed the areas that are subject to um to the list of use cases if you're interested more you have to go to annex 3 of the legislation and this will include biometric identification it will include critical infrastructure education employment use cases access to private services like financial services and also public certain public services law enforcement migration as a room and border control and administration of justice and democratic processes so in essence this is the groups or the areas in which the high risk use cases fall into next slide what do we do with these use cases basically what we are putting in place is a sea marking process sea marking is a process that is applied in our product safety legislation and basically what it does is simply say is before a product is put on the european market it has to get a c marking to do that it means that it has to comply with a certain number of rules and obligations and once it is c marked it can be put on the european market and you can see accurate freely across the european union market so we use this approach because it's used in safety product legislation it's pretty successful and um it can apply also to ai also when ai is a software because in 
fact what we notice also in the legislation is that increasingly for example in medical devices regulation software is considered like a product so we consider emitter be the functional equivalence of ai whether it's in a product or whether it's in a software and we apply the c marker system to all of them next slide what obligations do these uh artificial intelligence systems have to comply with in their sea market meaning that before they have been put on the market they are going to be checked in terms of their compliance to a certain list of obligations and this list is what we have on the screen now it we're going to be checking that the artificial intelligence system is using high quality data sets that there is documentation that has been established in relation to the way the system has been designed and the parameters that has been fed with um it will have to provide a certain level of information to the user that information will also critically include information about the human oversight system that has to be put in place in particular by the users and then also give information about the tests that have been carried out to make sure that the system is robust accurate and resilient as much as possible to cyber security now these are five requirements which are very much based on the work of the high level expert group they came up with key requirements and they were probably six or seven we have merged probably a couple of them into one uh so this is the list and so this list is also the outcome of the piloting that we have made through the work of the high level expert group next slide in essence these obligations imply both the provider and the user of artificial intelligence systems the provider has to make sure exactly when it undertakes any an assessment in front of a conformity as part sorry of a conformity assessment and before putting the the good or the software on the market it has to establish a quality management system in its organizations it has to draw up the technical documentation it has to comply with login obligations so that the operation of the ai system can be monitored when in use undertake the conformity assessment before the artificial intelligence system is put on the market and get the sea level in market help in monitoring exposed the artificial intelligence systems and collaborate with the market soviets authorities because a system of conformity assessments where a provider of a product or a software goes and makes certain checks about the quality and the transparency of the system is fine in terms of minimizing the risk that can emerge with artificial intelligence but it doesn't eliminate those risks completely so there is a need to continue monitoring uh the system when it's news and also to make sure that when it's in use there is a system of market surveillance authorities who are able to intervene in case of harm and these market surveillance authorities may go and check the documentation that has been presented at the time of the conformity assessment to see if there is any problem there or simply check whether maybe it's a fault on the side of the user or it's nobody's fault and then act by consequence by either asking to withdraw the system from the market or imposing certain sanctions as as needed so the other set of obligations is actually on the user but these are much lighter than on the provider and the user basically has to follow the instructions of use very importantly the user has to put in place the human oversight system because 
it's only the user of course who can ensure that monitor possible risks and so on existing legislation like gdpr continues to apply next slide so if we look at this from the point of view of the life cycle we have an operator or a who has to or a developer who has to design the artificial intelligence systems in lines with the requirements explained earlier bring this to a conformity assessment this is done by notified bodies in european union or make a self-assessment when a notified body doesn't exist and continues to monitor the performance of the system once in the market also because the system may continue to learn and in this case if continuous learning or um the the the retraining through a new set of data for example of that same system gives rise to big changes in terms of the behavior and the performance characteristics of that system then that system will have to be assessed again and we have to undertake a new conformity assessment next slide that was about what we would do for the high-risk systems which i said are those ai systems embedded in products which are already subject to our safety legislation and the list of use cases that you will find in annex 3 of the regulation but then we also prohibit some practices and we prohibit these are only four practices they are very limited but we think they are important we prohibit the subliminal manipulation which may result in physical origin psychological harm or the use of ai to exploit children or mentally disabled people we forbid and prohibit the general purpose social scoring because we think this is not for the european union and we also prohibit real-time remote biometric identification systems with a number of exceptions i will come back to that later next indeed i come back to it now um remote biometric identification systems is a chapter on its own and you may have seen it's the one that is most debated in the media nowadays basically um by a remote biometric identification in particular we are concerned when this biometric identification takes place in a public spaces is already addressed by gdpr and it is already prohibited for the private but also for the law enforcement authorities when um in certain circumstances what we do here is to actually prohibit the use of artificial intelligence in the context of the real-time remote biometric identification systems in public squares so we are not forbidding the fact that there are cameras or that that that you know municipalities put up cameras that doesn't have anything to do with ai what we are prohibiting is uh when artificial intelligence is used through these systems to identify people that may belong to a watchlist now when this happens in real time this can be very intrusive from the point of view of fundamental rights because here is the case when a law enforcement authority will immediately stop a person and will immediately limit the freedom of a person and we think that this can only happen in very specific cases in particular when the law enforcement authorities are looking for a very serious crime and when they have a judicial authorization to do that also the national countries sorry the member states need to have a national law that allows the implementation of these exceptions so in essence the use of real-time biometric identification is prohibited except for these very specific circumstances but need to have a national legal base to be put in place and when it comes to exposed remote biometric identification system we think that this should be allowed 
and together with the allowed cases of real-time systems all this has to undertake excellent a third-party conformity assessment because remote biometric identification systems are well known for making mistakes so one day hopefully the technology will be improved but at the moment they make significant mistakes and so this can be particularly problematic in their implementation next slide um this um regulation has to you know tries also to make sure that we support the development of artificial intelligence so we have introduced for example regulatory sandboxes and we have introduced also the possibility of using data that have been lawfully collected from including on on personal data to be used to retrain artificial intelligence systems with very specific public interests like applications for the green economy or for healthcare under the supervision of the data protection authority in very controlled environment this is an important novelty that i hope that will be picked up and discussed as well and then we have some support for smes and startups in particular in parallel with the regulation we have published the coordinated plan will be setting up testing experimentation facilities that will offer free services to the small bidding enterprises and the startups as well as equity funding as well as the digital innovation hubs that will help the smes to update ai and understand it a little bit better next slide and now basically to conclude the governance structure that all these needs is based on two levels there is a national level where the competent authorities of the member states will be organized now these competent authorities are at different levels i mean the member states should be able to choose the notified bodies which do the exante conformity assessment but also the market surveillance authorities in the product safety legislation the market surveillance authorities tend to be regulators of certain areas they are normally divided by sectors and for example for credit worthiness maybe the member states we decide that the financial regulator is the right market surveillance authority um for biometric identification data protection authority could be the right authority and so on this is a competence of the member states to choose and then at the european level we would like to organize an artificial intelligence board shared by the commission but made by representatives of the member states probably the market surveyor supervisors as well as the digital sorry the data protection authority supervisor could be a member of this board and they would be advised and helped by an expert group we need again an expert group different from the high-level expert group we had before because we don't need i mean we already made extensive use i think of the very important guidelines and the work they produced and now i think we need a group that will help us monitor in the market and see if other new use cases are coming up they could be part of our list of use cases that we could update regularly and they could also advise or make recommendations to their artificial intelligence board in terms of next steps and adjustments to be made and then i think that the next slide is probably my last slide simply because i would like to say here that the ai package we published is not only the regulatory framework i've just presented now it's also a coordinated plan written with member states it's an update of something we did with them in 2018 and now we publish a review and this review is due 
because markets have changed because now we have the recovery and resilience fund because after the pandemic everything is different healthcare is much more important green deal is much more important we need an infrastructure also for artificial intelligence i'm talking about cloud i'm talking about the computing infrastructure and so the coordinated plan addresses all that including specific measures to support research development innovation and uptake and skills as well in ai so thank you very much and i look forward to the discussion thank you thanks a lot lucy very very comprehensive description of indeed uh quite a read uh 108 pages with all the supporting documents uh back you know the studies that back then but i'm sure so that many of the 402 participants here in this uh this webinar has spent uh quite some late hours trying to to get ready for for this event or in any case to catch up with what the commission has um has proposed um i uh i think there are some some things that uh very few people would have expected or people maybe do not fully expect in this proposal um i don't want to preempt what will be said by by the different commentators but i was uh throughout the process uh i think uh uh i noticed the introduction of the top of the pyramid and you you know probably barry and francesca remember very well that the high-level expert group got a little bit in trouble where we moved we moved from our early mention of the red lines into a subsequent mention of areas of critical concern which we find to be found to be a little bit too too soft on our ends and i think the commission has been quite bold there the sandboxes part is quite interesting uh the the council of the eu has called on the commission already in last november to to speed up on regulatory sandboxes and i think this proposal is perhaps after a few provisions in the fintech area the first proposal that really embeds embeds the sandbox instruments into the text of a proposed regulation this is going to be very interesting also because of the inherently experimental nature of um ai developments and the ever-evolving uh opportunities and risks that aai comes with the uh one of the boxes that lucy i think has not fully mentioned in the pyramid but is important in my opinion um is the not mutually exclusive uh notes that you had there but i think there's a lot of um playing around with those categories uh uh that we probably will do with the comments of our of our discussions uh and speakers uh today um and um and finally one thing that is also important with respect to the debate as it had unfolded on the risk assessment which is at some point also as as i was involved in the in the in the study on the impact assessment uh there was uh a little bit of concern among stakeholders that did this would lead to a sort of a a massive widespread uh notified bodies let's uh conformity assessment set of procedures uh everybody had to go look for a notified body that notoriously there are not many many of them that are available to to to perform checks on ai as well and indeed many of the um uh assessments that the conformity assessments the the commission um associates with the um as obligation with the with the high risk application are internal checks internal self-assessments and this also will potentially um pave the way from for the development of standards uh guidelines uh methods and evolving knowledge on how to perform those assessments in the best possible way so there is uh something quite a lot for the market to 
develop as well and this is a perfect bridge for me to my first discussion to my first uh friends uh in in our lineup which is barry because barry not only has been the vice chair of the high level expert group as professor of computer science as um at the university um uh college cork uh but many many hats in the in international academic uh uh groups on ai very also engaged with the public uh debate on the global governance of ai on the orientation of ai towards sustainable development but among the 52 musketeers of the high-level extra group he was also the one that took the lead on enabling us to develop a first embryonic version of an assessment list on trustworthy ai uh which was then translated into a software which is publicly available and might who knows become a first uh a first input into many future um versions of what this checks will be about so barry i give you the floor and i will introduce the speakers one by one i give you the floor you're free to share your screen and show you uh show us your slides and i thank you very much and for being crazy great thank you thanks andrea it's it's um i wish you were introducing me more often actually so thank you for that very kind introduction um i actually don't um i'm not going to show slides because i think um i really want to talk to the audience um and uh so great thanks to seps and yourself and all of your colleagues for um for the invitation today i really do appreciate it i suppose the first thing i should say is huge congratulations to the commission for getting this um over the line um as um the former vice chair of the high level expert group our mandate finished in 2020 you know we worked on some documents and uh with the commission i can only imagine um the the challenge and the the long hours um spent on this document and one of the things that as i read the document i'm just conscious of is um how rich it is and i think it's going to take quite a while to work through the detail and to fully understand the consequences so um i think um one of the things i noticed this week is that there's a lot of commentary from both industrial uh people like from the industrial side the academic side the civil society side and what's interesting is that there are as many positive remarks as there are um reservations and i think this is actually a very good sign actually the um i think it's it's good that um it suggests um at least that the regulation has passed the first test of balance and fairness in the sense that's that that people see um see that this is a substantive piece of work and so i think that's great um obviously the regulation is going to give and does give great clarity to not only the citizenry but also um the industrial world you know it shows what is acceptable what is not acceptable it is really great to see that uh some of the work in the in the high level expert group um we can still see it today in the regulation and i think um just you know on behalf of the hleg i think people would feel very proud of that fact that um that the the huge amount of work that went into the high level expert group activity um was was taken very seriously and and listened to and i think it's it's um of course we don't have the monopoly and what is what is appropriate for legislative for e-regulation but i think it's great to see many of the aspects of of what we recommended in terms of you know key requirements in terms of um a risk-based approach is what we see today and the subtlety of the um of the definitions and the 
considerations and the um in the various chapters and articles i think is uh is is hugely um it's is to be welcomed enormously um i suppose the other thing we really need to do and i think this is just a word to the community at large this is a 108 page document that has been that has been written by um with with great attention and consultation and expertise and i think um often when i've read documents of this complexity um i've often found that i've you know i've thought i've had a very strong opinion about something only to discover that i misunderstood some subtlety in the argument so i think we all need to take take great time in understanding what the um what the document says obviously there are challenges in a document like this um ai is a is a field that is not easy to define it's not easy to define where the boundaries are it's not easy to define what's in and what's out of ai and i think we'll see lots of discussion about well um does my um my rule-based system that i built 15 years ago is is that considered ai here you know is the fact i use this technique which you know statisticians also use is that is that ai and i think um we'll see a little bit of a shakeout here i think as a consequence of the regulation we're probably going to get a firmer view of really what's going on in terms of ai in europe because those companies who are describing themselves as ai companies um and who may be not doing ai i think there's a because now the incentive for them to to sort of own up and sort of you know admit that look you know the ai in here really isn't ai at all um also i think it is going to be a challenge for regulators to make sure that that people are not abusing that definitional fuzziness you know that's um so that's that's going to be very very interesting i think the the move away from the white paper two level uh risk um categorization to this four um level risk characterization is very very interesting and i think andrea you were very right to call out this um this um this box in the center that as as i think um commissioner berton said at the press release um you know he referred to you know things being a little bit of yellow and a little bit of orange and i think um i think you know it gives it gives great flexibility in understanding what um um and expressing i suppose some of the um some of the issues i think one of the things that is going to be interesting with a risk-based view is and i think it was interesting in lucy's slides this idea that you know users need to use ai properly we've seen lots of examples of ai systems that i suppose we would have perceived as being very low risk but they turned out to be very very high risk as a consequence of how they were used a very good example of this is the the youtube recommender system so youtube turned off um recommendations and commenting on videos related to young children for example uh quite some time ago and that's because um i suppose people with um particular um uh you know interests in children um uh were basically exploiting um the operation of the recommender system to get access to additional content that might sort of satisfy their peculiar interests and so in a sense there was a there was a there was a technology that in a sense is not high risk but can be used in a way that is a high risk use and so i think there's going to be lots of challenges around um observing when ai systems deservedly become high risk because they are um because they're used in a particular way and as a consequence there are all 
sorts of um detrimental impacts and so on and just the the natural use of them shifts the categories and i think it's very nice to see in the regulation that um these will be that that conformity and so on and the risk categorization will be reviewed on an ongoing basis um so i think that's that's that's going to be very interesting i think um the the um on the whole i think the the you know my view of the regulation is extremely positive um the you know the subtlety of some of the issues here are going to be um very very challenging and i think it's going to be interesting how you know as the as the regulation progresses how these um how these technical issues um do come to play and i think a lot of the things that we face will be around definition i think you know is is my technology is my product covered here in what way is it covered and how should i regard it i think it's very wise to establish this airport it's very wise to consider the establishment of an expert group because i think some of some of the issues that will arise will need some discussion and some interpretation and i think that's that's only natural in this in this um in this context and i think the fact that the that the the authorities the national authorities that are at play here are the ones that in a sense we already have so i think it's clear to me at least that there's that those great um efforts being placed on um trying to reuse the machinery that we already have for regulating um personal data for example and so on so you know which recognizes the fact that ai is not unregulated this is not the first time that we see regulation that's relevant to ai to the gdpr is um is a is a classic example so i think great congratulations to the commission i think they've done a really fantastic job i hope that um i hope that lucy has a long uh restful weekend this fall what i'm sure has been the year of really um the incredible hard work and uh you know dealing with some of the um uh challenging issues but i think this piece of legislation this regulation this this leadership will reverberate right around the planet regardless of all the companies are selling into or developing in europe or not um i think it's it's tremendous and i think it really sets out the stall of what europe wants to be in terms of trustworthy ai and i for one um i welcome it strongly and i really congratulate the commission so thank you thanks barry that is really a plus one as sometimes people say you know that's super so uh indeed as as time goes by and we don't have a lot of time for this seminar i actually encourage the speakers since we already are exploiting them and not sufficiently on a friday afternoon to also check a little bit the q and a section that they have because there are some questions filing up there that they might want to answer directly by typing so i will i will immediately pass the floor to another colleague that has been uh a glorious member of the high level expert group an extremely good colleague in that context francesca who is working with ibm but at the same time an academic a very prominent academic also in the context of the triple ai association and uh uh very well known also for being able to manage the technical aspects of ai together also with the with the regulatory and policy aspects you demonstrated that francisca uh uh regularly constantly when we were working together on the ethics guidelines and also on the policy and investment recommendations so the four is yours let's see it's a it's going to be a 
plus one or a plus 0.5 or let's see let's see what francesca tells us about about this proposed regulation thanks andrea thanks and thanks lucy for the very clear you know presentation of this uh very significant document so it's definitely going to be a plus one so the the impression is really very very positive of course these are just initial thoughts because as barry and everybody said you know we need to go into the document much more to understand the fine details but also to understand the implications of all these decisions that are being laid out there so the first thing i had to say is that of course this risk-based approach which was already outlined in the white paper is definitely the way to go so we have been discussing within ibm for a long time we've been publishing about that pushing for what we call precision regulation so definitely definitely we think that the risk-based approach is the only way to regulate ai and when i say regulated yeah of course this is not correct because you don't want to regulate ai which is exactly what not is not done by the document but you want to regulate the ai systems the ai applications the ai use cases and this is really you know very very important difference so in some sense the definition of ai that i will come a little bit later is important but it is not possible as barry mentioned to define clear boundaries between what is a and what is not a it intersects with many other disciplines but it's important to understand the uses whether these uses of this technology are high risk or not so uh the second thing is that in the white paper as andrea mentioned there were only two or barry i think there were only two levels of risks and it was more like a definition of risk which was sector-based which was a bit concerning because of course every sector like healthcare or other may have high risk applications of ai and non-high-risk applications of ai so we really welcome the much more fine-grade crane the definition of risk with the four levels with the long list of the applications that that uh you know generate you know uh uh possible risks so that's really a very very significant step forward and very very important that the commission decided to take this approach then the emphasis on trust and transparency so trust the emphasis on trust which was also in the high level as per group you know we called the trustworthy ai you know we we decided to call it you know the ethics guidelines for trustworthy ai so the realization that without trust not just in the technology but in the whole ai ecosystem then we cannot get these beneficial effects of ai what ai can really provide us in in the positive side so build the building of trust the focus on the building of trust is really very welcome as well as the emphasis on transparency like at ibm we use this idea of the ai factsheet uh and the high level is the group we had the uh where the self assessment list in you know and then there are other uh mechanisms but really the transparency obligation which are for also for lower risk ai systems is very important you know because for example you cannot sometimes um detect and mitigate all possible biases in the eye system application but you should definitely be transparent about what what is there what are the properties the capabilities the limitations of this ai system that you are delivering and using and the the next thing that is very welcome is the fact that the emphasis on the fact that there is an ecosystem it's not just the obligation of 
the providers but also users and then in some other parts of the document also another stakeholder so this everybody should play his part his or her part in making the whole ecosystem trustworthy so it's not just the providers that have to do their part in how they develop how to design the ai system but the users are the users but also everybody else so that multi-stakeholder approach is really very important and we really welcome that here again the relationship with the fact so i'm very happy that the work that we did together with barry you know andre and many others was really useful uh to contribute to uh for helping the commission to think and frame this new propo regulation proposal so with our requirements that you can find them here and there in in the regulation proposal some of them are you know merged as luchila said but they're all there and with the uh the uh altai you know the uh assessment list for trustworthy ai that we put together um and it's also available and has been used by organizations all over um then it's also we welcome that some things are really unacceptable the famous red lines that as andrea said we didn't feel brave enough to put in the high high level as per group documents um but i think that uh that high even without red lines that document also led to this proposal register uh regulation proposal with these red lines and of course these are consistent with what many including you know ibm has been saying for example you may have not seen that our ceo sent a letter to us congress in already at the end of 2020 saying that we firmly oppose the use of technologies including remote biometrieal-time remote biomaterial identification for things that impact on fundamental rights or human rights or freedoms of people and we don't release general purpose facial recognition tools meaning that we need that we want to have full knowledge of control of the uses so the uses are really important to really assess whether it's high risk or not uh we also welcome the mechanisms that the the commission has be put in together and design like the standards the conformity assessment the code of condos as well as the regulatory sandboxes this is very important let me go to the last slide which has some topics for discussion so these are not negative points but i think topics that will deserve some discussion one is the definition of ai already barely mentioned that is very broad but on the other hand nobody can give us a definition that i mean i i bet that there is nobody in the world that can give a definition that has clear boundaries about what is there what is not a yeah the second one is that there are some things that maybe need clarification like sometime is with in the requirements for high risk application there is a mention of data sets that have to be free of errors and in my view is not clear what these errors are or how they could be free of errors um another one is about when they when the document talks about transparency information to user and it talks about operation that is sufficiently transparent to interpret the system output that to me refers more to explainability than transparency and so i wonder whether there is something that is already tackled by gdpr or it should be added here so there is some discussion there the other one is regarding market surveillance section that talks about the possibility to access the ai source code the first of all is not clear to me what the source code could be because an ai model as data as a source code and then that 
source code is trained with the data is tested and then you build the model so what is the source code is that circle or is the model and so so i think there needs to be some clarification and then the the last one the last point that i want to point out is that it is in principle very reasonable that that long list of high risk application uh given the ai board and the expert group can be subject to modification because you look around you say oh this thing now i didn't think about it but now i see that it's high risk i want to put in the list but that can generate certainty on providers you know companies that try to enter into a market or not and make some consideration so i guess that there also there is the need to have some discussion about the implication of this agility in uh in changing this list of of in in the pyramid of the high risk of the lower risk thank you thanks a lot uh francesca very good overview and also some some food for thought at the end uh and indeed i forgot to mention that you were uh also leading in the high level extra group the subgroup that actually attempted to give a definition uh it was actually another very good experience and another piece of responsibility that you took on your shoulders very well during the the high-level expert groups uh experience it's not it's it's like the holy grail right it's a and actually it's a constantly uh moving target uh uh but certainly will be one of the central um issues also in the in the debate that will follow now there's someone that you know what francesca is is offering food for thoughts that someone that would actually need this food for thoughts when understanding whether there are changes needed and this is suddenly mia petra together with her colleagues in the parliament because they will need to analyze and study this regulatory proposal uh to decide whether the parliament wants to amend it within the con context of the ordinary legislative procedure i'm sure mia petra will have a very a very important role in all this because she is among the meps one of the one that has a very solid technical background a very good vocation towards policy making and also modernizing policy making so we're very happy to have you here mia petra we're very curious to know what you think about this you're also involved in this work of the aidan committee in the parliament that is trying to sort of coordinate a little bit the work uh from the different committees and political groups uh into uh relatively common positions whenever possible so what's going to happen next uh to as you see it we're curiously here so the floor is yours and thanks for being with us thank you andrea uh and and greetings from helsinki i i came back from brussels and on currently i will have a lot of papers to read so i will not be very precise and detailed and i can't not on behalf of the parliament but on behalf of the parliament i this was very much expected waited for uh regulation as uh when we talk about the on the global level when we talk about the for the future of societies we do want to have a say for the people and it is actually the democratic right of the parliament and commission duty to to have some guiding lies for the future so it is not to let all this power in the hands of the authoritarian states nor to the uh technological companies alone because these all interlink with our futures and and these tools are needed so the first question uh when i started to talk with the straight stakeholders after this was is the balance right 
shall we get more innovations or will we uh uh regulate so that innovations will not happen all the parliamentarians will now have a calm head a lot of lobbying will be there to say that this is almost impossible or it is like saying too much or saying too little but i also give a plus because it is actually a great piece with the a good combination of the general principles but also very concrete answer for the europeans and i guess when i saw some greetings who's following this uh here that there are people in this planet waiting for someone someone setting the guidance where to go how to go and i'm very what's uh happy that we can see also improvements from the leaked version when it comes to the very concrete ways that when you are a child and your child is like not intended to be the the target of the ai but might be use them and get involved and then at least we have some ideas of think about it engineers everybody encoders uh companies so that uh have this kind of ethical thinking in the whole system so this is what the regulation will bring along even if not precisely uh uh stops everywhere that this gives impetus for the whole process i hope and then also for the working i think we do see um some applications that already passed the red line what is the human centricity uh what is the human dignity because the worker is should also be seen as a human dignity so we are not part of the machinery uh as we were not in the the um when the industrialization started there were a lot of rules a lot of institutions created because of the industrial uh world and now the new era of ai some rules and scope where to go needs to be set so for the balance i think this is uh also important to remind that yes lucia did in the end this is one piece of legislation but at the same time we have the european data governance act going on actually i have been busy with the monday deadline for the data governance act so to get more data to available have the rules how to have it then we do have also more money that we used to be so the tone in europe has been like a panic sometimes that look how many ai companies do we have but at the same time congrats for the universities and and academia we have more papers than u.s but we have less than china we invest more than um china but less than u.s so i think we have all tools at hand to have this balance that this will not stop innovations or use of ai in europe already many panelists took the the definition side and and of course the that will be one one part also that we will have to look for months uh because sometimes when you don't regulate ai but you recollect the course or you regulate the time what actually will happen that same might happen without help of a.i that might was it uh someone said that it might have happened with the help of the statistical calculation without any ai but our own uh calculations uh to say uh and then also that what is the scope of the ai is that moving so fast that uh that it will be out of scope already or bypass very soon but this will not uh be a lobbying way to say that you should not do this i i'm very sure europe will do this and and and will have it right uh those who are good quite quickly that it should be long shorter uh it's too much to read for the sms and companies actually i'm not sure with the 20 years of let's start the experience that that will happen because then you want to be more precise give more clarity and then you end up even more precise definition and that normally takes some more articles or 
earth phases so uh from the parliament side there will be tens if not hundreds of mep is working on it we have had uh at least 12 initiatives uh uh in this uh less than two years that this parliament has been together uh and i very much have to be happy and say thanks for the commission that it has taken this on board because there were special uh questions for the face recognition or remote biometrical identification as it now called there has been questions for the uh the risky association when it comes to the personal uh rights of the people whether it's counting your social security or your recruitment and others implications that has a clear impact on your rights as a citizen as a person so we will not legislate the whole world but will it legislate this uh part of the world and and and the applications used here or with our uh data so this is uh where we wanna see some cooperation and and see the uh that we can remain open data flows and others and and the source code and all that uh needs to be looked at the way that we can see that it's trustworthy that the systems comply with our legislation and i'm not at all unhappy if this kind of process effect will then uh get more and and that was actually what the both commissioners say we are open for the international agreements international uh minimum standards and and an eu is very active in oscd and and and wedt or wherever we can find people to protect our citizens on the globe and we will start with the europeans thank you thanks a lot petra and uh um i think you know we're all curious to see how how the parliament will approach this but i think i don't know i'm starting to reconsider these conferences these events on the on the late friday afternoon because i think people become a little bit more in a better mood on friday because they sense the weekend is coming you're getting old plus ones so um but also you you mentioned the international dimension which leads me to move directly to josh uh our fourth uh um speaker and and uh uh josh and i and and colleagues from brookings and and sepps have been uh uh since last year organizing what originally was a transatlantic and then overall has become a global uh dialogue on artificial intelligence and all the the the panelists here have been uh participating in in those dialogues i think it's been an extremely rewarding experience so far we're coming up with uh with a report soon also on the prospects for uh global ai cooperation and obviously we'll need to take into account what has happened a couple days ago josh so i'm really curious to see whether the plus one is also is also coming from the other side of the atlantic even francesca is also there but uh but uh um josh really uh could could tell us a little bit what he sees happening in terms of reaction uh in the u.s so yeah thanks andrea and and thank you for the invitation to be here it's great to be here i'm among so many colleagues and i just want to echo everyone's congratulations to to the chiller and her team and and the commission broadly for this the work and putting a marker in the sand in such a such a way on on ai i think is a really important outcome here and it's going to generate a lot of conversation and debate we're at the very early stages of it um but but it's it's it's really a very worthwhile contribution to to to this important endeavor um you know it's still it's we're not even at lunchtime here in the u.s so i hope doesn't that doesn't sort of influence whether i'm a plus one or not i still got an afternoon of 
But I think it provides a really great foundation for cooperation internationally. At the end of the day, we want to cooperate as much as possible within the realities of our politics; the Commission has to take into account its priorities and its legal frameworks, the US will do the same, and we build bridges in the process as much as we can. I think this is a good foundation to build on, and it opens up, and keeps open, many opportunities.

When I think about the opportunities for cooperation on AI, particularly between the US and the EU, a couple of headline things come to mind in terms of what we want to achieve. One of them, on the big picture of ethical AI, is getting out a common message that the US, the EU, and other like-minded partners can essentially land on, and I think we have always been on strong footing there: we have aligned on the OECD AI Principles, we have worked together in GPAI, in the G20, and so forth, and I think this carries that further.

We also want to support innovation. It is absolutely in everyone's interest that the US is at the leading edge of AI and that the EU is at the leading edge of AI, so getting the balance right, with regulation as a driver of innovation, is absolutely crucial, and I think this definitely heads in the right direction. In that regard, this piece is not heavy on innovation; there are obviously other mechanisms underway, funding and so forth, that matter more in that respect, but the regulatory sandbox piece is welcome.

The other bit is certainty for businesses, because that is where the AI R&D and the innovation are ultimately really happening. That is partly about legal certainty, but it is also about ensuring that our markets do not become too disaggregated, because we need access to each other's markets in order to scale, to be efficient, and to really drive those innovative opportunities, and again I think this is in a good place in terms of making that happen.

Let me make a couple of high-level comments about where I see some of the similarities developing between the US and the EU; there is still a long way to go on this. This obviously has a very big extraterritorial effect, very similar to the GDPR in that respect, and I think it has almost the same penalties for non-compliance, so it is going to make everyone in the US initially take a deep breath just because of that. But once they get into it, I think people will relax a little more.

As Francesca said, the risk-based approach is crucial, and the way it is now articulated, it is very flexible. I like the four categories; I think they provide a lot of scope. The boundary issues, which we were always going to have to work through, will be challenging, there is no escaping that, but I think it is in a good place. The emphasis on risk and opportunity is also very important. And calling out some areas of AI as simply unacceptable, particularly around, for instance, social credit scoring, which is a clear shot across the bow about what China is doing, is essentially very important as well, and again it aligns with getting a clear message out there on some of those
issues. There is also, I think, a real attempt in the regulation to align as much as possible with existing international principles and regulation.

On flexibility and compliance, the point Francesca made about the opportunities for self-assessment is a really crucial element here, and this is actually quite a complex bit, because giving companies the opportunity to develop the procedures to actually assess compliance will be very important for developing the management and process standards that can then be carried into the international standard-setting bodies, which will in turn matter for developing global approaches to certifying AI. A lot of developments will flow from allowing companies that flexibility, and it will also provide a really important foundation for building better, coordinated approaches in some of the global standard-setting bodies, so I think that is going to be a useful feedback mechanism as it develops.

There is always a question around the impact of some of this on small and medium-sized enterprises, and I know the European Commission is alive to this. There is obviously an article on it in the regulation, but it is really quite a small article tucked away there; it is obviously backed up by what you were talking about, Lucilla, in terms of funding, and a lot of this will happen at the member-state level. But I do remain concerned about the complexity of some of this for small businesses and what that might mean, and it simply means we really need to pay a lot of attention to it, because there is always the risk that we lock in incumbent first-mover advantages with these types of regulations, which is obviously not the intention here. Making sure that we maintain a really vibrant, innovative environment for startups is absolutely going to be crucial.

Let me quickly provide a couple of high-level observations about what is going on in the US at the moment, because while the US is not going to come out with anything like this, there is a lot happening on the ground that I think maps quite nicely onto where you have ended up with the AI regulation. The National AI R&D Strategic Plan, one in 2016 and another in 2019, shows a lot of coherence with the AI regulation: there is, for instance, a lot of emphasis on incorporating ethics by design and on how you actually do that, and I think that will flow naturally from the approach of this AI regulation, where there will be a lot of emphasis on the developers of the AI ensuring that it is essentially ethical by design. There is a lot of work to be done in both areas to make that happen, but I think they map together quite well.

The OMB guidance from late last year has differences in emphasis around how you balance risk and opportunity, but I think the two map quite closely, and that is a good thing. Again, the whole area of national standards, what the NIST process is doing here in the US and the industry-led process, will be supported by the emphasis in the AI regulation on businesses self-regulating and ensuring they comply.
So I think that is another very positive development as well. I also want to flag something that came out this week which is potentially significant in the US: the FTC announcement that it may apply some of its regulatory authorities, under Section 5 and so forth, to ensure that when you claim your algorithms are unbiased or non-discriminatory, and in fact they are not, you may actually be breaching requirements for fairness. The FTC has levied huge fines, over 6 billion dollars in the US for breaches of privacy, so when it gets involved it has enormous enforcement power. This is a new development in the US, but potentially a very significant one in terms of bringing enforcement power to bear, in the short term, on some of these ethical AI issues as well, and again I think it maps very nicely onto where the EU is heading. Let me leave it at that and say congratulations again.

Thanks a lot, Josh. This sounds very promising, and I think Lucilla is probably reassured by some of these comments. I think we are witnessing here a case of good procedure, because this is not something that has been improvised; it is something the Commission has organized with lots of input, lots of discussion, and also lots of changes to the original approach. I have been very much involved, not in the drafting of the text myself, but in observing the text as it evolved over time. For example, the high-risk/low-risk distinction stayed in the Commission's proposal for quite a long time, and then, in the public consultation, the experts and academics, including some of those present here, kept voicing the need to be a little more nuanced, which led to giving more space to understanding the differences in risk, and the types of risk, generated by different types of applications depending on the context, and so on.

Now, we have twelve minutes before we wrap up and we have 23 questions. So rather than giving the floor to Lucilla for feedback on the four interventions, I would rather put together some of these questions before giving Lucilla the opportunity to wrap up and share her final thoughts.

We certainly have one question that was asked very early on: while the FTC has published a very encouraging blog post, what happens with the EDPS, whose own blog post was a little disappointed that there is no moratorium on RBI systems, remote biometric identification systems?

There are also a couple of clarification questions, and we will probably see hundreds of them in the coming weeks, on the broad definition of AI that is given. For example, John de San is asking whether, if the AI is applied to support internal teams, something like an internal decision-making support system, it is still defined as high-risk, or whether it should be defined as high-risk only when it is, for example, consumer-facing or citizen-facing; and whether this applies also to techniques that are perhaps a little less cutting-edge or frontier, a little more legacy, techniques that perhaps have
different features, or perhaps less risky features than, for example, more modern machine learning techniques.

We have a number of questions on manipulation as well, and that is one thing I would add to the discussion: the prohibited uses and manipulation. Probably only the future board and expert group will be able to clarify where the boundary lies between nudging people, which is happening everywhere with the use of AI systems, or "hypernudging" as our colleague Karen Yeung would put it, and subliminal manipulation.

Many of these questions come from commentators who are actually asking: what do I do now, how do I apply this? For example, there are some questions on micro-targeting, a question on how long records have to be kept, and our friend Vince Carnegie was asking about the degree of automated processing. These are all questions about thresholds: where do we start calling it AI, when do we start calling it high-risk AI, and how do we apply this in practice? They are less about the overall foundation and approach of the regulation and more about how we concretely operationalize these provisions, which I think is a very healthy discussion, and we will have it going forward.

There is a question on sandboxes: how do we ensure that national entities have sufficient capacity to facilitate research, and would it be beneficial for large companies to set up start…
