The Russian AI market is currently not regulated by law in any way, which is already creating problems for citizens, warns the commission on legal support of the digital economy of the Moscow branch (MO) of the Association of Lawyers of Russia (AYuR) in its conclusion. Letters proposing regulation of the AI market were sent to the Federation Council, the State Duma, the Ministry of Digital Development and Roskomnadzor, a representative of the association said.
According to the authors of the appeal, the risks of AI discriminating against citizens and organizations arise from possible biases in algorithms, introduced intentionally or unconsciously by developers, and from low-quality or incomplete data used to train the system or to make a decision in a particular situation. At the same time, it is extremely difficult to identify AI errors and appeal against them, since third-party experts have no access to AI systems, the conclusion says.
“In the sphere of public administration, algorithmic decision-making systems are used to impose administrative liability on the basis of video or photographic data, as well as other technical data, for example from mobile operators,” said a representative of the Moscow branch of AYuR. “Such systems were used in Moscow in the spring of last year to prosecute violations of the self-isolation regime. The media reported many curious cases of erroneous prosecution, for example of paralyzed citizens.”
AI often rejects loan applications from entirely bona fide borrowers on the basis of misinterpreted data and crude algorithms, says an expert familiar with the work of large companies using AI: “A human deciding the same question might have ruled differently.”
The roadmap for the development of neural networks and AI, drawn up in 2019 by the Ministry of Telecom and Mass Communications (now the Ministry of Digital Development), projects that in 2021 this market segment should reach 48 billion rubles, growing to 160 billion rubles by 2024.
“Regulation of this kind should begin with government agencies, not commercial companies, because it is decisions made by the state that have significant impact and consequences for a person. After testing such regulation on state information systems, the possibility of forming legislation for commercial companies should be considered,” says the conclusion of the Moscow branch of AYuR.
The lawyers propose regulating the use of AI in areas where it can have legally significant consequences. As an example, the authors of the conclusion cite France, which banned the use of AI methods to analyze and predict the actions of judges in the course of proceedings; the ban applies to everyone who can influence the outcome of a trial.
According to Boris Edidin, deputy chairman of the commission on legal support of the digital economy of the Moscow branch of AYuR, implementing the initiatives will not require radical changes to current legislation. For example, the law “On Personal Data” could be supplemented with requirements that government agencies post warnings on their websites when they use AI, and that those deploying AI systems provide a contingency plan in case of failures and other unforeseen situations.
The ministry is studying the AYuR’s proposals, its representative said. Receipt of the conclusion was confirmed by Alexander Khinshtein, chairman of the State Duma Committee on Information Policy, Information Technologies and Communications: “We are considering the association’s proposals. We will definitely integrate best practices into our work.”
Companies involved in implementing AI (among them Yandex) either did not respond to inquiries or declined to comment.
“Developers will resist defining AI as an object of law with all their might – in this case, they will bear full responsibility for its mistakes,” says Sergey Polovnikov, head of Content-Review. “And if such responsibility is recognized, the question of compensation for losses associated with incorrect decisions made by AI will inevitably arise.”
Regulating technical processes in the field of AI at the level of laws and regulations could hamper the development of the market without improving the protection of citizens and organizations from the negative consequences of its use, says Irina Levova, director of strategic projects at the Institute for Internet Research: “Self-regulation must be strengthened. For example, the Big Data Association has a Code of Ethics and a White Paper, a set of best practices that is regularly updated with new guidelines for data projects. The recommendations of the Moscow branch of AYuR could be included in the White Paper as another case.”
Implementing the AYuR’s proposals carries certain risks of overreach, agrees Yaroslav Shitsle, head of the IT & IP dispute resolution practice at the law firm Rustam Kurmaev & Partners: “The highest risks are associated with administrative approval and regulation of the use of personal data in AI processing. If we are talking about tools used in public-law relations, for example cameras that analyze traffic and automatically issue fines, then regulation is justified, since citizens are subjected to administrative measures when such a tool is used.” But if we are talking about software products, for example banks’ neural networks that screen loan applications, excessive regulation could block the use of such tools altogether, Shitsle concludes.
UPD: “The problem of algorithmic transparency of AI systems is a well-known legal problem of this technology. The issue is now being actively discussed both at the international level, for example in the Council of Europe in the context of future European AI regulation, and at the level of national governments,” notes Andrey Neznamov, managing director of the AI regulation center at Sberbank. All AI systems cannot be lumped together, he adds: universal regulation, as is recognized worldwide, is impractical.