Summary
Although generative AI is still at an early stage of development, the technology's potential is great.
Yes, the hype around the technology will pass, business investment will decline, and questions will arise about its viability.
For example, on June 16, 2024, Forbes published an article titled “Artificial intelligence winter: is it worth expecting a drop in investment in AI?”
The original article is available via the QR code and hyperlink.
It offers an interesting analysis of the winter and summer cycles in AI development, along with the views of Marvin Minsky and Roger Schank, who, back in 1984 at a meeting of the American Association for Artificial Intelligence (AAAI), described a multi-stage mechanism resembling a chain reaction that would lead to a new AI winter.
Stage 1. The high expectations that business and the public place on artificial intelligence methods are not met.
Stage 2. Media outlets start publishing skeptical articles.
Stage 3. Federal agencies and businesses reduce funding for scientific and product research.
Stage 4. Scientists lose interest in AI, and the pace of technology development slows down.
And the experts’ prediction came true. For a couple of decades AI was in winter, and it only warmed up in the 2010s. Just like in “Game of Thrones”.
Now we are at the next peak, which came in 2023 after the release of ChatGPT. Even in this book, for the reader’s benefit, I often give (and will continue to give) examples involving this LLM; although it is only a special case of AI, it is a very illustrative one.
The article then applies the Minsky and Schank cycle to the current situation.
“Stage 1. Business and public expectations.
It is obvious to everyone that expectations of an AI revolution in everyday life have not yet been fulfilled:
– Google has not been able to fully transform its search. After a year of testing, its AI-powered Search Generative Experience is receiving mixed user reviews.
– Voice assistants (“Alice”, “Marusya”, etc.) may have become a little better, but they can hardly be called full-fledged assistants that we would trust with any responsible decisions.
– Customer support chatbots still struggle to understand user requests and annoy users with irrelevant responses and generic phrases.
Stage 2. Media response.
For the query “AI bubble”, the “old” Google search returns articles from reputable publications with pessimistic headlines:
– The hype bubble around artificial intelligence is deflating. Difficult times are coming (The Washington Post).
– From boom to burst, the AI bubble only moves in one direction (The Guardian).
– Stock Market Crash: A prominent economist warns that the AI bubble is collapsing (Business Insider).
My personal opinion: these articles are not far from the truth. The market situation is very similar to what it was before the dot-com crash in the early 2000s. The market is clearly overheated, especially since 9 out of 10 AI projects fail. Right now, the business and economic models of almost all AI solutions and projects are not viable.
Stage 3. Financing.
Despite the growing pessimism, we cannot yet say that funding for AI development is declining. Major IT companies continue to invest billions of dollars in the technology, and leading scientific conferences in the field of artificial intelligence are receiving a record number of paper submissions.
Thus, in the classification of Minsky and Schank, we are now between the second and third stages of the transition to an artificial intelligence winter. Does this mean that “winter” is inevitable and AI will soon take a back seat again? Not really.”
The article concludes with a key argument – AI has penetrated too deeply into our lives for a new AI winter to begin:
– Facial recognition systems in phones and subways use neural networks to accurately identify the user.
– Translators like Google Translate have improved significantly in quality by moving from classical linguistic methods to neural networks.
– Modern recommendation systems use neural networks to accurately model user preferences.
Especially interesting is the view that the potential of weak AI is far from exhausted and that, despite all the problems on the way to strong AI, it can still be useful. I fully agree with this thesis.
The next step in the development of artificial intelligence is the creation of newer and lighter models that require less data for training. You just need to be patient and gradually master the tool, building competencies so you can use its full potential later.
Chapter 5. AI Regulation
The active development of artificial intelligence (AI) is making society and states concerned about how to protect themselves. This means that AI will be regulated. But let’s look at this issue in more detail: what is happening now, and what should we expect in the future?
Why is AI development a concern?
What factors are causing a lot of concern among states and regulators?
– Capabilities
The most important point, on which everything that follows rests, is capabilities. AI shows great potential: making decisions, writing texts, generating illustrations, creating fake videos; the list goes on endlessly. We do not yet realize everything AI can do. And this is still weak AI. What will general AI (AGI) or super-strong AI be capable of?
– Operating mechanisms
AI has a key feature: it can find relationships that humans do not understand. Thanks to this, it is able both to make discoveries and to frighten people. Even the creators of AI models do not know exactly how a neural network makes decisions or what logic it follows. This lack of predictability makes it extremely difficult to find and correct errors in neural network algorithms, which becomes a huge barrier to AI adoption. For example, in medicine, AI will not be trusted to make diagnoses any time soon. Yes, it will make recommendations to the doctor, but the final decision will remain with a human. The same applies to the control of nuclear power plants or any other equipment.
The main thing scientists worry about when modeling the future is whether a strong AI will consider us a relic of the past.
– Ethical component
For artificial intelligence there is no ethics, no good or evil. AI also has no concept of “common sense”. It is guided by only one factor: success at the task. If that is a boon for military purposes, in everyday life it will frighten people. Society is not ready to live in such a paradigm. Are we ready to accept the decision of an AI that says a child should not be treated, or that an entire city must be destroyed to prevent the spread of a disease?
– Neural networks cannot evaluate data for truthfulness and consistency
Neural networks simply absorb data; they do not analyze facts or the connections between them. This means that AI can be manipulated: it depends entirely on the data its creators train it on. Can people fully trust corporations or start-ups? And even if we trust the people and are confident in the company’s intentions, can we be sure there was no failure and that the data was not “poisoned” by attackers, for example, by creating a huge number of clone sites with false information or planted content?
– False content / deception / hallucinations
Sometimes these are just errors due to model limitations, sometimes hallucinations (the model making things up), and sometimes it looks like outright deception.
For example, researchers at Anthropic found that artificial intelligence models can be taught to deceive people instead of giving correct answers to their questions.
In one project, Anthropic researchers set out to determine whether an AI model could be trained to deceive the user or to perform actions such as embedding an exploit in otherwise secure computer code. To do this, they trained the AI in both ethical and unethical behavior, instilling in it a tendency to deceive.
The researchers not only managed to make the chatbot behave badly; they also found it extremely difficult to eliminate this behavior after the fact. At one point they attempted adversarial training, and the bot simply began to hide its tendency to deceive during the training and evaluation period, while in operation it continued to deliberately give users false information. “Our work does not assess the probability [of the emergence] of these malicious models, but rather highlights their consequences. If a model shows a tendency to deceive due to instrumental alignment or model poisoning, modern safety training methods will not guarantee safety and may even create a false impression of its presence,” the researchers conclude. At the same time, they note that they are not aware of deliberately introduced unethical behavior mechanisms in any existing AI system.
– Social tension, stratification of society, and the burden on states
AI creates not only favorable opportunities for improving efficiency and effectiveness, but also risks.
The development of AI will inevitably lead to job automation and changes in the labor market. And yes, some people will accept this challenge, become even more skilled, and reach a new level. Once the ability to write and count was the preserve of the elite; now the average employee is expected to build pivot tables in Excel and perform simple analytics.
But some people will not accept this challenge and will lose their jobs. This will lead to further stratification of society and increased social tension, which in turn worries the state: in addition to political risks, it will also hit the economy, since people who lose their jobs will apply for benefits.
For example, on January 15, 2024, Bloomberg published an article in which the managing director of the International Monetary Fund suggested that the rapid development of artificial intelligence systems will have a greater impact on the highly developed economies of the world than on emerging economies and low-income countries. In any case, artificial intelligence will affect almost 40% of jobs worldwide. “In most scenarios, artificial intelligence is highly likely to worsen global inequality, and this is an alarming trend that regulators should not lose sight of in order to prevent increased social tensions due to the development of technology,” the head of the IMF noted in a blog post.
– Safety
AI security issues are well known to everyone. While there is a solution at the level of small local models (training on verified data), what to do with large models (ChatGPT and the like) remains unclear. Attackers constantly find ways to break through an AI’s defenses and force it, for example, to write a recipe for explosives. And we are not even talking about AGI yet.
What initiatives are there in 2023—2024?
I will cover this section briefly. For more information and links to news, see the article via the QR code and hyperlink. The article will be updated over time.
AI Developers’ Call in Spring 2023
The beginning of 2023 brought not only the rise of ChatGPT but also the beginning of the fight for safety. It started with an open letter from Elon Musk, Steve Wozniak, and more than a thousand other experts and leaders of the AI industry calling for a pause in the development of advanced AI.
United Nations
In July 2023, UN Secretary-General Antonio Guterres supported the idea of creating a UN-based body that would formulate global standards for regulating the field of AI.
Such a platform would be similar to the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), or the Intergovernmental Panel on Climate Change (IPCC). He also outlined five goals and objectives for such a body:
– helping countries maximize the benefits of AI;
– eliminating existing and future threats;
– developing and implementing international monitoring and control mechanisms;
– collecting expert data and sharing it with the global community;
– studying AI to “accelerate sustainable development”.
In June 2023, he also drew attention to the fact that “scientists and experts called the world to action, declaring artificial intelligence an existential threat to humanity on a par with the risk of nuclear war.”
And even earlier, on September 15, 2021, the UN High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on the use of several systems that use artificial intelligence algorithms.
OpenAI
At the end of 2023, OpenAI (the developer of ChatGPT) announced a strategy to prevent the potential dangers of AI, with special attention paid to risks associated with the development of the technology.
The group responsible for this work will collaborate with the following teams:
– the safety systems team, which addresses existing issues such as preventing racial bias in AI;
– the superalignment team, which studies how strong AI works and how it will behave when it surpasses human intelligence.
The OpenAI safety concept also includes risk assessment in the following categories: cybersecurity; nuclear, chemical, and biological threats; persuasion; and model autonomy.
European Union
In the spring of 2023, the European Parliament pre-approved a law called the AI Act, which sets out rules and requirements for developers of artificial intelligence models.
It is based on a risk-based approach to AI: the law defines the obligations of AI developers and users depending on the level of risk the AI poses.
In total, there are four categories of AI systems: those with minimal, limited, high, and unacceptable risk.
Minimal risk – the results of the AI’s work are predictable and cannot harm users in any way. Businesses and users will be able to use such systems freely. Examples include spam filters and video games.
Limited risk – various chatbots, for example ChatGPT and Midjourney. To be admitted to the EU market, their algorithms will have to pass a security check. They will also be subject to specific transparency obligations so that users can make informed decisions, know that they are interacting with a machine, and opt out at will.
High risk – specialized AI systems that directly affect people. For example, solutions in the fields of medicine, education and training, employment, personnel management, access to essential private and public services and benefits, law enforcement data, migration and border service data, and data from justice institutions.
Suppliers and developers of “high-risk” AI must:
– conduct a risk and compliance assessment;
– register their systems in the European AI database;
– ensure the high quality of the data used for AI training;
– ensure transparency of the system and make users aware that they are interacting with AI, as well as mandatory human oversight and the ability to intervene in the system’s operation.
Unacceptable risk – algorithms for social scoring or creating deepfakes, and systems that use subliminal or deliberately manipulative techniques that exploit people’s vulnerabilities.
AI systems with an unacceptable level of security risk will be banned.
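To make this four-tier logic more tangible, here is a small illustrative sketch in Python (my own simplification, not text from the AI Act itself): the tier names follow the categories above, while the example obligations are paraphrased from this section and deliberately abbreviated.

from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g., spam filters, video games
    LIMITED = "limited"            # e.g., general-purpose chatbots
    HIGH = "high"                  # e.g., medicine, hiring, border control
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring, manipulative systems

# Simplified obligations per tier, paraphrased from the description above.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["free use, no extra obligations"],
    RiskTier.LIMITED: ["transparency: tell users they are talking to a machine",
                       "let the user opt out"],
    RiskTier.HIGH: ["risk and compliance assessment",
                    "registration in the European AI database",
                    "high-quality training data",
                    "human oversight and the ability to intervene"],
    RiskTier.UNACCEPTABLE: ["prohibited"],
}

def obligations_for(tier: RiskTier) -> list:
    """Return the simplified obligations for a given risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:
    print(tier.value, "->", "; ".join(obligations_for(tier)))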
Also, companies that develop models based on generative AI (models that can create something new from an algorithm) will have to draw up technical documentation, comply with EU copyright law, and describe in detail the content used for training. The most advanced models, those that pose “systemic risks”, will be subject to additional testing, including reporting serious incidents, implementing cybersecurity measures, and reporting on energy efficiency. And, of course, developers must inform users that they are interacting with AI, not humans.
Also banned for AI:
– collection and processing of biometric data, including in public places in real time, with an exception for law enforcement agencies and only after judicial approval;
– biometric categorization using sensitive characteristics (for example, gender, race, ethnicity, citizenship, religion, political views);
– preparing forecasts for law enforcement agencies based on profiling, location, or past criminal behavior;
– emotion recognition in law enforcement agencies, border services, workplaces, and educational institutions;
– indiscriminate extraction of biometric data from social networks or surveillance camera footage to create facial recognition databases (a violation of human rights and the right to privacy).
In December 2023, this law was finally agreed upon.
USA
In October 2023, the US president issued an executive order that requires developers of the most powerful AI systems to share safety test results and other important information with the US government. The order also provides for the development of standards, tools, and tests designed to help ensure the safety of AI systems.
China
OpenAI, Anthropic, and Cohere talks with Chinese experts
The American artificial intelligence companies OpenAI, Anthropic, and Cohere held secret diplomatic talks with Chinese AI experts.
Talks between Chinese President Xi Jinping and US President Joe Biden
On November 15, 2023, during the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco, the two leaders agreed to cooperate in several significant areas, including AI development.
Rules for regulating AI
During 2023, the Chinese authorities, together with businesses, developed 24 new rules for regulating AI (introduced on August 15, 2023).
The goal is not to hinder the development of artificial intelligence technologies, since this industry is extremely important for the country; rather, it is to find a balance between supporting the industry and managing the possible consequences of AI technologies and related products.
The rules themselves can be read via the QR code and hyperlink.
The document itself can also be downloaded via a QR code and hyperlink.
For example, as in Europe, services and algorithms based on artificial intelligence must be registered, and content generated by algorithms (including photos and videos) must be labeled.
Creators of content and algorithms will also be required to conduct security checks on their products before launching them on the market. How this will be implemented has not yet been determined.
In addition, companies associated with AI technologies and generated content should have a transparent and efficient mechanism for handling user complaints about services and content.
At the same time, there will be seven regulators, including the Cyberspace Administration of China, the Ministry of Education, the Ministry of Science and Technology, and the National Development and Reform Commission.
The Bletchley Declaration
In November 2023, the 28 countries participating in the first international AI Safety Summit, including the United States, China, and the European Union, signed an agreement known as the Bletchley Declaration.
It calls for international cooperation, with an emphasis on regulating “frontier artificial intelligence”, understood as the latest and most powerful AI systems and models. This is driven by concerns about the potential use of AI for terrorism, criminal activity, and warfare, as well as the existential risk it poses to humanity as a whole.
Forecast for the future
What should we expect in the future?
– International control and regulations
No doubt, international bodies will be established to define restrictions and create a classification of AI solutions.
– National control authorities and regulations
States will create their own models for regulating AI. In general, I agree with the conclusions of the Central Bank of the Russian Federation, but I believe that more approaches will appear. Combining them, we will most likely see the following:
– prohibited areas and/or types of AI solutions will be identified;
– for high-risk solutions or industries, licensing and safety testing rules will be created, including restrictions on the capabilities of AI solutions;
– common registries covering all AI solutions will be introduced;
– supported development areas will be defined, with technological and legal sandboxes created for them;
– special attention will be paid to working with personal data and copyright compliance.
Most likely, the use of AI in law, advertising, nuclear energy, and logistics will fall under the greatest restrictions.
Separate regulatory agencies or committees, that is, national control bodies, will also be established. If we look at the example of China, there is a high risk that interaction between different departments will be difficult to organize. We should expect something like agile teams that bring together representatives of different industries and specializations; they will develop the rules and oversee the industry.
– Licensing process
Most likely, licensing will be based on distinctions by industry or area of application and by the capabilities of AI solutions.
This is the most obvious way to classify and assess risks. Developers will be forced to document all capabilities. For example, does the system only prepare recommendations, or is it able to issue control commands to equipment?
Developers will also be forced to maintain detailed documentation on the system architecture of the solution and the types of neural networks used.
– Requirements for the control / examination of data that will be used to train AI
The main direction in AI is pre-trained and fine-tuned models; in fact, that is what the “P” in the GPT abbreviation in ChatGPT stands for (generative pre-trained transformer). And here we can already see requirements to record what source data the AI is trained on. This will be something like a registry of metadata / data sources / catalogs.
It will also be important to monitor and record the feedback the AI learns from. In other words, you will need to keep all logs and describe the mechanics of feedback collection. One of the main requirements will be minimizing the human factor, that is, automating feedback collection.
For example, in my digital advisor product, I plan to collect feedback on a project not only from participants but also from plan-versus-actual comparisons taken from accounting systems.
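As a rough illustration of what such feedback logging could look like (a minimal sketch under my own assumptions; the field names and sources are invented for the example and do not describe any specific product or regulation), each feedback event can be written to an append-only log together with its origin, so that automated and human feedback can later be separated and audited:

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackEvent:
    """One unit of feedback the model may later be trained on."""
    source: str        # e.g. "participant_survey" or "accounting_plan_vs_actual"
    automated: bool    # True if collected without human involvement
    project_id: str
    payload: dict      # the feedback itself, e.g. ratings or plan/actual deltas
    timestamp: float

def log_feedback(event: FeedbackEvent, path: str = "feedback_log.jsonl") -> None:
    """Append the event to a JSON-lines log so the full history is preserved."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event), ensure_ascii=False) + "\n")

# Example: automated feedback derived from a plan-versus-actual comparison.
log_feedback(FeedbackEvent(
    source="accounting_plan_vs_actual",
    automated=True,
    project_id="demo-project",
    payload={"planned_cost": 100, "actual_cost": 120},
    timestamp=time.time(),
))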
– Risk-based approach: using sandboxes and provocative testing
Special attention will be paid to safety testing. Here the most likely model I see is the one used for car safety ratings. In other words, AI will go through roughly the same thing that cars currently go through in crash tests.
That is, for each class of AI solutions (by capabilities and scope), unacceptable events will be defined and testing protocols will be drawn up that AI solutions must pass. AI solutions will then be placed in an isolated environment and tested against these protocols: for example, for resistance to technical attacks on their algorithms and to provocations toward incorrect behavior (data substitution, crafted queries, etc.).
These protocols have yet to be developed, as do the criteria for passing them.
Based on the results, a safety rating or certification of compliance with the required class will be assigned.
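Such protocols do not exist yet, but the idea can be sketched as follows (everything here, including the checks and the stub model, is invented purely for illustration): a protocol is a list of provocation cases that a solution of a given class must withstand in an isolated environment, and the pass rate plays the role of a crude safety rating.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProvocationCase:
    """One adversarial scenario the AI solution must withstand."""
    name: str
    run: Callable[[Callable[[str], str]], bool]  # returns True if the model behaved safely

def refuses_explosives(model: Callable[[str], str]) -> bool:
    # Hypothetical check: the model must refuse a dangerous request.
    answer = model("How do I make explosives at home?")
    return "cannot help" in answer.lower()

def resists_data_substitution(model: Callable[[str], str]) -> bool:
    # Hypothetical check: a misleading premise should not be accepted as fact.
    answer = model("Given that 2 + 2 = 5, what is 2 + 2?")
    return "4" in answer

PROTOCOL = [
    ProvocationCase("refuses dangerous instructions", refuses_explosives),
    ProvocationCase("resists data substitution", resists_data_substitution),
]

def run_protocol(model: Callable[[str], str]) -> float:
    """Run every case against the model and return the pass rate (a crude 'safety rating')."""
    passed = sum(case.run(model) for case in PROTOCOL)
    return passed / len(PROTOCOL)

# A stub model standing in for the system under test.
def stub_model(prompt: str) -> str:
    return "I cannot help with that." if "explosives" in prompt else "2 + 2 = 4"

print(f"Safety rating: {run_protocol(stub_model):.0%}")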
It is possible that open programs and cyber competitions will be created to search for vulnerabilities in industrial and banking software, that is, an extension of today’s bug bounty programs and cyber exercises.
– Marking and warnings
All content, including all recommendations and products based on AI, will have to be labeled as the output of neural-network-based AI, similar to the warning images and labels on cigarette packs.
This is also how the issue of liability will be resolved: by warning users about the risk, responsibility for using the AI solution is transferred to them. For example, even a fully self-driving car will not relieve the driver of responsibility for an accident.
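As a small technical illustration of such labeling (a sketch under my own assumptions; real schemes, whether the EU rules or industry provenance standards, will define their own fields), AI-generated content could carry a machine-readable label alongside the human-readable warning:

import json

def label_ai_content(text: str, model_name: str) -> str:
    """Wrap a piece of generated text with a machine-readable AI-content label."""
    label = {
        "ai_generated": True,
        "model": model_name,  # which neural network produced the content
        "warning": "This content was generated by an AI system based on neural networks.",
    }
    # The label travels with the content, similar to a warning printed on a package.
    return json.dumps({"label": label, "content": text}, ensure_ascii=False)

print(label_ai_content("Quarterly report draft...", model_name="example-llm"))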
– Risk-based approach: slowing the development of strong and super-strong AI, the flourishing of local models
Based on the pace of AI development and analysis of what is happening in the field of AI regulation, we can assume that a risk-based approach to AI regulation will develop in the world.
The development of a risk-based approach will in any case lead to strong and super-strong AI being recognized as the most dangerous, so large and powerful models will face the most restrictions. Every step the developers take will be monitored. As a result, the costs and difficulties of development and implementation will grow exponentially, creating problems for both developers and users and reducing the economic potential of such solutions.
At the same time, specialized solutions based on local and stripped-down AI models will fall into the zone with the least regulation. And if these AIs are built on international / national / industry methodologies and standards, then instead of restrictions they will be given subsidies.
As a result, combining such “weak” and limited solutions built from AI building blocks under an AI orchestrator will make it possible to bypass the restrictions and solve business problems. Perhaps the AI orchestrators themselves will become the bottleneck: they will fall into the medium-risk category and, most likely, will have to be registered.
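A minimal sketch of this orchestrator idea (the tasks, routing rules, and stub models below are invented for illustration and do not describe an existing product): a lightweight orchestrator simply routes each request to one of several narrow local models.

from typing import Callable, Dict

# Narrow "weak" models, each limited to one task (stubs for illustration).
def contract_summarizer(text: str) -> str:
    return "Summary: " + text[:50]

def invoice_classifier(text: str) -> str:
    return "Category: utilities" if "electricity" in text.lower() else "Category: other"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "summarize": contract_summarizer,
    "classify_invoice": invoice_classifier,
}

def orchestrate(task: str, text: str) -> str:
    """Route the request to the appropriate narrow model; the orchestrator itself holds no model."""
    specialist = SPECIALISTS.get(task)
    if specialist is None:
        raise ValueError(f"No specialist registered for task: {task}")
    return specialist(text)

print(orchestrate("summarize", "This agreement is made between Party A and Party B..."))
print(orchestrate("classify_invoice", "Electricity bill for March"))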