Read the book: «Artificial intelligence. Freefall», page 3

Chapter 3. What can weak AI do, and what are the general trends?

Weak AI in applied tasks

As you have probably already gathered, I am a proponent of using what is available. Perhaps this comes from my experience in crisis management, or perhaps it is simply a mistaken opinion. Still, where can current weak AI based on machine learning be applied?

The most relevant areas for applying AI with machine learning are:

– forecasting and preparing recommendations to support decisions;

– analysis of complex data without clear relationships, including for forecasting and decision-making;

– process optimization;

– pattern recognition, including images and voice recordings;

– automating the execution of individual tasks, including through content generation.

The newest direction, at its peak of popularity in 2023-2024, is pattern recognition (images and voice recordings) and content generation. This is where the bulk of AI developers work and where most such services come from.

At the same time, the combination of AI + IoT (Internet of Things) deserves special attention:

– AI receives clean big data, free of human input errors, for training and for finding relationships.

– The effectiveness of IoT increases, as it becomes possible to create predictive analytics and early detection of deviations.
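To make "early detection of deviations" concrete, here is a minimal illustrative sketch: a rolling z-score over a stream of sensor readings flags a value that breaks sharply from the recent trend. The sensor data and threshold are invented for illustration, not taken from a real IoT deployment.

```python
# Toy early-deviation detector for an IoT sensor stream.
# A reading is flagged when it deviates from the mean of the
# trailing window by more than `threshold` standard deviations.
from statistics import mean, stdev

def detect_deviations(readings, window=5, threshold=3.0):
    """Return indices of readings that deviate sharply from the trailing window."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# A stable temperature series with one sudden spike at index 10.
temps = [70.0, 70.2, 69.9, 70.1, 70.0, 70.2, 70.1, 69.8, 70.0, 70.1, 85.0]
print(detect_deviations(temps))  # → [10]: only the spike is flagged
```

A real predictive-analytics pipeline would of course use trained models rather than a fixed threshold, but the principle, comparing fresh telemetry against learned normal behavior, is the same.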

Key trends

– Machine learning is moving towards an ever-lower entry threshold.

One of the tasks developers are currently solving is simplifying the creation of AI models to the level of website builders, where no special knowledge or skills are needed for basic use. The creation of neural networks and data science is already developing on an as-a-service model, for example DSaaS: Data Science as a Service.

You can start learning machine learning with AutoML (there are free versions) or with DSaaS offerings that include an initial audit, consulting, and data labeling. You can even get data labeling for free. All this lowers the entry threshold.
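The idea behind AutoML can be shown in a few lines: automatically try several candidate models on held-out data and keep the one that scores best. The toy "models" below are plain functions chosen for illustration; real AutoML tools search over real estimators and hyperparameters in exactly the same spirit.

```python
# The essence of AutoML: evaluate candidate models automatically
# and keep the one with the lowest validation error.
def mse(model, data):
    """Mean squared error of `model` on (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

candidates = {
    "constant": lambda x: 5.0,
    "linear": lambda x: 2 * x + 1,
    "quadratic": lambda x: x * x,
}

# Held-out validation points generated by y = 2x + 1.
validation = [(0, 1), (1, 3), (2, 5), (3, 7)]

best_name = min(candidates, key=lambda name: mse(candidates[name], validation))
print(best_name)  # → linear: it matches the validation data exactly
```

An AutoML service hides this loop behind a button, which is precisely why the entry threshold keeps dropping.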

– Creating neural networks that need less and less data for training.

A few years ago, faking someone's voice required feeding a neural network one or two hours of recorded speech. About two years ago, that dropped to a few minutes. And in 2023, Microsoft introduced a neural network that needs just three seconds of audio.

Plus, there are tools that let you change your voice even in real time.

– Creating decision support systems, including industry-specific ones.

Industry-specific neural networks will be created, and recommendation systems, so-called "digital advisors" or decision support systems (DSS) for various business tasks, will be developed more and more actively.

Practical example

We will return to this case again and again, as it is my personal pain and the product I am working on.

Project management has a problem: 70% of projects are either troubled or fail outright:

– 60% of projects overrun their planned deadlines, with an average overrun of 80% of the original schedule;

– 57% of projects exceed their budgets, with an average overrun of 60% of the initial budget;

– failure to meet the success criteria occurs in 40% of projects.

At the same time, project management already takes up to 50% of managers' time, and by 2030 this figure will reach 60%, whereas at the beginning of the 20th century it was 5%. The world is becoming ever more volatile, and the number of projects is growing. Even sales are becoming more and more "project-based", that is, complex and individual.

And what do such project management statistics lead to?

– Reputational losses.

– Penalties.

– Reduced marginality.

– Limiting business growth.

The most common and critical errors are:

– unclear formulation of project goals, results, and boundaries;

– insufficiently developed project implementation strategy and plan;

– inadequate organizational structure of project management;

– an imbalance in the interests of project participants;

– ineffective communication within the project and with external organizations.

How do people solve this problem? Either they do nothing and suffer, or they take training courses and use task trackers.

However, both approaches have their pros and cons. Classical training, for example, offers the chance to ask questions and practice various situations in live communication with the teacher. At the same time, it is expensive and usually implies no further support once the course ends. Task trackers, on the other hand, are always at hand, but they do not adapt to a specific project or company culture and do not develop competencies; on the contrary, they are designed to monitor work.

As a result, after analyzing my experience, I came up with the idea of a digital advisor: artificial intelligence and predictive recommendations on what to do, when, and how, within 10 minutes, for any project and organization. Project management becomes available to any manager for roughly a couple of thousand rubles a month.

The AI model includes a project management methodology and sets of ready-made recommendations. The AI will prepare recommendation sets and gradually learn on its own, finding new patterns rather than staying tied to the opinions of its creator and of those who train the model in the early stages.
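At the first stage, such an advisor can be a transparent set of rules drawn from the methodology, whose outputs later become training examples for the model. The sketch below is purely hypothetical: the risk factors and recommendation texts are illustrative, not the product's actual logic.

```python
# Hypothetical rule-based core of a project-management digital advisor.
# Each rule maps an observed project attribute to a recommendation;
# a trained model could later replace or re-rank these rules.
def advise(project):
    recommendations = []
    if not project.get("goals_defined"):
        recommendations.append("Formulate project goals, results, and boundaries first.")
    if project.get("schedule_overrun_pct", 0) > 20:
        recommendations.append("Re-baseline the schedule and review the critical path.")
    if project.get("budget_overrun_pct", 0) > 10:
        recommendations.append("Escalate the budget deviation to the steering committee.")
    return recommendations or ["No critical deviations detected."]

troubled = {"goals_defined": False, "schedule_overrun_pct": 35}
for tip in advise(troubled):
    print("-", tip)  # prints two recommendations for this troubled project
```

The value of starting with explicit rules is that every recommendation can be traced back to the methodology, which is exactly what a model trained on such examples should preserve.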

Chapter 4. Generative AI

What is generative artificial intelligence?

Earlier, we reviewed the key areas for applying AI:

– forecasting and decision-making;

– analysis of complex data without clear relationships, including for forecasting purposes;

– process optimization;

– pattern recognition, including images and voice recordings;

– content generation.

The areas of AI currently at the peak of popularity are pattern recognition (audio, video, numbers) and content generation based on it: audio, text, code, video, images, and so on. Generative AI also includes digital advisors.

Generative AI Challenges

As of mid-2024, generative AI cannot yet be called a commercial success. For example, in 2022 OpenAI lost $540 million on the development of ChatGPT, and about $100 billion more will be needed for further development and the creation of strong AI. That figure was announced by the head of OpenAI himself. The American company CCS Insight gives a similarly unfavorable forecast for 2024.

For reference: OpenAI's operating costs run about $700,000 per day just to keep the ChatGPT chatbot running.

The general trend is echoed by Alexey Vodyasov, technical director of SEQ: "AI does not achieve the marketing results we heard about earlier. Its use is limited by the training model, and the cost and volume of training data keep growing. In general, hype and boom are inevitably followed by a decline in interest. AI will leave the limelight as quickly as it entered it, and this is simply the normal course of the process. Perhaps not everyone will survive the downturn, but AI really is a 'toy for the rich', and it will remain so in the near future." And we agree with Alexey: after the hype of early 2023, a lull had set in by the fall.

Adding to the picture is a Wall Street Journal investigation, according to which most IT giants have not yet figured out how to make money on generative AI. Microsoft, Google, Adobe, and other companies investing in artificial intelligence are still searching for ways to monetize their products. Some examples:

– Google plans to raise the subscription price for AI-enabled software;

– Adobe limits the number of requests to its AI services per month;

– Microsoft wants to charge business customers an extra $30 per month for the ability to create presentations with a neural network.

And the icing on the cake: calculations by David Cahn, an analyst at Sequoia Capital, showing that AI companies will have to earn about $600 billion a year to offset the costs of their AI infrastructure, including data centers. The only player making good money on AI right now is Nvidia, the maker of AI accelerators.



Computing power is one of the main costs of working with generative AI: the more server requests, the larger the infrastructure and electricity bills. Only suppliers of hardware and electricity benefit. Nvidia, for instance, earned about $5 billion in August 2023 on sales of its A100 and H100 AI accelerators to the Chinese IT sector alone.

This can be seen in two practical examples.

First, Zoom is trying to reduce costs by using a simpler chatbot developed in-house that requires less computing power than the latest version of ChatGPT.

Second, the best-known AI developers (Microsoft, Google, Apple, Mistral, Anthropic, and Cohere) have begun to focus on compact AI models, as they are cheaper and more cost-effective.

Larger models, such as OpenAI's GPT-4, which has more than 1 trillion parameters and is estimated to have cost more than $100 million to build, do not have a radical advantage over simpler solutions in applied tasks. Compact models are trained on narrower data sets and can cost less than $10 million while using fewer than 10 billion parameters, yet they solve targeted problems.

For example, Microsoft introduced a family of small models called Phi. According to CEO Satya Nadella, these models are one-hundredth the size of the free version of ChatGPT yet handle many tasks almost as well. Yusuf Mehdi, Microsoft's chief commercial officer, said the company quickly realized that operating large AI models is more expensive than initially thought, so Microsoft began looking for more cost-effective solutions.

Apple also plans to use such models to run AI directly on smartphones, which should improve speed and security while keeping resource consumption on the device minimal.

Experts themselves believe that for many tasks, such as summarizing documents or creating images, large models may simply be redundant. Ilya Polosukhin, one of the authors of Google's seminal 2017 paper on artificial intelligence, figuratively compared using large models for simple tasks to driving a tank to the grocery store. "Quadrillions of operations should not be required to calculate 2 + 2," he stressed.

But let's take things in order: why did this happen, what limitations threaten AI, and, most importantly, what comes next? A sunset of generative AI in another AI winter, or a transformation?

AI limitations that lead to problems

Earlier, I described the "basic" problems of AI. Now let's dig a little deeper into the specifics of generative AI.

– Companies' concerns about their data

Any business strives to protect its corporate data and to prevent leaks by any means. This leads to two problems.

First, companies prohibit the use of online tools that sit outside the perimeter of the secure network, and any request to an online bot is a call to the outside world. There are many open questions about how data is stored, protected, and used.

Second, this limits the development of AI in general. Every company wants IT solutions from suppliers with AI recommendations from trained models that, for example, predict equipment failure. But not everyone is willing to share their data. The result is a vicious circle.

However, a caveat is needed here. Some teams have already learned to deploy GPT-3/3.5-level language models inside the company perimeter. But these models still need to be trained; they are not ready-made solutions. And internal security services will find risks and object.

– Complexity and high cost of development and subsequent maintenance

Developing any "general" generative AI is a huge expense: tens of millions of dollars. In addition, it needs data, lots of data. Neural networks are still inefficient learners: where 10 examples are enough for a person, an artificial neural network needs thousands, or even hundreds of thousands. True, it can find relationships and process data arrays that a person could never dream of.

But back to the topic. It is precisely because of data limitations that ChatGPT "thinks" better if you communicate with it in English rather than Russian: the English-speaking segment of the Internet is much larger.

Add the cost of electricity, engineers, maintenance, repair, and equipment upgrades, and you get that same $700,000 per day just to keep ChatGPT running. How many companies can spend such amounts with unclear monetization prospects (more on this below)?

Yes, you can reduce costs by developing a model and then stripping out everything unnecessary, but the result will be a very narrowly specialized AI.

Therefore, most solutions on the market are actually GPT wrappers: add-ons on top of ChatGPT.

– Public concern and regulatory constraints

Society is extremely concerned about the development of AI solutions. Government agencies around the world do not understand what to expect from them, how they will affect the economy and society, or how large-scale the technology's impact will be. Yet its importance cannot be denied. Generative AI made more noise in 2023 than ever before. It has proven it can create new content that can be confused with human work: texts, images, scientific papers. It has even reached the point where AI can produce a conceptual design for microchips or walking robots in a matter of seconds.

The second factor is security. AI is actively used by attackers against companies and people. Since the launch of ChatGPT, the number of phishing attacks has increased by 1265%. Or, for example, with the help of AI you can get a recipe for making explosives. People come up with ingenious schemes and bypass the built-in safeguards.

The third factor is opacity. Sometimes even the creators themselves do not understand how AI works. And for such a large-scale technology, not understanding what AI can generate, and why, creates a dangerous situation.

The fourth factor is dependence on training resources. AI models are built by people and trained by people. Yes, there are self-learning models, but highly specialized ones will also be developed, and people will select the material for their training.

All this means the industry will start to be regulated and restricted; no one knows exactly how. Add to this the well-known open letter of March 2023, in which prominent experts from around the world demanded limits on the development of AI.

– Lack of a model for interacting with chatbots

I assume you have already tried interacting with chatbots and were, to put it mildly, disappointed. Yes, it's a cool toy, but what do you do with it?

You need to understand that a chatbot is not an expert but a system that tries to guess what you want to see or hear, and gives you exactly that in response.

To get practical benefit, you must be an expert in the subject area yourself. But if you are an expert in your topic, do you need generative AI? And if you are not an expert, you will not get a solution to your question, which means no value, only generic answers.

As a result, we get a vicious circle: experts do not need it, and it will not help amateurs. Then who will pay for such an assistant? So in the end we have only a toy.

Besides being an expert in the topic, you also need to know how to formulate a query correctly, and such people are few. As a result, a new profession has even appeared: prompt engineer, a person who understands how the model "thinks" and can compose the right query for it. Such an engineer costs about 6,000 rubles (around $60) per hour on the market. And believe me, they will not find the right query for your situation on the first try.
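What a prompt engineer actually adds can be shown with a simple template: a well-formed query fixes the role, context, task, constraints, and output format instead of asking a bare question. The field names and wording below are illustrative assumptions, not a universal recipe.

```python
# Illustrative prompt structure: role, context, task, constraints,
# and output format, versus a bare one-line question.
PROMPT_TEMPLATE = """You are {role}.
Context: {context}
Task: {task}
Constraints: {constraints}
Answer format: {answer_format}"""

def build_prompt(role, context, task, constraints, answer_format):
    return PROMPT_TEMPLATE.format(
        role=role, context=context, task=task,
        constraints=constraints, answer_format=answer_format,
    )

prompt = build_prompt(
    role="an experienced project manager",
    context="an IT project is 35% behind schedule",
    task="propose three corrective actions",
    constraints="no extra budget is available",
    answer_format="a numbered list, one sentence per item",
)
print(prompt)
```

The structure itself is mechanical; the expensive part is knowing which role, context, and constraints actually fit your situation, which is the expertise being billed by the hour.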

Does business need such a tool? Will businesses want to depend on very rare specialists who cost even more than programmers, when ordinary employees will get no benefit from it?

So the market for a general-purpose chatbot turns out to be not just narrow but vanishingly small.

– The tendency to produce low-quality content and hallucinations

In the article "Artificial intelligence: assistant or toy?" I noted that neural networks simply collect data without analyzing facts or their coherence. They are guided by whatever there is more of on the Internet or in their database, and they do not critically evaluate what they write. As a result, generative AI easily produces false or incorrect content.

For example, experts from the Tandon School of Engineering at New York University decided to test Microsoft's Copilot AI assistant from a security standpoint. They found that in about 40% of cases the code generated by the assistant contained errors or vulnerabilities. A detailed article is available here.

Another example of using ChatGPT was given by a user on Habr: instead of 10 minutes and a simple task, it turned into a 2-hour quest.

And AI hallucinations have long been a well-known feature. You can read here about what they are and how they arise.

It is fortunate when such cases are harmless. But there are also dangerous mistakes. One user asked Gemini how to make a salad dressing. The recipe called for adding garlic to olive oil and leaving it to infuse at room temperature.

While the garlic was infusing, the user noticed strange bubbles and decided to double-check the recipe. It turned out that the bacteria that cause botulism were multiplying in the jar. Poisoning with their toxin is severe, up to and including death.

I myself use generative AI periodically, and more often than not it gives, let's say, not quite correct results, and sometimes frankly erroneous ones. You need to spend 10-20 queries with absolutely insane detail to get something sane, which then still needs to be reworked or adapted.

That is, everything needs to be rechecked. Once again we arrive at the conclusion that you need to be an expert in the topic to judge the content's correctness and put it to use. And sometimes this takes even more time than doing everything yourself from scratch.

– Emotions, ethics and responsibility

Without a proper query, generative AI tends to simply reproduce information or create content without regard for emotions, context, and tone of communication. And from the series of articles on communication, we already know how easily communication failures occur. As a result, on top of all the problems above, we can also get a huge number of conflicts.

There are also open questions about establishing the authorship of created content and the ownership rights to it. Who is responsible for incorrect or malicious actions performed using generative AI? And how can you prove that you or your organization is the author? Ethical standards and legislation regulating the use of generative AI need to be developed.

– Economic feasibility

As we have already seen, developing high-end generative AI yourself is a daunting task. Many will get the idea: "Why not buy a 'box' and host it on-premises?" But how much do you think such a solution will cost? And how much will the developers charge?

And most importantly, how big should the business be to make it all pay off?

What should I do?

Companies are not going to abandon large models completely. For example, Apple will use ChatGPT in Siri for complex tasks, and Microsoft plans to use the latest OpenAI model as an assistant in the new version of Windows. Meanwhile, Experian from Ireland and Salesforce from the United States have already switched to compact AI models for chatbots and found that they deliver the same performance as large models, but at significantly lower cost and with lower data processing latency.

A key advantage of small models is that they can be fine-tuned for specific tasks and data sets. This lets them work effectively in specialized areas at lower cost and makes security issues easier to address. According to Yoav Shoham, co-founder of Tel Aviv-based AI21 Labs, small models can answer questions and solve problems at as little as one-sixth the cost of large models.

– Take your time

You should not expect AI to simply fade away. Too much has been invested in this technology over the past 10 years, and it has too much potential.

I recommend recalling the 8th principle of The Toyota Way, the foundation of lean manufacturing and one of the tools of my systems approach: "Use only reliable, proven technology." It contains a number of useful recommendations.

– Technology is meant to help people, not replace them. It is often worth performing a process manually before introducing additional hardware.

– New technologies are often unreliable and difficult to standardize, which puts the flow at risk. It is better to use a known, proven process than an untested technology.

– Before introducing new technology and equipment, conduct real-world testing.

– Reject or change a technology that conflicts with your culture or may undermine stability, reliability, or predictability.

– Still, encourage your people to keep new technologies in mind when searching for new paths. Quickly implement proven technologies that have been tested and improve the flow.

Yes, in 5-10 years generative models will become mass-market and affordable, smarter and cheaper, and will eventually reach the plateau of productivity in the hype cycle. Most likely, each of us will use generative AI output: writing articles, preparing presentations, and so on ad infinitum. But relying on AI now and cutting staff would be clearly premature.

– Improve efficiency and safety

Almost all developers are now focused on making AI models less demanding in the quantity and quality of source data, and on raising the level of safety: AI must generate safe content and withstand provocations.

– Master AI through experiments and pilot projects

To be prepared for the arrival of genuinely useful solutions, you need to follow the technology's development, try it out, and build competencies. It's like digitalization: instead of diving headlong into expensive solutions, play with budget or free tools first. Thanks to this, by the time the technology reaches the masses:

– you and your company will have a clear understanding of the requirements to lay down for commercial, expensive solutions, and you will approach the issue consciously. A good technical specification is 50% of success;

– you will already be getting short-term results, which means you will be motivated to go further;

– the team will improve its digital competencies, which will remove restrictions and resistance due to technical reasons;

– mistaken expectations will be eliminated, which means fewer useless costs, frustrations, and conflicts.

– Transform how users communicate with AI

I am developing a similar concept in my digital advisor. The user should be given ready-made forms in which they simply enter the necessary values or check off items. This form, with the correct binding (prompt), is then passed to the AI. Alternatively, solutions can be deeply integrated into existing IT tools: office applications, browsers, phone answering machines, and so on.
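A minimal sketch of this "form to prompt" binding: the user only picks values from predefined fields, and the correct prompt is assembled behind the scenes. The field names and template wording are hypothetical, not the advisor's real template.

```python
# Hypothetical "form -> prompt" binding: the user picks values,
# and a pre-written prompt is filled in behind the scenes.
FORM_FIELDS = {
    "project_type": {"construction", "IT", "marketing"},
    "phase": {"initiation", "planning", "execution", "closure"},
}

TEMPLATE = ("Recommend next steps for the {project_type} project "
            "in the {phase} phase. Answer as a short checklist.")

def form_to_prompt(form):
    # Reject values outside the form's predefined choices,
    # so the user can never send a malformed free-text query.
    for field, allowed in FORM_FIELDS.items():
        if form.get(field) not in allowed:
            raise ValueError(f"invalid value for {field!r}")
    return TEMPLATE.format(**form)

print(form_to_prompt({"project_type": "IT", "phase": "planning"}))
```

The validation step is the whole point: by restricting inputs to known values, the prompt quality no longer depends on the user's prompt-writing skill.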

But this requires careful study and an understanding of user behavior and requests, and of whether they can be standardized. In other words, either it is no longer a cheap solution and still requires development costs, or we lose flexibility.

– Develop highly specialized models

As with humans, teaching AI everything at once is very labor-intensive and inefficient. If you build highly specialized solutions on the engines of large models, training can be minimized, the model itself will not be too large, and the content will be less abstract and more understandable, with fewer hallucinations.

A visual demonstration: people. Who makes great progress and can solve complex problems? Someone who knows a little about everything, or someone who focuses on one field and develops in depth, knows a variety of cases, communicates with other experts, and spends thousands of hours analyzing their own area?

Examples of highly specialized solutions:

– expert advisor for project management;

– tax consultant;

– lean manufacturing advisor;

– a chatbot for industrial safety, or an assistant for an industrial safety specialist;

– a chatbot for IT technical support.

Price: ₺691.90
Age limit: 12+
Publication date on Litres: 19 December 2024
Volume: 310 pp., 85 illustrations
ISBN: 9785006509900