
In the season of viruses, people are being treated en masse with ChatGPT

What penalties does the legislation against "deep fake" videos provide

Still: NOVA NEWS

It is becoming increasingly common for people to trust artificial intelligence tools like ChatGPT with their health, from checking medications to self-diagnosing by symptoms. At the same time, the legislation meant to regulate both the dangers and the beneficial uses of these technologies is lagging behind. Cybersecurity expert Lyubomir Tulev discussed the topic on the show "Your Day" on NOVA NEWS.

According to Tulev, the practice is extremely risky. “Treating yourself with ChatGPT, without offending anyone, is like reading comments on the BG-mama forum and treating yourself accordingly. It's more or less identical,” he said. Tulev explained that the application simply collects information from the Internet based on the question asked, but cannot perform a real analysis of a specific person's condition.

The problem of disinformation and malicious uses of artificial intelligence, such as “deepfake” videos, underlies the new European law, known as the EU AI Act. It entered into force across the entire European Union on August 1, 2024, but has not yet been transposed in Bulgaria.

“This law divides the use of artificial intelligence into four categories,” Tulev explained.

Unacceptable risk: Applications that are banned outright, for example social scoring systems, as well as technologies that can manipulate people's behavior. “Imagine a teddy bear, a children's toy, with a mechanism inside that makes the bear speak. If there is artificial intelligence inside, we realize that this bear can steer our child in one direction or another, which may differ from how the parent wants to raise them,” the expert said by way of example.

High risk: This covers sectors such as medicine. The law requires a thorough risk analysis and strict control measures.

Limited risk: This category includes “deepfake” videos. Under the law, any content generated with artificial intelligence must be marked with a watermark, so that users can clearly see it is synthetic.

Minimal risk: Here artificial intelligence can be used freely, without special labeling.

Tulev specified that by August 1 of this year, Bulgaria was supposed to notify Brussels of three national bodies responsible for implementing the law. However, the country has not yet designated authorities for two of the categories. There is speculation that one of the responsible bodies could be the Ministry of Innovation and Growth.

The deadline for full operational implementation of the law by all member states is August 1, 2026. “We are already behind schedule,” Tulev said, warning that if Bulgaria does not catch up, it could face sanctions.