Hi all,
aaaaaannddd
(please don't unsubscribe!)
To start this week, we'll talk about something that needs to be talked about:
AI regulation
Ever since ChatGPT showed its promise, there has been a lot of talk about whether AI will completely replace humans and whether the world will look something like this:
So naturally, as world saviors, the EU decided to do what they do best… write laws and regulations so that we can all be safe.
Howeverβ¦
What exactly is the EU regulating?
Let's try to find out!
Before we do that: if you got here via social media and you are not yet a subscriber, you can change that here:
(We have both a free and a paid version.)
Also, if you like this type of content, feel free to share it!
OK, now that we've got the formalities out of the way, let's move on!
EU regulation explained
Now, the whole thing started back in April 2021, when the EU AI Act was first proposed. It is one of the first comprehensive legal frameworks for AI anywhere in the world.
You can read the full text of the 2021 proposal HERE.
(it's a slow read, let me warn you in advance)
However, what is the point of a newsletter if I don't summarize the act for you?
Here are some key aspects of the EU AI Act:
Risk-Based Approach: The EU AI Act categorizes AI systems by the level of risk they pose. The categories range from "unacceptable risk" (banned practices) through "high risk" and "limited risk" down to "minimal risk."
Banned Practices: The act identifies certain AI practices as unacceptable due to their detrimental impact on fundamental rights. This includes AI systems that manipulate human behavior to circumvent users' free will and systems that allow 'social scoring' by governments.
High-Risk AI Systems: For AI systems classified as high-risk (e.g., those used in critical infrastructure, employment, essential private and public services, law enforcement, migration, asylum and border control management), the act sets strict requirements. These include adequate risk assessment and mitigation systems, high-quality datasets to train AI, transparency measures, and human oversight.
Conformity Assessments: High-risk AI systems must undergo rigorous testing and certification to ensure compliance with the act's requirements before being put on the market.
Transparency Obligations: AI systems intended to interact with humans, those used to detect emotions or determine association with (social) categories based on biometrics, and AI-enabled chatbots must be designed to inform users that they are interacting with an AI system.
Enforcement and Penalties: The act provides for enforcement mechanisms, including penalties for non-compliance, which can be substantial: potentially up to 6% of a company's annual global turnover.
Data Governance: The act emphasizes the importance of using high-quality data sets, minimizing biases, and ensuring that AI systems do not discriminate.
International Impact: While the act is specific to the EU, its impact is expected to be global, as non-EU companies doing business in the EU will also need to comply.
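To make the tier structure and the penalty ceiling concrete, here is a minimal Python sketch. This is an illustration, not legal advice: the tier names come from the act, but the one-line obligation summaries are my paraphrases, and the 6% rate is the upper bound mentioned above.

```python
# Illustrative model of the EU AI Act's risk tiers (paraphrased summaries,
# not the legal text) and the non-compliance penalty ceiling.

RISK_TIERS = {
    "unacceptable": "banned outright (e.g., social scoring, manipulating behavior)",
    "high": "strict requirements: risk management, quality data, human oversight",
    "limited": "transparency obligations (e.g., disclose that users face an AI)",
    "minimal": "no additional obligations",
}

def max_penalty(annual_global_turnover_eur: float, rate: float = 0.06) -> float:
    """Upper bound on a fine: up to 6% of annual global turnover."""
    return annual_global_turnover_eur * rate

# A company with 10 billion EUR in annual global turnover could face
# a fine of up to 600 million EUR.
print(max_penalty(10_000_000_000))  # 600000000.0
```

The point of the sketch is simply that obligations scale with the tier, while the fine scales with the company: the bigger the global turnover, the bigger the maximum exposure.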
So, all this was proposed back in 2021, but the ChatGPT craze made the European Commission work even harder (bless them), and on December 9th, 2023, a provisional agreement on the EU AI Act was finally reached.
Compared to the initial Commission proposal, the main new elements of the provisional agreement can be summarised as follows:
rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
a revised system of governance with some enforcement powers at EU level
extension of the list of prohibitions but with the possibility to use remote biometric identification by law enforcement authorities in public spaces, subject to safeguards
better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.
What most people really want to know is: "Will I still be able to use ChatGPT?"
Yes, you will, as long as ChatGPT "plays by the rules."
Specific rules have also been agreed for foundation models: large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, images, conversing in natural language, computing, or generating computer code. The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for "high-impact" foundation models. These are foundation models trained on large amounts of data, with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain.
Also, the EU is taking care of innovation as well:
Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing, and validation of innovative AI systems, should also allow for testing of innovative AI systems in real-world conditions. Furthermore, new provisions have been added allowing testing of AI systems in real-world conditions, under specific conditions and safeguards. To alleviate the administrative burden on smaller companies, the provisional agreement includes a list of actions to support such operators and provides for some limited and clearly specified derogations.
In a nutshell, the EU wants to make sure that the current AI solutions are regulated but also that the new AI solutions will be transparent and safe to use.
Howeverβ¦
Will all of this actually boost EU-based AI solutions?
Status of the AI in the EU
Back in September 2022, McKinsey published a really interesting report called "Securing Europe's competitiveness," which addresses Europe's technology gap compared to other parts of the world.
You can download the full report HERE, and yes, it's free!
In this 124-page report, McKinsey covers many areas (not just tech), and one of them is AI.
The thing about AI (which the people leading Europe most likely don't understand) is that many industries and skill sets are connected to it.
For instance, you can't have top-notch AI startups without people skilled in data, cloud engineering, etc., because AI needs all of those.
Take a look at how "good" Europe is in "up and coming" competences:
Source: βSecuring Europeβs competitivenessβ by McKinsey
In the categories:
"Distributed infrastructure",
"Trust architecture",
"Applied AI",
"Next-gen computing",
"Future of programming"
Europe is, to put it mildly, "left in the dust" by the sector leaders (the USA in this case).
You might be thinking… things can't be that bad, right?
Source: βSecuring Europeβs competitivenessβ by McKinsey
So, the whole of Europe has a problem keeping up not only with the US (the undisputed leader) but with China as well…
Source: βSecuring Europeβs competitivenessβ by McKinsey
The whole of Europe is living off its "past glory," and we are slowly turning into a continent where only old industries "live."
But maybe not even that in the future?
Source: βSecuring Europeβs competitivenessβ by McKinsey
Source: βSecuring Europeβs competitivenessβ by McKinsey
So, the question is… what exactly is the EU regulating with its AI Act? And how will US and Chinese AI companies do their business in Europe?