The EU’s AI Act is nearing implementation, and the greatest compliance challenges may fall not on IT departments but on human resources, finance, and insurance functions.
AI Act Approaching Businesses
Companies utilizing artificial intelligence in recruitment, customer assessment, credit scoring, or insurance risk evaluation should already assess whether they fall under the high-risk systems regime. The official EU timeline still indicates August 2, 2026, as the key date for most AI Act provisions, though work is underway on the Digital Omnibus, which may alter some deadlines.
Beyond Tech Departments
The AI Act is no longer solely a concern for legal departments of large tech firms. New obligations may also affect businesses that do not develop their own AI models but utilize readily available tools in their business processes. This particularly applies to applications impacting individuals: employment, access to financial services, risk assessment, or insurance terms.
Implementation Timeline
According to the European Commission’s official timeline, the AI Act is being implemented in stages. General provisions, AI competency requirements, and bans on selected practices have been in effect since February 2, 2025. Rules regarding general-purpose models and the EU governance system apply from August 2, 2025. The majority of provisions, including rules for high-risk systems from Annex III and transparency requirements from Article 50, are scheduled to take effect on August 2, 2026, with full implementation by August 2, 2027.
Obligations Extend to AI Users
The AI Act doesn’t solely target AI producers. Companies using off-the-shelf AI tools may also have obligations, especially when the system influences decisions regarding employees, job applicants, customers, or access to financial services.
Application, Not Industry, Matters
A key shift in thinking about the AI Act is that industry affiliation alone doesn’t determine obligations. The system’s application is crucial. A company may not be a technology provider but still fall under regulations if it uses AI in processes deemed particularly sensitive.
Annex III (EUR-Lex) identifies tools used in employment and workforce management as high-risk systems, including those for recruitment, candidate selection, application analysis, and employee evaluation. This includes solutions impacting promotions, terminations, task assignments, and performance monitoring.
This means an HR department using a tool for automated CV pre-selection cannot assume the AI Act applies only to the software manufacturer. In many cases, the tool’s implementation, who makes the final decision, what data is processed, and the company’s ability to demonstrate control over the process will be significant.
Key Consideration: System Application
Assessing obligations under the AI Act hinges on the system’s application. A company’s non-participation in the tech industry doesn’t automatically exempt it from the regulations. Systems used in HR, credit, insurance, or essential services may fall into the high-risk category.
Financial Sector Under Scrutiny
Banks, fintech companies, and insurers are under particular scrutiny. The AI Act covers systems used to assess individuals’ creditworthiness or calculate credit scores, excluding tools for detecting financial fraud. These systems can determine access to financial resources or essential services like housing, energy, and telecommunications.
Systems used to assess risk and determine pricing in life and health insurance may also be considered high-risk. EU lawmakers emphasize that poorly designed or misused tools can lead to discrimination, financial exclusion, or other serious consequences for customers.
This requires banks, fintechs, and insurers to review not only systems directly making decisions but also tools supporting analysts, underwriters, advisors, and consultants. The line between a “supportive system” and a solution genuinely influencing a customer decision may be a key point of contention.
August 2026 Remains Tentative
Currently, the European Commission’s official timeline still points to August 2, 2026, as the date for implementing provisions regarding high-risk systems from Annex III. However, the Commission notes that the Digital Omnibus package proposes linking the application of high-risk system provisions to the availability of support tools, including relevant harmonized standards.
The Digital Omnibus was published by the Commission on November 19, 2025, as a package of targeted simplifications to ensure smoother and more proportionate application of parts of the AI Act. The EU Council adopted its position on March 13, 2026, and the European Parliament’s IMCO and LIBE committees adopted a joint report on March 18, 2026, with parliamentary documentation indicating the proposal includes delays in standards, guidelines, and compliance tools for high-risk requirements.
This creates uncertainty for businesses, but not a reason for inaction. If the deadline is postponed, companies will gain additional time to adapt. Otherwise, a lack of prior AI system inventory could become an organizational, legal, and reputational problem.
Poland Developing AI Oversight
Work is underway on national regulations concerning AI systems. The Council of Ministers adopted a draft law on March 31, 2026. According to the Chancellery of the Prime Minister, a new institution – the Commission for the Development and Security of Artificial Intelligence – will be responsible for controlling AI technology and supporting its development. The draft law aims to implement the EU AI Act in Poland.
According to government information, the KRiBSI (the Polish acronym for the Commission) will be an independent unit organizationally serviced by the Ministry of Digital Affairs. Its composition will include representatives of the President of UOKiK (the competition and consumer protection authority), the President of UKE (the electronic communications regulator), the Financial Supervision Commission, and KRRiT (the National Broadcasting Council). The Commission will be able to verify whether AI systems meet legal and safety requirements and, in case of violations, restrict their use or withdraw them from the market.
Significant Penalties Anticipated
The AI Act provides for sanctions that are effective, proportionate, and dissuasive. Under Article 99, the maximum penalty for violations of the prohibited AI practices can reach €35 million or up to 7% of the company’s total worldwide annual turnover from the previous financial year, whichever is higher. For other violations, including breaches of the obligations of high-risk system operators or of transparency requirements, the limit is up to €15 million or 3% of turnover. Supplying inaccurate, incomplete, or misleading information to authorities can result in a penalty of up to €7.5 million or 1% of turnover.
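The “whichever is higher” mechanics of these tiers can be illustrated with a short sketch. The function name and structure are illustrative only; the fixed amounts and percentages come from Article 99:

```python
def article_99_cap(fixed_eur: int, pct: float, worldwide_turnover_eur: int) -> float:
    """Maximum fine under an Article 99 tier: the higher of a fixed
    amount and a share of total worldwide annual turnover from the
    previous financial year."""
    return max(fixed_eur, pct * worldwide_turnover_eur)

# Prohibited-practice tier for a company with €1 bn turnover:
# 7% of €1 bn (€70 m) exceeds the €35 m floor, so €70 m applies.
prohibited_cap = article_99_cap(35_000_000, 0.07, 1_000_000_000)

# Other-violations tier for a company with €200 m turnover:
# 3% of €200 m is only €6 m, so the €15 m fixed amount applies instead.
other_cap = article_99_cap(15_000_000, 0.03, 200_000_000)
```

Note that for large groups the turnover-based percentage will usually dominate, while for smaller companies the fixed amount sets the ceiling.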
First Steps for Businesses
The first step should be an inventory of AI tools used within the organization. This includes not only large analytical systems or proprietary models but also purchased tools, AI functions in HR systems, solutions for customer service, document automation, scoring, marketing, debt collection, fraud detection, and risk analysis.
The second step is to determine the system’s actual application. A supplier’s declaration that a tool “supports efficiency” or “automates analysis” is insufficient. It must be verified whether the system impacts decisions concerning individuals, is used in recruitment, employment, credit, insurance, public services, or other areas mentioned in the AI Act.
The third step should be assigning responsibility. The company should know who approves the use of a given system, who supervises its operation, who assesses risk, who contacts the supplier, and who documents how the tool is used. Without such an organizational map, it will be difficult to demonstrate that AI is used in a controlled manner.
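The three steps above — inventory, actual application, and assigned responsibility — can be sketched as a minimal inventory record. All field and function names here are hypothetical illustrations for internal record-keeping, not terms defined by the AI Act:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiToolRecord:
    """One entry in a company's AI tool inventory (illustrative schema)."""
    name: str                     # tool or system name
    vendor: str                   # supplier, or "internal" for own models
    use_case: str                 # e.g. "CV pre-selection", "credit scoring"
    affects_individuals: bool     # does its output influence decisions about people?
    annex_iii_area: Optional[str] # e.g. "employment", "credit", "insurance", or None
    owner: str                    # who approves use, supervises, and documents it

def needs_high_risk_review(record: AiToolRecord) -> bool:
    """Flag tools that affect individuals in an area listed in Annex III,
    so they can be escalated for a proper legal classification."""
    return record.affects_individuals and record.annex_iii_area is not None
```

A record like this forces the questions the article raises: not what the supplier calls the tool, but what it actually does and who answers for it.

```python
cv_screen = AiToolRecord("CVScreen", "VendorX", "CV pre-selection",
                         True, "employment", "HR director")
faq_bot = AiToolRecord("FAQBot", "VendorY", "internal FAQ answers",
                       False, None, "IT lead")
needs_high_risk_review(cv_screen)  # True: escalate for classification
needs_high_risk_review(faq_bot)    # False
```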
The “It’s Just a Support Tool” Fallacy
The riskiest assumption may be that the AI Act doesn’t matter if a human makes the final decision. In practice, a system that pre-sorts applications, filters or ranks candidates, calculates a score, or flags a customer as higher-risk can strongly influence the human’s decision. These applications will require particular attention.
Business Takeaway
The AI Act won’t automatically encompass every company in Poland, and it’s not exclusively a regulation for Big Tech. The specific application of artificial intelligence is what matters most. If a system helps select job candidates, assesses a customer, or affects access to credit or insurance terms, the company should check now whether it’s dealing with a high-risk system.