Last updated: 13 May 2026
Poland’s amended Act on the National Cybersecurity System (the KSC Act), which transposes the EU NIS2 Directive, entered into force on 3 April 2026, bringing a new wave of cybersecurity obligations for technology companies operating in the country. At the same time, Regulation (EU) 2024/1689, the EU AI Act, is phasing in its most demanding requirements for high-risk AI systems throughout 2026 and 2027. For AI startups headquartered or operating in Poland, the overlap between these two regimes creates an urgent and complex compliance landscape that demands immediate attention.
This guide addresses AI and NIS2 compliance in Poland from a startup-specific perspective: it maps the obligations, identifies where the two frameworks converge, and delivers a prioritised twelve-step checklist that founders, CTOs, and in-house counsel can begin executing today.
Consider a common scenario. A Warsaw-based startup has built a machine-learning platform that analyses medical imaging for radiology departments across the EU. The company has 60 employees, processes health data, and delivers its product as a cloud-hosted SaaS solution to hospitals in Poland and Germany. Before April 2026, that startup’s primary regulatory concerns were GDPR and medical-device rules. Today, it faces at least three additional compliance obligations: designation as an “important entity” under the amended KSC Act, classification of its AI system as “high-risk” under the EU AI Act, and mandatory incident-reporting duties under both regimes, each with its own deadlines, formats, and enforcement authorities.
This is not a hypothetical edge case. Poland’s tech sector now includes hundreds of AI-driven startups providing SaaS platforms in healthcare, fintech, logistics, and cybersecurity (sectors that the KSC Act amendment explicitly brings within the NIS2 perimeter). Industry observers expect that many of these companies will discover they are in scope for the first time only when they begin the self-assessment process described in this guide. The practical effect is that Polish startups’ compliance teams must now manage overlapping cybersecurity and AI-governance obligations simultaneously, and the penalties for inaction under either regime are significant.
Read this guide if you run an AI startup in Poland, sit on the board of one, invest in one, or advise one. The sections below provide the legal status snapshot, a scope self-assessment, a side-by-side comparison of obligations, and the twelve concrete steps you should take, prioritised by urgency and mapped to the next 30, 90, and 180 days.
Poland transposed the NIS2 Directive (Directive (EU) 2022/2555) by amending its Act on the National Cybersecurity System (the KSC Act). The amendment entered into force on 3 April 2026. According to the European Commission’s NIS2 implementation tracker for Poland, the country is now among the Member States that have completed transposition into national law. The amended KSC Act introduces several changes of direct importance to technology companies, including new essential- and important-entity designations, tighter incident-reporting deadlines, and expanded supervisory and audit powers.
The EU AI Act (Regulation (EU) 2024/1689) applies directly across all Member States without the need for national transposition. Its obligations are being phased in according to a staged enforcement timeline. The regulation applies to providers who place AI systems on the EU market or put them into service in the EU, and to deployers located within the EU, meaning a Polish AI startup is caught both as a provider (if it develops AI systems) and potentially as a deployer (if it uses third-party AI components). The AI Act’s requirements are especially demanding for high-risk AI systems, which include AI used in critical infrastructure, employment, education, law enforcement, and healthcare, areas in which many Polish startups operate.
| Date | NIS2 / KSC Act (Poland) Milestone | EU AI Act Milestone |
|---|---|---|
| 1 August 2024 | – | AI Act enters into force (Regulation (EU) 2024/1689) |
| 2 February 2025 | – | Prohibitions on unacceptable-risk AI practices apply |
| 2 August 2025 | – | Obligations for general-purpose AI (GPAI) models apply; governance rules and penalties take effect |
| 3 April 2026 | Amended KSC Act (transposing NIS2) enters into force | – |
| 2 August 2026 | – | Majority of AI Act obligations apply, including high-risk AI system requirements (conformity assessment, registration, documentation) |
| 2 August 2027 | – | Obligations for high-risk AI systems that are also regulated products (e.g., medical devices, machinery) fully apply |
Before committing resources to compliance, every Polish AI startup should spend fifteen minutes running through a structured self-assessment. The goal is to determine whether the company falls within scope of either regime, or both, and to identify the risk tier that applies. The following questions form the core of that assessment:

- Does the company meet the NIS2 size threshold: 50 or more employees, or annual turnover of €10 million or more?
- Does it operate in a sector covered by the amended KSC Act, such as healthcare, fintech, logistics, cybersecurity, or digital infrastructure?
- Does it place AI systems on the EU market or put them into service as a provider, or use third-party AI systems as a deployer?
- Does any of its AI systems operate in a high-risk area under the AI Act, such as critical infrastructure, employment, education, law enforcement, or healthcare?
Mapping these tests to real startup archetypes, such as the Warsaw medical-imaging company described above, illustrates how AI and NIS2 compliance obligations in Poland crystallise in practice.
For startups uncertain about their AI risk classification, a detailed step-by-step classification worksheet is essential. Industry observers expect that a significant proportion of Polish AI startups building products in healthcare, finance, and critical infrastructure will fall into the high-risk tier, making AI Act compliance the most pressing regulatory workstream for Poland’s technology sector in 2026.
Polish AI startups that are in scope for both regimes will quickly discover that several compliance workstreams overlap. Understanding these intersections is essential to avoiding duplicated effort and to designing a single, unified governance framework rather than two parallel ones.
| Obligation | NIS2 / KSC Act (Poland) | EU AI Act |
|---|---|---|
| Incident reporting deadline | Early warning to CSIRT within 24 hours; full notification within 72 hours of becoming aware of a significant incident affecting service continuity or data integrity. | Providers of high-risk AI systems must report serious incidents (those causing death or serious damage to health, property, or the environment) to the market surveillance authority of the Member State where the incident occurred. Reporting is required without undue delay after the provider becomes aware of a causal link between the AI system and the incident. |
| Risk management / security measures | Mandatory technical and organisational measures proportional to risk, covering network security, access control, encryption, supply-chain security, vulnerability handling, and business continuity. | Providers of high-risk AI systems must establish, implement, and document a risk-management system throughout the AI system lifecycle, including identification of known and foreseeable risks, estimation and evaluation of risks, and adoption of suitable risk-management measures. |
| Scope / entity test | Entities designated as essential or important based on sector and size thresholds (generally 50+ employees or €10 million+ turnover in covered sectors). | Applicability is determined by the AI system, not the entity: providers and deployers of high-risk AI systems are in scope irrespective of company size, although certain SME-friendly provisions (e.g., regulatory sandboxes, proportionate conformity assessment) provide some relief. |
| Documentation and record-keeping | Entities must maintain records of cybersecurity measures, incident reports, and risk assessments; supervisory authorities may audit these records. | Extensive technical documentation requirements for high-risk AI systems, including data governance practices, training methodologies, testing and validation results, and automatic logging of system operations. |
| Supply-chain and third-party obligations | Entities must address cybersecurity risks in their supply chain and contractual relationships with direct suppliers and service providers. | Providers must ensure that components, data, and third-party tools integrated into high-risk AI systems meet the applicable conformity requirements; deployers must ensure proper use conditions are followed. |
| Penalties | Administrative fines and corrective measures under the KSC Act; the NIS2 Directive sets maximum fine thresholds of €10 million or 2% of global annual turnover for essential entities. | Tiered fines under the AI Act: up to €35 million or 7% of global annual turnover for prohibited-practice violations; up to €15 million or 3% for other non-compliance; lower caps for SMEs and startups in certain circumstances. |
Where both regimes impose parallel requirements (incident reporting is the clearest example), the practical approach is to design a single internal process that satisfies the stricter obligation and then map outputs to each regulatory channel. For incident reporting, this means building a response plan around the 24-hour early-warning obligation under NIS2, which is typically the tightest deadline, and then assessing in parallel whether the incident also qualifies as a “serious incident” under the AI Act requiring notification to the market surveillance authority.
The same principle applies to risk management: the AI Act’s lifecycle-based risk-management system is more granular and AI-specific, but it should be integrated into the broader cybersecurity risk-management framework required by the KSC Act rather than maintained in isolation.
Early indications suggest that regulators will look favourably on companies that demonstrate an integrated compliance framework. Duplication of governance roles (separate cybersecurity leads and AI-risk owners who never coordinate) is a red flag. The recommended structure is a single compliance committee with clear RACI assignments, reporting to the management body that bears ultimate accountability under both regimes.
The following twelve steps are ordered by priority. Steps 1–5 are immediate actions (complete within 30 days). Steps 6–9 are near-term workstreams (complete within 90 days). Steps 10–12 are medium-term governance items (complete within 180 days). Each step identifies the responsible owner, the estimated effort, and a sample deliverable.
Step 1: Run the scope self-assessment (NIS2 + AI Act).
Owner: Legal / CTO. Time: 15–30 minutes. Cost: Low.
Use the self-assessment worksheet above to determine whether the company is an essential or important entity under the KSC Act and whether any of its AI systems qualify as high-risk, limited-risk, or general-purpose under the AI Act. Document the conclusion and the reasoning. Deliverable: Completed scope-determination memo (1–2 pages).
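The scope logic described above can be sketched in code. This is an illustrative model only: the thresholds and category lists are taken from this guide’s summary of the KSC Act and AI Act, and must be verified against the legal texts before relying on the result.

```python
# Sketch of the scope self-assessment logic. Sector and risk-area lists are
# illustrative examples from this guide, not exhaustive legal definitions.

NIS2_COVERED_SECTORS = {"healthcare", "fintech", "logistics",
                        "cybersecurity", "digital infrastructure"}
HIGH_RISK_AI_AREAS = {"critical infrastructure", "employment", "education",
                      "law enforcement", "healthcare"}

def nis2_in_scope(employees: int, turnover_eur_m: float, sector: str) -> bool:
    """Entity-level test: size threshold AND covered sector."""
    meets_size = employees >= 50 or turnover_eur_m >= 10.0
    return meets_size and sector in NIS2_COVERED_SECTORS

def ai_act_risk_tier(application_area: str) -> str:
    """System-level test: the AI Act looks at the system, not the entity."""
    if application_area in HIGH_RISK_AI_AREAS:
        return "high-risk"
    return "limited-risk or minimal-risk (classify further)"

# The Warsaw medical-imaging startup from the introduction:
print(nis2_in_scope(employees=60, turnover_eur_m=8.0, sector="healthcare"))  # True
print(ai_act_risk_tier("healthcare"))  # high-risk
```

Note the asymmetry the table in the next section makes explicit: the NIS2 test is entity-level (size and sector), while the AI Act test applies per system regardless of company size.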
Step 2: Create a critical-systems and AI-component inventory.
Owner: CTO / Engineering Lead. Time: 2–5 days. Cost: Low.
List every network and information system that supports the company’s essential services. For each AI system, record the purpose, data inputs, outputs, third-party components (pre-trained models, APIs, training datasets), and deployment environment. Deliverable: Systems-and-AI inventory register (spreadsheet).
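One possible shape for the inventory register is sketched below; the field names mirror the attributes this step asks you to record, but they are an assumption, not a mandated format.

```python
# Illustrative schema for the systems-and-AI inventory register.
import csv
import io
from dataclasses import asdict, dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    data_inputs: str
    outputs: str
    third_party_components: str   # pre-trained models, APIs, training datasets
    deployment_environment: str

records = [AISystemRecord(
    name="radiology-classifier",
    purpose="Analyse medical imaging for radiology departments",
    data_inputs="DICOM studies (health data)",
    outputs="Findings report with confidence scores",
    third_party_components="pre-trained vision backbone; cloud inference API",
    deployment_environment="cloud-hosted SaaS (PL/DE hospitals)",
)]

# Export as CSV so the register can live in an ordinary spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(records[0]).keys()))
writer.writeheader()
for record in records:
    writer.writerow(asdict(record))
print(buf.getvalue())
```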
Step 3: Appoint compliance leads: a cybersecurity lead and an AI-risk owner.
Owner: CEO / General Counsel. Time: 1 week. Cost: Low to Medium.
The KSC Act requires management-body involvement, and the AI Act requires an identifiable person or role responsible for the risk-management system. Appoint a cybersecurity lead (may be the CTO or a dedicated CISO) and an AI-risk owner (may be a senior ML engineer, product lead, or compliance officer). In smaller startups, one person may hold both roles. Deliverable: Board resolution or written appointment with role descriptions.
Step 4: Map personal data flows and check GDPR intersections.
Owner: DPO / Legal. Time: 3–5 days. Cost: Low.
Both regimes interact with GDPR. Map where personal data enters, flows through, and exits each AI system. Identify any processing that requires a Data Protection Impact Assessment (DPIA) under GDPR Article 35, particularly processing of health data, biometric data, or large-scale profiling. This exercise also supports the AI Act’s data-governance requirements for high-risk systems. Deliverable: Updated data-flow map and DPIA register.
Step 5: Implement basic cyber hygiene: quick wins in 30 days.
Owner: CTO / DevOps. Time: Ongoing (first pass: 2 weeks). Cost: Low to Medium.
Before building out full compliance programmes, ensure that foundational cybersecurity measures are in place: automated patching, multi-factor authentication, role-based access control, encrypted communications, and centralised logging. These measures satisfy both the KSC Act’s technical-measures requirement and the AI Act’s expectation of secure development practices. Deliverable: Cyber-hygiene baseline assessment and remediation action log.
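The baseline assessment deliverable can be as simple as a pass/fail checklist over the controls just listed. A toy sketch, with illustrative pass/fail values:

```python
# Cyber-hygiene baseline check. Control names mirror this guide's list;
# the True/False values here are placeholders for a real assessment.
baseline = {
    "automated_patching": True,
    "multi_factor_authentication": True,
    "role_based_access_control": False,
    "encrypted_communications": True,
    "centralised_logging": False,
}

gaps = [control for control, in_place in baseline.items() if not in_place]
print(f"{len(baseline) - len(gaps)}/{len(baseline)} controls in place")
for control in gaps:
    print("remediate:", control)
```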
Step 6: Create or update the incident response plan with NIS2 reporting triggers.
Owner: Security Lead / Legal. Time: 1–2 weeks. Cost: Medium.
Draft an incident response plan that incorporates the KSC Act’s 24-hour early-warning and 72-hour full-notification obligations, the AI Act’s serious-incident reporting duty, and any contractual notification commitments to clients. Include a decision tree to help on-call engineers determine whether an event qualifies as a reportable incident under either regime. Deliverable: Incident response plan document, notification template, and decision tree.
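The core of the decision tree can be sketched as follows. The two trigger conditions are deliberately simplified from this guide’s summary of the regimes; the real plan needs legally reviewed criteria for “significant incident” and “serious incident”.

```python
# Minimal sketch of the dual-regime reporting decision tree. The trigger
# conditions are simplified assumptions, not full legal tests.

def reporting_obligations(significant_incident: bool,
                          ai_causal_link_to_serious_harm: bool) -> list[str]:
    obligations = []
    if significant_incident:
        # KSC Act / NIS2 track
        obligations.append("NIS2/KSC: early warning to CSIRT within 24h")
        obligations.append("NIS2/KSC: full notification within 72h")
        obligations.append("NIS2/KSC: final report within one month")
    if ai_causal_link_to_serious_harm:
        # AI Act track
        obligations.append("AI Act: report to market surveillance authority "
                           "without undue delay")
    return obligations

# Dual-trigger incident: a breach disrupting a high-risk medical-imaging system.
for obligation in reporting_obligations(True, True):
    print(obligation)
```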
Step 7: Build the AI risk-management file and technical documentation.
Owner: ML Engineer + Legal. Time: 2–4 weeks per system. Cost: Medium to High.
For each high-risk AI system, create the technical documentation required by the AI Act: a general description of the system, design specifications, development methodology, data-governance practices, training and testing procedures, performance metrics, and a description of the risk-management measures in place. This file is the core compliance artefact for AI Act conformity assessment. Deliverable: AI technical documentation file (per system).
Step 8: Conduct vendor and supply-chain checks.
Owner: Procurement / Legal. Time: 2–3 weeks. Cost: Medium.
Both the KSC Act and the AI Act impose supply-chain obligations. Review contracts with key suppliers (cloud providers, data brokers, pre-trained model vendors, API providers) for cybersecurity assurances, incident-notification pass-through clauses, and conformity commitments. Where gaps exist, negotiate amendments. Deliverable: Vendor risk-assessment register and updated contract clauses.
Step 9: Undertake a DPIA or AI impact assessment where relevant.
Owner: DPO / Legal. Time: 2–4 weeks per assessment. Cost: Medium.
Where an AI system processes personal data in a manner that triggers GDPR Article 35, conduct a DPIA. Where the AI Act’s risk-management system requires an assessment of foreseeable risks to health, safety, and fundamental rights, integrate this into the same exercise. A combined DPIA-and-AI-impact-assessment process saves time and produces a single, auditable document. Deliverable: Combined impact assessment report.
Step 10: Test the incident-reporting process with a tabletop exercise.
Owner: Security Lead / CTO. Time: Half-day workshop. Cost: Low.
Simulate a dual-trigger incident (for example, a data breach affecting a high-risk AI medical-imaging system) and walk through the incident response plan end to end. Test whether the team can produce the 24-hour early warning, the 72-hour full report, and the AI Act serious-incident notification within the required windows. Deliverable: Tabletop exercise after-action report with identified gaps.
Step 11: Prepare conformity evidence and internal audit trails.
Owner: Compliance / Engineering. Time: Ongoing. Cost: Medium.
For high-risk AI systems, the AI Act requires that providers keep automatically generated logs for a period appropriate to the intended purpose of the system (and in any event no less than six months). Under the KSC Act, cybersecurity-incident logs and risk-assessment records must be maintained and made available to supervisory authorities. Establish centralised, tamper-evident log storage. Deliverable: Log-retention policy and audit-trail architecture document.
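One common tamper-evidence technique is a hash chain, in which each log entry commits to the hash of the previous entry, so any modification breaks every subsequent link. This is an illustrative pattern, not a format mandated by either regime:

```python
# Tamper-evident logging via a SHA-256 hash chain (illustrative sketch).
import hashlib
import json

def append_entry(chain: list, event: str) -> None:
    """Append an event whose hash commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_entry(log, "model inference: study 4812")
append_entry(log, "access granted: radiologist@hospital")
print(verify(log))            # True
log[0]["event"] = "tampered"
print(verify(log))            # False
```

In production this role is usually filled by append-only or write-once log storage from a cloud provider rather than hand-rolled code, but the verification principle is the same.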
Step 12: Communicate compliance readiness to clients and investors.
Owner: CEO / PR / Commercial. Time: 2 weeks. Cost: Low.
Compliance is a commercial differentiator. Prepare a brief compliance-readiness summary for inclusion in investor materials, due-diligence data rooms, and client-facing security documentation. Highlight the company’s NIS2 status, AI Act classification, and governance structure. Deliverable: One-page compliance fact sheet and updated security-documentation portal.
Getting incident reporting right is one of the most operationally urgent aspects of AI and NIS2 compliance in Poland. The two regimes have overlapping but distinct notification obligations, and missing a deadline can trigger enforcement action independently of the underlying incident. The table below maps the reporting requirements side by side.
| Reporting step | NIS2 / KSC Act obligation | EU AI Act obligation |
|---|---|---|
| Early warning | Within 24 hours of becoming aware of a significant incident, notify the designated CSIRT (in Poland, CSIRT NASK, CSIRT GOV, or CSIRT MON depending on sector). | No separate early-warning requirement, but providers must report serious incidents “without undue delay” after establishing a causal link between the AI system and the serious incident. |
| Full incident notification | Within 72 hours, submit a complete incident notification including impact assessment, severity, cross-border effects, and mitigation measures. | Submit the report to the market surveillance authority of the Member State where the incident occurred, including details of the AI system, the nature of the incident, and corrective actions taken. |
| Final report / follow-up | Within one month, submit a final report with root-cause analysis and lessons learned. | Ongoing cooperation with the market surveillance authority as required; update the EU database registration if applicable. |
A startup’s internal incident notification template should include the following minimum fields: incident reference number, date and time of detection, date and time of occurrence (if different), affected services and systems, classification (cybersecurity incident, AI serious incident, or both), estimated impact (users affected, data compromised, harm caused), immediate mitigation steps taken, responsible contact person, and regulatory recipients (CSIRT and/or market surveillance authority). Maintaining a pre-populated template, approved by legal counsel, reduces the risk of missed deadlines and incomplete filings.
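The minimum fields listed above can be captured in a pre-populated structure like the one below. The field names follow this guide’s list; they are not an official CSIRT or market-surveillance form.

```python
# Internal incident-notification template (field names are illustrative).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentNotification:
    reference: str
    detected_at: datetime
    occurred_at: datetime            # may differ from detection time
    affected_systems: list
    classification: str              # "cybersecurity", "AI serious incident", or "both"
    estimated_impact: str            # users affected, data compromised, harm caused
    mitigation_steps: str
    contact_person: str
    regulatory_recipients: list = field(default_factory=list)

draft = IncidentNotification(
    reference="INC-2026-001",
    detected_at=datetime(2026, 5, 13, 9, 30),
    occurred_at=datetime(2026, 5, 13, 8, 50),
    affected_systems=["radiology-classifier"],
    classification="both",
    estimated_impact="2 hospital tenants affected; no confirmed data exfiltration",
    mitigation_steps="isolated inference cluster; rotated credentials",
    contact_person="security lead (on-call rota)",
    regulatory_recipients=["CSIRT NASK", "market surveillance authority"],
)
print(draft.reference, draft.classification)
```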
The AI Act’s data-governance requirements for high-risk systems (covering training data, validation data, and testing data) dovetail with GDPR’s data-minimisation and purpose-limitation principles. Polish startups that have already implemented GDPR-compliant data-management practices hold a significant advantage: the AI Act’s technical documentation requirements build directly on the records of processing activities, DPIAs, and data-flow maps that GDPR demands.
A minimum documentation package for a Polish AI startup subject to both the KSC Act and the AI Act should include:

- the scope-determination memo recording NIS2 and AI Act classifications and the reasoning behind them;
- the systems-and-AI inventory register;
- the incident response plan, notification templates, and decision tree;
- the AI technical documentation file for each high-risk system;
- the combined DPIA and AI impact assessment reports;
- the vendor risk-assessment register and updated contract clauses; and
- the log-retention policy and audit-trail architecture document.
Establishing a governance RACI (Responsible, Accountable, Consulted, Informed) matrix that maps each documentation obligation to a named role (DPO, cybersecurity lead, AI-risk owner, CTO) prevents ownership gaps and ensures that audit readiness is continuous rather than reactive.
This article was produced by Global Law Experts. For specialist advice on this topic, contact Jakub Koziol at The Heart Legal, a member of the Global Law Experts network.
For startups that have completed the scope self-assessment and identified gaps, the next step is to engage targeted external support: counsel experienced in both regimes and, for high-risk AI systems, support with conformity assessment. Polish startups can explore the Global Law Experts lawyer directory to connect with technology-law practitioners experienced in NIS2 and AI Act compliance in Poland.