DORA Fundamentals

DORA RTS Incident Reporting (2026 Guide)

By Matevž Rostaher · Last updated April 27, 2026

You have an incident on your hands, your technical team is still figuring out scope, senior management wants answers, and compliance is already asking the hard question: does this trigger DORA reporting? That moment is where the DORA RTS on incident reporting becomes very real. For many financial entities, the challenge is not understanding that reporting matters. It is understanding exactly how the technical standards shape classification, timing, content, and governance around a reportable incident.

That is why the dora rts incident topic deserves careful attention in 2026. The first wave of DORA implementation was about getting ready for January 17, 2025. The next phase is about proving your process works under pressure. Regulators increasingly expect consistency between your internal incident handling, your reporting logic, your Register of Information, and your broader ICT risk management framework. If you need a stronger foundation first, it helps to start with what is dora. From there, this article will show you what the RTS is trying to achieve, what teams often miss in practice, and how to build a reporting process that holds up when the pressure is on.

DORApp was built to simplify DORA compliance for EU financial institutions through a modular approach, turning complex regulatory requirements into structured, manageable workflows with guaranteed technical report acceptance.

Contents

  • What the RTS actually covers
  • DORA RTS vs ITS: what is the difference for incident reporting?
  • Why the incident RTS matters in practice
  • Which DORA technical standards touch incident reporting (and why teams miss the dependencies)
  • The reporting elements your team needs to get right
  • Classification and timelines under pressure
  • Reportable incidents under DORA: categories, thresholds, and the “8 types” question
  • How institutions usually operationalize the RTS
  • What changed in 2026
  • Frequently Asked Questions
  • Key Takeaways
  • Conclusion
    What the RTS actually covers

    The Regulatory Technical Standards, or RTS, sit underneath the broader digital operational resilience act dora framework. They provide more detailed rules on how certain obligations should work in practice. In the case of incident reporting, the RTS helps turn high-level legal duties into a more standardized reporting process across the EU.

    Think of it this way: the Regulation tells you that financial entities must report major ICT-related incidents. The RTS helps define how that decision is made and what information should be included in the reporting flow. This usually covers the criteria, thresholds, data fields, and procedural logic that support consistent reporting across competent authorities.

    If you need the broader legal context, dora regulation explained is a useful companion read. It helps place the RTS in the wider structure of DORA, which rests on five pillars: ICT risk management, incident reporting, resilience testing, third-party oversight, and information sharing.

    The reality is that many teams assume the RTS is only a reporting template issue. It is not. It also affects governance, escalation, evidence quality, internal roles, and how quickly you can move from technical detection to a defensible regulatory decision.

    DORA RTS vs ITS: what is the difference for incident reporting?

    Here is the thing: many teams use “RTS” as shorthand for “the reporting package.” Under DORA, that usually blurs two different types of standards that show up in day-to-day reporting work: RTS and ITS.

    Regulatory Technical Standards (RTS) typically define the requirements at a rules-and-method level. In incident reporting, this often means the “what” and the “how” behind major incident classification and reporting, such as what criteria matter, how thresholds are applied, what content is expected at different stages, and how the overall logic should operate across the EU.

    Implementing Technical Standards (ITS) are usually more about standardizing the submission mechanics. Think forms, templates, and procedures for how information is structured and exchanged. In an incident context, that often shows up as standardized reporting formats that make it easier for competent authorities to compare incidents across entities, and for institutions to submit consistent datasets instead of narrative-only reports.

    This distinction matters because incident reporting breaks down in two common ways. Some institutions focus on templates and forget the underlying decision logic, so the form looks correct but the classification rationale is hard to defend. Others focus on the rationale but do not align early to the template fields, so they scramble during the first report deadline because the information was collected as free text, not as structured data that maps cleanly into the submission format.

    What to do in practice: track both RTS and ITS updates as part of your change management, align your internal incident data model to the likely template fields early, and avoid treating reporting as a narrative-only exercise. If your incident record is structured from the start, you can typically move faster, update more safely, and reconcile reporting fields back to evidence when supervisors ask follow-up questions.
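
    To make “structured from the start” concrete, here is a minimal sketch of an incident record whose fields project onto template field identifiers. The field names, template IDs, and mapping below are illustrative assumptions, not the official ITS schema; in production you would map your data model to the current official template package.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

# Hypothetical template field IDs -- the real ITS templates define their own
# identifiers, so align this mapping to the current official package.
TEMPLATE_FIELD_MAP = {
    "entity_lei": "field_1_2",
    "detection_time": "field_2_4",
    "affected_services": "field_3_1",
}

@dataclass
class IncidentRecord:
    """Structured incident record kept from first triage onward."""
    entity_lei: str
    detection_time: str                      # ISO 8601 timestamp
    affected_services: list = field(default_factory=list)
    narrative: Optional[str] = None          # supplements, never replaces, the fields

    def to_submission(self) -> dict:
        """Project the structured record onto the template field IDs."""
        data = asdict(self)
        return {TEMPLATE_FIELD_MAP[key]: value
                for key, value in data.items() if key in TEMPLATE_FIELD_MAP}

record = IncidentRecord(
    entity_lei="5299000EXAMPLELEI00000",     # placeholder LEI for illustration
    detection_time="2026-03-14T08:42:00Z",
    affected_services=["instant-payments"],
)
print(record.to_submission())
```

    Because the record is structured, each submitted field can be traced back to a named attribute and its source system, which is exactly the reconciliation supervisors tend to ask about.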


    Why the incident RTS matters in practice

    For compliance officers and ICT leaders, the rts dora incident reporting discussion is really about operational discipline. During a live event, you rarely have complete facts. Systems may still be unstable. Vendor input may be missing. Internal teams may disagree on impact. The RTS matters because it pushes institutions toward a structured method for deciding whether an incident is major, what must be reported, and when updates are required.

    That structure is especially important because incident reporting under DORA does not happen in isolation. It connects to governance, outsourcing, critical services, and in many cases the identity of the legal entity involved. If your entity mapping is messy, your reporting may become messy too. That is one reason why related data points like the lei matter more than some teams initially expect.

    From a regulatory standpoint, consistency is now a bigger issue than first-time readiness. In 2026, supervisors are less interested in whether you have read the rules and more interested in whether your incident decisions align with your logs, your risk records, your third-party register, and your internal approvals.

    For readers following topic-specific updates, the Incident Reporting section and the DORA Fundamentals category are useful places to keep track of related guidance.

    Which DORA technical standards touch incident reporting (and why teams miss the dependencies)

    Whether reporting holds up often comes down to whether your institution treats incident reporting as a single workstream or as the intersection point of several DORA workstreams. In most cases, major incident reporting decisions depend on upstream data that lives in other parts of your DORA operating model.

    A useful way to think about it is as a small ecosystem of technical standards and dependencies that feed incident reporting. The incident reporting RTS tends to sit at the center, but it relies on consistent inputs from:

  • the RTS that define incident classification criteria and the thresholds that support the “major” decision
  • the RTS that shape reporting timelines and what content is expected at each stage of an incident report
  • the ITS that standardize forms, templates, and procedural mechanics for submitting incident reports
  • the RTS that shape the ICT risk management framework, which often defines how incidents are recorded, escalated, and governed internally
  • the RTS and related expectations around ICT third-party risk, including subcontracting and concentration, which influence what you can credibly say about provider involvement and service impact early on
    What many people overlook is the operational dependency angle: incident reporting fields are often only as good as your inventories. If you cannot quickly map an incident to a service, a legal entity, a critical or important function, and the supporting providers, your first report tends to become slower and less consistent. This is also where teams feel the knock-on effects of the Register of Information, because it is often the quickest way to prove “who provides what, to whom, under which contract” during an active incident.

    How to use this map in practice: assign an internal owner per standard, keep a single reporting data dictionary that defines each reporting field and its source of truth, and run cross-pillar consistency checks during simulations. If your incident simulation shows that the same service is named differently across your incident system, your provider register, and your internal service catalog, that is a fix worth making before a real reportable event forces the issue.
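
    A cross-pillar consistency check like this can be sketched in a few lines. The system extracts, service identifiers, and names below are invented for illustration; the point is the shape of the check, not the data.

```python
# Hypothetical extracts from three internal systems (IDs and names are illustrative).
incident_system   = {"SVC-001": "Instant Payments"}
provider_register = {"SVC-001": "Instant Payments", "SVC-002": "Card Processing"}
service_catalog   = {"SVC-001": "Instant Payment Service"}  # name has drifted

def consistency_issues(*sources: dict) -> dict:
    """Flag service IDs whose names differ across the given inventories."""
    issues = {}
    all_ids = set().union(*(s.keys() for s in sources))
    for sid in sorted(all_ids):
        names = {s[sid] for s in sources if sid in s}
        if len(names) > 1:
            issues[sid] = sorted(names)
    return issues

print(consistency_issues(incident_system, provider_register, service_catalog))
# flags SVC-001 because the catalog name has drifted from the other two systems
```

    Running a check like this during simulations, rather than during a live incident, is the cheap way to find naming drift before it costs reporting time.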

    The reporting elements your team needs to get right

    Now, when it comes to dora incident rts obligations, most institutions need to focus on four areas at once: trigger logic, timing, data quality, and approval control. Missing any one of them can create problems.

    1. A clear trigger for major incidents

    Your institution needs a repeatable way to determine whether an ICT-related incident meets the threshold for major incident reporting. This is closely linked to dora incident classification. If your classification framework is vague, your reporting process will usually become inconsistent too.
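
    As an illustration only, a repeatable trigger can be expressed as an explicit function over named criteria. The criteria set and the escalation rule below are placeholder assumptions, not the thresholds from the classification RTS; the value is that the decision becomes testable and auditable rather than ad hoc.

```python
# Illustrative criteria flags -- the real criteria and materiality thresholds
# come from the applicable classification RTS; this only shows the shape of
# a repeatable, reviewable decision.
CRITERIA = {"clients_affected", "duration", "geo_spread",
            "data_losses", "critical_services", "economic_impact"}

def classify(criteria_met: set) -> str:
    """Return 'major' or 'non-major' from the set of criteria currently met."""
    assert criteria_met <= CRITERIA, "unknown criterion"
    # Hypothetical rule for illustration: escalate when critical services are
    # hit together with at least two other materiality criteria.
    if "critical_services" in criteria_met and len(criteria_met) >= 3:
        return "major"
    return "non-major"

print(classify({"critical_services", "duration", "clients_affected"}))  # prints "major"
print(classify({"duration"}))                                           # prints "non-major"
```

    Encoding the rule, whatever your institution's actual rule is, means two analysts presented with the same facts reach the same classification, and the rationale can be replayed later.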

    2. Timelines that work in real life

    Under DORA, reporting follows staged timelines. As currently defined in the standards and implementation practice, institutions need to be prepared for an initial report, a follow-up intermediate report, and a final report. The challenge is that deadlines start running even when facts are incomplete. In practice, this means your internal process has to support estimated values, controlled updates, and clear ownership for revisions.
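
    One way to keep staged deadlines honest is to compute them from the classification timestamp rather than tracking them by hand. The reporting windows below are placeholder configuration, not the deadlines from the applicable standards; confirm the actual windows against the current RTS/ITS package and your competent authority's guidance.

```python
from datetime import datetime, timedelta, timezone

# Placeholder windows -- treat these hours as configuration, not law; the
# actual deadlines are set by the current technical standards.
REPORT_WINDOWS = {
    "initial": timedelta(hours=4),
    "intermediate": timedelta(hours=72),
    "final": timedelta(days=30),
}

def report_deadlines(classified_at: datetime) -> dict:
    """Staged report deadlines relative to the major-classification timestamp."""
    return {stage: classified_at + delta for stage, delta in REPORT_WINDOWS.items()}

classified = datetime(2026, 3, 14, 9, 0, tzinfo=timezone.utc)
for stage, due in report_deadlines(classified).items():
    print(stage, due.isoformat())
```

    Deriving deadlines from one timestamp also makes escalation logic trivial: anything approaching its computed due time can trigger an alert automatically.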

    3. Structured data, not only narrative

    What many people overlook is that competent authorities increasingly expect structured, reconcilable information. A polished written summary is not enough if key fields are missing, inconsistent, or disconnected from your master data. This becomes even more important where reporting data later needs to align with other DORA deliverables.

    4. Evidence of decision-making

    A well-run process should show who classified the incident, what data was available at the time, whether any override was made, and who approved the submission. This is one of the clearest lines between a spreadsheet-based process that barely survives and a controlled workflow that may stand up during supervisory review.
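
    That decision trail can be enforced in the incident record itself. Here is a minimal sketch assuming a simple maker-checker rule; the names and fields are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClassificationDecision:
    """Immutable audit record: who decided, on what facts, who approved."""
    incident_id: str
    classification: str
    facts_snapshot: tuple       # data available at decision time
    decided_by: str             # maker
    approved_by: str            # checker
    overridden: bool = False    # set True if the rule-based outcome was overridden

    def __post_init__(self):
        # Maker-checker control: the approver must be a different person.
        if self.decided_by == self.approved_by:
            raise ValueError("maker-checker requires two different people")

decision = ClassificationDecision(
    incident_id="INC-2026-0042",
    classification="major",
    facts_snapshot=("payments degraded", "vendor input pending"),
    decided_by="analyst.a",
    approved_by="ciso.b",
)
print(decision.classification)  # prints "major"
```

    Making the record frozen is a deliberate design choice: later changes become new records rather than silent edits, which preserves what was known and decided at each point in time.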

    Platforms like DORApp streamline the creation and maintenance of the Register of Information process through a 5-step approach: importing existing data, managing it through an intuitive interface, auto-enriching from public sources, validating against ESA rules, and generating compliant reports with one click.


    Classification and timelines under pressure

    Here is where the dora rts incident topic becomes operational rather than theoretical. Your biggest reporting risk often appears in the first hours after detection. A team may know something serious happened, but not yet know if customer impact, service downtime, transaction volume, cross-border effect, or data integrity consequences are high enough to support a major classification.

    That is why many institutions create an early triage layer before formal classification. The aim is not to delay reporting. It is to gather enough structured information fast enough to make a defensible call. This usually means collecting at least:

  • awareness time and detection time
  • affected services and systems
  • legal entities involved
  • customer or transaction impact
  • third-party provider involvement
  • current containment status
    If you want a dedicated view of the process flow, dora incident reporting covers the broader reporting lifecycle in more depth. For foundational context on how DORA fits together, the post DORA Pillars Explained: Complete Breakdown (2026) is also useful.

    From a practical standpoint, institutions tend to struggle in one of two ways. Some report too slowly because they wait for perfect certainty. Others overclassify too early and then have trouble supporting the rationale. A mature process usually aims for disciplined escalation, documented assumptions, and staged reporting updates rather than false precision at the start.

    Reportable incidents under DORA: categories, thresholds, and the “8 types” question

    A common question behind dora rts incident searches is: “What are the 8 types of reportable incidents?” The honest answer is that there is not always a single universal list you can copy into a policy and call it done. Under DORA, reporting is driven by whether something qualifies as a major ICT-related incident, based on criteria and thresholds defined in the applicable technical standards and how competent authorities implement the reporting approach.

    In practice, reporting templates and supervisory practice often break incidents into standardized categories or impact dimensions so the authority can compare cases across the market. That is usually what people mean by “types.” The categories may evolve over time and can vary depending on the current RTS and ITS package, so your team should align to the latest version and any local authority guidance rather than relying on older checklists.

    Think of it this way: you can triage quickly if you group early symptoms into practical buckets that map to major impact dimensions. Your internal labels do not have to be perfect on day one, but they should help you collect the structured facts you will likely need for classification and reporting. Teams often use buckets such as:

  • availability and service disruption incidents, including outages, severe degradation, or loss of access to critical services
  • integrity incidents, where data or processing outcomes may have been altered or corrupted
  • confidentiality incidents, including potential data exposure or unauthorized access
  • operational or security events linked to ICT third-party providers, including failures that affect your services even if your own systems are stable
  • incidents affecting transaction processing or operational security controls, which may be especially relevant where ICT supports time-sensitive financial operations
  • cross-entity or cross-border incidents, where the same root cause affects multiple legal entities or jurisdictions
    A simple triage approach that typically works under pressure is: map symptoms to the impact dimensions used for major classification, log assumptions early, and be ready to update classification as facts stabilize. If your first report relies on estimates, the key is to control those estimates, document how you arrived at them, and ensure later updates reconcile cleanly to the final confirmed picture.

    How institutions usually operationalize the RTS

    A strong reporting process usually depends less on legal interpretation alone and more on operating design. In practice, this means connecting your incident management process with your data model, approval chain, and reporting output.

    What a workable setup often includes

  • a single incident record of truth
  • predefined incident stages and owners
  • maker-checker approval for reporting decisions
  • links to services, providers, contracts, and entities
  • deadline tracking with escalation logic
  • evidence storage tied directly to the incident
    Consider this: if your incident sits in one system, your provider information sits in another, and your regulatory reporting logic lives in an unmanaged document, your team is effectively rebuilding the case every time something serious happens. That may work once. It usually does not scale.

    With features like automated workflows, non-blocking validation, a streamlined data model that auto-converts to XBRL, and full-text search across all records, DORApp allows compliance teams to start working immediately rather than waiting for perfect data.

    This kind of setup also matters beyond incident reporting. It supports consistency across DORA pillars, which is why some teams also review the historical context in DORA European Commission Timeline and History (2026) to understand how supervisory expectations have tightened over time.


    What changed in 2026

    The shift in 2026 is subtle but important. Early DORA projects focused on readiness, policy drafting, and initial submissions such as the Register of Information by April 30, 2025. Now the focus is moving toward proof of compliance. Supervisors increasingly want to see whether your controls operate in practice, especially where incident reporting intersects with third-party risk, critical services, and management oversight.

    Under DORA, this means your incident reporting process may be reviewed not only for legal compliance but for operational coherence. Can you show a timeline of the event? Can you support the classification logic? Can you reconcile report content to source records? Can you explain changes between the initial and final report?

    There is also a broader regulatory context. The designation of Critical Third-Party Providers in late 2025, evolving subcontracting expectations under Delegated Regulation (EU) 2025/532, and continued cloud outsourcing scrutiny all increase the pressure on institutions to understand third-party involvement quickly during incident analysis.

    If your team is still building the bigger picture, articles like dora regulation explained and digital operational resilience act dora can help connect incident RTS requirements back to the wider DORA operating model.

    Disclaimer: The information in this article is intended for general informational and educational purposes only. It does not constitute professional technical, legal, financial, or regulatory advice. Website performance outcomes, platform capabilities, and business results will vary depending on your specific circumstances, goals, and implementation. Always evaluate tools and platforms based on your own needs and, where relevant, seek professional guidance.

    Regulatory note: This article is for informational purposes only and does not constitute financial, legal, or regulatory advice. DORA compliance requirements may vary based on your institution type, size, and national regulatory framework. Content referencing regulated industries is provided for general context only and should not be interpreted as legal, regulatory, compliance, or financial advice. If you operate in a regulated sector, always consult qualified financial, legal, and compliance professionals for guidance specific to your situation.

    Frequently Asked Questions

    What does dora rts incident reporting actually refer to?

    It refers to the Regulatory Technical Standards that provide detailed rules for how major ICT-related incident reporting should work under DORA. The Regulation sets the high-level obligation, while the RTS helps define the reporting logic, data requirements, and procedural structure. In practice, this affects how your institution identifies reportable incidents, what information it gathers, and how it manages staged reporting. For many teams, the RTS is where legal duty becomes operational process.

    What is RTS under DORA?

    RTS stands for Regulatory Technical Standards. Under DORA, RTS typically provide more detailed requirements that sit under the Regulation, translating high-level obligations into clearer rules and methods that institutions can implement. For incident reporting, RTS often influence how you classify a major ICT-related incident, what information you are expected to provide at different stages, and what governance and decision logic should support those submissions.

    What is RTS on reporting of major incidents?

    It commonly refers to the set of Regulatory Technical Standards that define how major ICT-related incident reporting should work in practice. This often includes the classification logic and criteria, the staged reporting approach, and the content expectations for initial, intermediate, and final reporting. The exact scope you should follow depends on the current technical standards package and how it is implemented by your competent authority, so teams usually track updates to both the RTS and any related ITS templates and procedures.

    What is a DORA incident?

    In day-to-day terms, a “DORA incident” is an ICT-related incident that your institution records and manages under its DORA-aligned operational resilience framework. Not every incident becomes reportable. Reporting typically focuses on incidents that meet the “major ICT-related incident” criteria under the applicable standards. That is why teams separate three ideas in practice: the technical event, the business impact, and the regulatory classification decision, which may change as facts become clearer.

    What are the 8 types of reportable incidents?

    People often ask this because they are looking for a quick way to categorize incidents for triage and reporting. Under DORA, reportability is generally driven by whether an ICT-related incident qualifies as a major incident based on defined criteria and thresholds, rather than by a single fixed “list of 8” that applies in all situations. Some reporting templates and supervisory practices may group incidents into standardized categories or impact dimensions so reports are comparable across institutions. Your safest approach is to follow the current RTS and ITS and any competent authority guidance, and use internal triage buckets that map to those impact dimensions, such as availability disruption, integrity issues, confidentiality concerns, and third-party provider related failures.

    Does every ICT incident need to be reported under DORA?

    No. DORA does not require every incident to be reported to the competent authority. The reporting obligation focuses on major ICT-related incidents, based on the applicable criteria and thresholds. That is why internal triage and classification matter so much. Your team still needs to record and manage non-major incidents, because they may reveal patterns, recurring weaknesses, or concentration issues. But regulatory reporting generally depends on whether the incident meets the major threshold under the current framework.

    How is DORA incident classification connected to the RTS?

    The RTS and classification are tightly linked. You cannot report consistently if you cannot classify consistently. The standards shape the information your team needs to assess severity and regulatory relevance. This may include service disruption, customer impact, transaction effect, data impact, or cross-border consequences. In real operations, classification is rarely just a technical judgment. It usually combines technical facts, business impact, and compliance review. That is why documented decision-making and clear escalation rules are so important.

    What are the typical reporting stages under DORA?

    Institutions generally prepare for three reporting stages: an initial report, an intermediate report, and a final report. The exact operation should follow the current technical standards and competent authority expectations. The initial report captures the early known facts, the intermediate report updates the authority as the situation develops, and the final report closes the loop with confirmed impact, root cause, and remediation information. Your process should support updates because early incident data is often incomplete or provisional.

    Why do institutions struggle with incident reporting timelines?

    The problem is usually not the deadline itself. The problem is data readiness during the first hours of an incident. Teams may still be investigating scope, waiting for vendor input, or trying to map the incident to affected services and entities. If governance, source data, and approvals are fragmented, the reporting clock becomes much harder to manage. Institutions that perform better usually have predefined triage fields, named owners, escalation rules, and a controlled handoff between technical responders and compliance reviewers.

    How does the Register of Information relate to incident reporting?

    The Register of Information supports incident reporting indirectly but importantly. If a reportable incident involves an ICT third-party provider, your team may need to identify the provider, service, contract, entity, and dependency chain quickly. That is much easier when your Register of Information is complete and maintained well. In 2026, regulators are paying more attention to consistency across DORA deliverables, so incident reports that conflict with Register of Information data may attract more scrutiny than before.

    What should compliance teams document during a major incident?

    At a minimum, teams should document awareness time, detection time, affected services, legal entities, impact estimates, third-party involvement, classification rationale, decision owners, approvals, submissions, and later changes to the facts. They should also preserve evidence that supports both operational actions and regulatory decisions. This is not only for audit comfort. It helps explain why a team made a given classification at a specific moment, especially if the incident evolved quickly and the final picture looked different from the initial one.

    What does proof of compliance mean for incident reporting in 2026?

    It means regulators are increasingly interested in whether your process works in practice, not only whether your policy says the right things. They may expect institutions to show evidence of operating controls, consistent data, traceable approvals, and reconciled reporting records. For incident reporting, this could mean demonstrating how a case moved from intake to classification to submission, who approved what, and how later updates were handled. In other words, operational evidence now matters as much as written intent.

    Can a software platform make DORA compliance automatic?

    No responsible team should assume that. Software can support structure, validation, workflow control, and technical output, but it does not replace institution-specific judgment, governance, or legal interpretation. A platform may reduce manual work and improve consistency, especially for XBRL generation, evidence handling, and approval tracking. Still, your institution remains responsible for the underlying decisions and the quality of the information submitted. Tools support compliance processes. They do not remove the need for expert oversight.

    Where should a compliance officer start if the incident reporting process is still immature?

    Start by mapping the current process end to end. Identify where incidents are recorded, who classifies them, what minimum data is required, who approves reports, how deadlines are tracked, and where evidence is stored. Then test the process using realistic scenarios, especially ones involving third parties or incomplete early information. Many weaknesses show up only during a timed simulation. From there, you can decide whether to improve your internal workflow, strengthen your data model, or evaluate a dedicated platform such as DORApp.

    Key Takeaways

  • The dora rts incident framework is not just about form submission; it shapes classification, governance, and evidence handling.
  • Most reporting failures start with weak triage, fragmented data, or unclear ownership in the first hours of an incident.
  • Staged reporting under DORA requires institutions to work with structured data and controlled updates, not perfect certainty from the start.
  • In 2026, proof of compliance means showing that your incident reporting process actually operates consistently under pressure.
  • DORApp is one platform worth exploring if you want more control over DORA workflows, XBRL-ready reporting, and structured incident operations.
    Conclusion

    The DORA RTS on incident reporting matters because it turns a broad legal obligation into a real operational test. When an incident hits, your institution does not have the luxury of building a process from scratch. You need clear trigger logic, disciplined triage, dependable data, and a reporting workflow that can stand up to supervisory scrutiny.

    That is the practical takeaway for 2026. The question is no longer only whether your institution understands DORA. It is whether your reporting process can produce a timely, defensible, and internally consistent result when facts are incomplete and the clock is moving. Teams that treat incident reporting as part of a broader resilience system usually cope better than teams that treat it as a stand-alone compliance task.

    If you are reviewing your incident operating model, explore how DORApp can support your DORA compliance journey with a 14-day free trial. If you prefer a walkthrough first, you can also book a demo and see how DORApp approaches structured reporting, Register of Information management, and ongoing resilience workflows.


    About the Author

    Matevž Rostaher is Co-Founder and Product Owner of DORApp. He brings deep experience in building secure and compliant ICT solutions for the financial sector and is trusted by financial institutions on complex regulatory and operational challenges. He is also CEO and Co-Founder of Skupina Novum d.o.o. and of FJA OdaTeam d.o.o.