What the Data Act Really Means for Your Product in Practical Terms
Moving from “We Need to Be Compliant” to a Workable Plan
If you own a product line, run an embedded or connectivity team, or look after the cloud and data stack for connected devices, the EU Data Act probably didn’t arrive at your desk as a project with a clear roadmap. It felt more like that early knock at the door in a film. Quiet and polite. Still, enough to tell you something was about to change, and you’d be the one to deal with whatever waited on the other side. In this article, we look at what that shift means in practice. Not the headline obligations, but the engineering work beneath them. We’ll outline where teams typically struggle, what needs early attention, and how to approach the change without turning it into a disruptive parallel initiative.
You’re already juggling roadmaps, customer commitments, technical debt, and half-finished platform work. Now there’s a new regulation that cuts across devices, cloud, data, and portals, with most of the pressure sliding your way.

While this is a joint effort between legal, compliance, security, and engineering, most of the real changes will still land inside the product stack. That’s why it helps to look at the Data Act from an engineering point of view. The regulation becomes clearer when seen through the lens of how your devices produce data, how your cloud platform structures it, and how your interfaces expose it.
In a nutshell, it’s all about making the data your products already generate accessible, consistent, and safe to share, without rebuilding the entire system. And here is what that looks like in practice, layer by layer.
Your equipment produces operational data every time it runs, and customers should be able to pull that information in a format they can actually use. When they want a service partner or another software vendor to work with the same data, there needs to be a reliable way to hand it over. That’s the basic expectation.
Things change once you look at how existing products behave. Most connected systems are built on years of accumulated decisions. We still see devices reporting values named after firmware from 2014. Backend services keep legacy API structures, and many portals were never meant to expose operational data. This isn’t unusual, but once a third party relies on that data, the gaps surface.
That’s where regulatory requirements start to have a real impact. Beyond the main articles, several obligations directly affect how data is structured and governed. These demand closer attention on the engineering side:
The EU Data Act assumes that once data leaves your system, another provider’s software can read it without decoding your internal habits. That sounds reasonable, but most products aren’t built that cleanly. Data might technically be “available,” yet still unusable until someone aligns units, maps internal codes to domain conventions, writes proper metadata, and resolves conflicts across versions.
Many industries also expect you to produce data in established formats (BACnet, Haystack, OPC UA) rather than invent your own. Moving toward those standards usually exposes mismatches you didn’t know existed. A sensor that reports humidity as an integer needs to be converted to a float. A flag that only makes sense if you remember the boot sequence from five releases ago becomes a problem.
Interoperability ends up being less about “publishing a schema” and more about closing the gap between your system and real-world expectations. That usually forces cleanup, and it’s better to do that before any legal deadline arrives.
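To make that cleanup concrete, here is a minimal sketch of a normalization layer between legacy device fields and a clean export schema. All field names, units, and conversion rules below are invented for illustration; a real mapping would come from your firmware and the target standard:

```python
# Minimal sketch of an export normalization layer: translates legacy
# internal field names and units into documented export fields.
# Every name and rule here is an illustrative assumption.

LEGACY_FIELD_MAP = {
    # internal name -> (export name, converter)
    "hum_raw_int": ("humidity_percent", lambda v: float(v)),        # int -> float
    "temp_dC":     ("temperature_c",    lambda v: v / 10.0),        # deci-degrees -> degrees
    "st_flag_2":   ("door_open",        lambda v: bool(v & 0x01)),  # bit flag -> boolean
}

def normalize_record(raw: dict) -> dict:
    """Translate one legacy device record into the export schema,
    dropping fields that have no documented mapping."""
    out = {}
    for key, value in raw.items():
        if key in LEGACY_FIELD_MAP:
            export_name, convert = LEGACY_FIELD_MAP[key]
            out[export_name] = convert(value)
    return out

# Undocumented internals like "dbg_x" simply never reach the export.
normalize_record({"hum_raw_int": 47, "temp_dC": 215, "st_flag_2": 3, "dbg_x": 9})
```

The useful property is that the mapping table itself becomes the documentation: every export field has an explicit origin and conversion, which is exactly what a third-party consumer needs.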
The Act targets raw and pre-processed operational data, while allowing manufacturers to withhold algorithms and core logic. In practice, systems don’t separate these neatly. Raw data and computed values come mixed, and some calculations can reveal more than intended.
Once teams start pulling these pieces apart, they usually realize the export surface isn’t clean enough. A new boundary is needed, or the data model has to shift. Simply marking a few fields as “protected” rarely does the job.
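One way to draw that boundary is to classify every export field explicitly rather than filtering ad hoc. The sketch below is illustrative; the field names and the raw/derived/protected split are hypothetical product decisions that no library can make for you:

```python
from enum import Enum

class FieldClass(Enum):
    RAW = "raw"              # device measurements, in scope for sharing
    PRE_PROCESSED = "pre"    # cleaned or aggregated values, in scope
    PROTECTED = "protected"  # derived values that would reveal internal logic

# Hypothetical classification of an export schema. The classification
# itself is the engineering work the Act forces into the open.
SCHEMA = {
    "vibration_rms": FieldClass.RAW,
    "runtime_hours": FieldClass.PRE_PROCESSED,
    "failure_risk_score": FieldClass.PROTECTED,  # output of a proprietary model
}

def export_view(record: dict) -> dict:
    """Return only the fields classified as shareable; unclassified
    fields are excluded by default, which is the safer failure mode."""
    shareable = {FieldClass.RAW, FieldClass.PRE_PROCESSED}
    return {k: v for k, v in record.items() if SCHEMA.get(k) in shareable}

export_view({"vibration_rms": 0.8, "runtime_hours": 1203, "failure_risk_score": 0.31})
```

Defaulting unclassified fields to excluded means a newly added value cannot leak before someone has decided which side of the boundary it belongs on.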
The customer decides which third parties can access their data, and that can change with very little notice.
Most connected products weren’t built for this kind of constant delegation churn. Many older platforms still rely on broad, long-lived credentials because the original architecture had no practical way to issue per-partner, scoped access. When access needs to be revoked, things break. Removing a partner’s key might disable an unrelated workflow because multiple tenants ended up sharing the same token.
A more stable approach relies on short-lived keys, narrow scopes, clear deactivation paths, and an audit trail that settles questions about who had access at any given moment.
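As an illustration of those four properties, here is a minimal in-memory sketch. A real deployment would sit behind an OAuth2/OIDC provider with durable storage; the class and method names are assumptions made for the example:

```python
import secrets
import time

class PartnerTokenStore:
    """Sketch of per-partner, scoped, short-lived access grants with
    revocation and an audit trail. In-memory and illustrative only."""

    def __init__(self, ttl_seconds: int = 3600):
        self.ttl = ttl_seconds
        self.grants = {}  # token -> (partner_id, scopes, expires_at)
        self.audit = []   # (timestamp, event, partner_id)

    def issue(self, partner_id: str, scopes: set) -> str:
        token = secrets.token_urlsafe(32)
        self.grants[token] = (partner_id, frozenset(scopes), time.time() + self.ttl)
        self.audit.append((time.time(), "issue", partner_id))
        return token

    def check(self, token: str, scope: str) -> bool:
        grant = self.grants.get(token)
        if grant is None:
            return False
        _partner_id, scopes, expires_at = grant
        return time.time() < expires_at and scope in scopes

    def revoke_partner(self, partner_id: str) -> None:
        """Revoke one partner's grants without touching anyone else's."""
        self.grants = {t: g for t, g in self.grants.items() if g[0] != partner_id}
        self.audit.append((time.time(), "revoke", partner_id))
```

Because every grant carries its own partner identity, revoking one partner cannot disable an unrelated workflow, which is precisely the failure mode shared long-lived tokens create.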
The Act gives customers the right to see who accessed their data, under what permission, and when. That’s easy to imagine in a greenfield build, but much harder in a product that has lived through several architectural eras.
That’s why teams need predictable rules for logging, retention, and tracking data origins. Without them, audit support turns into guesswork. Two customers may ask why their exported data differs across periods, and answering cleanly requires more than intuition.
Governance doesn’t fix technical debt, but it makes the debt visible. It gives teams a way to answer questions cleanly instead of relying on institutional memory. And once external parties rely on your data, that difference becomes critical.
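A sketch of what such a rule can look like in practice: every export writes one immutable, serializable event that answers who, under which permission, and when. The schema below is an assumption for illustration, not a prescribed format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AccessEvent:
    """One immutable entry in an append-only access log.
    Field names are illustrative assumptions."""
    timestamp: float
    partner_id: str
    permission: str   # the grant the access was made under
    dataset: str
    record_count: int

def append_event(log: list, event: AccessEvent) -> str:
    """Append the event and return the serialized line as it would be
    written to append-only storage (e.g. a write-once log bucket)."""
    line = json.dumps(asdict(event), sort_keys=True)
    log.append(line)
    return line

log = []
append_event(log, AccessEvent(1700000000.0, "partner-a",
                              "telemetry:read", "pump-7/vibration", 1440))
```

With entries like this, "who had access at any given moment" becomes a query over the log instead of a reconstruction from institutional memory.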
Once external partners use your interfaces, your security depends on theirs. You often don’t know how they store credentials, who has access to integration keys, or how quickly they react to a breach.
The Act assumes third parties meet certain expectations, and your legal team will put that in contracts, but none of that helps in the first 15 minutes after their environment is breached.
If a partner is breached, your API is at risk unless boundaries are strong. Protect it with tight token scopes, per-partner limits, and isolation at the edge. In theory, every partner should match your security standards. In real life, they won’t, so it’s safer to assume the weakest link sets the risk.
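Per-partner isolation at the edge can start as something as simple as a token bucket keyed by partner identity, so one compromised or misbehaving integration cannot exhaust the shared surface. A minimal sketch, with illustrative numbers:

```python
import time
from collections import defaultdict

class PerPartnerLimiter:
    """Token-bucket rate limiter keyed by partner, so a breach or bug
    in one integration is contained at the boundary. Illustrative only."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = defaultdict(lambda: float(burst))  # start with a full bucket
        self.updated = {}

    def allow(self, partner_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        last = self.updated.get(partner_id, now)
        # Refill this partner's bucket for the elapsed time, capped at burst.
        self.tokens[partner_id] = min(
            self.burst, self.tokens[partner_id] + (now - last) * self.rate
        )
        self.updated[partner_id] = now
        if self.tokens[partner_id] >= 1.0:
            self.tokens[partner_id] -= 1.0
            return True
        return False
```

Each partner draws from its own bucket, so partner A hammering the API during an incident never starves partner B.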
The EU Data Act expects customers to move their data to another provider without being locked into your internal structures.
On paper, that looks simple, but in real systems, the data tends to be scattered across different systems and layers. So, meeting portability requirements means gathering it from multiple sources and reshaping it without disrupting live operations.
The cost is another factor. Large exports put strain on storage and bandwidth. Until January 2027, you can pass direct costs plus a margin of up to 20% on to the customer. Legal and commercial teams will handle the pricing, but engineering needs to quantify the underlying work. Otherwise, pricing becomes guesswork.
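Quantifying that work can start as a back-of-envelope model: direct costs of the export plus a margin capped at the 20% the Act allows before January 2027. All rates below are placeholder inputs your own teams would supply:

```python
def export_price_eur(gb_transferred: float,
                     cost_per_gb_eur: float,
                     staff_hours: float,
                     hourly_rate_eur: float,
                     margin: float = 0.20) -> float:
    """Back-of-envelope export pricing: direct costs (bandwidth/storage
    plus engineering time) with a margin capped at 20%, the ceiling
    allowed until January 2027. All inputs are illustrative."""
    if not 0 <= margin <= 0.20:
        raise ValueError("margin above 20% is not allowed before 2027")
    direct = gb_transferred * cost_per_gb_eur + staff_hours * hourly_rate_eur
    return round(direct * (1 + margin), 2)

# Example: a 500 GB export that also takes 6 engineer-hours to assemble.
export_price_eur(500, 0.08, 6, 90)
```

The point is not the arithmetic but the inputs: until engineering can put numbers on transfer volume and assembly effort, commercial teams have nothing defensible to price against.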
When the compliance note arrives, most teams pause for the same reason: it’s not obvious where to begin. You read the regulation, break it into practical terms, and for a moment, it all makes sense. Then the real question hits: how on earth do you fit this into a roadmap that’s already full? Usually, it takes a bit of common sense, a bit of patience, and a bit of magic, plus the same structured approach you have always relied on when working with your roadmap.
Look at the device layer, the connectivity stack, cloud services, the data model, APIs, and customer-facing interfaces. Most of this comes down to basics:
You do not need a long audit for this. A short review from the engineers closest to each part of the system usually gives enough signal. The goal is clarity, not polish.
Once the current state is clear, look at what’s absent. Do not create a single list. Split it. The two lists will matter when you try to fold this into real delivery.
| List 1: quick wins | List 2: structural changes |
| --- | --- |
| Small, high-impact items that are simple to fix and risky to ignore. | Heavier pieces that require a bigger structural change. |
| These issues usually sit close to the surface: incorrect units, blended values, missing metadata, outdated tokens, or incomplete logs. | These issues require deeper work in the stack: cleaning up a schema that drifted across versions, consolidating scattered data, redesigning exports, isolating partners, and untangling cross-service flows. |
| None of these takes much time to correct, but any one of them can create friction the moment another company starts relying on your data. | The changes need to be planned, not rushed. |
By splitting the gaps this way, you know what can move immediately and what needs a slot on the roadmap. It turns a vague list of issues into something you can actually sequence.
Once the deeper changes are on the table, place them next to the work your team is already committed to. Most teams don’t have spare capacity waiting for a new stream. People are juggling feature delivery, platform fixes, support pressure, and upgrades that have been pushed forward from past quarters. Anything that lives outside that flow tends to slip.
That’s why the structural items need to travel with work that’s already in motion:
If a firmware release is planned, it’s usually the safest place to straighten out naming or units. The testing and validation loops are already set up, and you won’t need to coordinate a separate rollout.
If an API update is coming, it’s a good moment to tighten scopes, remove the wide tokens, and clean up the boundary before new consumers lock in assumptions. If the data model is being touched, that’s the window to address schema drift or consolidate stores. If observability work is happening, it’s the right time to add the logs that will matter later, instead of trying to wedge them in mid-quarter.
This approach also reduces context switching. When people are already working in a part of the system, it’s much easier to fold in the structural changes there. Once you see how the pieces line up, the work stops looking like a second backlog competing for attention. It becomes part of the roadmap you already have.
Even after you align the major structural items with work that is already in motion, a few tasks usually remain on the side. They end up there for different reasons.
Some are deeper technical issues that were never addressed because they crossed old boundaries or did not have a clear owner. Others come from new regulatory expectations. Things that worked when data stayed internal no longer work once it’s shared, and some requirements fall outside the original product strategy.
The reasons differ, but the situation is the same. These items will not move unless you give them space.
The practical path is to break them into small, contained pieces and place them into the roadmap where they will not disrupt your main delivery streams. They do not need a separate project. They need a slot, an owner, and room to be completed without competing for attention.
Taking care of this work raises the baseline quality of the product in ways that already matter to customers, support teams, and internal analytics. The regulation does not create that need. It simply makes it visible.
There’s also the situation no one likes to admit out loud.
You can have a plan and structure, but still lack the people to execute it. Some teams are already at capacity, and others lack the security or data expertise they never needed before.
All this boils down to one question: how can the team absorb the work that needs to be done without dropping commitments or burning weekends? The most typical ways to solve the challenge are:
| 1. Split the work into parts and assign clear owners | 2. Form a small cross-functional group with shared responsibility | 3. Add external support when the team lacks capacity |
| --- | --- | --- |
| Decide who handles which slice and let them run. But this has limits: every owner still needs availability, the right depth of knowledge, and the authority to make decisions without things bouncing between teams. | Not a side project, but coordinated work across teams to keep decisions moving. It may improve alignment, but it doesn’t free up time. Busy teams still need space to focus, and success depends on someone protecting that time. | This option requires extra investment, which can be difficult, but it gives the most breathing room. It works best when the external team knows the domain and can operate independently, giving your team what it lacks most: time and focus. |
None of these options is perfect. Each solves one constraint and exposes another.
But the common thread is simple: the work only moves when someone has the space and mandate to go deep enough. Everything else is wishful thinking disguised as planning.
If you need support, we can plug into the parts of the work your team doesn’t have the time or depth to cover. We step in where additional capacity or expertise is needed. Get in touch to discuss your next steps under the EU Data Act.
Sigma Software Group provides IT services to enterprises, software product houses, and startups. Working since 2002, we have built deep domain knowledge in AdTech, automotive, aviation, gaming, telecom, e-learning, FinTech, and PropTech. We constantly work to enrich our expertise with machine learning, cybersecurity, AR/VR, IoT, and other technologies.