
Blog post

AI and Data Engineering: Enhancing Analytics & Data Governance

The Growing Importance of Data Engineers in Today’s AI-Driven Automation Landscape

From automated data products to AI-generated code, intelligent automation is reshaping the data engineering landscape. But rather than making data engineers obsolete, these innovations are highlighting their strategic role in ensuring data quality, scalability, and governance in modern data ecosystems.

Data engineering is changing: towards auto-engineering

Modern businesses treat data as a strategic asset - fueling innovation, operational efficiency, and competitive advantage. This value is unlocked through robust data products that collect, transform, and deliver high-quality data across the organisation. Traditionally, designing and maintaining these pipelines was the responsibility of the data engineer.

Today, the rise of tools like dbt, Infrastructure-as-Code, and generative AI is transforming this landscape. Data engineering is evolving into a new era of auto-engineering, where automation streamlines key tasks, but the strategic oversight of skilled data engineers remains essential.

The Evolution of the Role of the Data Engineer

The role of the data engineer isn’t fading—it’s evolving. No longer just a behind-the-scenes technician, today’s data engineer is stepping into a more strategic and visible position, serving as:

  • Architect of automated data systems

  • Supervisor of AI-generated data products

  • Guardian of data quality, compliance, and performance

  • Interpreter of business needs, translating them into technical requirements

Automating does not mean blindly delegating

Automation brings clear advantages—faster development, increased productivity, and lower total cost of ownership (TCO). But it also introduces critical challenges, including:

  • Ensuring the quality and reliability of generated code

  • Aligning data workflows with evolving business requirements

  • Meeting non-functional constraints such as performance, security, and regulatory compliance

Even in automated environments, data products must be validated, documented, and continuously monitored. This is where the data engineer remains indispensable - providing the human oversight needed to ensure trust, consistency, and control.

Technologies that are redefining data engineering

Business Manifests

Business manifests are formal descriptions of functional requirements. Because they can be read by automation tools, they allow pipelines to be generated directly from business objectives. Interpreting them technically, however, still calls for the expertise of data engineers.
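To make the idea concrete, the sketch below shows one possible shape for such a manifest, expressed in Python. The structure and every field name are illustrative assumptions, not a Keyrus specification; in practice a manifest might equally be authored as YAML by the business and parsed into a similar object.

```python
from dataclasses import dataclass


# Hypothetical manifest structure; all field names are illustrative
# assumptions, not a Keyrus specification.
@dataclass
class Manifest:
    product_name: str            # business-facing name of the data product
    owner: str                   # accountable business domain
    source_tables: list[str]     # upstream datasets the product reads from
    indicators: dict[str, str]   # indicator name -> business definition
    refresh: str                 # e.g. "daily" or "on-demand"
    retention_days: int          # how long the product may be kept


# A manifest expressed in code; in practice it could be authored as
# YAML and parsed into the same structure.
sales_margin = Manifest(
    product_name="sales_margin_by_region",
    owner="finance",
    source_tables=["raw.sales_orders", "raw.cost_of_goods"],
    indicators={"gross_margin": "revenue minus cost of goods sold"},
    refresh="daily",
    retention_days=90,
)
```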

Modern Frameworks (dbt, Python, etc.)

These tools enable modularisation, increased collaboration, and easier versioning. The data engineer orchestrates their use and ensures consistency.
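As a minimal sketch of that orchestration role, the script below drives dbt from Python: the selected model is built and its tests are run before anything downstream relies on it. It assumes a dbt project is already configured in the working directory and the dbt CLI is installed; the model name is hypothetical.

```python
import subprocess


def build_and_test_model(model_name: str) -> None:
    """Materialise a single dbt model, then run the tests attached to it.

    Minimal sketch: assumes a configured dbt project in the current
    directory and the dbt CLI available on the PATH.
    """
    # Build only the selected model.
    subprocess.run(["dbt", "run", "--select", model_name], check=True)
    # Fail fast if its schema or data tests do not pass.
    subprocess.run(["dbt", "test", "--select", model_name], check=True)


if __name__ == "__main__":
    build_and_test_model("sales_margin_by_region")
```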

Self-Service Approach and Governance

Empowering business users through simplified, self-service interfaces is promising. But it requires safeguards to avoid the following pitfalls (a sketch of one such safeguard follows the list):

  1. Duplication of indicators

  2. Data silos

  3. Non-compliance with security or regulatory standards
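The sketch below shows one such safeguard under stated assumptions: a check that flags indicators defined by more than one data product, reading the "indicators" mapping from the hypothetical manifest structure above. A real governance layer would also compare definitions, owners, and security classifications.

```python
from collections import Counter


def find_duplicate_indicators(manifests: list[dict]) -> list[str]:
    """Return indicator names that are defined by more than one product."""
    counts = Counter(
        name
        for manifest in manifests
        for name in manifest.get("indicators", {})
    )
    return [name for name, count in counts.items() if count > 1]


# Example: two self-service products both claim to define "gross_margin".
catalogue = [
    {"product_name": "sales_margin_by_region",
     "indicators": {"gross_margin": "revenue minus cost of goods sold"}},
    {"product_name": "finance_dashboard",
     "indicators": {"gross_margin": "margin after discounts"}},
]
print(find_duplicate_indicators(catalogue))  # -> ['gross_margin']
```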

Ephemeral and Governed Pipelines

Pipelines are no longer frozen in time. They are deployed on demand, used, and then deleted. This approach requires intelligent orchestration that only data engineers can ensure.
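Here is a minimal sketch of the "deploy, use, delete" pattern, using SQLite from the Python standard library purely for illustration: the table exists only for the duration of the block, and the orchestration code guarantees its removal. A production version would target the warehouse and register each deployment with the governance layer.

```python
import sqlite3
from contextlib import contextmanager


@contextmanager
def ephemeral_table(conn: sqlite3.Connection, name: str, columns: str):
    """Create a table on demand and guarantee it is dropped afterwards."""
    conn.execute(f"CREATE TABLE {name} ({columns})")
    try:
        yield name
    finally:
        conn.execute(f"DROP TABLE IF EXISTS {name}")


conn = sqlite3.connect(":memory:")
with ephemeral_table(conn, "tmp_sales_margin", "region TEXT, margin REAL") as table:
    conn.execute(f"INSERT INTO {table} VALUES ('EMEA', 0.42)")
    print(conn.execute(f"SELECT * FROM {table}").fetchall())
# Once the block exits, the table no longer exists.
```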

Building a Data Products Factory: A Systems Approach

The Dead End of Manual Data Products

Manually creating data products for each business need creates technical debt, redundancy, and a lack of governance.

The Data Products Factory: Industrialising Data

Keyrus offers a systemic approach, with three key steps:

  1. Business Ideation Workshops

  2. Creation of Structured Manifests

  3. Automated Product Orchestration

These data products become natively documented, observable, and traceable, without manual additions.
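Under stated assumptions, the sketch below shows what the second and third steps could look like in code: a structured manifest is turned into a model, its tests, and its documentation in a single pass, so the product leaves the factory already documented and traceable. Every helper here is a simplified stand-in for an automation step, not a Keyrus API.

```python
def generate_sql(manifest: dict) -> str:
    # Stand-in for templated or AI-assisted SQL generation.
    columns = ", ".join(manifest["indicators"])
    return (
        f"-- model for {manifest['product_name']}\n"
        f"SELECT {columns} FROM {manifest['source_tables'][0]}"
    )


def generate_tests(manifest: dict) -> list[str]:
    # One trivial not-null test per declared indicator.
    return [f"assert_not_null({name})" for name in manifest["indicators"]]


def generate_docs(manifest: dict) -> str:
    lines = [f"{manifest['product_name']} (owner: {manifest['owner']})"]
    lines += [f"- {name}: {definition}"
              for name, definition in manifest["indicators"].items()]
    return "\n".join(lines)


def build_data_product(manifest: dict) -> dict:
    """Assemble the artefacts of a data product from its manifest."""
    return {
        "model": generate_sql(manifest),
        "tests": generate_tests(manifest),
        "documentation": generate_docs(manifest),
    }


product = build_data_product({
    "product_name": "sales_margin_by_region",
    "owner": "finance",
    "source_tables": ["raw.sales_orders"],
    "indicators": {"gross_margin": "revenue minus cost of goods sold"},
})
print(product["tests"])  # -> ['assert_not_null(gross_margin)']
```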

Orchestrated automation, not blind automation

It's not about delegating everything to a generic AI, but rather combining:

  1. dbt for transformation

  2. Python for interpretation

  3. Text2SQL for certain targeted automations

  4. APIs for governance

All under the supervision of the data engineer.
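One way to read "orchestrated, not blind" in code, as a minimal sketch: generated artefacts are only promoted once a named data engineer has signed them off, and each approval is kept as an audit trail. The deployment step is a placeholder assumption.

```python
import datetime

AUDIT_LOG: list[dict] = []


def deploy_with_signoff(artefacts: dict, approved_by: str | None) -> None:
    """Promote generated artefacts only after an explicit engineer sign-off."""
    if not approved_by:
        raise PermissionError("Generated pipeline requires data engineer review")
    AUDIT_LOG.append({
        "artefacts": sorted(artefacts),
        "approved_by": approved_by,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # Placeholder for the real deployment call (warehouse, scheduler, catalog).
    print(f"Deploying {len(artefacts)} artefacts, approved by {approved_by}")


deploy_with_signoff({"model": "...", "tests": "...", "documentation": "..."},
                    approved_by="data.engineer@example.com")
```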

Measurable benefits for the business

Acceleration and agility

  1. Drastic reduction in production time

  2. Freeing data teams from repetitive tasks

  3. Better responsiveness to business needs

Reduced TCO

  1. Ephemeral use of resources (temporary data products)

  2. Optimisation of licenses, storage, and computing

  3. Less maintenance effort

Native governance

  1. Automatically generated data lineage, monitoring, and documentation

  2. Enhanced consistency and continuous traceability
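As an illustration of how lineage can come "for free" rather than being documented by hand, here is a minimal sketch assuming the hypothetical manifest structure used earlier: because the manifest already names its sources, lineage edges can be emitted as a by-product of generation.

```python
def lineage_from_manifest(manifest: dict) -> list[tuple[str, str]]:
    """Derive source -> product lineage edges directly from a manifest."""
    product = manifest["product_name"]
    return [(source, product) for source in manifest["source_tables"]]


edges = lineage_from_manifest({
    "product_name": "sales_margin_by_region",
    "source_tables": ["raw.sales_orders", "raw.cost_of_goods"],
})
print(edges)
# [('raw.sales_orders', 'sales_margin_by_region'),
#  ('raw.cost_of_goods', 'sales_margin_by_region')]
```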

Optimised performance

  1. Automatic aggregates via tools like Indexima

  2. Better user experience and analysis fluidity

What AI does not replace: the essential missions of the data engineer

Even in an automated environment, certain responsibilities remain human:

  1. Business interpretation: clarifying and completing manifests

  2. Validation of generated pipelines: ensuring their robustness (a sketch of such a check follows below)

  3. Data modeling: creating reliable schemas

  4. Continuous monitoring: adjusting and correcting flows

The data engineer becomes the guarantor of controlled automation.
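As flagged in the list above, here is a minimal sketch of what such a validation gate might check before an AI-generated pipeline is deployed. The rules and the SQL string are illustrative assumptions; a real review would parse the SQL properly and exercise it against test data.

```python
def validate_generated_sql(sql: str, required_columns: list[str]) -> list[str]:
    """Run basic robustness checks on generated SQL before deployment."""
    issues = []
    lowered = sql.lower()
    if "select *" in lowered:
        issues.append("uses SELECT *, which silently breaks when sources change")
    for column in required_columns:
        if column.lower() not in lowered:
            issues.append(f"expected column '{column}' is missing")
    return issues


generated = "SELECT region, gross_margin FROM sales_margin_by_region"
print(validate_generated_sql(generated, ["region", "gross_margin"]))  # -> []
```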

A product-oriented and business-use approach

Keyrus proposes a methodological breakthrough:

  • No longer starting from the raw data,

  • But starting from the expressed business need, from which the technical components are then generated automatically.

This reverse logic refocuses data engineering on its purpose: creating business value, not just producing code.

Intelligent orchestration, driven by specialised AI

Keyrus has designed a metamodel of AI co-pilots, capable of:

  • Generating code

  • Producing documentation

  • Creating tests

  • Monitoring the entire stack

All driven by the business manifest, which has become the single source of truth.

A field-proven approach

With more than 40 generative AI projects carried out, Keyrus demonstrates that this approach is:

  • Replicable

  • Scalable

  • Industrialisation-ready

It meets the expectations of decision-makers in 2025: rationalising costs, gaining agility, and guaranteeing data quality.

Conclusion

Automating data engineering marks a significant leap forward in technology, but it doesn't eliminate the need for human expertise. Instead, it redefines the role of the data engineer. Today’s data engineers are not just builders; they are strategic thinkers, system architects, and guardians of data integrity. Without their guidance, automated data products risk becoming unreliable. With them, data infrastructure transforms into a powerful engine for business acceleration, innovation, and performance.

Ready to future-proof your data strategy? Discover how Keyrus can help you harness intelligent automation while keeping data quality and governance at the core. Contact us today to speak with our experts.
