We are looking for your talent!
**Senior Data Engineer**
**The ideal candidate has at least 5 years of hands-on experience designing, building, and maintaining data management and storage systems; is skilled at collecting, processing, cleaning, and deploying large datasets, understanding ER data models, and integrating multiple data sources; and is effective at analyzing, communicating, and proposing different ways to build data warehouses, data lakes, end-to-end pipelines, and Big Data solutions for clients, whether in batch or streaming strategies.**
It will be very important that you have the following knowledge/experience:
- **English B2+ or higher** (you will work on projects conducted 100% in English, so spoken and written fluency is essential)
**Technical Proficiencies**:
- **SQL**:
Data Definition Language, Data Manipulation Language, Intermediate/advanced queries for analytical purposes, Subqueries, CTEs, Data types, Joins with business rules applied, Grouping and Aggregates for business metrics, Indexing and optimizing queries for efficient ETL processes, Stored Procedures for transforming and preparing data, SSMS, DBeaver
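As a rough illustration of the expected SQL level, here is a minimal sketch (in Python with sqlalchemy, using hypothetical `orders`/`customers` tables and a placeholder connection string) combining a CTE, a join with a business rule, and aggregates for a business metric:

```python
# Minimal sketch, not a production setup: connection string, tables, and
# columns are hypothetical.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server"
)

# CTE + join + aggregation: monthly revenue per customer segment.
query = text("""
    WITH monthly AS (
        SELECT customer_id,
               DATEFROMPARTS(YEAR(order_date), MONTH(order_date), 1) AS order_month,
               SUM(amount) AS revenue
        FROM orders
        WHERE status = 'completed'  -- business rule applied before aggregation
        GROUP BY customer_id, DATEFROMPARTS(YEAR(order_date), MONTH(order_date), 1)
    )
    SELECT c.segment, m.order_month, SUM(m.revenue) AS segment_revenue
    FROM monthly m
    JOIN customers c ON c.customer_id = m.customer_id
    GROUP BY c.segment, m.order_month
""")

with engine.connect() as conn:
    for row in conn.execute(query):
        print(row.segment, row.order_month, row.segment_revenue)
```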
- **Python**:
Experience in object-oriented programming, Managing and processing datasets, Use of variables, lists, dictionaries and tuples, Conditional and iterating functions, Optimization of memory consumption, Structures and data types, Data ingestion from various structured and semi-structured data sources, Knowledge of libraries such as pandas, numpy, sqlalchemy, Good practices when writing code
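A minimal sketch of the kind of pandas work described above, assuming hypothetical `sales.csv` and `events.jsonl` source files:

```python
# Illustrative only: file names, columns, and join key are assumptions.
import pandas as pd

# Ingest a structured (CSV) and a semi-structured (JSON Lines) source.
# Explicit dtypes reduce memory consumption on large datasets.
sales = pd.read_csv("sales.csv", dtype={"store_id": "int32", "amount": "float32"})
events = pd.read_json("events.jsonl", lines=True)

# Clean: drop duplicates, normalize a timestamp column, fill missing values.
sales = sales.drop_duplicates()
sales["sold_at"] = pd.to_datetime(sales["sold_at"], errors="coerce")
sales["amount"] = sales["amount"].fillna(0.0)

# Join the two sources and compute a simple business metric.
merged = sales.merge(events, on="store_id", how="left")
print(merged.groupby("store_id")["amount"].sum())
```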
- **Databricks / Pyspark**:
Intermediate knowledge in
Understanding of narrow and wide transformations, actions, and lazy evaluations
How DataFrames are transformed, executed, and optimized in Spark
Use DataFrame API to explore, preprocess, join, and ingest data in Spark
Use Delta Lake to improve the quality and performance of data pipelines
Use SQL and Python to write production data pipelines to extract, transform, and load data into tables and views in the Lakehouse
Understand the most common performance problems associated with data ingestion and how to mitigate them
Monitor Spark UI: Jobs, Stages, Tasks, Storage, Environment, Executors, and Execution Plans
Configure a Spark cluster for maximum performance given specific job requirements
Configure Databricks to access Blob, ADL, SAS, user tokens, Secret Scopes and Azure Key Vault
Configure governance solutions through Unity Catalog and Delta Sharing
Use Delta Live Tables to manage an end-to-end pipeline with unit and integration tests
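For illustration, a minimal PySpark sketch of lazy narrow/wide transformations and a Delta Lake write, assuming hypothetical mount paths and columns:

```python
# Sketch only: paths and column names are assumptions; requires a Spark
# session with Delta Lake available (e.g., a Databricks cluster).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")           # ingestion
cleaned = (
    raw.filter(F.col("amount") > 0)                 # narrow transformation
       .withColumn("order_date", F.to_date("order_ts"))
)
daily = cleaned.groupBy("order_date").agg(          # wide transformation (shuffle)
    F.sum("amount").alias("daily_revenue")
)

# Nothing has executed yet: transformations are lazy. The write below is an
# action that triggers the plan; Delta adds ACID guarantees to the output.
daily.write.format("delta").mode("overwrite").save("/mnt/silver/daily_revenue")
```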
- **Azure**:
Intermediate/Advanced knowledge in
**Azure Storage Account**:
Provision Azure Blob Storage or Azure Data Lake instances
Build efficient file systems for storing data into folders with static or parametrized names, considering possible security rules and risks
Experience identifying use cases for open-source file formats such as Parquet, Avro, and ORC
Understanding optimized column-oriented file formats (e.g., Parquet, ORC) vs optimized row-oriented file formats (e.g., Avro)
Implementing security configurations through Access Keys, SAS, AAD, RBAC, ACLs
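For example, a minimal sketch with the azure-storage-blob package, assuming a hypothetical storage account and container, showing a date-parametrized folder layout:

```python
# Illustrative only: connection string, container, and paths are assumptions.
# In production, prefer Azure AD / managed identities over account keys.
from datetime import date
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("raw")

# Parametrized folder names partitioned by ingestion date:
# raw/sales/year=2024/month=05/sales.parquet
today = date.today()
blob_path = f"sales/year={today.year}/month={today.month:02d}/sales.parquet"

with open("sales.parquet", "rb") as data:
    container.upload_blob(name=blob_path, data=data, overwrite=True)
```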
**Azure Data Factory**:
Provision Azure Data Factory instances
Use Azure IR, Self-Hosted IR, Azure-SSIS to establish connections to distinct data sources
Use of Copy or PolyBase activities for loading data
Build efficient and optimized ADF Pipelines using linked services, datasets, parameters, triggers, data movement activities, data transformation activities, control flow activities and mapping data flows
Build Incremental and Re-Processing Loads
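ADF itself is configured through pipelines rather than code, but the incremental-load (watermark) pattern those pipelines typically implement can be sketched in plain Python; the table and column names below are hypothetical:

```python
# Watermark pattern sketch: load only rows changed since the last run.
from sqlalchemy import create_engine, text

engine = create_engine("mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server")

with engine.begin() as conn:
    # 1. Read the last successfully loaded watermark.
    old_wm = conn.execute(
        text("SELECT last_modified FROM etl_watermark WHERE table_name = 'orders'")
    ).scalar()

    # 2. Extract only rows changed since the watermark (in ADF: a Copy
    #    activity with a parametrized source query).
    rows = conn.execute(
        text("SELECT * FROM orders WHERE modified_at > :wm"), {"wm": old_wm}
    ).fetchall()
    # ... land `rows` in the data lake / staging table here ...

    # 3. Advance the watermark so the next run stays incremental.
    conn.execute(
        text("UPDATE etl_watermark SET last_modified = SYSUTCDATETIME() "
             "WHERE table_name = 'orders'")
    )
```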
- **Apache Kafka, Azure Event Hubs or AWS Kinesis**:
Intermediate/Advanced knowledge in
Architecture and fundamental concepts of event streaming platforms, including producers, consumers, topics, partitions, and consumer groups
Configuration, deployment, and management of event streaming clusters/services for high availability, scalability, and fault tolerance
Performance tuning and optimization of event streaming clusters, including message retention, partition sizing, and data replication
Implementing common usage patterns such as asynchronous messaging, real-time stream processing, and end-to-end data pipelines for real-time data ingestion and processing
Security best practices for event streaming platforms, including encryption, authentication, and access control mechanisms
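A minimal producer/consumer sketch with the kafka-python package, assuming a hypothetical local broker and topic, showing topics, partitions, and consumer groups in action:

```python
# Illustrative only: broker address, topic, and group id are assumptions.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 1, "amount": 99.5})
producer.flush()

# Consumers sharing a group_id split the topic's partitions between them,
# which is how consumer groups scale out real-time processing.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.partition, msg.offset, msg.value)
```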
**We also place great value on how well you fit Derevo's culture on a personal level**:
- Adaptability and a drive to improve. We look for proactive, flexible people who want to take on the world and don't mind adapting to technological change and existing methodologies.
- Analytical ability and the capacity to convey confidence in uncertain environments: you must be able to manage problems and see them as a starting point for improvement. Have and generate trust.