Big Data Engineer

Job details

Intugo, in partnership with a leading company in the technology industry, is looking for a **Data Engineer** with a diverse background in data integration to join the Data Services team. Some data are small, some are very large (1 trillion+ rows), some are structured, and some are not. Our data comes in all kinds of sizes, shapes, and formats: traditional RDBMSs such as PostgreSQL, Oracle, and SQL Server; MPPs such as StarRocks, Vertica, Snowflake, and Google BigQuery; and unstructured, key-value stores such as MongoDB and Elasticsearch, to name a few.

We are looking for individuals who can design and solve data problems of any kind using the different types of databases and technologies supported within our team. We use MPP databases to analyze billions of rows in seconds. We use Spark and Iceberg, in batch or streaming mode, to process whatever the data requires. We also use Trino to connect all the different types of data without moving them around.

Besides a competitive compensation package, you'll be working with a great group of technologists interested in finding the right database to use and the right technology for the job in a culture that encourages innovation. If you're ready to step up and take on some new technical challenges at a well-respected company, this is a unique opportunity for you.

**Responsibilities**:

- Implement ETL/ELT processes using various tools and programming languages (Scala, Python) against our MPP databases: StarRocks, Vertica, and Snowflake.
- Work with the Hadoop team and optimize Hive and Iceberg tables.
- Contribute to the existing Data Lake and Data Warehouse initiative using Hive, Spark, Iceberg, and Presto/Trino.
- Analyze business requirements; design and implement the required data models.
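The ETL/ELT responsibilities above follow the familiar extract-transform-load pattern. A minimal, self-contained sketch in Python, using the stdlib `sqlite3` module as a stand-in for an MPP target such as StarRocks or Vertica (all table and column names here are hypothetical illustrations, not part of the actual role):

```python
import sqlite3

# Stand-in target database; in practice this would be a connection to an
# MPP warehouse (StarRocks, Vertica, Snowflake). The tables raw_orders
# and fact_orders are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "10.50", "MX"), (2, "n/a", "MX"), (3, "7.25", "US")],
)

def extract(conn):
    """Extract: pull raw rows from the source table."""
    return conn.execute("SELECT id, amount, region FROM raw_orders").fetchall()

def transform(rows):
    """Transform: cast amounts to float, dropping unparseable rows."""
    out = []
    for rid, amount, region in rows:
        try:
            out.append((rid, float(amount), region))
        except ValueError:
            continue  # a real pipeline would route bad rows to a dead-letter table
    return out

def load(conn, rows):
    """Load: write cleaned rows into the warehouse fact table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_orders (id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", rows)

load(conn, transform(extract(conn)))
print(conn.execute("SELECT COUNT(*) FROM fact_orders").fetchone()[0])  # 2
```

The same extract/transform/load split scales up directly: in a Spark job the three functions become DataFrame reads, transformations, and writes to Iceberg or an MPP table.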

**Qualifications: (must have)**
- BA/BS in Computer Science or related field.
- 1+ years of experience with MPP databases such as StarRocks, Vertica, Snowflake.
- 3+ years of experience with RDBMS databases such as Oracle, MSSQL or PostgreSQL.
- Programming background with Scala, Python, Java or C/C++.
- Strong skills in at least one major Linux distribution: RHEL, CentOS, or Fedora.
- Experience working in both OLAP and OLTP environments.
- Experience working in on-prem environments, not just cloud.

**Desired: (nice to have)**
- Experience with Elasticsearch or ELK stack.
- Working knowledge of streaming technologies such as Kafka.
- Working knowledge of orchestration tools such as Oozie and Airflow.
- Experience with Spark: PySpark, Spark SQL, Spark Streaming, etc.
- Experience using ETL tools such as Informatica, Talend and/or Pentaho.
- Understanding of Healthcare data.
- Data Analyst or Business Intelligence experience would be a plus.

**Benefits**
- Income of MXN 65,000 monthly, before tax
- Statutory benefits and above
- 100% payroll scheme

**Location: Zapopan, near Plaza del Sol**

**Send your resume**:
**Salary**: $50,000.00 - $65,000.00 per month

Work Location: In person



Source: Whatjobs_Ppc

