We are looking for Big Data developers with strong analytics skills. Preferred experience includes cloud development, Spark, SQL, Python, databases, and scripting.
Career Level - IC2
**Responsibilities**:
Data Engineering:
1. Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store large volumes of data from diverse sources.
2. Implement and optimize distributed computing frameworks such as Apache Hadoop, Apache Spark, or Apache Flink to handle big data processing tasks efficiently.
3. Collaborate with cross-functional teams to understand data requirements and deliver innovative solutions for data ingestion, transformation, and storage.
4. Monitor and troubleshoot data pipeline performance issues, implementing solutions to improve reliability and efficiency.
Data Warehousing:
1. Design dimensional data models and database schemas optimized for analytical queries and reporting needs.
2. Develop and maintain ETL workflows to extract, transform, and load data from source systems into the data warehouse, ensuring data quality and consistency.
3. Implement and optimize data warehouse architectures and indexing strategies for efficient data storage and retrieval.
4. Collaborate with business analysts and stakeholders to understand reporting and analytics requirements and translate them into technical solutions.
**Qualifications**:
Technical Skills:
1. Proficiency in programming languages such as Python.
2. Knowledge of SQL and relational databases.
Analytical Skills:
1. Strong analytical and problem-solving skills, with the ability to understand complex data requirements and translate them into technical solutions.
Communication and Collaboration:
1. Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
Preferred Qualifications:
1. Experience with cloud platforms such as OCI, AWS, Azure, or Google Cloud Platform.
2. Familiarity with data visualization tools.
3. Knowledge of Python/Java and SQL.