**Databricks Developer**
**Location: Remote**

We are seeking a talented Databricks Developer to join our team and contribute to the design, development, and implementation of Databricks solutions. As a Databricks Developer, you will play a crucial role in building and optimizing data pipelines, creating scalable data models, and enabling advanced analytics and machine learning capabilities using Databricks and associated technologies. This is an exciting opportunity to work with cutting-edge technology and collaborate with cross-functional teams to deliver impactful data-driven solutions.

**Key Responsibilities**
- Develop and maintain data pipelines and ETL processes using the Databricks platform and related technologies.
- Design, implement, and optimize scalable data models and schemas for data lakes and warehouses.
- Collaborate with data scientists and analysts to enable advanced analytics and machine learning capabilities using Databricks notebooks and MLflow.
- Perform data ingestion, cleansing, and transformation tasks to ensure data quality and integrity.
- Develop and maintain documentation for data pipelines, data models, and processes.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions using Databricks.
- Monitor and troubleshoot performance issues, optimize queries, and ensure efficient data processing and storage.
- Stay up to date with the latest advancements and best practices in Databricks and related technologies.
- Provide technical guidance and support to junior developers and team members as needed.
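As a rough illustration of the cleansing and transformation work described above, here is a minimal sketch in plain Python. The record layout, column names, and validation rules are hypothetical; in a Databricks notebook the same logic would normally be expressed with PySpark DataFrame operations.

```python
# Hypothetical cleanse/transform step of the kind an ETL pipeline performs:
# drop rows with missing required fields, coerce types, and deduplicate.
# Field names and rules are illustrative, not from any real schema.

def cleanse_records(records):
    """Drop invalid rows, normalize amounts, and deduplicate on order_id."""
    seen = set()
    cleaned = []
    for row in records:
        order_id = row.get("order_id")
        amount = row.get("amount")
        if order_id is None or amount is None:
            continue  # data-quality rule: required fields must be present
        try:
            amount = float(amount)
        except (TypeError, ValueError):
            continue  # data-quality rule: amount must be numeric
        if order_id in seen:
            continue  # deduplicate on the business key
        seen.add(order_id)
        cleaned.append({"order_id": order_id, "amount": round(amount, 2)})
    return cleaned

raw = [
    {"order_id": "A1", "amount": "19.99"},
    {"order_id": "A1", "amount": "19.99"},   # duplicate key
    {"order_id": None, "amount": "5.00"},    # missing required field
    {"order_id": "B2", "amount": "oops"},    # non-numeric amount
    {"order_id": "C3", "amount": 42},
]
print(cleanse_records(raw))
# → [{'order_id': 'A1', 'amount': 19.99}, {'order_id': 'C3', 'amount': 42.0}]
```

In a production pipeline these rules would be pushed down into Spark transformations (e.g., `dropna`, `dropDuplicates`, and column casts) so they run distributed rather than row by row in Python.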
- Follow established procedures and meet deadlines.

**Requirements**:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Proven experience as a Databricks Developer or in a similar role, with a strong understanding of the Databricks platform.
- Solid programming skills in Python, Scala, or SQL.
- Experience designing and implementing data pipelines and ETL processes using Databricks or similar technologies.
- Proficiency in data modeling and schema design for data lakes and warehouses.
- Familiarity with advanced analytics and machine learning concepts, and experience with Databricks notebooks and MLflow.
- Strong understanding of cloud platforms (e.g., AWS, Azure, or GCP) and related storage services (e.g., S3, Blob Storage, or Data Lake Storage).
- Excellent problem-solving skills and the ability to troubleshoot and optimize data processing and performance issues.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Self-motivated, detail-oriented, and able to manage multiple priorities in a fast-paced environment.
- Familiarity with Agile methodologies and version control systems (e.g., Git) is a plus.

**Preferred Qualifications**:
- Experience with big data technologies such as Apache Spark, Hadoop, or Hive.