**Duties**:

- Support the design and building of innovative frameworks and tools to support all phases of delivering data value
- Support the design and building of data pipeline architectures
- Aid in systems optimization: optimize streaming products, data analytics platforms, and data processing workflows for robustness, scalability, and performance
- Maintain, enhance, and perform research in support of the Big Data environment
- Develop high-performance computing solutions to process terabytes of data at scale

**Main Accountabilities**:

- Assist in the design, development, and testing of big data solutions
- Install and integrate various technologies in the big data ecosystem
- Troubleshoot issues in big data frameworks
- Aid in the implementation of analytical models
- Deliver solutions using an Agile development model with iterative deliverables
- Create quality deliverables to communicate technical solutions to appropriate audiences
- Understand issues, solve problems, and design/architect solutions
- Build and collaborate with business and technical teams to deliver software
- Learn continuously, leveraging training resources and self-directed training, and share knowledge and skills with others
- Provide mentoring and leadership to more junior team members
- Demonstrate a passion for technology and a willingness to learn
- Work in a fast-paced, dynamic environment and produce efficient, robust solutions
- Bring high energy, confidence, and agility to drive a team
- Communicate candidly and directly
- Think creatively, bringing new ideas and innovations to the company

**Required Qualifications**:

- 2+ years of Java development experience in large-scale enterprise development (undergraduate candidates)
- 1+ years of Java development experience in large-scale enterprise development (master's/PhD candidates)
- Ability to produce high-quality, maintainable software
- Knowledge of developing in the cloud
- Knowledge of multithreaded development and performance tuning
- Knowledge of implementing efficient logic using collections and data structures
- Knowledge of some of the following big data technologies: Elasticsearch, Kafka, Spark, HDFS, and Hive
- Knowledge of design patterns, OOP/OOD, and software architecture
- Bachelor's degree in computer science, engineering, information technology, or a related field, and/or equivalent work experience
- Good communication skills with both technical and business audiences

**Preferred Qualifications**:

- Knowledge of Scala or Python programming
- Knowledge of big data environments at terabyte or petabyte scale (e.g., Hadoop, MongoDB, Couchbase, Cassandra)
- Knowledge of Spark-based technologies (e.g., Spark DataFrames, Spark Streaming, and Spark SQL)
- Experience in any of the following: C, C++, Perl, PHP, Scala, or Python
- Experience in frontend development (e.g., JavaScript, jQuery, AngularJS, ReactJS)
- Experience with web service protocols such as REST