Data Architect, Data Lake, Professional Services

Are you a data analytics specialist? Do you have Data Lake/Hadoop experience? Do you like to solve the most complex and high-scale (billions+ records) data challenges in the world today? Do you like to work on-site in a variety of business environments, leading teams through high-impact projects that use the newest data analytics technologies? Would you like a career path that lets you progress with the rapid adoption of cloud computing?

At Amazon Web Services (AWS), we're hiring highly technical data architects to collaborate with our customers and partners on key engagements. Our consultants develop, deliver, and implement data analytics projects that help our customers leverage their data to develop business insights. These professional services engagements focus on customer solutions such as data and business intelligence, machine learning, and batch/real-time data processing.

Responsibilities include:

Delivery - Help the customer define and implement data architectures (Data Lake, Lake House, Data Mesh, etc.). Engagements include short on-site projects proving the use of AWS data services to support new distributed computing solutions that often span private cloud and public cloud services.

Solutions - Deliver on-site technical assessments with partners and customers. This includes participating in pre-sales visits, understanding customer requirements, and creating packaged Data & Analytics service offerings.

Innovate - Engage with the customer's business and technology stakeholders to create a compelling vision of a data-driven enterprise in their environment.
Create new artifacts that promote code reuse.

Expertise - Collaborate with AWS field sales, pre-sales, training, and support teams to help partners and customers learn and use AWS services such as Athena, Glue, Lambda, S3, DynamoDB, Amazon EMR, and Amazon Redshift.

Since this is a customer-facing role, you may be required to travel to client locations and deliver professional services when needed, up to 50% of the time.

Minimum Requirements:
- Experience implementing AWS services in a variety of distributed computing environments
- 3+ years of experience with Data Lake/Hadoop platform implementation
- 2+ years of hands-on experience implementing and performance-tuning Hadoop/Spark deployments
- Experience with Apache Hadoop and the Hadoop ecosystem
- Experience with one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, ZooKeeper, HCatalog, Solr, Avro)