
Job Information

Amazon Data Engineer II, ROW Central Data Engineering Team in Beijing, China

Description

Are you interested in a rapidly growing business? The Amazon ARTS team is responsible for creating core analytics tech capabilities and data engineering for the ROW (Rest of World) business. It comprises platform development, research science, and data engineering. ARTS develops scalable analytics applications and research models to optimize operational processes. The team standardizes and optimizes data sources and visualization efforts across geographies, and builds and maintains online BI services and the data mart.

On the ARTS team, data engineers, data scientists, business intelligence engineers, software development engineers, and product managers use rigorous quantitative approaches to ensure we deliver high-quality data tech products to our customers around the world, including India, Japan, Australia, Brazil, Mexico, Singapore, and MENA.

As a Data Engineer, you will be working in one of the world's largest and most complex data warehouse environments. You are expected to be passionate about working with huge data sets and to be someone who loves bringing datasets together to create dashboards and answer business questions. You are required to have deep expertise in the creation and management of datasets. You will build data analytics solutions that address increasingly complex business questions. You should be an expert at implementing and operating stable, scalable data flow solutions from production systems into end-user-facing applications and reports. These solutions will be fault tolerant, self-healing, and adaptive.

You will develop solutions that present unique challenges of space, size, and speed. You will implement data analytics using cutting-edge analytics patterns and technologies, including but not limited to AWS offerings such as Redshift, S3, and RDS. You will work with partner teams to create dashboards for our customers. You are required to be detail-oriented, have an aptitude for solving unstructured problems, and be able to work in a self-directed environment, owning tasks and driving them to completion.

You are required to have excellent business and communication skills so that you can work with business owners to develop and define key business questions, and build data sets that answer those questions. You will own the customer relationship around data and execute the tasks that ownership entails, such as ensuring high data availability and low latency, documenting data details and transformations, and handling user notifications and training.

Key job responsibilities

The L5 role is responsible for driving large cross-team programs with limited guidance from their manager:

  • Contribute constructive ideas in team tech discussions (e.g., Redshift infrastructure design and optimization), and design for the majority of users without being blocked by edge cases

  • Execute technical implementations against a clear milestone plan and lead them to a smooth launch with limited manager guidance

  • Help stakeholders and junior teammates ramp up on ARTS internal tech and Amazon-wide tech products (e.g., Redshift Cluster, Datanet, Query Performance, CDK, NAWS, MAWS)

  • Develop data pipelines (daily/intra-day/real-time) independently

  • Perform routine KTLO on data products (Redshift, RDS, Tableau Server, data pipelines) and dive deep for cost/performance optimization opportunities

A day in the life

The Data Engineer on the ARTS DE team will spend roughly 30% of their time monitoring, maintaining, and optimizing the health and performance of the team's data infrastructure, mainly Redshift and data pipelines. For the rest of the time, the L5 DE will work on coding to support cross-functional analytics and data science programs.

About the team

The ARTS team is responsible for creating core analytics tech capabilities and data engineering. It comprises platform development, research science, and data engineering. We develop scalable analytics applications and research models to optimize operational processes. The team standardizes and optimizes data sources and visualization efforts across geographies, and builds and maintains online BI services and the data mart.

Basic Qualifications

  • 4+ years of data engineering experience

  • Experience with data modeling, warehousing and building ETL pipelines

  • Bachelor's degree

  • Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, Beam

  • Knowledge of distributed systems as it pertains to data storage and computing

  • Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby

Preferred Qualifications

  • 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.

  • Experience with big data technologies such as: Hadoop, Hive, Spark, EMR

  • Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
