Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
Experience building on AWS using S3, EC2, Aurora, EMR, Lambda, Step Functions, etc. preferred.
Experience with Hive, PySpark, and Python is preferred.
Good analytical skills with excellent knowledge of SQL.
Experience using software version control tools (Git)
AWS certifications or other related professional technical certifications
3+ years’ experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
3+ years of work experience with very large data warehousing environments
1+ years of experience with data modelling concepts
3+ years of Python and/or Java development experience
2+ years of experience in Test-Driven Development for PySpark code
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
Excellent communicator (written and verbal, formal and informal).
Ability to multi-task under pressure and work independently with minimal supervision.
Must be a team player and enjoy working in a cooperative and collaborative team environment.
Adaptable to new technologies and standards.
Experience working with other engineers in defining data engineering best practices and leveraging software development life cycle best practices such as agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
Desired Candidate Profile
To qualify for the role, you must have
Minimum 3 years of hands-on experience in one or more key areas.
Minimum 5 to 14 years of industry experience
One of the leading clients of Pylon Consulting