
Randolph, New Jersey | Full Time | Posted: Friday, 14 September 2018
Job Description


Reports To:

Position Description:

Responsible for designing and implementing big data engineering solutions for clients leveraging hands-on experience in data ingestion, transformation, and data flow design. Capable of working independently as well as with team members to drive customer satisfaction and successful consulting engagements. Collaborate with Marketing, Sales, Delivery, and Operations to achieve goals.

Functional Responsibilities:

- Provide a solution design that meets both the business requirements and the best Hadoop technical practices

- Perform collection, cleansing, processing, and analysis of new and existing data sources, defining and reporting data quality and consistency metrics

- Acquire Big Data Certifications if required

- Participate in working sessions with technical executives and experts

- Learn and stay current on Big Data techniques, developments, and improvements

Work Location:

Remote, with occasional travel to client worksites.


Years of Relevant Experience:

- 2+ years in Big Data Engineering, including data export/import between RDBMSs and HDFS, and real-time/near-real-time streaming data ingestion and transformation

Work Experience:

- Demonstrated knowledge and proven hands-on experience with HDFS, YARN, MapReduce, Hive/Impala, Pig, Sqoop, Flume/Kafka, Solr, and Oozie (knowledge of the Cloudera stack in particular is a plus)

- Demonstrated knowledge and hands-on experience with AWS EC2 and S3

- Hands-on experience working with large, complex datasets


- Bachelor's Degree in Computer Science or a relevant technical field; advanced degree preferred


- Strong SQL and HiveQL skills (Java/MapReduce, Python are a plus)

- Understanding of major RDBMSs such as Oracle, MySQL, PostgreSQL, SQL Server, DB2, and Sybase

- Working knowledge of data compression and partitioning techniques, Avro/Parquet formats, and optimization tuning

- Ability to debug by reading and understanding Hadoop/YARN log files

- Working knowledge of automating/scheduling data flows with Oozie (via both the GUI and scripting)

- Working knowledge of the Hadoop ecosystem: YARN, HDFS, Sqoop, Hive/Impala, Oozie, Flume, Kafka, Solr

- Proficient in the Linux OS and Bash scripting (AWK and/or sed is a plus)

- A strong understanding of data profiling and data cleansing techniques

- Solid understanding of ETL architectures, data movement technologies, and working knowledge of building data flows.

- Proven track record of driving rapid prototyping and design

- Strong analytical and problem-solving skills with proven communication and consensus building abilities.

- Proven skills to work effectively across internal functional areas in ambiguous situations.

- Excellent organization and planning skills

- High degree of professionalism

- Ability to thrive in a fast-paced environment


- Microsoft Office; QlikView and Qlik Sense a plus


- English: level 5 on the ILR scale (rated 1-5)

Randolph, New Jersey, United States of America
Bardess Group, Ltd.