• Aptask
  • $106,390.00 - $159,250.00/year*
  • Jersey City, NJ
  • Information Technology
  • Full-Time
  • 10 Bayside Terrace

Big Data Engineer with expertise in solutions for processing large volumes of data using ETL tools, BI tools, and Big Data platforms. Proven experience of at least 3-4 years in the design and development of processes around Big Data platforms (Cloudera Hadoop distribution).

- Hands-on experience with Big Data solutions (Hadoop ecosystem), ETLs, and data marts
- Hadoop framework experience with Hive, Impala, Kerberos, HDFS, Kafka, and Spark will be valued
- Knowledge of and experience working with ETL solutions such as Talend
- Understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS; SQL experience
- Understanding of major programming/scripting languages such as Java, Linux shell, and/or Python/Scala
- Also valuable: real-time analytics and BI platforms such as Tableau Software

Other professional skills:

- Ability to work on teams with different disciplines
- Self-managed
- Ability to perform detailed analysis of business problems and apply it to the design of Big Data solutions
- Ability to work creatively in a problem-solving environment
- Ability to analyze, process, and visualize large volumes of data

The Big Data Engineer will translate logs and data from various data ingestion, storage, and visualization applications for presentation in a front-end tool. Given the constant flux of the big data analytics market, new tools will likely be introduced, so the ability and enthusiasm to pick up new technology is also important. The role will also assist with Hadoop platform support and perform administrative tasks on production Hadoop clusters.

Requirements:

- At least 8 years of experience in Information Technology
- At least 7 years of experience in project life cycle activities on development and maintenance projects
- At least 4 years of experience developing data warehouses and ETL in a Talend ETL environment
- At least 4 years of experience with Java, Unix scripting, and Oracle SQL
- At least 3 years of experience with Cloudera, Hive, and Spark, from dev-ops through designing/architecting a secure Big Data environment
- Proficient with SQL, including complex SQL tuning
- Hands-on experience with Talend Big Data edition and solutions
- Strong experience in relational modeling, dimensional modeling, and modeling of unstructured data
- Experience in design and architecture review
- Good understanding of data integration, data quality, and data architecture
- Good expertise in impact analysis of changes or issues
- Experience preparing test scripts and test cases to validate data and maintain data quality
- Ability to work with Senior Enterprise Architects to develop a Big Data platform

* The salary listed in the header is an estimate based on salary data for similar jobs in the same area. Salary or compensation data found in the job description is accurate.
